Oracle Data Pump - EXPDP and IMPDP utility

posted Sep 8, 2010, 6:18 AM by Sachchida Ojha   [ updated Feb 6, 2012, 6:02 AM by Sachchida Ojha ]
Oracle introduced Data Pump in Oracle Database 10g Release 1. This technology enables very high-speed transfer of data from one database to another. Oracle Data Pump provides two utilities, namely:
1) Data Pump Export - the expdp command.
2) Data Pump Import - the impdp command.


A common technique used with exp/imp was compressing the export through a named pipe. This cannot be done with Data Pump, because the dump file "is no longer processed in a sequential manner."
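Data Pump does offer a partial substitute: from 10g Release 2 onward, the COMPRESSION parameter can compress the metadata stream inside the dump file (compression of table data arrived later, in 11g). A minimal sketch, assuming the expdp_dir directory object created below:

$ expdp system/<password> DIRECTORY=expdp_dir DUMPFILE=hr_comp.dmp SCHEMAS=hr COMPRESSION=METADATA_ONLY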


Dump files generated by the original export utility (exp) cannot be imported by the new data pump import utility (impdp) and vice-versa.

Data Pump Export (expdp) and Data Pump Import (impdp) are server-based rather than client-based, as is the case for the original export (exp) and import (imp). Because of this, dump files, log files, and SQL files are accessed relative to server-based directory paths.

Data Pump requires that a directory object mapping to a file system directory be specified when invoking Data Pump Export or Import. For this reason, and for convenience, a directory object should be created before using the Data Pump export or import utilities.

For example, to create a directory object named expdp_dir located at /u01/backup/exports, enter the following SQL statement:
SQL> create directory expdp_dir as '/u01/backup/exports';
Then grant read and write permissions to the users who will be performing the Data Pump export and import:
SQL> grant read, write on directory expdp_dir to system, user1, user2, user3;
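To verify the directory object, query DBA_DIRECTORIES; note that the operating system directory must already exist on the database server and be writable by the oracle software owner:

SQL> select directory_name, directory_path from dba_directories
     where directory_name = 'EXPDP_DIR';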

Data Pump Export

Full Export Mode

You can invoke Data Pump Export from the command line, and export parameters can be specified directly on the command line. A full export is specified using the FULL parameter; in a full database export, the entire database is unloaded. This mode requires the EXP_FULL_DATABASE role. Shown below is an example:

$ expdp system/<password> DIRECTORY=expdp_dir DUMPFILE=expfull.dmp FULL=y LOGFILE=expfull.log

Schema Export Mode

The schema export mode is invoked using the SCHEMAS parameter. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema. If you have the EXP_FULL_DATABASE role, you can export several schemas in one go. Optionally, you can include the system privilege grants as well.

$ expdp hr/hr DIRECTORY=expdp_dir DUMPFILE=schema_exp.dmp SCHEMAS=hr,finance
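The EXCLUDE parameter (described in the help output further below) can filter object types out of a schema export. A hedged sketch that skips grants and statistics:

$ expdp hr/hr DIRECTORY=expdp_dir DUMPFILE=schema_exp.dmp SCHEMAS=hr EXCLUDE=GRANT EXCLUDE=STATISTICS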

Table Export Mode

This export mode is specified using the TABLES parameter. In this mode, only the specified tables, partitions, and their dependent objects are exported. If you do not have the EXP_FULL_DATABASE role, you can export only tables in your own schema, and you can only specify tables in the same schema.

$ expdp hr/hr DIRECTORY=expdp_dir DUMPFILE=tables_exp.dmp TABLES=employees,jobs,departments
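To export only a subset of rows, the QUERY parameter takes a predicate. Quoting is fiddly on the shell command line, so a parameter file is easier; emp_subset.par below is a hypothetical file name:

# emp_subset.par
DIRECTORY=expdp_dir
DUMPFILE=emp_subset.dmp
TABLES=employees
QUERY=employees:"WHERE department_id = 50"

$ expdp hr/hr parfile=emp_subset.par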

Invoking Data Pump Import

Data Pump Import can be invoked from the command line, and the import parameters can be specified directly on the command line.

Full Import Mode

The full import mode loads the entire contents of the source (export) dump file into the target database. You must have been granted the IMP_FULL_DATABASE role on the target database. Data Pump Import is invoked using the impdp command with the FULL parameter:

$ impdp system/<password> DIRECTORY=expdp_dir DUMPFILE=expfull.dmp FULL=y LOGFILE=impfull.log

Schema Import Mode

The schema import mode is invoked using the SCHEMAS parameter. Only the contents of the specified schemas are loaded into the target database. The source dump file can be a full, schema-mode, table-mode, or tablespace-mode export file. If you have the IMP_FULL_DATABASE role, you can specify a list of schemas to load into the target database.

$ impdp hr/hr DIRECTORY=expdp_dir DUMPFILE=expfull.dmp SCHEMAS=hr
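The same dump can also be loaded into a different target schema with REMAP_SCHEMA (hr_test here is a hypothetical target schema):

$ impdp system/<password> DIRECTORY=expdp_dir DUMPFILE=expfull.dmp SCHEMAS=hr REMAP_SCHEMA=hr:hr_test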

Table Import Mode

This import mode is specified using the TABLES parameter. In this mode, only the specified tables, partitions, and their dependent objects are imported. If you do not have the IMP_FULL_DATABASE role, you can import only tables into your own schema.


$ impdp hr/hr DIRECTORY=expdp_dir DUMPFILE=expfull.dmp TABLES=employees,jobs,departments
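By default the import skips a table that already exists in the target schema; the TABLE_EXISTS_ACTION parameter changes this behaviour. For example, to truncate and reload the existing tables:

$ impdp hr/hr DIRECTORY=expdp_dir DUMPFILE=expfull.dmp TABLES=employees,jobs,departments TABLE_EXISTS_ACTION=TRUNCATE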

All Data Pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process which uses Advanced Queuing. At runtime, an advanced queue table, named after the job name, is created and used by the master control process. The table is dropped on completion of the Data Pump job. The job and the advanced queue can be named using the JOB_NAME parameter. Cancelling the client process does not stop the associated Data Pump job. Issuing "ctrl+c" on the client during a job stops the client output and presents a command prompt. Typing "status" at this prompt allows you to monitor the current job.
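A typical monitoring session looks like this (SYS_EXPORT_FULL_01 is the system-generated default job name for a full export; yours may differ):

$ expdp system/<password> DIRECTORY=expdp_dir DUMPFILE=expfull.dmp FULL=y
... job output scrolls ...
(press ctrl+c)
Export> status
Export> stop_job=immediate
$ expdp system/<password> ATTACH=SYS_EXPORT_FULL_01
Export> continue_client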


Building a metadata repository using exp and expdp

$ exp system/password parfile=emdev.par

Contents of emdev.par for the original export utility (ROWS=n makes this a metadata-only export):
DIRECT=y
FILE=exp_emdev_meta_04282009.dmp
FULL=y
COMPRESS=n
ROWS=n
INDEXES=y
CONSTRAINTS=y
LOG=/metadata/emdb/weekly/exp_emdev_meta.log

$ expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CONTENT=METADATA_ONLY

$ expdp parfile=emdev.par

Data Pump version, as a single command:

$ $OHOME/bin/expdp user/pwd content=metadata_only full=y \
    directory=DATA_PUMP_DIR dumpfile=em%U.dmp parallel=4
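A metadata-only dump like this is handy for regenerating DDL: impdp's SQLFILE mode writes all the DDL to a script without changing the database. A sketch using the dump files produced above:

$ impdp user/pwd directory=DATA_PUMP_DIR dumpfile=em%U.dmp full=y sqlfile=em_ddl.sql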

$ expdp help=y
Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to unload where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dumpfiles and logfiles.
DUMPFILE List of destination dump files (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Export entire database (N).
HELP Display Help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job to create.
LOGFILE Log file name (export.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to export a subset of a table.
SCHEMAS List of schemas to export (login schema).
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
TABLES Identifies a list of tables to export - one schema only.
TABLESPACES Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.
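For example, ESTIMATE_ONLY can size a full export before any dump file is written:

$ expdp system/<password> DIRECTORY=expdp_dir FULL=y ESTIMATE_ONLY=y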


$ impdp help=y
Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to load where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dump, log, and sql files.
DUMPFILE List of dumpfiles to import from (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Import everything from source (Y).
HELP Display help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of import job to create.
LOGFILE Log file name (import.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile.
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to import a subset of a table.
REMAP_DATAFILE Redefine datafile references in all DDL statements.
REMAP_SCHEMA Objects from one schema are loaded into another schema.
REMAP_TABLESPACE Tablespace object are remapped to another tablespace.
REUSE_DATAFILES Tablespace will be initialized if it already exists (N).
SCHEMAS List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SQLFILE Write all the SQL DDL to a specified file.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION Action to take if imported object already exists.
Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES Identifies a list of tables to import.
TABLESPACES Identifies a list of tablespaces to import.
TRANSFORM Metadata transform to apply (Y/N) to specific objects.
Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.
ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
Only valid in NETWORK_LINK mode import operations.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.
Only valid for NETWORK_LINK and SQLFILE.