You can control how Export runs by entering the 'expdp' command followed by various
parameters. To specify parameters, you use keywords:
Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table
USERID must be the first parameter on the command line.
Command Description
ADD_FILE Add dumpfile to dumpfile set.
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
FILESIZE Default filesize (bytes) for subsequent ADD_FILE commands.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
STATUS Frequency (secs) at which job status is monitored; the default (0)
shows new status as soon as it is available. STATUS[=interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of
the Data Pump job.
The above two utilities have a similar look and feel to the pre-Oracle 10g import and export
utilities (imp and exp, respectively) but are completely separate: dump files generated by the
original export utility (exp) cannot be imported by the new Data Pump import utility (impdp),
and vice versa.
Data Pump Export (expdp) and Data Pump Import (impdp) are server-based rather than client-
based, as is the case for the original export (exp) and import (imp). Because of this, dump files,
log files, and SQL files are accessed relative to server-based directory paths. Data Pump
requires that a directory object mapped to a file system directory be specified when invoking
the Data Pump import or export.
It is for this reason, and for convenience, that a directory object should be created before using
the Data Pump export or import utilities.
For example, to create a directory object named expdp_dir located at /u01/backup/exports, enter
the following SQL statement:
SQL> create directory expdp_dir as '/u01/backup/exports';
Then grant read and write permissions to the users who will be performing the Data Pump export
and import:
SQL> grant read, write on directory expdp_dir to system, user1, user2, user3;
Schema Export Mode The schema export mode is invoked using the
SCHEMAS parameter. If you do not have the EXP_FULL_DATABASE role, you can
only export your own schema. If you have the EXP_FULL_DATABASE role, you
can export several schemas in one go. Optionally, you can include the
system privilege grants as well.
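For example, using the directory object created earlier (the dump file and schema names are illustrative):
$ expdp system/password DIRECTORY=expdp_dir DUMPFILE=hr_oe.dmp SCHEMAS=hr,oe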
Table Export Mode This export mode is specified using the TABLES
parameter. In this mode, only the specified tables, partitions, and their
dependent objects are exported. If you do not have the EXP_FULL_DATABASE
role, you can export only tables in your own schema. All specified tables
must reside in the same schema.
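For example (the dump file and table names are illustrative):
$ expdp hr/hr DIRECTORY=expdp_dir DUMPFILE=hr_tables.dmp TABLES=employees,departments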
Invoking Data Pump Import The Data Pump import can be invoked on the command line, and the
import parameters can be specified directly on the command line.
Full Import Mode The full import mode loads the entire contents of the source (export) dump
file to the target database. However, you must have been granted the IMP_FULL_DATABASE
role on the target database. The data pump import is invoked using the impdp command in the
command line with the FULL parameter specified in the same command line.
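For example (directory and file names are illustrative):
$ impdp system/password DIRECTORY=exp_dir DUMPFILE=expfull.dmp FULL=y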
Schema Import Mode The schema import mode is invoked using the SCHEMAS
parameter. Only the contents of the specified schemas are loaded into the target database. The
source dump file can be a full, schema-mode, table-mode, or tablespace-mode export file. If you
have the IMP_FULL_DATABASE role, you can specify a list of schemas to load into the target
database.
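For example (schema and file names are illustrative):
$ impdp system/password DIRECTORY=exp_dir DUMPFILE=expfull.dmp SCHEMAS=hr,oe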
Table Import Mode This import mode is specified using the TABLES parameter. In this
mode, only the specified tables, partitions, and their dependent objects are imported. If you do
not have the IMP_FULL_DATABASE role, you can import only tables in your own schema.
$ impdp hr/hr DIRECTORY=exp_dir DUMPFILE=expfull.dmp
TABLES=employees,jobs,departments
What is the performance gain of Data Pump Export versus Original Export?
Using the Direct Path method of unloading, a single stream of data unload is about 2 times faster
than original Export because the Direct Path API has been modified to be even more efficient.
Depending on the level of parallelism, the level of improvement can be much more.
What is the performance gain of Data Pump Import versus Original Import?
A single stream of data load is 15-45 times faster than Original Import. The reason it is so much faster
is that Conventional Import uses only conventional mode inserts, whereas Data Pump Import
uses the Direct Path method of loading. As with Export, the job can be parallelized for even more
improvement.
Can you adjust the level of parallelism dynamically for more or less resource consumption?
Yes, you can dynamically throttle the number of threads of execution throughout the lifetime of
the job. There is an interactive command mode where you can adjust the level of parallelism. So,
for example, you can start a job during the day with PARALLEL=2 and then increase it at
night to a higher level.
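For example, you could attach to a running job from another session and change the number of workers interactively (the job name below is an assumption; Data Pump generates names such as SYS_EXPORT_SCHEMA_01 unless JOB_NAME is specified):
$ expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01
Export> PARALLEL=4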
Is it necessary to use the Command line interface or is there a GUI that you can use?
You can use either the command line interface or the Oracle Enterprise Manager web-based GUI.
Can I move a dump file set across platforms, such as from Sun to HP?
Yes, Data Pump handles all the necessary compatibility issues between hardware platforms and
operating systems.
Can I take 1 dump file set from my source database and import it into multiple databases?
Yes, a single dump file set can be imported into multiple databases.
You can also just import different subsets of the data out of that single dump file set.
Is Oracle Data Pump certified against Apps11i?
Yes, Oracle Data Pump supports Apps11i.
Is there a way to estimate the size of an export job before it gets underway?
Yes, you can use the ESTIMATE_ONLY parameter to see how much disk space is required for
the job's dump file set before you start the operation.
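For example (the directory object and schema name are illustrative):
$ expdp hr/hr DIRECTORY=expdp_dir ESTIMATE_ONLY=YES SCHEMAS=hr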
Can I monitor a Data Pump Export or Import job while the job is in progress?
Yes, jobs can be monitored from any location while they are running. Clients may also detach from an
executing job without affecting it.
Does Data Pump give me the ability to manipulate the Data Definition Language (DDL)?
Yes, with Data Pump, it is now possible to change the definition of some objects as they are
created at import time. For example, you can remap the source datafile name to the target datafile
name in all DDL statements where the source datafile is referenced. This is really useful if you
are moving across platforms with different file system syntax.
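As an illustration (all file names here are assumptions, and shell quoting may require escaping or a parameter file), the REMAP_DATAFILE parameter rewrites datafile references in the DDL during import:
$ impdp system/password DIRECTORY=exp_dir DUMPFILE=expfull.dmp FULL=y REMAP_DATAFILE="'/u01/oradata/users01.dbf':'D:\oradata\users01.dbf'"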
Can I still use Original Export? Do I have to convert to Data Pump Export?
An Oracle9i compatible Export that operates against Oracle Database 11g will ship with Oracle
11g, but it does not export Oracle Database 11g features. Also, Data Pump Export has new
syntax and a new client executable, so Original Export scripts will need to change. Oracle
recommends that customers convert to use the Oracle Data Pump Export.
How do I import an old dump file into Oracle 10g? Can I use Original Import or do I have
to convert to Data Pump Import?
Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through
Oracle9i dump files will be able to be loaded into Oracle 10g and later. Data Pump Import can
only read Oracle Database 11g (and later) Data Pump Export dump files. Data Pump Import has
new syntax and a new client executable, so Original Import scripts will need to change. Oracle
recommends that customers convert to use the Oracle Data Pump Import.
When would I use SQL*Loader instead of Data Pump Export and Import?
You would use SQL*Loader to load data from external files into tables of an Oracle database.
Many customers use SQL*Loader on a daily basis to load files (e.g. financial feeds) into their
databases. Data Pump Export and Import may be used less frequently, but for very important
tasks, such as migrating between platforms, moving data between development, test, and
production databases, logical database backup, and for application deployment throughout a
corporation.
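As a rough sketch of how SQL*Loader is typically used (the control file contents, file names, and table columns below are assumptions, not taken from this document): a small control file describes the input file and target table, and sqlldr is then invoked against it.
-- emp.ctl
LOAD DATA
INFILE 'emp.csv'
APPEND INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)
$ sqlldr scott/tiger CONTROL=emp.ctl LOG=emp.log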
When would I use Transportable Tablespaces instead of Data Pump Export and Import?
You would use Transportable Tablespaces when you want to move an entire tablespace of data
from one Oracle database to another. Transportable Tablespaces allows Oracle data files to be
unplugged from a database, moved or copied to another location, and then plugged into another
database. Moving data using Transportable Tablespaces can be much faster than performing
either an export or import of the same data, because transporting a tablespace only requires the
copying of datafiles and integrating the tablespace dictionary information. Even when
transporting a tablespace, Data Pump Export and Import are still used to handle the extraction
and recreation of the metadata for that tablespace.
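A hedged outline of the transportable flow with Data Pump, assuming a self-contained tablespace named USERS and the directory object from earlier (all names are illustrative):
SQL> alter tablespace users read only;
$ expdp system/password DIRECTORY=expdp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users
(copy tts.dmp and the tablespace's datafiles to the target system)
$ impdp system/password DIRECTORY=expdp_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/u01/oradata/users01.dbf'
SQL> alter tablespace users read write;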
Various parameters are available to control what objects are exported or imported. To get a list of
available parameters, run the exp or imp utilities with the help=yes parameter.
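For example:
$ exp help=yes
$ imp help=yes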
The export/import utilities are commonly used to perform the following tasks:
Backup and recovery (small databases only, say < 50GB; if bigger, use RMAN instead)
Move data between Oracle databases on different platforms (for example from Solaris to
Windows)
Reorganization of data / elimination of database fragmentation (export, drop and re-import
tables)
Upgrade databases from extremely old versions of Oracle (when in-place upgrades are
not supported by the Database Upgrade Assistant anymore)
Detect database corruption by ensuring that all the data can be read
Transporting tablespaces between databases
Etc.
NOTE: It is generally advised not to use exports as the only means of backing up a database.
Physical backup methods (for example, using RMAN) are normally much quicker and
support point-in-time recovery (applying archived logs after restoring a database). Also,
exp/imp is not practical for large database environments.
The following example parameter file demonstrates how the exp utility can be used to export the SCOTT schema:
BUFFER=100000
FILE=account.dmp
FULL=n
OWNER=scott
GRANTS=y
COMPRESS=y
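Assuming the parameters above are saved in a parameter file named account.par (the file name is an assumption), the export could then be run as:
$ exp scott/tiger PARFILE=account.par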
NOTE: If you do not like command line utilities, you can import and export data with the
"Schema Manager" GUI that ships with Oracle Enterprise Manager (OEM).
Method 1:
Query the database to monitor how fast rows are being imported. For this to work one needs to
be on Oracle 7.3 or higher (7.2 might also be OK). If the import has more than one table, this
statement will only show information about the current table being imported.
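The statement referred to above is not reproduced in this text; a query along the following lines against V$SQLAREA is a common way to see how many rows the currently running imp INSERT has processed (treat it as an illustrative sketch rather than the original statement):
SQL> select substr(sql_text, instr(sql_text, 'INTO "'), 30) table_name, rows_processed
     from sys.v_$sqlarea
     where sql_text like 'INSERT %INTO "%'
     and command_type = 2
     and open_versions > 0;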
Method 2:
Use the FEEDBACK=N import parameter. This parameter tells IMP to display a dot for
every N rows imported. For example, FEEDBACK=1000 will show a dot after every 1000 rows.
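For instance (the dump file name and mode are assumptions):
$ imp scott/tiger FILE=account.dmp FULL=y FEEDBACK=1000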
Note: It is also advisable to drop indexes before importing to speed up the import process.
Indexes can easily be recreated after the data has been successfully imported.
Oracle also ships catexpX.sql scripts from previous releases that can be executed as user SYS,
enabling older imp/exp versions to work (for backwards compatibility). For example, one can run
$ORACLE_HOME/rdbms/admin/catexp7.sql on an Oracle 8 database to allow the Oracle 7.3
exp/imp utilities to run against an Oracle 8 database.
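For example, connected as SYS in SQL*Plus on the Oracle 8 database (SQL*Plus expands ? to ORACLE_HOME):
SQL> @?/rdbms/admin/catexp7.sql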
Can one export to multiple files? / Can one beat the Unix 2 GB limit?
From Oracle8i, the export utility supports multiple output files. This feature enables large exports
to be divided into files whose sizes will not exceed any operating system limits (via the FILESIZE=
parameter). When importing from a multi-file export, you must provide the same file names in the
same sequence in the FILE= parameter. Look at this example:
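A hedged example (file names and the size limit are illustrative): the export is split into pieces of at most 2000 MB, and the same file list is given, in the same order, on import:
$ exp scott/tiger OWNER=scott FILE=exp1.dmp,exp2.dmp,exp3.dmp FILESIZE=2000M LOG=scott.log
$ imp scott/tiger FULL=y FILE=exp1.dmp,exp2.dmp,exp3.dmp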
Use the following technique if you use an Oracle version prior to 8i:
Create a compressed export on the fly. Depending on the type of data, you probably can export
up to 10 gigabytes to a single file. This example uses gzip. It offers the best compression I know
of, but you can also substitute it with zip, compress or whatever.
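A sketch of the usual named-pipe approach on Unix (file and schema names are assumptions): create a named pipe, start gzip reading from it in the background, and point exp at the pipe:
$ mknod exp.pipe p
$ gzip < exp.pipe > scott_exp.dmp.gz &
$ exp scott/tiger OWNER=scott FILE=exp.pipe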
EXPORT:
Set the BUFFER parameter to a high value (e.g. 2Mb -- entered as an integer "2000000")
Set the RECORDLENGTH parameter to a high value (e.g. 64Kb -- entered as an integer
"64000")
Use DIRECT=yes (direct mode export)
Stop unnecessary applications to free up resources for your job.
If you run multiple export sessions, ensure they write to different physical disks.
DO NOT export to an NFS mounted filesystem. It will take forever.
IMPORT:
Create an indexfile so that you can create indexes AFTER you have imported data. Do
this by setting INDEXFILE to a filename and then import. No data will be imported, but a
file containing index definitions will be created. You must edit this file afterwards and
supply the passwords for the schemas on all CONNECT statements (see the example after this list).
Place the file to be imported on a separate physical disk from the Oracle data files
Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the
init$SID.ora file
Set the LOG_BUFFER to a big value and restart Oracle.
Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback
segments offline (except the SYSTEM rollback segment of course). The rollback
segment must be as big as your biggest table (I think?)
Use COMMIT=N in the import parameter file if you can afford it
Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import
of statistics
Remember to run the indexfile previously created
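A hedged illustration of the index-file technique described above (file names are assumptions): first generate the index DDL without importing any rows, then import the data with index creation disabled, and finally run the generated script:
$ imp scott/tiger FILE=account.dmp FULL=y INDEXFILE=account_indexes.sql
$ imp scott/tiger FILE=account.dmp FULL=y INDEXES=n
SQL> @account_indexes.sql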
ORA-00001: Unique constraint violated
You are importing duplicate rows. Use IGNORE=YES to skip tables that already exist
(imp will give an error if the object is re-created).
ORA-01555: Snapshot too old
Ask your users to STOP working while you are exporting or try using parameter
CONSISTENT=NO
ORA-01562: Failed to extend rollback segment
Create bigger rollback segments, or use COMMIT=Y (with a suitably large BUFFER) so that
imp commits more frequently and needs less rollback space.
IMP-00015: Statement failed, object already exists
Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might
end up with duplicate rows.
Retrieved from "http://www.orafaq.com/wiki/Import_Export_FAQ"