
DATAPUMP EXPORT (expdp) Parameters:

[oracle2@www 10.2.0]$ expdp help=y


The Data Pump export utility provides a mechanism for transferring data objects between Oracle
databases. The utility is invoked with the following command:
Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed by various
parameters. To specify parameters, you use keywords:
Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
USERID must be the first parameter on the command line.

Keyword Description (Default)


ATTACH Attach to existing job, e.g. ATTACH [=job name].
COMPRESSION Reduce size of dumpfile contents where valid keyword values are:
(METADATA_ONLY) and NONE.
CONTENT Specifies data to unload where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dumpfiles and logfiles.
DUMPFILE List of destination dump files (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD Password key for creating encrypted column data.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Export entire database (N).
HELP Display Help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job to create.
LOGFILE Log file name (export.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to export a subset of a table.
SAMPLE Percentage of data to be exported;
SCHEMAS List of schemas to export (login schema).
STATUS Frequency (secs) job status is to be monitored where the default (0)
will show new status when available.
TABLES Identifies a list of tables to export - one schema only.
TABLESPACES Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.


Note: abbreviations are allowed

Command Description
ADD_FILE Add dumpfile to dumpfile set.
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
FILESIZE Default filesize (bytes) for subsequent ADD_FILE commands.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where the default (0)
will show new status when available. STATUS[=interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of
the Data Pump job.
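For illustration, a hypothetical interactive session using the commands above; the job name (here the default SYS_EXPORT_FULL_01) and the parameter values are assumptions, not output from a real run:

$ expdp system/<password> ATTACH=SYS_EXPORT_FULL_01
Export> STATUS
Export> PARALLEL=4
Export> STOP_JOB=IMMEDIATE
$ expdp system/<password> ATTACH=SYS_EXPORT_FULL_01
Export> START_JOB
Export> CONTINUE_CLIENT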

DATAPUMP IMPORT (impdp) Parameters:


[oracle2@www 10.2.0]$ impdp help=y
The Data Pump Import utility provides a mechanism for transferring data objects between Oracle
databases. The utility is invoked with the following command:
Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
You can control how Import runs by entering the 'impdp' command followed by various
parameters. To specify parameters, you use keywords:
Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

Keyword Description (Default)


ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to load where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dump, log, and sql files.
DUMPFILE List of dumpfiles to import from (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD Password key for accessing encrypted column data.
This parameter is not valid for network import jobs.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Import everything from source (Y).
HELP Display help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of import job to create.
LOGFILE Log file name (import.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile.
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to import a subset of a table.
REMAP_DATAFILE Redefine datafile references in all DDL statements.
REMAP_SCHEMA Objects from one schema are loaded into another schema.
REMAP_TABLESPACE Tablespace object are remapped to another tablespace.
REUSE_DATAFILES Tablespace will be initialized if it already exists (N).
SCHEMAS List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SQLFILE Write all the SQL DDL to a specified file.
STATUS Frequency (secs) job status is to be monitored where the default (0)
will show new status when available.
STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION Action to take if imported object already exists.
Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES Identifies a list of tables to import.
TABLESPACES Identifies a list of tablespaces to import.
TRANSFORM Metadata transform to apply to applicable objects.
Valid transform keywords: SEGMENT_ATTRIBUTES,
STORAGE, OID, and PCTSPACE.
TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
Only valid in NETWORK_LINK mode import operations.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.
Only valid for NETWORK_LINK and SQLFILE.

The following commands are valid while in interactive mode.


Note: abbreviations are allowed

Command Description (Default)


CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
START_JOB=SKIP_CURRENT will start the job after skipping
any action which was in progress when job was stopped.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS[=interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of
the Data Pump job.

EXPORT (exp) Parameters:


[oracle2@www 10.2.0]$ exp help=y
You can let Export prompt you for parameters by entering the EXP command followed by your
username/password:
Example: EXP SCOTT/TIGER
Or, you can control how Export runs by entering the EXP command followed by various
arguments. To specify parameters, you use keywords:

Format: EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)


Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword Description (Default)

USERID username/password
BUFFER size of data buffer
FILE output files (EXPDAT.DMP)
COMPRESS import into one extent (Y)
GRANTS export grants (Y)
INDEXES export indexes (Y)
DIRECT direct path (N)
LOG log file of screen output
ROWS export data rows (Y)
CONSISTENT cross-table consistency (N)
FULL export entire file (N)
OWNER list of owner usernames
TABLES list of table names
RECORDLENGTH length of IO record
INCTYPE incremental export type
RECORD track incr. export (Y)
TRIGGERS export triggers (Y)
STATISTICS analyze objects (ESTIMATE)
PARFILE parameter filename
CONSTRAINTS export constraints (Y)

OBJECT_CONSISTENT transaction set to read only during object export (N)


FEEDBACK display progress every x rows (0)
FILESIZE maximum size of each dump file
FLASHBACK_SCN SCN used to set session snapshot back to
FLASHBACK_TIME time used to get the SCN closest to the specified time
QUERY select clause used to export a subset of a table
RESUMABLE suspend when a space related error is encountered(N)
RESUMABLE_NAME text string used to identify resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
TTS_FULL_CHECK perform full or partial dependency check for TTS
VOLSIZE number of bytes to write to each tape volume
TABLESPACES list of tablespaces to export
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TEMPLATE template name which invokes iAS mode export

IMPORT (imp) Parameters:


[oracle2@www 10.2.0]$ imp help=y
You can let Import prompt you for parameters by entering the IMP command followed by your
username/password:
Example: IMP SCOTT/TIGER
Or, you can control how Import runs by entering the IMP command followed by various
arguments. To specify parameters, you use keywords:
Format: IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword Description (Default)

USERID username/password
BUFFER size of data buffer
FILE input files (EXPDAT.DMP)
SHOW just list file contents (N)
IGNORE ignore create errors (N)
GRANTS import grants (Y)
INDEXES import indexes (Y)
ROWS import data rows (Y)
LOG log file of screen output
FULL import entire file (N)
FROMUSER list of owner usernames
TOUSER list of usernames
TABLES list of table names
RECORDLENGTH length of IO record
INCTYPE incremental import type
COMMIT commit array insert (N)
PARFILE parameter filename
CONSTRAINTS import constraints (Y)
DESTROY overwrite tablespace data file (N)
INDEXFILE write table/index info to specified file
SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)
FEEDBACK display progress every x rows(0)
TOID_NOVALIDATE skip validation of specified type ids
FILESIZE maximum size of each dump file
STATISTICS import precomputed statistics (always)
RESUMABLE suspend when a space related error is encountered(N)
RESUMABLE_NAME text string used to identify resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
COMPILE compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION import streams general metadata (Y)
STREAMS_INSTANTIATION import streams instantiation metadata (N)
VOLSIZE number of bytes in file on each volume of a file on tape

The following keywords only apply to transportable tablespaces


TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
TABLESPACES tablespaces to be transported into database
DATAFILES datafiles to be transported into database
TTS_OWNERS users that own data in the transportable tablespace set

Data Pump Export (expdp) and Data Pump Import (impdp)

Oracle introduced Data Pump in Oracle Database 10g Release 1. This Oracle technology enables very high-speed transfer of data from one database to another. Data Pump provides two utilities:
 Data Pump Export, which is invoked with the expdp command.
 Data Pump Import, which is invoked with the impdp command.

The two utilities have a similar look and feel to the pre-Oracle 10g export and import utilities (exp and imp, respectively) but are completely separate: dump files generated by the original export utility (exp) cannot be imported by the Data Pump import utility (impdp), and vice versa.
Data Pump Export (expdp) and Data Pump Import (impdp) are server-based rather than client-based, as is the case for the original export (exp) and import (imp). Because of this, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires that a directory object, mapped to a file system directory, be specified when invoking a Data Pump export or import.
It is for this reason, and for convenience, that a directory object should be created before using the Data Pump export or import utilities.
For example, to create a directory object named expdp_dir located at /u01/backup/exports, enter the following SQL statement:
SQL> create directory expdp_dir as '/u01/backup/exports';
Then grant read and write permissions to the users who will be performing the Data Pump export and import:
SQL> grant read, write on directory expdp_dir to system, user1, user2, user3;
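To confirm that the directory object exists and points at the intended path, the DBA_DIRECTORIES view can be queried; a quick sanity check, assuming the object created above:

SQL> select directory_name, directory_path
     from dba_directories
     where directory_name = 'EXPDP_DIR';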

Invoking Data Pump Export


Full Export Mode
You can invoke Data Pump export from the command line, and export parameters can be specified directly on the command line. A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. This mode requires the EXP_FULL_DATABASE role. Shown below is an example:

$ expdp system/<password> DIRECTORY=exp_dir DUMPFILE=expfull.dmp FULL=y LOGFILE=expfull.log

Schema Export Mode
Schema export mode is invoked using the SCHEMAS parameter. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema. If you have the EXP_FULL_DATABASE role, you can export several schemas in one go. Optionally, you can include the system privilege grants as well.

$ expdp hr/hr DIRECTORY=exp_dir DUMPFILE=schema_exp.dmp SCHEMAS=hr,sh,oe

Table Export Mode
This export mode is specified using the TABLES parameter. In this mode, only the specified tables, partitions, and their dependent objects are exported. If you do not have the EXP_FULL_DATABASE role, you can export only tables in your own schema, and you can only specify tables from a single schema.

$ expdp hr/hr DIRECTORY=exp_dir DUMPFILE=tables_exp.dmp TABLES=employees,jobs,departments

Invoking Data Pump Import
Data Pump import can be invoked from the command line, and import parameters can be specified directly on the command line.

Full Import Mode
The full import mode loads the entire contents of the source (export) dump file into the target database. You must have been granted the IMP_FULL_DATABASE role on the target database. Data Pump import is invoked using the impdp command with the FULL parameter specified on the command line.

$ impdp system/<password> DIRECTORY=exp_dir DUMPFILE=expfull.dmp FULL=y LOGFILE=impfull.log

Schema Import Mode
Schema import mode is invoked using the SCHEMAS parameter. Only the contents of the specified schemas are loaded into the target database. The source dump file can be a full, schema-mode, table-mode, or tablespace-mode export file. If you have the IMP_FULL_DATABASE role, you can specify a list of schemas to load into the target database.

$ impdp hr/hr DIRECTORY=exp_dir DUMPFILE=expfull.dmp SCHEMAS=hr,sh,oe

Table Import Mode
This import mode is specified using the TABLES parameter. In this mode, only the specified tables, partitions, and their dependent objects are imported. If you do not have the IMP_FULL_DATABASE role, you can import only tables into your own schema.

$ impdp hr/hr DIRECTORY=exp_dir DUMPFILE=expfull.dmp TABLES=employees,jobs,departments

Oracle Data Pump

What is Oracle Data Pump?


Oracle Data Pump is a new feature of Oracle Database 11g that provides high speed, parallel,
bulk data and metadata movement of Oracle database contents. A new public interface package,
DBMS_DATAPUMP, provides a server-side infrastructure for fast data and metadata movement.
In Oracle Database 11g, new Export (expdp) and Import (impdp) clients that use this interface
have been provided. Oracle recommends that customers use these new Data Pump Export and
Import clients rather than the Original Export and Import clients, since the new utilities have
vastly improved performance and greatly enhanced functionality.

Is Data Pump a feature or an option of Oracle 11g?


Data Pump is a fully integrated feature of Oracle Database 11g. Data Pump is installed
automatically during database creation and database upgrade.

What platforms is Data Pump provided on?


Data Pump is available in Oracle Database 11g Standard Edition, Enterprise Edition, and Personal Edition. However, the parallel capability is only available in Enterprise Edition. Data Pump is included on all the platforms supported by Oracle 11g, including Unix, Linux, Windows NT, Windows 2000, and Windows XP.

What are the system requirements for Data Pump?


The Data Pump system requirements are the same as the standard Oracle Database 11g
requirements. Data Pump doesn’t need a lot of additional system or database resources, but the
time to extract and treat the information will be dependent on the CPU and memory available on
each machine. If system resource consumption becomes an issue while a Data Pump job is
executing, the job can be dynamically throttled to reduce the number of execution threads.

What is the performance gain of Data Pump Export versus Original Export?
Using the Direct Path method of unloading, a single stream of data unload is about 2 times faster
than original Export because the Direct Path API has been modified to be even more efficient.
Depending on the level of parallelism, the level of improvement can be much more.

What is the performance gain of Data Pump Import versus Original Import?
A single stream of data load is 15-45 times faster than Original Import. The reason it is so much faster
is that Conventional Import uses only conventional mode inserts, whereas Data Pump Import
uses the Direct Path method of loading. As with Export, the job can be parallelized for even more
improvement.

Does Data Pump require special tuning to attain performance gains?


No, Data Pump requires no special tuning. It runs optimally “out of the box”. Original Export
and (especially) Import require careful tuning to achieve optimum results.

Can you adjust the level of parallelism dynamically for more or less resource consumption?
Yes, you can dynamically throttle the number of threads of execution throughout the lifetime of
the job. There is an interactive command mode where you can adjust the level of parallelism. So,
for example, you can start up a job during the day with a PARALLEL=2, and then increase it at
night to a higher level.
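As a sketch, assuming a schema export running under the default job name SYS_EXPORT_SCHEMA_01 was started with PARALLEL=2, the worker count could be raised later by attaching to the job in interactive mode:

$ expdp system/<password> ATTACH=SYS_EXPORT_SCHEMA_01
Export> PARALLEL=8
Export> CONTINUE_CLIENT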

Does Data Pump support all data types?


Yes, all the Oracle database data types are supported via Data Pump’s two data movement
mechanisms, Direct Path and External Tables.

What kind of object selection capability is available with Data Pump?


With Data Pump, there is much more flexibility in selecting objects for unload and load
operations. You can now unload any subset of database objects (such as functions, packages, and
procedures) and reload them on the target platform. Almost all database object types can be
excluded or included in an operation using the new Exclude and Include parameters.
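For example, hedged command lines using EXCLUDE and INCLUDE; the schema, directory, and file names are illustrative, and note that EXCLUDE and INCLUDE cannot be combined in the same job:

$ expdp hr/hr DIRECTORY=exp_dir DUMPFILE=hr_no_indexes.dmp SCHEMAS=hr EXCLUDE=INDEX,STATISTICS
$ expdp hr/hr DIRECTORY=exp_dir DUMPFILE=hr_code_only.dmp SCHEMAS=hr INCLUDE=FUNCTION,PROCEDURE,PACKAGE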

Is it necessary to use the Command line interface or is there a GUI that you can use?
You can use either the command-line interface or the Oracle Enterprise Manager web-based GUI.

Can I move a dump file set across platforms, such as from Sun to HP?
Yes, Data Pump handles all the necessary compatibility issues between hardware platforms and
operating systems.

Can I take 1 dump file set from my source database and import it into multiple databases?
Yes, a single dump file set can be imported into multiple databases.
You can also just import different subsets of the data out of that single dump file set.
Is Oracle Data Pump certified against Apps11i?
Yes, Oracle Data Pump supports Apps11i.

Is there a way to estimate the size of an export job before it gets underway?
Yes, you can use the ESTIMATE_ONLY parameter to see how much disk space is required for the job's dump file set before you start the operation.
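For instance, a hedged estimate-only run; no dump file is written, so DUMPFILE is omitted, and the names used are illustrative:

$ expdp hr/hr DIRECTORY=exp_dir SCHEMAS=hr ESTIMATE_ONLY=y ESTIMATE=BLOCKS LOGFILE=estimate.log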

Can I monitor a Data Pump Export or Import job while the job is in progress?
Yes, jobs can be monitored from any location while they are running. Clients may also detach from an executing job without affecting it.

If a job is stopped either voluntarily or involuntarily, can I restart it?


Yes, every Data Pump job creates a Master Table in which the entire record of the job is
maintained. The Master Table is the directory to the job, so if a job is stopped for any reason, it
can be restarted at a later point in time, without losing any data.

Does Data Pump give me the ability to manipulate the Data Definition Language (DDL)?
Yes, with Data Pump, it is now possible to change the definition of some objects as they are
created at import time. For example, you can remap the source datafile name to the target datafile
name in all DDL statements where the source datafile is referenced. This is really useful if you
are moving across platforms with different file system syntax.
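A hedged sketch combining the remap parameters described above; all source and target names are illustrative. Because REMAP_DATAFILE values need quoting, a parameter file is the easiest place for them:

$ impdp system/<password> PARFILE=remap_imp.par

... where remap_imp.par contains:

DIRECTORY=exp_dir
DUMPFILE=expfull.dmp
REMAP_SCHEMA=hr:hr_test
REMAP_TABLESPACE=users:users_test
REMAP_DATAFILE='/u01/oradata/db1/users01.dbf':'/u02/oradata/db2/users01.dbf'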

Is Network Mode supported on Data Pump?


Yes, Data Pump Export and Import both support a network mode in which the job's source is a remote Oracle instance. Unloading the data (with Export) and loading the data (with Import) overlap, so those processes don't have to be serialized. A database link is used for the network. You don't have to worry about allocating file space because there are no intermediate dump files.
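As a sketch, assuming a database link named source_db already exists in the target database, a network-mode import needs no dump file at all (a directory object is still required for the log file):

$ impdp hr/hr DIRECTORY=exp_dir NETWORK_LINK=source_db SCHEMAS=hr LOGFILE=net_imp.log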

Does Data Pump support Flashback?


Yes, Data Pump supports the Flashback infrastructure, so you can perform an export and get a
dump file set that is consistent with a specified point in time or SCN.
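For example, a hedged export consistent as of an SCN, and a time-based variant via a parameter file; the values are illustrative, and the TO_TIMESTAMP expression is easiest to quote inside a parameter file:

$ expdp hr/hr DIRECTORY=exp_dir DUMPFILE=hr_scn.dmp SCHEMAS=hr FLASHBACK_SCN=1234567
$ expdp hr/hr PARFILE=flashback.par

... where flashback.par contains:

DIRECTORY=exp_dir
DUMPFILE=hr_time.dmp
SCHEMAS=hr
FLASHBACK_TIME="TO_TIMESTAMP('2009-05-01 02:00:00','YYYY-MM-DD HH24:MI:SS')"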

Can I still use Original Export? Do I have to convert to Data Pump Export?
An Oracle9i compatible Export that operates against Oracle Database 11g will ship with Oracle
11g, but it does not export Oracle Database 11g features. Also, Data Pump Export has new
syntax and a new client executable, so Original Export scripts will need to change. Oracle
recommends that customers convert to use the Oracle Data Pump Export.
How do I import an old dump file into Oracle 10g? Can I use Original Import or do I have
to convert to Data Pump Import?
Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through
Oracle9i dump files will be able to be loaded into Oracle 10g and later. Data Pump Import can
only read Oracle Database 11g (and later) Data Pump Export dump files. Data Pump Import has
new syntax and a new client executable, so Original Import scripts will need to change. Oracle
recommends that customers convert to use the Oracle Data Pump Import.

When would I use SQL*Loader instead of Data Pump Export and Import?
You would use SQL*Loader to load data from external files into tables of an Oracle database.
Many customers use SQL*Loader on a daily basis to load files (e.g. financial feeds) into their
databases. Data Pump Export and Import may be used less frequently, but for very important
tasks, such as migrating between platforms, moving data between development, test, and
production databases, logical database backup, and for application deployment throughout a
corporation.

When would I use Transportable Tablespaces instead of Data Pump Export and Import?
You would use Transportable Tablespaces when you want to move an entire tablespace of data
from one Oracle database to another. Transportable Tablespaces allows Oracle data files to be
unplugged from a database, moved or copied to another location, and then plugged into another
database. Moving data using Transportable Tablespaces can be much faster than performing
either an export or import of the same data, because transporting a tablespace only requires the
copying of datafiles and integrating the tablespace dictionary information. Even when
transporting a tablespace, Data Pump Export and Import are still used to handle the extraction
and recreation of the metadata for that tablespace.
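A minimal sketch of that flow using Data Pump for the metadata step; the tablespace, datafile, and directory names are assumptions:

SQL> alter tablespace example read only;
$ expdp system/<password> DIRECTORY=exp_dir DUMPFILE=tts_example.dmp TRANSPORT_TABLESPACES=example TRANSPORT_FULL_CHECK=y
(copy example01.dbf and tts_example.dmp to the target system, then:)
$ impdp system/<password> DIRECTORY=exp_dir DUMPFILE=tts_example.dmp TRANSPORT_DATAFILES='/u02/oradata/db2/example01.dbf'
SQL> alter tablespace example read write;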

Import Export FAQ


 1 What is import/export and why does one need it?
 2 How does one use the import/export utilities?
 3 Can one export a subset of a table?
 4 Can one monitor how fast a table is imported?
 5 Can one import tables to a different tablespace?
 6 Does one need to drop/ truncate objects before importing?
 7 Can one import/export between different versions of Oracle?
 8 Can one export to multiple files?/ Can one beat the Unix 2 Gig limit?
 9 How can one improve Import/ Export performance?
 10 What are the common Import/ Export problems?

What is import/export and why does one need it?


Oracle's export (exp) and import (imp) utilities are used to perform logical database backup and
recovery. When exporting, database objects are dumped to a binary file which can then be
imported into another Oracle database.
These utilities can be used to move data between different machines, databases or schema.
However, as they use a proprietary binary file format, they can only be used between Oracle
databases. One cannot export data and expect to import it into a non-Oracle database.

Various parameters are available to control what objects are exported or imported. To get a list of
available parameters, run the exp or imp utilities with the help=yes parameter.

The export/import utilities are commonly used to perform the following tasks:

 Backup and recovery (small databases only, say < 50 GB; if bigger, use RMAN instead)
 Move data between Oracle databases on different platforms (for example from Solaris to
Windows)
 Reorganization of data/ eliminate database fragmentation (export, drop and re-import
tables)
 Upgrade databases from extremely old versions of Oracle (when in-place upgrades are
not supported by the Database Upgrade Assistant anymore)
 Detect database corruption. Ensure that all the data can be read
 Transporting tablespaces between databases
 Etc.

NOTE: It is generally advised not to use exports as the only means of backing up a database. Physical backup methods (for example, RMAN) are normally much quicker and support point-in-time recovery (apply archivelogs after recovering a database). Also, exp/imp is not practical for large database environments.

How does one use the import/export utilities?


Look for the "imp" and "exp" executables in your $ORACLE_HOME/bin directory. One can run
them interactively, using command line parameters, or using parameter files. Look at the imp/exp
parameters before starting. These parameters can be listed by executing the following
commands: "exp help=yes" or "imp help=yes".

The following examples demonstrate how the imp/exp utilities can be used:

exp scott/tiger file=emp.dmp log=emp.log tables=emp rows=yes indexes=no


exp scott/tiger file=emp.dmp tables=(emp,dept)
imp scott/tiger file=emp.dmp full=yes
imp scott/tiger file=emp.dmp fromuser=scott touser=scott tables=dept

Using a parameter file:

exp userid=scott/tiger@orcl parfile=export.txt

... where export.txt contains:

BUFFER=100000
FILE=account.dmp
FULL=n
OWNER=scott
GRANTS=y
COMPRESS=y

NOTE: If you do not like command line utilities, you can import and export data with the
"Schema Manager" GUI that ships with Oracle Enterprise Manager (OEM).

Can one export a subset of a table?


From Oracle 8i one can use the QUERY= export parameter to selectively unload a subset of the
data from a table. You may need to escape special chars on the command line, for example:
query=\"where deptno=10\". Look at these examples:

exp scott/tiger tables=emp query="where deptno=10"


exp scott/tiger file=abc.dmp tables=abc query=\"where sex=\'f\'\" rows=yes

Can one monitor how fast a table is imported?


If you need to monitor how fast rows are imported from a running import job, try one of the
following methods:

Method 1:

select substr(sql_text, instr(sql_text, 'INTO "'), 30) table_name,
       rows_processed,
       round((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60, 1) minutes,
       trunc(rows_processed / ((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60)) rows_per_min
  from sys.v_$sqlarea
 where sql_text like 'INSERT %INTO "%'
   and command_type = 2
   and open_versions > 0;

For this to work one needs to be on Oracle 7.3 or higher (7.2 might also be OK). If the import
has more than one table, this statement will only show information about the current table being
imported.

Contributed by Osvaldo Ancarola, Bs. As. Argentina.

Method 2:

Use the FEEDBACK=n import parameter. This parameter tells imp to display a dot for every n rows imported. For example, FEEDBACK=1000 will show a dot after every 1000 rows.

Can one import tables to a different tablespace?


Oracle offers no parameter to specify a different tablespace to import data into. Objects will be
re-created in the tablespace they were originally exported from. One can alter this behaviour by
following one of these procedures:

Pre-create the table(s) in the correct tablespace (a worked example follows this list):

 Import the dump file using the INDEXFILE= option


 Edit the indexfile. Remove remarks and specify the correct tablespaces.
 Run this indexfile against your database, this will create the required tables in the
appropriate tablespaces
 Import the table(s) with the IGNORE=Y option.
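A hedged walkthrough of those steps; the dump file, script, and user names are illustrative:

imp scott/tiger file=emp.dmp indexfile=emp_ddl.sql full=y
(edit emp_ddl.sql: remove the REM remarks from the CREATE TABLE statements and change the TABLESPACE clauses, then run it)
SQL> @emp_ddl.sql
imp scott/tiger file=emp.dmp full=y ignore=y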

Change the default tablespace for the user (a SQL sketch follows this list):

 Revoke the "UNLIMITED TABLESPACE" privilege from the user


 Revoke the user's quota from the tablespace from where the object was exported. This
forces the import utility to create tables in the user's default tablespace.
 Make the tablespace to which you want to import the default tablespace for the user
 Import the table
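A minimal SQL sketch of those steps, run as a DBA; the user and tablespace names are illustrative:

SQL> revoke unlimited tablespace from scott;
SQL> alter user scott quota 0 on old_ts;
SQL> alter user scott default tablespace new_ts quota unlimited on new_ts;
(then import the table, e.g. imp system/<password> file=emp.dmp fromuser=scott touser=scott tables=emp)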

Does one need to drop/ truncate objects before importing?


Before one imports rows into already populated tables, one needs to truncate or drop these tables
to get rid of the old data. If not, the new data will be appended to the existing tables. One must
always DROP existing Sequences before re-importing. If the sequences are not dropped, they
will generate numbers inconsistent with the rest of the database.

Note: It is also advisable to drop indexes before importing to speed up the import process.
Indexes can easily be recreated after the data was successfully imported.

Can one import/export between different versions of Oracle?


Different versions of the import utility are upwards compatible. This means that one can take an
export file created from an old export version, and import it using a later version of the import
utility. This is quite an effective way of upgrading a database from one release of Oracle to the
next.

Oracle also ships some previous catexpX.sql scripts that can be executed as user SYS enabling
older imp/exp versions to work (for backwards compatibility). For example, one can run
$ORACLE_HOME/rdbms/admin/catexp7.sql on an Oracle 8 database to allow the Oracle 7.3
exp/imp utilities to run against an Oracle 8 database.

Can one export to multiple files? / Can one beat the Unix 2 Gig limit?
From Oracle8i, the export utility supports multiple output files. This feature enables large exports
to be divided into files whose sizes will not exceed any operating system limits (FILESIZE=
parameter). When importing from multi-file export you must provide the same filenames in the
same sequence in the FILE= parameter. Look at this example:

exp SCOTT/TIGER FILE=D:\F1.dmp,E:\F2.dmp FILESIZE=10m LOG=scott.log

Use the following technique if you use an Oracle version prior to 8i:

Create a compressed export on the fly. Depending on the type of data, you probably can export
up to 10 gigabytes to a single file. This example uses gzip. It offers the best compression I know
of, but you can also substitute it with zip, compress or whatever.

# create a named pipe


mknod exp.pipe p
# read the pipe - output to zip file in the background
gzip < exp.pipe > scott.exp.gz &
# feed the pipe
exp userid=scott/tiger file=exp.pipe ...

Contributed by Jared Still

Import directly from a compressed export:

# create a named pipe


mknod imp_pipe p
# read the zip file and output to pipe
gunzip < exp_file.dmp.gz > imp_pipe &
# feed the pipe
imp system/pwd@sid file=imp_pipe log=imp_pipe.log ...

Contributed by Blaise BIBOUE.

How can one improve Import/ Export performance?


EXPORT (an example command follows this list):

 Set the BUFFER parameter to a high value (e.g. 2Mb -- entered as an integer "2000000")
 Set the RECORDLENGTH parameter to a high value (e.g. 64Kb -- entered as an integer
"64000")
 Use DIRECT=yes (direct mode export)
 Stop unnecessary applications to free-up resources for your job.
 If you run multiple export sessions, ensure they write to different physical disks.
 DO NOT export to an NFS mounted filesystem. It will take forever.
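Putting these export tips together, a hedged example of a direct-path export; the file and schema names are illustrative, and BUFFER only matters for conventional-path (non-direct) exports:

exp system/<password> OWNER=scott FILE=big_exp.dmp LOG=big_exp.log DIRECT=y RECORDLENGTH=64000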

IMPORT (a sample parameter file appears below):
 Create an indexfile so that you can create indexes AFTER you have imported data. Do
this by setting INDEXFILE to a filename and then import. No data will be imported but a
file containing index definitions will be created. You must edit this file afterwards and
supply the passwords for the schemas on all CONNECT statements.
 Place the file to be imported on a separate physical disk from the oracle data files
 Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the
init$SID.ora file
 Set the LOG_BUFFER to a big value and restart oracle.
 Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
 Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback
segments offline (except the SYSTEM rollback segment of course). The rollback
segment must be as big as your biggest table (I think?)
 Use COMMIT=N in the import parameter file if you can afford it
 Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics
 Remember to run the indexfile previously created

Contributed by Petter Henrik Hansen.
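Reflecting several of the import tips above, a hedged example; the file and user names are illustrative:

imp system/<password> parfile=imp_tuning.txt

... where imp_tuning.txt contains:

FILE=big_exp.dmp
LOG=big_imp.log
FROMUSER=scott
TOUSER=scott
BUFFER=2000000
COMMIT=N
STATISTICS=NONE
IGNORE=Y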

What are the common Import/ Export problems?


 ORA-00001: Unique constraint (...) violated

You are importing duplicate rows. Use IGNORE=YES to skip tables that already exist
(imp will give an error if the object is re-created).
 ORA-01555: Snapshot too old

Ask your users to STOP working while you are exporting or try using parameter
CONSISTENT=NO
 ORA-01562: Failed to extend rollback segment

Create bigger rollback segments or set parameter COMMIT=Y while importing


 IMP-00015: Statement failed ... object already exists...

Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might
end up with duplicate rows.
Retrieved from "http://www.orafaq.com/wiki/Import_Export_FAQ"

What's the relationship between database and instance?


 An instance can mount and open one and only one database.
 Normally a database is mounted and opened by one instance.
 When using RAC, a database may be mounted and opened by many instances.
