Oracle Data Pump is a new feature in Oracle Database 10g that enables very high-
speed movement of data and metadata between databases. This technology is the
basis for Oracle's new data movement utilities, Data Pump Export and Data Pump
Import.
One very prominent feature of Data Pump is the ability to restart jobs. This is
extremely valuable to the DBA who is responsible for moving large amounts of data,
especially for big jobs that take a long time to complete. A Data Pump job can be
restarted, with no data loss or corruption, after an unexpected failure or after a
STOP_JOB command is issued from the Export or Import interactive mode.
A very common reason to restart a Data Pump job is that a failure, such as a power
failure, an internal error, or an accidental instance bounce, prevented the job from
succeeding. Typical failures also stem from system resource issues, such as
insufficient dump file space (in the Data Pump Export case) or insufficient
tablespace resources (in the Data Pump Import case). Upon Data Pump job failure,
the DBA or user can intervene to correct the problem. A Data Pump restart command
(START_JOB) can then be issued to continue the job from the point of failure.
This Technical Note describes the Data Pump restart capability with two examples,
using the Data Pump Export and Import command-line utilities, respectively. In both
examples, it is necessary to define a directory object, DATA_PUMP_DIR, for the dump
files. Furthermore, the Data Pump user, which in our examples is SYSTEM, needs to
hold the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. Restart also works for
unprivileged users. (See Oracle Database Utilities 10g Release 1 (10.1) for
additional information about Data Pump and its use of directory objects.)
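The directory object can be created with a statement like the following (the path
shown is taken from the dump file locations that appear later in this note;
substitute your own location). An unprivileged user would additionally need READ
and WRITE granted on the directory object.

SQL> create directory data_pump_dir as '/work1/private/oracle/rdbms/log';

The export in the first example is started with a command of this form,
reconstructed from the restart banner shown later in this note (the password is
masked):

> expdp system/******** schemas=hr directory=data_pump_dir logfile=example1.log
  filesize=300000 dumpfile=example1.dmp job_name=example1

Because FILESIZE limits the dump file to roughly 300 KB, which turns out to be too
small for the HR schema, the job soon exhausts its dump file space: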
ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes
Job "SYSTEM"."EXAMPLE1" stopped due to fatal error at 06:38
>
Our Export job (EXAMPLE1) has encountered a fatal error, and the client has returned
to the operating system prompt (>).
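From another session, we can examine the job state; one way is to query the
DBA_DATAPUMP_JOBS view, for example:

SQL> select job_name, state from dba_datapump_jobs;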
JOB_NAME STATE
------------------------------ ------------------------------
EXAMPLE1 NOT RUNNING
In this simple example, it's quite obvious what the problem is. The dump file we
specified is too small for the HR schema. We can determine the reason for the error
by looking at the client output that was displayed on our screen or the Data Pump
log file.
To fix this problem, we need to add a second dump file. Let's attach to our job
using the "EXAMPLE1" name. When we successfully attach to the job, its status and
other useful information are displayed.
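Attaching is done from the operating system prompt with a command of this form (the
password is masked, as in the restart banner below):

> expdp system/******** attach=example1

Once attached, we add the second dump file with the interactive ADD_FILE command: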
Export>add_file=hr1.dmp
We can then issue the STATUS command and see that the additional dump file is now
displayed.
Export>status
Job: EXAMPLE1
Operation: EXPORT
Mode: SCHEMA
State: IDLING
Bytes Processed: 55,944
Percent Done: 99
Current Parallelism: 1
Job Error Count: 0
Dump File: /work1/private/oracle/rdbms/log/example1.dmp
size: 303,104
bytes written: 163,840
Dump File: /work1/private/oracle/rdbms/log/hr1.dmp
bytes written: 4,096
Finally, we issue the CONTINUE_CLIENT command. The job EXAMPLE1 will now
resume.
Export>continue_client
Export> Job EXAMPLE1 has been reopened at Tuesday, 06 July, 2004 6:38
Restarting "SYSTEM"."EXAMPLE1": system/******** schemas=hr
directory=data_pump_dir logfile=example1.log filesize=300000
dumpfile=example1.dmp job_name=EXAMPLE1
Master table "SYSTEM"."EXAMPLE1" successfully loaded/unloaded
***************************************************************************
Dump file set for SYSTEM.EXAMPLE1 is:
/work1/private/oracle/rdbms/log/example1.dmp
/work1/private/oracle/rdbms/log/hr1.dmp
Job "SYSTEM"."EXAMPLE1" completed with 1 error(s) at 06:38
Now that our target tablespace has been created, we are ready to perform the Data
Pump Import job.
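A command of this general form starts the import (the options shown are inferred
from the job attributes, dump file, and log file names that appear in the status
output below; the password is masked):

> impdp system/******** schemas=hr directory=data_pump_dir dumpfile=example2.dmp
  logfile=example2imp.log job_name=example2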
Our Import job has entered the resumable wait state and appears to hang. The job
will stay in a resumable wait until it is stopped or until the resumable wait
period expires, which by default is two hours. At this juncture, the DBA can
intervene by adding a data file to the EXAMPLE2 tablespace. One good reason to stop
the job is if the DBA has to perform maintenance on the disk subsystem in
conjunction with adding the data file. In the general case it may not be necessary
to stop the job.
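While the job is suspended, the resumable condition can be observed from another
session; for example, a query against the DBA_RESUMABLE view (the standard view for
resumable-space sessions) shows the waiting statement and the reason:

SQL> select name, status, error_msg from dba_resumable;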
In our example, we will stop the job with a Control-C prior to the resumable wait
expiration.
^C
Import>stop_job=immediate
Step 4: Add a File to the Tablespace
We can invoke SQL*Plus and add a file to the EXAMPLE2 tablespace.
SQL>alter tablespace example2 add datafile '/work1/private/rdbms/dbs/example2b.f'
size 1m autoextend on maxsize 50m;
Step 5: Attach to the Job
We are now ready to attach to our job and restart our import. Note that we attach to
the job by job_name; in this case EXAMPLE2.
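As before, the attach command takes this form (password masked):

> impdp system/******** attach=example2

Attaching displays the job status; the worker portion of that display follows: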
Worker 1 Status:
State: UNDEFINED
Object Schema: HR
Object Name: COUNTRIES
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 15
Worker Parallelism: 1
Now we can start the job again. This time, we'll use START_JOB.
Import> start_job
Import> status
Job: EXAMPLE2
Operation: IMPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 2,791,768
Percent Done: 99
Current Parallelism: 1
Job Error Count: 0
Dump File: /work1/private/oracle/rdbms/log/example2.dmp
Worker 1 Status:
State: EXECUTING
Object Type: SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Worker Parallelism: 1
When the job completes, you will be able to check the example2imp.log file for job
status and other information.
In Example 2, we demonstrated how to restart a Data Pump Import job. It's important
to note that normally it would not be necessary to stop the job (in Step 3) in
order to add the data file. We could have simply added the file to the tablespace
from another session, in which case the job would have resumed automatically. In
other words, we could have skipped Steps 3, 5, 6, 7, and 8.
Summary
If you use Data Pump and experience a failure, you may be able to correct the
problem easily and then use the Data Pump restart capability, without any loss of
data and without having to redo the entire operation.