
RMAN Basics

Oracle provides a tool for database backup and restore operations called RMAN (Recovery Manager).
Recovery Manager is a client/server application that uses database server sessions to perform backup
and recovery. It stores metadata about its operations in the control file of the target database and,
optionally, in a recovery catalog schema in an Oracle database.
Difference between RMAN and Traditional backup methods
RMAN is Oracle's backup and recovery utility. With RMAN, backups become as easy as:
BACKUP DATABASE;
RMAN reduces the complexity of backup and recovery. RMAN can determine what needs to be backed
up or restored.
Why Should We Use RMAN?
Ability to perform incremental backups.
Ability to recover one block of a datafile.
Ability to perform the backup and restore with parallelization.
Ability to automatically delete archived redo logs after they are backed up.
Ability to automatically backup the control file and the SPFILE.
Ability to restart a failed backup without having to start from the beginning.
Ability to verify the integrity of the backup.
Ability to test the restore process without having to actually perform the restore.
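As a quick illustration of the last two abilities, both checks can be run without modifying any datafiles. This is a minimal sketch; the commands reappear in the walkthrough below:
RMAN> BACKUP VALIDATE DATABASE;
RMAN> RESTORE DATABASE VALIDATE;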
Comparison of RMAN Automated and User-Managed Procedures
With user-managed backup and recovery performed through operating system commands, a DBA must manually
keep track of all database files and backups. RMAN performs these same tasks automatically.
Understanding the RMAN Architecture
An RMAN environment comprises the RMAN executable (which can be invoked even from a client machine), the
target database (the database that needs to be backed up), and, optionally, a recovery catalog. If no
recovery catalog is used, backup details are stored in the target database control file.
About the RMAN Repository
The RMAN repository is a set of metadata that RMAN uses to store information about the target database
and its backup and recovery operations. RMAN stores information about:
Backup sets and pieces
Image copies (including archived redo logs)
Proxy copies
The target database schema
Persistent configuration settings
If you start RMAN without specifying either CATALOG or NOCATALOG on the command line, then RMAN
makes no connection to a repository. If you run a command that requires the repository, and if no
CONNECT CATALOG command has been issued yet, then RMAN automatically connects in the default
NOCATALOG mode. After that point, the CONNECT CATALOG command is not valid in the session.
Types of Database Connections
You can connect to the following types of databases.

Target database
RMAN connects you to the target database with the SYSDBA privilege. If you do not have this privilege,
then the connection fails.
Recovery catalog database
This database is optional: you can also use RMAN with the default NOCATALOG option.
Auxiliary database
You can connect to a standby database, duplicate database, or auxiliary instance (a standby instance or a
tablespace point-in-time recovery instance).
Note:
A SYSDBA privilege is not required when connecting to the recovery catalog. The only requirement
is that the RECOVERY_CATALOG_OWNER role be granted to the schema owner.
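A minimal sketch of these connection types from the operating system command line (the net service names prod, catdb, and aux are illustrative):
rman TARGET /
rman TARGET sys/password@prod CATALOG rman/rman@catdb
rman TARGET sys/password@prod AUXILIARY sys/password@aux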
Using Basic RMAN Commands
After you have learned how to connect to a target database, you can immediately begin performing
backup and recovery operations. Use the examples in this section to go through a basic backup and
restore scenario using a test database. These examples assume the following:
The test database is in ARCHIVELOG mode.
You are running in the default NOCATALOG mode.
The RMAN executable is running on the same host as the test database.
Connecting to the Target Database
rman TARGET /
If the database is already mounted or open, then RMAN displays output similar to the following:
Recovery Manager: Release 9.2.0.0.0
connected to target database: RMAN (DBID=1237603294)
Reporting the Current Schema of the Target Database
In this example, you generate a report describing the target datafiles. Run the report schema command
as follows:
RMAN> REPORT SCHEMA;
RMAN displays the datafiles currently in the target database.
Backing Up the Database
In this task, you back up the database to the default disk location. Because you do not specify the format
parameter in this example, RMAN assigns the backup a unique filename.
You can make two basic types of backups: full and incremental.
Making a Full Backup
Run the backup command at the RMAN prompt as follows to make a full backup of the datafiles, control
file, and current server parameter file (if the instance is started with a server parameter file) to the default
device type:
RMAN> BACKUP DATABASE;
Making an Incremental Backup

Incremental backups are a convenient way to conserve storage space because they back up only
database blocks that have changed. RMAN compares the current datafiles to a base backup, also called
a level 0 backup, to determine which blocks to back up.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
Backing Up Archived Logs
Typically, database administrators back up archived logs on disk to a third-party storage medium such as
tape. You can also back up archived logs to disk. In either case, you can delete the input logs
automatically after the backup completes.To back up all archived logs and delete the input logs (from the
primary archiving destination only), run the backup command at the RMAN prompt as follows:
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
Listing Backups and Copies
To list the backup sets and image copies that you have created, run the list command as follows:
RMAN> LIST BACKUP;
To list image copies, run the following command:
RMAN> LIST COPY;
Validating the Restore of a Backup
Check that you are able to restore the backups that you created without actually restoring them. Run the
RESTORE ... VALIDATE command as follows:
RMAN> RESTORE DATABASE VALIDATE;

Types of RMAN Backups
Full Backups
A full backup reads the entire file and copies all blocks into the backup set, only skipping datafile blocks
that have never been used.
About Incremental Backups
An incremental backup copies only those blocks that have changed since a previous backup. You can use RMAN to create
incremental backups of datafiles, tablespaces, or the whole database.
How Incremental Backups Work
Each data block in a datafile contains a system change number (SCN), which is the SCN at which the
most recent change was made to the block. During an incremental backup, RMAN reads the SCN of each
data block in the input file and compares it to the checkpoint SCN of the parent incremental backup.
RMAN reads the entire file every time whether or not the blocks have been used.
The parent backup is the backup that RMAN uses for comparing the SCNs. If the current incremental is a
differential backup at level n, then the parent is the most recent incremental of level n or less. If the
current incremental is a cumulative backup at level n, then the parent is the most recent incremental of
level n-1 or less. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the
parent, then RMAN copies the block.
Multilevel Incremental Backups
RMAN can create multilevel incremental backups. Each incremental level is denoted by an integer, for
example, 0, 1, 2, and so forth. A level 0 incremental backup, which is the base for subsequent incremental
backups, copies all blocks containing data. The only difference between a level 0 backup and a full
backup is that a full backup is never included in an incremental strategy.

If no level 0 backup exists when you run a level 1 or higher backup, RMAN makes a level 0 backup
automatically to serve as the base.
The benefit of performing multilevel incremental backups is that RMAN does not back up all blocks all of
the time.
Differential Incremental Backups
In a differential level n incremental backup, RMAN backs up all blocks that have changed since the most
recent backup at level n or lower.
For example, in a differential level 2 backup, RMAN determines which level 2 or level 1 backup occurred
most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies
all blocks changed since the base level 0 backup. If no level 0 backup is available, RMAN makes a new
base level 0 backup for this file.
Case 1: Implementing a differential incremental backup strategy as a DBA in your organization.
Use the following commands for the incremental level backups:
RMAN> backup incremental level 0 database tag="SUNDAY";
RMAN> backup incremental level 3 database tag="MONDAY";
RMAN> backup incremental level 3 database tag="TUESDAY";
RMAN> backup incremental level 3 database tag="WEDNESDAY";
RMAN> backup incremental level 2 database tag="THURSDAY";
RMAN> backup incremental level 3 database tag="FRIDAY";
RMAN> backup incremental level 3 database tag="SATURDAY";
Backup example (you can view your incremental backup details by using the following query):
select incremental_level, incremental_change#, checkpoint_change#, blocks from v$backup_datafile;
Result of the above query:
INC_LEVEL  INC_CHANGE#  CHECKPOINT_CHANGE#  BLOCKS
        0            0              271365   59595
        3       271365              271369       2
        3       271369              271371       1
        3       271371              271374       2
        2       271365              271378       2
        3       271378              271380       1
        3       271380              271383       2
Cumulative Incremental Backups
RMAN provides an option to make cumulative incremental backups at level 1 or greater. In a cumulative
level n backup, RMAN backs up all the blocks used since the most recent backup at level n-1 or lower.
For example, in cumulative level 2 backups, RMAN determines which level 1 backup occurred most
recently and copies all blocks changed since that backup. If no level 1 backup is available, RMAN copies
all blocks changed since the base level 0 backup.
Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need
one incremental backup from any particular level. Cumulative backups require more space and time than
differential backups, however, because they duplicate the work done by previous backups at the same
level.
Case 2: Implementing a cumulative incremental backup strategy as a DBA in your organization.
Use the following commands for the cumulative level backups:
backup incremental level=0 database tag='base';
backup incremental level=2 cumulative database tag='monday';
backup incremental level=2 cumulative database tag='tuesday';
backup incremental level=2 cumulative database tag='wednesday';
backup incremental level=2 cumulative database tag='thursday';
backup incremental level=2 cumulative database tag='friday';

backup incremental level=2 cumulative database tag='saturday';


backup incremental level=1 cumulative database tag='weekly';
Incremental backup implementation
RMAN determines the incremental SCN for each datafile by finding the backup with the highest checkpoint SCN that:

belongs to the incarnation of the datafile

matches the given file#

is an incremental backup/copy at level N or less, if noncumulative, or

is an incremental backup/copy at level N-1 or less, if cumulative

belongs to an available backup set, if it is a backup (rather than an image copy)


Incremental Backup Strategy
You can implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a
cumulative level 1 backup is taken weekly, and a cumulative level 2 is taken daily. In this scheme, you
never have to apply more than a day's worth of redo for complete recovery. When deciding how often to
take full or level 0 backups, a good rule of thumb is to take a new level 0 whenever 50% or more of the
data has changed. If the rate of change to your database is predictable, then you can observe the size of
your incremental backups to determine when a new level 0 is appropriate. The following query displays
the number of blocks written to a backup set for each datafile with at least 50% of its blocks backed up:
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0 AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;
Compare the number of blocks in differential or cumulative backups to a base level 0 backup. For
example, if you only create level 1 cumulative backups, then when the most recent level 1 backup is
about half of the size of the base level 0 backup, take a new level 0.
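A minimal sketch of the three-level scheme described above (run monthly, weekly, and daily respectively):
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 2 CUMULATIVE DATABASE;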

RMAN: RESTORE Concept
Use the RMAN RESTORE command to restore the following types of files from copies on disk or backups
on other media:
Database (all datafiles)
Tablespaces
Control files
Archived redo logs
Server parameter files
Process of Restore Operations
RMAN automates the procedure for restoring files. When you issue a RESTORE command, RMAN restores
the correct backups and copies to either:
The default location, overwriting the old files with the same name
A new location, which you can specify with the SET NEWNAME command
For example:
If you restore a datafile to its default location, then RMAN restores the file there and overwrites any file
that it finds with the same filename.
If you run a SET NEWNAME command before you restore a file, then RMAN creates a datafile copy with
the name that you specify. For example, assume that you run the following commands (the datafile paths here are illustrative):
RUN
{
SET NEWNAME FOR DATAFILE 'C:\ORADATA\ORCL\USERS01.DBF' TO 'D:\ORADATA\ORCL\USERS01.DBF';
RESTORE DATAFILE 'C:\ORADATA\ORCL\USERS01.DBF';
SWITCH DATAFILE 'C:\ORADATA\ORCL\USERS01.DBF' TO DATAFILECOPY 'D:\ORADATA\ORCL\USERS01.DBF';
}
In this case, RMAN creates a datafile copy of 'C:\ORADATA\ORCL\USERS01.DBF' named 'D:\ORADATA\ORCL\USERS01.DBF' and records it in
the repository. To make the restored copy the current datafile in the control file, run the
SWITCH command so that RMAN considers the restored file as the current database file.
RMAN Recovery: Basic Steps
If possible, make the recovery catalog available to perform the media recovery. If it is not available, then
RMAN uses metadata from the target database control file. The steps below assume that you have backups of the
datafiles and at least one autobackup of the control file.
The generic steps for media recovery using RMAN are as follows:
Place the database in the appropriate state: mounted or open. For example, mount the database when
performing whole database recovery, or open the database when performing online tablespace recovery.
Restore the necessary files using the RESTORE command.
Recover the datafiles using the RECOVER command.

Place the database in its normal state.


Mechanism of Restore and Recovery operation:
The DBA runs the following commands:
RESTORE DATABASE;
RECOVER DATABASE;
The RMAN recovery catalog obtains its metadata from the target database control file. RMAN decides
which backup sets to restore, and which incremental backups and archived logs to use for recovery. A
server session on the target database instance performs the actual work of restore and recovery.
Mechanics of Recovery: Incremental Backups and Redo Logs
RMAN does not need to apply incremental backups to a restored level 0 incremental backup: it can also apply
archived logs. RMAN simply restores the datafiles that it needs from available backups and copies, and
then applies incremental backups to the datafiles if it can and if not applies logs.
How RMAN Searches for Archived Redo Logs During Recovery
If RMAN cannot find an incremental backup, then it looks in the repository for the names of archived redo
logs to use for recovery. Oracle records an archived log in the control file whenever one of the following
occurs:
The archiver process archives a redo log
RMAN restores an archived log
The RMAN COPY command copies a log
The RMAN CATALOG command catalogs a user-managed backup of an archived log
RMAN propagates archived log data into the recovery catalog during resynchronization, classifying
archived logs as image copies. You can view the log information through:
The LIST command
The V$ARCHIVED_LOG control file view
The RC_ARCHIVED_LOG recovery catalog view
During recovery, RMAN looks for the needed logs using the filenames specified in the
V$ARCHIVED_LOG view. If the logs were created in multiple destinations or were generated by the

COPY, CATALOG, or RESTORE commands, then multiple, identical copies of each log sequence number
exist on disk.
If the RMAN repository indicates that a log has been deleted or uncataloged, then RMAN ceases to
consider it as available for recovery. For example, assume that the database archives log 100 to
directories /dest1 and /dest2. The RMAN repository indicates that /dest1/log100.arc and /dest2/log100.arc
exist. If you delete /dest1/log100.arc with the DELETE command, then the repository indicates that only
/dest2/log100.arc is available for recovery.
If the RMAN repository indicates that no copies of a needed log sequence number exist on disk, then
RMAN looks in backups and restores archived redo logs as needed to perform the media recovery. By
default, RMAN restores the archived redo logs to the first local archiving destination specified in the
initialization parameter file. You can run the SET ARCHIVELOG DESTINATION command to specify a
different restore location. If you specify the DELETE ARCHIVELOG option on RECOVER, then RMAN
deletes the archived logs after restoring and applying them. If you also specify MAXSIZE integer on the
RECOVER command, then RMAN staggers the restores so that they consume no more
than integer amount of disk space at a time.
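A minimal sketch of these options, assuming the restored logs should be staged in a scratch directory and removed after they are applied (the path and size are illustrative):
RMAN> RUN
{
SET ARCHIVELOG DESTINATION TO 'E:\TEMP_RESTORE';
RECOVER DATABASE DELETE ARCHIVELOG MAXSIZE 500M;
}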
Incomplete Recovery
RMAN can perform either complete or incomplete recovery. You can specify a time, SCN, or log
sequence number as a limit for incomplete recovery with the SET UNTIL command or with an UNTIL
clause specified directory on the RESTORE and RECOVER commands. After performing incomplete
recovery, you must open the database with the RESETLOGS option.
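A minimal sketch of a time-based incomplete recovery, assuming the database is mounted but not open (the date is illustrative):
RMAN> RUN
{
SET UNTIL TIME "TO_DATE('2005-01-26 12:00:00','YYYY-MM-DD HH24:MI:SS')";
RESTORE DATABASE;
RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;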
Disaster Recovery with a Control File Autobackup
Assume that you lose both the target database and the recovery catalog. All that you have remaining is a
tape with RMAN backups of the target database and archived redo logs. Can you still recover the
database? Yes, assuming that you enabled the control file autobackup feature. In a disaster recovery
situation, RMAN can determine the name of a control file autobackup even without a repository available.
You can then restore this control file, mount the database, and perform media recovery.
About Block Media Recovery
You can also use the RMAN BLOCKRECOVER command to perform block media recovery. Block
media recovery recovers an individual corrupt datablock or set of datablocks within a datafile. In cases
when a small number of blocks require media recovery, you can selectively restore and recover damaged
blocks rather than whole datafiles.
Note: Restrictions of block media recovery:

You can only perform block media recovery with Recovery Manager. No SQL*Plus recovery
interface is available.

You can only perform complete recovery of individual blocks. In other words, you cannot stop
recovery before all redo has been applied to the block.

You can only recover blocks marked media corrupt. The V$DATABASE_BLOCK_CORRUPTION
view indicates which blocks in a file were marked corrupt since the most recent BACKUP, BACKUP ...
VALIDATE, or COPY command was run against the file.
You must have a full RMAN backup. Incremental backups are not allowed.
Blocks that are marked media corrupt are not accessible to users until recovery is complete. Any
attempt to use a block undergoing media recovery results in an error message indicating that the block is
media corrupt.
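A minimal sketch of checking the view mentioned above and then repairing every block it lists:
SQL> SELECT file#, block#, blocks, corruption_type FROM v$database_block_corruption;
RMAN> BLOCKRECOVER CORRUPTION LIST;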
When Block Media Recovery Should Be Used
For example, you may discover the following messages in a user trace file:
ORA-01578: ORACLE data block corrupted (file # 7, block # 3)
ORA-01110: data file 7: '/oracle/oradata/trgt/tools01.dbf'
ORA-01578: ORACLE data block corrupted (file # 2, block # 235)
ORA-01110: data file 2: '/oracle/oradata/trgt/undotbs01.dbf'
You can then specify the corrupt blocks in the BLOCKRECOVER command as follows:
BLOCKRECOVER DATAFILE 7 BLOCK 3 DATAFILE 2 BLOCK 235;
Block Media Recovery When Redo Is Missing
Like datafile media recovery, block media recovery cannot survive a missing or inaccessible archived log.
Whereas datafile media recovery requires an unbroken series of redo changes from the beginning of recovery to
the end, block media recovery only requires an unbroken set of redo changes for the blocks being
recovered.
When RMAN first detects missing or corrupt redo records during block media recovery, it does not
immediately signal an error because the block undergoing recovery may become a newed block later in
the redo stream. When a block is newed, all previous redo for that block becomes irrelevant because the
redo applies to an old incarnation of the block. For example, Oracle can new a block when users delete
all the rows recorded in the block or drop a table.

RMAN with a Recovery Catalog (Benefits, Features, and Costs of Using the Recovery Catalog)
Deciding Whether to Use RMAN with a Recovery Catalog
By default, RMAN connects to the target database in NOCATALOG mode, meaning that it uses the
control file in the target database as the sole repository of RMAN metadata. Perhaps the most important
decision you make when using RMAN is whether to create a recovery catalog as the RMAN repository for

normal production operations. A recovery catalog is a schema created in a separate database that
contains metadata obtained from the target control file.
Benefits of Using the Recovery Catalog as the RMAN Repository
When you use a recovery catalog, RMAN can perform a wider variety of automated backup and recovery
functions than when you use the control file in the target database as the sole repository of metadata.
The following features are available only with a catalog:

You can store metadata about multiple target databases in a single catalog.

You can store metadata about multiple incarnations of a single target database in the catalog.
Hence, you can restore backups from any incarnation.

By resynchronizing the recovery catalog at intervals less than the
CONTROL_FILE_RECORD_KEEP_TIME setting, you can keep historical metadata.

You can report the target database schema at a noncurrent time.

You can store RMAN scripts in the recovery catalog (a sketch follows this list).

When restoring and recovering to a time when the database files that exist in the database are
different from the files recorded in the mounted control file, the recovery catalog specifies which files
are needed. Without a catalog, you must first restore a control file backup that lists the correct set of
database files.

If the control file is lost and must be restored from backup, and if persistent configurations have
been made to automate the tape channel allocation, these configurations are still available when the
database is not mounted.
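A minimal sketch of the stored-script benefit noted above (the script name nightly_backup is illustrative, and a recovery catalog connection is required):
RMAN> CREATE SCRIPT nightly_backup { BACKUP DATABASE PLUS ARCHIVELOG; }
RMAN> RUN { EXECUTE SCRIPT nightly_backup; }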
Costs of Using the Recovery Catalog as the RMAN Repository
The main cost of using a catalog is the maintenance overhead required for this additional database.
For example, you have to:

Find a database other than the target database to store the recovery catalog (otherwise, the benefits of maintaining the catalog are lost), or create a new database

Create enough space on the database for the RMAN metadata

Back up the recovery catalog metadata

Upgrade the recovery catalog when necessary


Types of Files That RMAN Can Back Up
The BACKUP command can back up the following types of files:
Database, which includes all datafiles as well as the current control file and current server parameter
file

Tablespaces (except for locally-managed temporary tablespaces)

Current datafiles

Current control file

Archived redo logs

Current server parameter file

Backup sets
RMAN does not back up the following:

Online redo logs

Transported tablespaces before they have been made read/write

Client-side initialization parameter files or noncurrent server parameter files
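A minimal sketch of BACKUP commands for several of the file types listed above (the tablespace name is illustrative):
RMAN> BACKUP TABLESPACE users;
RMAN> BACKUP CURRENT CONTROLFILE;
RMAN> BACKUP ARCHIVELOG ALL;
RMAN> BACKUP SPFILE;
RMAN> BACKUP BACKUPSET ALL;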

How to Configure RMAN
RMAN can be invoked from the command line on the database host machine like so:
C:\>rman target sys/sys_password
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to target database: ORCL (DBID=1036216947)
RMAN> show all;

RMAN configuration parameters are:


CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'e:\backup\ctl_sp_bak_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
'C:\ORACLE\ORA92\DATABASE\SNCFORCL.ORA'; #
default
RMAN>
Retention Policy:
This instructs RMAN on the backups that are eligible for deletion.
For example: A retention policy with redundancy 2 would mean that two backups - the latest and the one

prior to that - should be retained. All other backups are candidates for deletion.
Default Device Type:
This can be "disk" or "sbt" (system backup to tape). We will backup to disk and then have our OS backup
utility copy the completed backup, and other supporting files, to tape.
Controlfile Autobackup:
This can be set to "on" or "off". When set to "on", RMAN takes a backup of the controlfile AND server
parameter file each time a backup is performed. Note that "off" is the default.
Controlfile Autobackup Format:
This tells RMAN where the controlfile backup is to be stored. The "%F" in the file name instructs RMAN to
append the database identifier and backup timestamp to the backup filename. The database identifier, or
DBID, is a unique integer identifier for the database.
Parallelism:
This tells RMAN how many server processes you want dedicated to performing the backups.
Device Type Format:
This specifies the location and name of the backup files. We need to specify the
format for each channel. The "%U" ensures that Oracle appends a unique identifier to the backup file
name. The MAXPIECESIZE attribute sets a maximum file size for each file in the backup set.
Any of the above parameters can be changed using the commands displayed by the "show all" command.
For example, one can turn off controlfile autobackups by issuing:
RMAN> configure controlfile autobackup off;
using target database controlfile instead of recovery catalog
old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP OFF;
new RMAN configuration parameters are successfully stored
RMAN>

Complete Steps for Using RMAN through a Catalog
Recovery manager is a platform independent utility for coordinating your backup and restoration
procedures across multiple servers.
Create Recovery Catalog
First create a user to hold the recovery catalog:
CONNECT sys/password@w2k1 AS SYSDBA

-- Create tablespace to hold repository


CREATE TABLESPACE "RMAN"
DATAFILE 'C:\ORACLE\ORADATA\W2K1\RMAN01.DBF' SIZE 6208K REUSE
AUTOEXTEND ON NEXT 64K MAXSIZE 32767M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
-- Create rman schema owner
CREATE USER rman IDENTIFIED BY rman
TEMPORARY TABLESPACE temp
DEFAULT TABLESPACE rman
QUOTA UNLIMITED ON rman;
GRANT connect, resource, recovery_catalog_owner TO rman;
Then create the recovery catalog:
C:>rman catalog=rman/rman@w2k1
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to recovery catalog database
Recovery catalog is not installed
RMAN> create catalog tablespace "RMAN";
Recovery catalog created
RMAN> exit
Recovery Manager complete.
C:>
Register Database

Each database to be backed up by RMAN must be registered:


C:>rman catalog=rman/rman@w2k1 target=sys/password@w2k2
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: W2K2 (DBID=1371963417)
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN>
Full Backup
First we configure several persistent parameters for this instance:
RMAN> configure retention policy to recovery window of 7 days;
RMAN> configure default device type to disk;
RMAN> configure controlfile autobackup on;
RMAN> configure channel device type disk format 'C:\Oracle\Admin\W2K2\Backup%d_DB_%u_%s_%p';
Next we perform a complete database backup using a single command:
RMAN> run
{backup database plus archivelog;
delete noprompt obsolete;
}
The recovery catalog should be resynchronized on a regular basis so that changes to the database
structure and the presence of new archive logs are recorded. Some commands perform partial and full resyncs
implicitly, but if you are in doubt you can perform a full resync using the following command:

RMAN> resync catalog;

Restoring an RMAN Backup to Another Node
In certain circumstances, it may be desirable to restore a database from an RMAN backup onto a
machine other than the original host. For example, to recover data at a given point in time, or to duplicate
a production instance.
The example assumes:
The target database is on host A
The database is to be restored onto host B
The directory structure of host B is different from that of host A
The ORACLE_SID will not change for the restored database
A recovery catalog is being used
The backups were carried out to disk (for illustrative purposes, and to disassociate from any media
manager specific issues)
The following steps are required:

Backup the target on host A


List the datafile locations on host A
Make the backup available to host B
Make a copy of the init.ora available to host B
Edit the init.ora to reflect directory structure changes
Configure SQL*Net connectivity from host to the recovery catalog and duplicated database
Set up a password file for the duplicated database
Startup nomount the duplicated database
RMAN restore the controlfile(s)
Mount the database
Restore and rename the datafiles
Recover and open the database
Step:1 Backup the Target on Host A
The target database needs to be backed up using RMAN.
The following is one example of RMAN doing an online database backup. In this
example, the backup sets are written to disk.
run {
allocate channel t1 type disk;
backup
tag whole_database_open
format '/oracle/backups/BFS/df_%u'
database;

# switch out of the current logfile


sql 'alter system archive log current';
#backup the archived logs
backup archivelog all
format '/oracle/backups/BFS/al_%u';
# backup a copy of the controlfile that contains records for the
# other backups just made
backup current controlfile tag = cf1 format '/oracle/backups/BFS/cf_%u';
}
Step: 2 List Datafile Locations on Host A

The datafile numbers and location on host A are required. These datafile locations will change on host B
SQL> select file#, name from v$datafile;
file# name
----- ------------------------------
    1 /oracle/OFA_base/u01/oradata/V805X/system01.dbf
2 /oracle/OFA_base/u01/oradata/V805X/rbs01.dbf
3 /oracle/OFA_base/u01/oradata/V805X/temp01.dbf
4 /oracle/OFA_base/u01/oradata/V805X/tools01.dbf
5 /oracle/OFA_base/u01/oradata/V805X/users01.dbf
6 /oracle/OFA_base/u01/oradata/V805X/users02.dbf
7 /oracle/OFA_base/u01/oradata/V805X/rbs02.dbf
8 /oracle/OFA_base/u01/oradata/V805X/rcvcat.dbf
The log file names should also be recorded.
SQL> select group#, member from v$logfile;
group# member
------ ------------------------------
     1 /oracle/OFA_base/u01/oradata/V805X/redo01.log
2 /oracle/OFA_base/u01/oradata/V805X/redo02.log
3 /oracle/OFA_base/u01/oradata/V805X/redo03.log
Step: 3 Make the Backups Available to Host B

Disk Backups
During restore, RMAN will expect the backup sets to be located in the same directory as written to during

the backup.
Tape Backups
The media management software must be configured such that host B is a media manager client, and
can read the backup sets. The media management vendor should be consulted for support on this issue.
Step: 4 init.ora on host B

The "init.ora" needs to be made available on host B. Any location specific parameters must be amended.
For example, ifile, *_dump_dest, log_archive_dest*, control_files
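A minimal sketch of the entries that typically need amending in the copied init.ora, assuming a Unix directory layout on host B (all paths are illustrative):
control_files = ('/u02/oradata/V805X/control01.ctl', '/u03/oradata/V805X/control02.ctl')
background_dump_dest = /u01/app/oracle/admin/V805X/bdump
user_dump_dest = /u01/app/oracle/admin/V805X/udump
log_archive_dest_1 = 'location=/u02/oradata/V805X/arch'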
Step: 5 SQL*Net configuration

If running RMAN from host A:


A. Connectivity to the catalog remains unchanged
B. Configure tnsnames.ora on host A to connect to duplicated db on host B
C. Configure listener.ora on host B to accept connections for duplicated database
If running RMAN from host B:
A. Configure tnsnames.ora on host B to connect to the catalog (listener.ora on the catalog host remains
unchanged)
B. Configure tnsnames.ora on host B to connect to the duplicated db on host B, and configure listener.ora on host
B to accept connections for the duplicated database
If running RMAN from host C (i.e., neither host A nor host B):
A. Connectivity to the catalog remains unchanged
B. Configure tnsnames.ora on host C to connect to the duplicated db on host B, and configure listener.ora on host
B to accept connections for the duplicated database
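A minimal sketch of a tnsnames.ora entry pointing at the duplicated database (the alias, host name, and port are illustrative):
DUPDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hostB)(PORT = 1521))
    (CONNECT_DATA = (SID = V805X))
  )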

Step: 6 Setup Password File

In order to allow RMAN remote connections, a password file must be setup for the duplicated database.
For example,
orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=kernel

Step: 7 Recover Duplicated Database

Startup nomount the database


SQL> startup nomount pfile=
Restore the controlfile(s)
For example,
run{
allocate channel c1 type disk;
restore controlfile;
}
Mount the database
SQL> alter database mount;
Rename and restore the datafiles, and perform database recovery
RMAN can be used to change the location of the datafiles from the location on host A to the new location
on host B.
For example,
run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
set newname for datafile 1 to '/oracle/datafiles/system01.dbf';
set newname for datafile 2 to '/oracle/datafiles/rbs01.dbf';
set newname for datafile 3 to '/oracle/datafiles/temp01.dbf';
set newname for datafile 4 to '/oracle/datafiles/tools01.dbf';
set newname for datafile 5 to '/oracle/datafiles/users01.dbf';
set newname for datafile 6 to '/oracle/datafiles/users02.dbf';
set newname for datafile 7 to '/oracle/datafiles/rbs02.dbf';
set newname for datafile 8 to '/oracle/datafiles/rcvcat.dbf';
restore database;
switch datafile all;
}
Recover and open the database
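A minimal sketch of this final step, assuming the restored control file is a backup control file and recovery stops once the archived logs contained in the backup have been applied (an UNTIL clause can be added to bound the recovery explicitly):
run {
allocate channel c1 type disk;
recover database;
}
alter database open resetlogs;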

Monitoring Recovery Manager Jobs

Sometimes it is useful to identify what a server session performing a backup or copy operation is doing.
You have access to several views that can assist in monitoring the progress of or obtaining information
about RMAN jobs:
View and what it shows:
V$PROCESS - Identifies currently active processes.
V$SESSION - Identifies currently active sessions. Use this view to determine which Oracle database server sessions
correspond to which RMAN allocated channels.
V$SESSION_LONGOPS - Provides progress reports on long-running operations.
V$SESSION_WAIT - Lists the events or resources for which sessions are waiting.

Correlating Server Sessions with Channels

To identify which server sessions correspond to which RMAN channels, use the set command with the
command id parameter. The command id parameter enters the specified string into the CLIENT_INFO
column of the V$SESSION dynamic performance view. Join V$SESSION with V$PROCESS to correlate
the server session with the channel.
To correlate a process with a channel during a backup:
Step:1 Start RMAN and connect to the target database .

Step:2 Set the command id parameter after allocating the channels and then back up the
desired object.

run {
allocate channel t1 type disk;
allocate channel t2 type disk;

set command id to 'rman';


backup
incremental level 0
filesperset 5
tablespace 'SYSTEM';
# optionally, issue a host command to access the operating system prompt
host;
sql 'ALTER SYSTEM ARCHIVE LOG ALL';
}
Step:3 Start a SQL*Plus session and then query the joined V$SESSION and V$PROCESS
views while the RMAN job is executing.
SELECT sid, spid, client_info
FROM v$process p, v$session s
WHERE p.addr = s.paddr
AND client_info LIKE '%id=rman%';
SID SPID CLIENT_INFO
8 21973 id=rman
16 22057 id=rman
17 22068 id=rman,ch=t1
18 22070 id=rman,ch=t2
Monitoring Job Progress

Each server session performing a backup, restore, or copy reports its progress compared to the total
amount of work required for that particular part of the restore.
For example, if you perform a restore using two channels, and each channel has two backup sets to
restore (a total of 4 sets), then each server session reports its progress through a single set. When that
set is completely restored, RMAN starts reporting progress on the next set to restore.
Step:1 Start RMAN and connect to the target database and, optionally, the recovery catalog
database.

Step:2 Start an RMAN job.:


run {
allocate channel t1 type disk;
backup database;
}
Step:3 While the job is running, execute a script containing the following SQL statement:
SELECT sid, serial#, context, sofar, totalwork,

round(sofar/totalwork*100,2) "% Complete"


FROM v$session_longops
WHERE opname LIKE 'RMAN%'
AND opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar <> totalwork
/
If you repeat the query while the backup progresses, then you see output such as the following:
SQL>
SID SERIAL# CONTEXT SOFAR TOTALWORK % Complete
8 19 1 10377 36617 28.34
SQL> /
SID SERIAL# CONTEXT SOFAR TOTALWORK % Complete
8 19 1 21513 36617 58.75
SQL> /
SQL> /
no rows selected
NOTE: If you run the script at intervals of two minutes or more and the % Complete column does not
increase, then RMAN is encountering a problem.
SELECT sid, seconds_in_wait AS sec_wait, event FROM v$session_wait WHERE wait_time = 0
ORDER BY sid;
SID SEC_WAIT EVENT
1 368383335 pmon timer
2 1097 rdbms ipc message
3 387928 rdbms ipc message
4 0 rdbms ipc message
5 1408 smon timer
6 386114 rdbms ipc message
7 387626 rdbms ipc message
8 1060 SQL*Net message from client
9 1060 SQL*Net message from client
12 1060 SQL*Net message from client
13 2366 SQL*Net message from client
14 2757 SQL*Net message from client
12 rows selected.
Note: The V$SESSION_WAIT view shows only Oracle events, not media manager events.
Another Query:
COLUMN EVENT FORMAT a10

COLUMN SECONDS_IN_WAIT FORMAT 999


COLUMN STATE FORMAT a20
COLUMN CLIENT_INFO FORMAT a30
SELECT p.SPID, EVENT, SECONDS_IN_WAIT AS SEC_WAIT,
sw.STATE, CLIENT_INFO
FROM V$SESSION_WAIT sw, V$SESSION s, V$PROCESS p
WHERE sw.EVENT LIKE '%disk%'
AND s.SID=sw.SID
AND s.PADDR=p.ADDR

Datafile Recovery
Recovery from missing or corrupted datafile(s):
Case 1: Recovery from corrupted or missing datafile
This scenario deals with a situation where a datafile has gone missing, or is corrupted beyond repair. For
concreteness, we look at a case where a datafile is missing. Below is a transcript of an SQL Plus session
that attempts to open a database with a missing datafile (typed commands in bold, lines in italics are my
comments, all other lines are feedback from SQL Plus):
--open SQL Plus from the command line without
--logging on to database
C:\>sqlplus /nolog
SQL*Plus: Release 9.2.0.4.0 - Production on Tue Jan 25 14:52:41 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
--Connect to the idle Oracle process as a privileged user and start up instance
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes

Database Buffers 41943040 bytes


Redo Buffers 1077248 bytes
Database mounted.

ORA-01157: cannot identify/lock data file 4 - see DBWR trace file


ORA-01110: data file 4: 'D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF'
SQL>
The error message tells us that file# 4 is missing. Note that although the startup command has failed, the
database is in the mount state. Thus, the database control file, which is also the RMAN repository can be
accessed by the instance and by RMAN. We now recover the missing file using RMAN. The transcript of
the recovery session is reproduced below (bold lines are typed commands, comments in italics, the rest is
feedback from RMAN):
--logon to RMAN
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)

--restore missing datafile


RMAN> restore datafile 4;

Starting restore at 26/JAN/05


using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=14 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=15 devtype=DISK

channel ORA_DISK_1: starting datafile backupset restore


channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=D:\BACKUP\0QGB0UEC_1_1.BAK tag=TAG20050124T152708 params=NULL
channel ORA_DISK_1: restore complete
Finished restore at 26/JAN/05
--recover restored datafile - RMAN applies all logs automatically
RMAN> recover datafile 4;
Starting recover at 26/JAN/05 using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
archive log thread 1 sequence 4 is already on disk as file E:\ORACLE_ARCHIVE\ORCL\1_4.ARC
archive log thread 1 sequence 5 is already on disk as file C:\ORACLE_ARCHIVE\ORCL\1_5.ARC
archive log thread 1 sequence 6 is already on disk as file E:\ORACLE_ARCHIVE\ORCL\1_6.ARC
archive log thread 1 sequence 7 is already on disk as file E:\ORACLE_ARCHIVE\ORCL\1_7.ARC
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_4.ARC thread=1 sequence=4
archive log filename=C:\ORACLE_ARCHIVE\ORCL\1_5.ARC thread=1 sequence=5
media recovery complete
Finished recover at 26/JAN/05
--open database for general use
RMAN> alter database open;
database opened
RMAN>

In the above scenario, the database is already in the mount state before the RMAN session is initiated. If
the database is not mounted, you should issue a "startup mount" command before attempting to restore
the missing datafile. The database must be mounted before any datafile recovery can be done.
If the database is already open when datafile corruption is detected, you can recover the datafile without
shutting down the database. The only additional step is to take the relevant tablespace offline before
starting recovery. In this case you would perform recovery at the tablespace level. The commands are:
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--offline affected tablespace
RMAN> sql 'alter tablespace USERS offline immediate';
using target database controlfile instead of recovery catalog
sql statement: alter tablespace USERS offline immediate
--recover offlined tablespace
RMAN> recover tablespace USERS;
Starting recover at 26/JAN/05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=14 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=12 devtype=DISK
starting media recovery
media recovery complete
Finished recover at 26/JAN/05
--online recovered tablespace
RMAN> sql 'alter tablespace USERS online';
sql statement: alter tablespace USERS online

RMAN>
Here we have used the SQL command, which allows us to execute arbitrary SQL from within RMAN.
Case 2: Recovery from block corruption
It is possible to recover corrupted blocks using RMAN backups. This is a somewhat exotic scenario, but it
can be useful in certain circumstances, as illustrated by the following example. Here's the situation: a user
connected to SQLPlus gets a data block corruption error when she queries a table. Here's a part of the
session transcript:
SQL> connect testuser/testpassword
Connected.
SQL> select count(*) from test_table;
select count(*) from test_table
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 2015)
ORA-01110: data file 4: 'D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF'
Since we know the file and block number, we can perform block level recovery using RMAN. This is best
illustrated by example:
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--restore AND recover specific block
RMAN> blockrecover datafile 4 block 2015;
Starting blockrecover at 26/JAN/05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1

channel ORA_DISK_1: sid=19 devtype=DISK


allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=20 devtype=DISK
channel ORA_DISK_1: restoring block(s)
channel ORA_DISK_1: specifying block(s) to restore from backup set
restoring blocks of datafile 00004
channel ORA_DISK_1: restored block(s) from backup piece 1
piece handle=E:\BACKUP\0QGB0UEC_1_1.BAK tag=TAG20050124T152708 params=NULL
channel ORA_DISK_1: block restore complete
starting media recovery
media recovery complete
Finished blockrecover at 26/JAN/05
RMAN>
Now our user should be able to query the table from her SQLPlus session. Here's her session transcript
after block recovery.
SQL> select count(*) from test_table;
COUNT(*)
----------
    217001
SQL>
A couple of important points regarding block recovery:
1. Block recovery can only be done using RMAN.
2. The entire database can be open while performing block recovery.
3. Check all database files for corruption. This is important - there could be other corrupted blocks.
Verification of database files can be done using RMAN or the dbverify utility. To verify using RMAN simply
do a complete database backup with default settings. If RMAN detects block corruption, it will exit with an
error message pointing out the guilty file/block.
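A minimal sketch of both verification approaches (the datafile path and block size are illustrative):
RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
C:\>dbv file=D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF blocksize=8192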

Controlfile Recovery
Recovery from missing or corrupted control file
Case 1: A multiplexed copy of the control file is available.
On startup Oracle must read the control file in order to find out where the datafiles and online logs are
located. Oracle expects to find control files at the locations specified in the CONTROL_FILES initialization
parameter. The instance will fail to mount the database if any one of the control files is missing or
corrupt. Here's an example:
SQL> startup
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
ORA-00205: error in identifying controlfile, check alert log for more info
SQL>
On checking the alert log, as suggested, we find the following:
ORA-00202: controlfile: 'e:\oracle_dup_dest\controlfile\ORCL\control02.ctl'
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 5447783)
The above corruption was introduced by manually editing the control file when the database was closed.
The solution is simple, provided you have at least one uncorrupted control file - replace the corrupted
control file with a copy using operating system commands. Remember to rename the copied file. The
database should now start up without any problems.
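A minimal sketch of the fix, copying the surviving multiplexed control file over the corrupted one while the database is shut down (the paths are taken from the example above):
C:\>copy D:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL01.CTL e:\oracle_dup_dest\controlfile\ORCL\control02.ctl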
Case 2: All control files lost
What if you lose all your control files? In that case you have no option but to use a backup control file. The
recovery needs to be performed from within RMAN, and requires that all logs (archived and current online
logs) since the last backup are available. The logs are required because all datafiles must also be

restored from backup. The database will then have to be recovered up to the time the control files went
missing. This can only be done if all intervening logs are available. Here's an annotated transcript of a
recovery session (as usual, lines in bold are commands to be typed, lines in italics are explanatory
comments, other lines are RMAN feedback):
-- Connect to RMAN without connecting to the target database
C:\>rman
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> set dbid 1507972899
executing command: SET DBID
-- Set the DBID - get this from the name of the controlfile autobackup. For example, if the autobackup name is
-- CTL_SP_BAK_C-1507972899-20050124-00, then the DBID is 1507972899. This step will not be required if
-- the instance is mounted, since RMAN can then read the DBID from the control file.
RMAN> connect target sys/change_on_install
connected to target database: ORCL (not mounted)
RMAN> restore controlfile from autobackup;
Starting restore at 26/JAN/05
using channel ORA_DISK_1
channel ORA_DISK_1: restoring controlfile
channel ORA_DISK_1: restore complete
replicating controlfile
input filename=D:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL01.CTL
output filename=E:\ORACLE_DUP_DEST\CONTROLFILE\ORCL\CONTROL02.CTL
output filename=C:\ORACLE_DUP_DEST\CONTROLFILE\ORCL\CONTROL03.CTL
Finished restore at 26/JAN/05

-- Now that control files have been restored, the instance can mount the
-- database.
RMAN> mount database;
database mounted
-- All datafiles must be restored, since the controlfile is older than the current
-- datafiles. Datafile restore must be followed by recovery up to the current log.
RMAN> restore database;
Starting restore at 26/JAN/05
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\0DGB0I79_1_1.BAK tag=TAG20050124T115832 params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00002 to D:\ORACLE_DATA\DATAFILES\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS01.DBF
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\0CGB0I78_1_1.BAK tag=TAG20050124T115832 params=NULL
channel ORA_DISK_1: restore complete
Finished restore at 26/JAN/05

--Database must be recovered because all datafiles have been restored from
-- backup
RMAN> recover database;
Starting recover at 26/JAN/05
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 2 is already on disk as file E:\ORACLE_ARCHIVE\ORCL\1_2.ARC
archive log thread 1 sequence 4 is already on disk as file
D:\ORACLE_DATA\LOGS\ORCL\REDO02A.LOG
archive log thread 1 sequence 5 is already on disk as file
D:\ORACLE_DATA\LOGS\ORCL\REDO01A.LOG
archive log thread 1 sequence 6 is already on disk as file
D:\ORACLE_DATA\LOGS\ORCL\REDO03A.LOG
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_2.ARC thread=1 sequence=2
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_3.ARC thread=1 sequence=3
archive log filename=E:\ORACLE_DATA\LOGS\ORCL\REDO02A.LOG thread=1 sequence=4
archive log filename=E:\ORACLE_DATA\LOGS\ORCL\REDO01A.LOG thread=1 sequence=5
archive log filename=E:\ORACLE_DATA\LOGS\ORCL\REDO03A.LOG thread=1 sequence=6
media recovery complete
Finished recover at 26/JAN/05
-- Recovery completed. The database must be opened with RESETLOGS
-- because a backup control file was used. Can also use
-- "alter database open resetlogs" instead.
RMAN> open resetlogs database;
database opened
Several points are worth emphasizing.

1. Recovery using a backup controlfile should be done only if a current control file is unavailable.
2. All datafiles must be restored from backup. This means the database will need to be recovered using
archived and online redo logs. These MUST be available for recovery until the time of failure.
3. As with any database recovery involving RESETLOGS, take a fresh backup immediately.
4. Technically the above is an example of complete recovery - since all committed transactions were
recovered. However, some references consider this to be incomplete recovery because the database log
sequence had to be reset.
After recovery using a backup controlfile, all temporary files associated with locally-managed tablespaces
are no longer available. You can check that this is so by querying the view V$TEMPFILE - no rows will be
returned. Therefore tempfiles must be added (or recreated) before the database is made available for
general use. In the case at hand, the tempfile already exists so we merely add it to the temporary
tablespace. This can be done using SQLPlus or any tool of your choice:
SQL> alter tablespace temp add tempfile
'D:\oracle_data\datafiles\ORCL\TEMP01.DBF';
Tablespace altered.
SQL>
Check that the file is available by querying v$TEMPFILE.
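A minimal check, assuming the tempfile was added as above:
SQL> select file#, name, status from v$tempfile;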

Redo Log File Recovery
Recovery from missing or corrupted redo log group
Case 1: A multiplexed copy of the missing log is available.
If a redo log is missing, it should be restored from a multiplexed copy, if possible. Here's an example,
where I attempt to startup from SQLPLUS when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLE_DATA\LOGS\ORCL\REDO03A.LOG'
SQL>
To fix this we simply copy REDO03A.LOG from its multiplexed location on E: to the above location on D:.

SQL> alter database open;


Database altered.
SQL>
That's it - the database is open for use.
Case 2: All members of a log group lost.
In this case an incomplete recovery is the best we can do. We will lose all transactions from the missing
log and all subsequent logs. We illustrate using the same example as above. The error message
indicates that members of log group 3 are missing. We don't have a copy of this file, so we know that an
incomplete recovery is required. The first step is to determine how much can be recovered. In order to do
this, we query the V$LOG view (when in the mount state) to find the system change number (SCN) that
we can recover to (Reminder: the SCN is a monotonically increasing number that is incremented
whenever a commit is issued)
--The database should be in the mount state for v$log access
SQL> select first_change# from v$log where group#=3;
FIRST_CHANGE#
-------------
       370255
SQL>
The FIRST_CHANGE# is the first SCN stamped in the missing log. This implies that the last SCN
stamped in the previous log is 370254 (FIRST_CHANGE#-1). This is the highest SCN that we can
recover to. In order to do the recovery we must first restore ALL datafiles to this SCN, followed by
recovery (also up to this SCN). This is an incomplete recovery, so we must open the database resetlogs
after we're done. Here's a transcript of the recovery session (typed commands in bold, comments in
italics, all other lines are RMAN feedback):
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--Restore ENTIRE database to determined SCN
RMAN> restore database until scn 370254;
Starting restore at 26/JAN/05
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
channel ORA_DISK_2: starting datafile backupset restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to D:\ORACLE_DATA\DATAFILES\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS01.DBF
channel ORA_DISK_2: restored backup piece 1

piece handle=E:\BACKUP\13GB14IB_1_1.BAK tag=TAG20050124T171139 params=NULL


channel ORA_DISK_2: restore complete
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\14GB14IB_1_1.BAK tag=TAG20050124T171139 params=NULL
channel ORA_DISK_1: restore complete
Finished restore at 26/JAN/05
--Recover database
RMAN> recover database until scn 370254;
Starting recover at 26/JAN/05
using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
archive log thread 1 sequence 9 is already on disk as file E:\ORACLE_ARCHIVE\ORCL\1_9.ARC
archive log thread 1 sequence 10 is already on disk as file E:\ORACLE_ARCHIVE\ORCL\1_10.ARC
archive log thread 1 sequence 11 is already on disk as file E:\ORACLE_ARCHIVE\ORCL\1_11.ARC
archive log thread 1 sequence 12 is already on disk as file E:\ORACLE_ARCHIVE\ORCL\1_12.ARC
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_9.ARC thread=1 sequence=9
archive log filename=E:\ORACLE_ARCHIVE\ORCL\1_10.ARC thread=1 sequence=10
media recovery complete
Finished recover at 26/JAN/05
--open database with RESETLOGS (see comments below)
RMAN> alter database open resetlogs;
database opened
RMAN>
The following points should be noted:
1. The entire database must be restored to the SCN that was determined by querying V$LOG.
2. All changes beyond that SCN are lost. This method of recovery should be used only if you are sure that
you cannot do better. Be sure to multiplex your redo logs, and (space permitting) your archived logs!
3. The database must be opened with RESETLOGS, as a required log has not been applied. This resets
the log sequence to 1, thereby rendering all prior backups unusable. Therefore, the first step after
opening a database RESETLOGS is to take a fresh backup, as sketched below. Note that the RESETLOGS
option must be used for any incomplete recovery.
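As an illustration only (a minimal sketch, not part of the original transcript), the post-RESETLOGS backup can be
as simple as:
--take a fresh whole database backup immediately after the RESETLOGS
RMAN> backup database;
--confirm that the new backup has been recorded
RMAN> list backup;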

DisasterRecovery
Disaster Recovery
Introduction:
Here we consider a true disaster, i.e. a situation in which your database server has been destroyed and has taken
all your database files (control files, logs and data files) with it. Obviously, recovery from a disaster of this
nature depends on what you have in terms of backups and hardware resources. We assume you have the following
available after the disaster:
* A server with the same disk layout as the original.
* The last full hot backup on tape.

With the above items at hand, it is possible to recover all data up to the last full backup. One can do better
if subsequent archive logs (after the last backup) are available. In our case these aren't available, since
our only archive destination was on the destroyed server. Oracle provides methods to achieve better
data protection. We will discuss some of these towards the end of the article.
Now on with the task at hand. The high-level steps involved in disaster recovery are:
* Build a replacement server.
* Restore the backup from tape.
* Install the database software.
* Create the directory structure for database files.
* Create the Oracle service.
* Restore and recover the database.
Step:1 Build the server

You need a server to host the database, so the first step is to acquire or build the new machine. This is
not strictly a DBA task, so we won't delve into details here. The main point to keep in mind is that the
replacement server should, as far as possible, be identical to the old one. In particular, pay attention to
the following areas:
* Ideally the server should have the same number of disks as the original. The new disks should also
have enough space to hold all software and data that was on the original server.
* The operating system environment should be the same as the original, right up to service pack and
patch level.
* The new server must have enough memory to cater to Oracle and operating system / other software
requirements. Oracle memory structures (Shared pool, db buffer caches etc) will be sized identically to
the original database instance. Use of the backup server parameter file will ensure this.
Step:2 Restore backup from tape

The next step is to get your backup from tape on to disk.


Step:3 Install Oracle Software

The next step is to install Oracle software on the machine. The following points should be kept in mind
when installing the software:
* Install the same version of Oracle as was on the destroyed server. The version number should match
right down to the patch level, so this may be a multi-step process involving installation followed by the
application of one or more patch sets and patches.
* Do not create a new database at this stage.
* Create a listener using the Network Configuration Assistant. Ensure that it has the same name and
listening ports as the original listener. Relevant listener configuration information can be found in the
backed up listener.ora file.
Step:4 Create directory structure for database files

After software installation is completed, create all directories required for datafiles, (online and archived)
logs, control files and backups. All directory paths should match those on the original server.
Don't worry if you do not know where the database files should be located. You can obtain the required
information from the backup spfile and control file at a later stage. Continue reading - we'll come back to
this later.
Step: 5 Create Oracle service
An Oracle service must exist before a database can be created. The service is created using the oradim
utility, which must be run from the command line. The following commands show how to create (and, if needed,
modify) a service (comments in italics, typed commands in bold):
--create a new service with auto startup
C:\>oradim -new -sid ORCL -intpwd ORCL -startmode a
Unfortunately oradim does not give any feedback, but you can check that the service exists via the
Services administrative panel. The service has been configured to start automatically when the computer
is powered up.
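Should the service later need to be modified or removed, oradim can do that too. The following is a sketch only,
using the standard oradim switches and the SID from this example:
--change the service to manual startup (sketch)
C:\>oradim -edit -sid ORCL -startmode m
--remove the service entirely (sketch)
C:\>oradim -delete -sid ORCL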
Step: 6 Restore and recover database
Now it is time to get down to the nuts and bolts of database recovery. There are several steps, so we'll list
them in order:
* Copy the PASSWORD and TNSNAMES files from backup: The backed up password file and tnsnames.ora
file should be copied from the backup directory to their proper locations. The default locations for the
password and tnsnames files are ORACLE_HOME\database and ORACLE_HOME\network\admin respectively.
* Set the ORACLE_SID environment variable: ORACLE_SID should be set to the proper SID name (ORCL in
our case). This can be set in the registry (registry key: HKLM\Software\Oracle\HOME\ORACLE_SID), from
the System applet in the control panel, or at the command prompt (both of these steps are sketched below).
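By way of illustration, the two steps above might look like this at the command prompt (the file names and the
ORACLE_HOME path are examples, not taken from the original transcript):
--copy the backed up password file and tnsnames.ora to their default locations (example paths)
C:\>copy e:\backup\PWDORCL.ORA c:\oracle\ora92\database
C:\>copy e:\backup\tnsnames.ora c:\oracle\ora92\network\admin
--set the SID for the current command prompt session
C:\>set ORACLE_SID=ORCL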
* Invoke RMAN and set the DBID: We invoke rman and connect to the target database as usual. No login
credentials are required since we connect from an OS account belonging to ORA_DBA. Note that RMAN
accepts a connection to the database although the database is yet to be recovered. RMAN doesn't as yet
"know" which database we intend to connect to. We therefore need to identify the (to be restored)
database to RMAN. This is done through the database identifier (DBID). The DBID can be figured out
from the name of the controlfile backup. Example: if you use the controlfile autobackup format configured
later in this article (ctl_sp_bak_%F), your controlfile backup name will be something like
"CTL_SP_BAK_C-1507972899-20050228-00". In this case the DBID is 1507972899. Here's a transcript
illustrating the process of setting the DBID:
C:\>rman
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> set dbid 1507972899
executing command: SET DBID

RMAN>connect target /
connected to target database (not started)

RMAN>
Restore spfile from backup: To restore the spfile, you first need to start up the database in the nomount
state. This starts up the database using a dummy parameter file. After that you can restore the spfile from
the backup (which has been restored from tape). Finally you restart the database in nomount state. Here
is an example RMAN transcript for the foregoing procedure. Note the difference in SGA size and
components between the two startups:
RMAN> startup nomount
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'C:\ORACLE\ORA92\DATABASE\INITORCL.ORA'
trying to start the Oracle instance without parameter files ...
Oracle instance started
Total System Global Area 97590928 bytes
Fixed Size 454288 bytes
Variable Size 46137344 bytes
Database Buffers 50331648 bytes
Redo Buffers 667648 bytes
RMAN> restore spfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
Starting restore at 01/MAR/05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=9 devtype=DISK
channel ORA_DISK_1: autobackup found: e:\backup\CTL_SP_BAK_C-1507972899-20050228-00
channel ORA_DISK_1: SPFILE restore from autobackup complete
Finished restore at 01/MAR/05
RMAN> startup force nomount
Oracle instance started
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
Variable Size 763363328 bytes
Database Buffers 754974720 bytes
Redo Buffers 2142208 bytes
RMAN>
The instance is now started up with the correct initialization parameters.
We are now in a position to determine the locations of control file and archive destination, as this
information sits in the spfile. This is done via SQL Plus as follows:
C:\>sqlplus /nolog
SQL>connect / as sysdba
Connected.

SQL> show parameter control_file


SQL> show parameter log_archive_dest
The directories listed in the CONTROL_FILES and LOG_ARCHIVE_DEST_N parameters should be
created at this stage if they haven't been created earlier.
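As a sketch, with directory names taken from the paths that appear later in this example (your own parameter
output dictates the actual list):
--create the control file and archive log directories reported by the parameters
C:\>mkdir D:\ORACLE_DATA\CONTROLFILE\ORCL
C:\>mkdir E:\ORACLE_DATA\CONTROLFILE\ORCL
C:\>mkdir C:\ORACLE_DUP_DEST\CONTROLFILE\ORCL
C:\>mkdir E:\ORACLE_ARCHIVE\ORCL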
Restore control file from backup: The instance now "knows" where the control files should be restored, as
this is listed in the CONTROL_FILES initialization parameter. Therefore, the next step is to restore these
files from backup. Once the control files are restored, the instance should be restarted in mount mode. A
restart is required because the instance must read the initialization parameter file in order to determine
the control file locations. At the end of this step RMAN also has its proper configuration parameters, as
these are stored in the control file.
Here is a RMAN session transcript showing the steps detailed here:
RMAN> restore controlfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
Starting restore at 01/MAR/05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=13 devtype=DISK
channel ORA_DISK_1: restoring controlfile
channel ORA_DISK_1: restore complete
replicating controlfile
input filename=D:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL01.CTL
output filename=E:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL02.CTL
output filename=C:\ORACLE_DUP_DEST\CONTROLFILE\ORCL\CONTROL03.CTL
Finished restore at 01/MAR/05
RMAN> shutdown
Oracle instance shut down
RMAN> exit
Recovery Manager complete.
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database (not started)
RMAN>startup mount;
Oracle instance started
database mounted
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
Variable Size 763363328 bytes
Database Buffers 754974720 bytes
Redo Buffers 2142208 bytes

RMAN> show all;


using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'e:\backup\ctl_sp_bak_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
'C:\ORACLE\ORA92\DATABASE\SNCFORCL.ORA'; # default
RMAN>
At this stage we can determine the locations of data files and redo logs if we don't know where they
should go. This is done from SQL Plus as follows:
C:\>sqlplus /nolog
SQL>connect / as sysdba
Connected.
SQL>select name from v$datafile;
SQL>select member from v$logfile;
SQL>
The directories shown in the output should be created manually if this hasn't been done earlier.
Restore all datafiles: This is easy. Simply issue a "restore database" command from RMAN, and it will do
all the rest for you:
RMAN> restore database;
Starting restore at 01/MAR/05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=8 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF
restoring datafile 00003 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS02.DBF
channel ORA_DISK_2: starting datafile backupset restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to D:\ORACLE_DATA\DATAFILES\ORCL\UNDOTBS01.DBF

restoring datafile 00005 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS01.DBF


restoring datafile 00006 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS02.DBF
channel ORA_DISK_2: restored backup piece 1
piece handle=E:\BACKUP\80G6E1TT_1_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\81G6E1TU_1_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_2: restored backup piece 2
piece handle=E:\BACKUP\80G6E1TT_2_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 2
piece handle=E:\BACKUP\81G6E1TU_2_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 3
piece handle=E:\BACKUP\81G6E1TU_3_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_2: restored backup piece 3
piece handle=E:\BACKUP\80G6E1TT_3_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_2: restore complete
Finished restore at 01/MAR/05
RMAN>
Recover database: The final step is to recover the database. Obviously recovery is dependent on the
available archived (and online) redo logs. Since we have lost our database server and have no remote
archive destination, we can recover only up to the time of the backup. Further, since this is an incomplete
recovery, we will have to open the database with resetlogs. Here's a sample RMAN session illustrating
this:
RMAN> recover database;
Starting recover at 01/MAR/05
using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
unable to find archive log
archive log thread=1 sequence=1388
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/01/2005 14:14:43
RMAN-06054: media recovery requesting unknown log: thread 1 scn 32230460
RMAN>alter database open resetlogs;
database opened
RMAN>
Note that RMAN automatically applies all available archive logs. It first applies the backed up log and then
searches for subsequent logs in the archive destination. This opens the door for further recovery if the
necessary logs are available. In our case, however, we have no more redo so we open the database with
resetlogs. The error message above simply indicates that RMAN has searched, unsuccessfully, for
subsequent logs.

UserManagedBackupTerminology

(In this terminology, operating system commands are used to make backups while the database is closed or open.)
A whole database backup refers to a backup of all data files, control files and log files of the database. A whole
database backup can be performed with the database open or closed.
A backup taken while the database is closed is called a consistent backup. (Because the database file headers are
consistent with the control file, the database can be opened after a complete restore without any recovery.)
A backup taken while the database is open and operational is called an inconsistent backup. (Because the
database file headers are not consistent with the control file.)
Physical Backup Method
Database Operation Mode      Recovery Scenario
ARCHIVELOG mode              Recover to the point of failure
NOARCHIVELOG mode            Recover to the point of the last backup
Querying Views to Obtain Database File Information
v$datafile (used for obtaining data file information)
v$controlfile (used for obtaining control file information)
v$logfile (used for obtaining log file information)
Use the v$tablespace and v$datafile views to obtain a list of all datafiles and their respective tablespaces.
SQL> SELECT T.NAME TABLESPACE, F.NAME DATAFILE
FROM V$TABLESPACE T, V$DATAFILE F
WHERE T.TS# = F.TS#
ORDER BY T.NAME;
Making a consistent whole Database Backup
* Shut down the database.
* Back up all data files, control files and log files by using an operating system command. We can also include
the password file and parameter file.
* Restart the Oracle database/instance (a minimal sketch of this procedure is shown below).
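A minimal sketch of such a consistent (cold) backup, assuming Windows paths and the ocopy utility used elsewhere
in this document (the file locations and extensions are illustrative):
--consistent (cold) backup: copy the files only while the database is cleanly shut down
SQL> shutdown immediate
SQL> host ocopy E:\LOCAL\ORADATA\*.DBF e:\bu
SQL> host ocopy E:\LOCAL\ORADATA\*.CTL e:\bu
SQL> host ocopy E:\LOCAL\ORADATA\*.LOG e:\bu
SQL> startup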
Making an inconsistent whole database backup
Requirements for an inconsistent database backup:
* The database is set to ARCHIVELOG mode (see the sketch below for switching modes).
* You ensure that the online redo logs are archived, for example by enabling the Oracle automatic archiving
(ARCn) process.
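If the database is not yet in ARCHIVELOG mode, a sketch of switching it over on a 9i instance follows (the archive
destination is an example; adjust it to your environment):
--enable automatic archiving and switch the database to ARCHIVELOG mode (Oracle 9i)
SQL> alter system set log_archive_start=true scope=spfile;
SQL> alter system set log_archive_dest_1='location=e:\oracle_archive\orcl' scope=spfile;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list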
Making a Backup of an Online Tablespace or Data File
* Set the tablespace in backup mode by issuing the following command:
SQL> ALTER TABLESPACE <tablespace_name> BEGIN BACKUP;
(Note: this freezes the checkpoint SCN recorded in the datafile headers so that it does not change during the copy.)
* Use an operating system backup utility to copy all datafiles in the tablespace to backup storage, for example:
copy c:\datafile_path e:\datafile_path
* After the datafiles of the tablespace have been backed up, take them out of backup mode by issuing the
following command:
SQL> ALTER TABLESPACE <tablespace_name> END BACKUP;
* Archive the unarchived redo logs:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Repeat these steps for all tablespaces (a worked example for one tablespace follows).
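A worked example for a single tablespace (the tablespace name and paths are illustrative, modeled on the script
used later in this document):
--online (hot) backup of the USERS tablespace
SQL> alter tablespace users begin backup;
SQL> host ocopy E:\LOCAL\ORADATA\USERS01.DBF e:\bu
SQL> alter tablespace users end backup;
SQL> alter system archive log current;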
Mechanism of Open Database Backup
When a datafile is placed in backup mode, more redo may be generated, because the log writer writes the whole
image of a changed block of the datafile in backup mode to the redo log, instead of just the changed row
information.
Backup Status Information (when performing an open database backup)
select * from v$backup; (view used to determine which files are in backup mode; when the ALTER TABLESPACE ...
BEGIN BACKUP command is issued the status changes to ACTIVE.)
Manual Control File Backups
* Creating a binary image:
ALTER DATABASE BACKUP CONTROLFILE TO 'control.bak';
* Creating a text trace file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Backing Up the Initialization Parameter File
CREATE PFILE FROM SPFILE; (for the default location)
CREATE PFILE='C:\BACKUP\INIT.ORA' FROM SPFILE; (for a specified location)
Backup Verification (Command Line Interface)
The DBVERIFY utility is used to ensure that a backed-up datafile is valid before a restore.
>dbv file=<path of datafile> start=1 logfile=<path for log file generation>
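For example (the file name, block size and log file path are hypothetical):
--verify a restored copy of a datafile before using it
C:\>dbv file=e:\bu\USERS01.DBF blocksize=8192 logfile=e:\bu\users01_dbv.log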

Backup Issue with Logging and nologging Option


A tablespace, table or index may be set to NOLOGGING mode for faster loading of data when using direct-load
operations such as SQL*Loader. (Because the redo logs do not contain the values that were inserted while the
object was in NOLOGGING mode, such data cannot be recovered from the redo stream; back up the affected files as
soon as the load completes.)
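A short sketch of the situation (the table name is hypothetical):
--direct-path load into a NOLOGGING table generates minimal redo
SQL> alter table sales nologging;
--sqlldr userid=... control=sales.ctl direct=true   (direct-path load; the inserted rows are not in the redo)
SQL> alter table sales logging;
--back up the affected tablespace or datafiles immediately, otherwise the loaded
--rows cannot be recovered from the archived redo logs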

TypeofRecoveryandDifferencebetweenResetlogandNo
resetlogsOption
Complete Recovery
Complete recovery can be done with the database OPEN, unless the SYSTEM or UNDO tablespaces are
damaged (damage to these terminates the instance).
When your database is running in NOARCHIVELOG mode, restoring the entire database to the point of
the last whole closed backup is called complete recovery.
When your database is running in ARCHIVELOG mode, you only need to restore the lost files and recover
all data to the time of failure; this is called complete recovery.
Short steps for complete recovery:
* The datafiles/tablespaces to be restored must be offline.
* Restore only the lost or damaged datafiles.
* Recover the datafiles using the RECOVER command.
* Bring the recovered datafiles online (see the sketch below).
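A minimal sketch of these steps for a single lost datafile, with the database open and in ARCHIVELOG mode (the
file number and paths are illustrative):
--complete recovery of one damaged datafile while the rest of the database stays open
SQL> alter database datafile 4 offline;
SQL> host ocopy e:\bu\USERS01.DBF E:\LOCAL\ORADATA\USERS01.DBF
SQL> recover datafile 4;
SQL> alter database datafile 4 online;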
Incomplete Recovery
Incomplete Recovery occurs when complete recovery is impossible or you want to lose some information
that was entered by mistake.
You can say, you do not apply all of the redo records generated after the most recent backup. You usually
perform incomplete recovery of the whole database in the following situations:
* Media failure destroys some or all of the online redo logs.
* A user error causes data loss, for example, a user inadvertently drops a table.
* You cannot perform complete recovery because an archived redo log is missing.
* You lose your current control file and must use a backup control file to open the database.
To perform incomplete media recovery, you must restore all datafiles from backups created prior to the
time to which you want to recover and then open the database with the RESETLOGS option when
recovery completes.
Difference between ResetLogs and NoResetLogs option?
After incomplete recovery (where the entire redo stream wasn't applied) we use RESETLOGS option.
RESETLOGS will initialize the logs, reset your log sequence number, and start a new "incarnation" of the
database.
After complete recovery (when the entire redo stream was applied) we use NORESETLOGS option.
Oracle will continue using the existing (valid) log files.
What is a cancel-based recovery?
A cancel-based recovery is a user-managed incomplete recovery that is performed by specifying the
UNTIL CANCEL clause with the RECOVER command. The UNTIL CANCEL clause causes recovery to continue until
the user manually cancels the recovery process. Cancel-based recovery is usually performed when there is a
requirement to recover up to a particular archived redo log file.
If the user does not type CANCEL, then the recovery process will automatically stop when all available redo has
been applied to the database.
When is Cancel-Based Recovery Required (Scenarios)?
* For example, consider a situation where someone dropped a table, one of the online redo logs is
missing and was not archived, and the table needs to be recovered.
* Another case is where your backup control file does not know anything about the archive logs that were
created after your last backup.
* Another scenario is where you have lost all logs past a specific sequence, say X (for example, you
may know that you have lost all logs past sequence 1234, so you want to cancel recovery after log 1233
is applied), and you want to control which archived log terminates recovery. Or, one of the archived redo
log files required for complete recovery is corrupt or missing, and the only recovery option is to recover up
to the missing archived redo log file.
NOTE: Remember that the online logs must be reset after you perform an incomplete recovery or recover
with a backup control file. So finally you will need to open the database with RESETLOGS. To synchronize
the datafiles with the control files and redo logs, open the database using the "resetlogs" option.
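A minimal sketch of a user-managed cancel-based recovery (the USING BACKUP CONTROLFILE clause would be
added only when a backup control file is in use):
--cancel-based incomplete recovery, stopping before the missing or corrupt log
SQL> startup mount
SQL> recover database until cancel;
--apply the suggested archived logs one by one, then type CANCEL once the last good log has been applied
SQL> alter database open resetlogs;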
What is a point in time recovery?
A point in time recovery is a method to recover your database to any point in time since the last database
backup.
We use RECOVER DATABASE UNTIL TIME statement to begin time-based recovery. The time is always
specified using the following format, delimited by single quotation marks: 'YYYY-MM-DD:HH24:MI:SS'.
Example: RECOVER DATABASE UNTIL TIME '2000-12-31:12:47:30'
If a backup of the control file is being used with this incomplete recovery, then indicate this in the
statement used to start recovery.
Example: RECOVER DATABASE UNTIL TIME '2000-12-31:12:47:30' USING BACKUP CONTROLFILE
In this type of recovery, apply redo logs until the last required redo log has been applied to the restored
datafiles. Oracle automatically terminates the recovery when it reaches the correct time, and returns a
message indicating whether recovery is successful.
What is change-based recovery?
Recovers until the specified SCN.
Change-based recovery is a recovery technique in which the database is recovered up to a specified
system change number (SCN). Using the UNTIL CHANGE clause with the RECOVER command performs a manual
(user-managed) change-based recovery, whereas RMAN uses the UNTIL SCN clause to perform a change-based
recovery.
Begin change-based recovery, specifying the SCN for recovery termination. The SCN is specified as a
decimal number without quotation marks. For example, to recover through SCN 10034 issue:
RECOVER DATABASE UNTIL CHANGE 10034;

Continue applying redo log files until the last required redo log file has been applied to the restored
datafiles. Oracle automatically terminates the recovery when it reaches the correct SCN, and returns a
message indicating whether recovery is successful.

HowtoPerformUserManagedInconsistentBackup&
Recovery(WithExample)
1. Back up all data files and the control file by using the following commands every Sunday.
(BACKUP SCRIPTS)
host ocopy C:\Oracle\RDBMS\*.001 e:\temp
host del C:\Oracle\RDBMS\*.001 /q
set echo on
alter tablespace system begin backup;
host ocopy E:\LOCAL\ORADATA\SYSTEM.DBF e:\bu
alter tablespace system end backup;
alter tablespace undotbs begin backup;
host ocopy E:\LOCAL\ORADATA\UNDOTBS.DBF e:\bu
alter tablespace undotbs end backup;
alter tablespace data begin backup;
host ocopy E:\LOCAL\ORADATA\DATA.DBF e:\bu
alter tablespace data end backup;
alter tablespace "INDEX" begin backup;
host ocopy E:\LOCAL\ORADATA\FLEX_INDEX.DBF e:\bu
alter tablespace "INDEX" end backup;
alter database backup controlfile to 'e:\bu\control01.bak' reuse ;
host ocopy C:\Oracle\RDBMS\*.001 e:\bu /q
2. Back up all archive log files by using the following command on a daily basis.
host ocopy C:\Oracle\RDBMS\*.001 e:\local\bu /q
3. Suppose you have the Sunday hot backup and you also have the Monday to Wednesday archive log
backups. If a data file is corrupted or lost on Thursday, how will you recover the database up to
Wednesday?

4. When you try to open the database, the following error occurs:


SQL> startup
ORACLE instance started.
Total System Global Area 118255568 bytes
Fixed Size 282576 bytes
Variable Size 62914560 bytes
Database Buffers 54525952 bytes
Redo Buffers 532480 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'E:\LOCAL\ORADATA\SYSTEM.DBF'
5. Check the backup status, the missing files (which require recovery), and the current log sequence.

SQL> select * from v$backup;


FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 FILE NOT FOUND 0
2 NOT ACTIVE 314014 13-OCT-06
3 NOT ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 314027 13-OCT-06
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_ ERROR CHANGE# TIME
---------- ------- ------- ----------------- ---------- ---------
1 ONLINE ONLINE FILE NOT FOUND 0
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
1 1 304 52428800 1 NO CURRENT 335069 14-OCT-06
2 1 302 52428800 1 YES INACTIVE 335067 14-OCT-06
3 1 303 52428800 1 YES INACTIVE 335068 14-OCT-06
6. Restore the datafile from the backup location.
7. Try to open the database by using the following command:
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: 'E:\LOCAL\ORADATA\SYSTEM.DBF'
8. SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 ACTIVE 334936 14-OCT-06
2 NOT ACTIVE 334942 14-OCT-06
3 ACTIVE 334949 14-OCT-06
4 NOT ACTIVE 334977 14-OCT-06
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_ ERROR CHANGE# TIME
---------- ------- ------- ----------------- ---------- ---------
1 ONLINE ONLINE 334936 14-OCT-06
3 ONLINE ONLINE 334949 14-OCT-06
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
1 1 304 52428800 1 NO CURRENT 335069 14-OCT-06

2 1 302 52428800 1 YES INACTIVE 335067 14-OCT-06


3 1 303 52428800 1 YES INACTIVE 335068 14-OCT-06
SQL> select * from v$log_history;
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE#
---------- ---------- ---------- ---------- ------------- --------- ------------
1 603754350 1 1 313930 13-OCT-06 314115
2 603754360 1 2 314115 13-OCT-06 314116
3 603754369 1 3 314116 13-OCT-06 314118
4 603754369 1 4 314118 13-OCT-06 314119
5 603754374 1 5 314119 13-OCT-06 314120
6 603754374 1 6 314120 13-OCT-06 314121
7 603754379 1 7 314121 13-OCT-06 314122
8 603754379 1 8 314122 13-OCT-06 314123
9 603754385 1 9 314123 13-OCT-06 314124
10 603754385 1 10 314124 13-OCT-06 314125
11 603754390 1 11 314125 13-OCT-06 314126
303 603822241 1 303 335068 14-OCT-06 335069
SQL> recover datafile 1;
ORA-00279: change 334936 generated at 10/14/2006 16:13:22 needed for thread 1
ORA-00289: suggestion : C:\ORACLE\RDBMS\ARC00208.001
ORA-00280: change 334936 for thread 1 is in sequence #208
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO

SQL*Plus: Release 9.0.1.0.1 - Production on Fri Oct 13 21:20:42 2006


(c) Copyright 2001 Oracle Corporation. All rights reserved.
SP2-0306: Invalid option.
Usage: CONN[ECT] [logon] [AS {SYSDBA|SYSOPER}]
where ::= [/][@] | /
Connected to:
Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
With the Partitioning option
JServer Release 9.0.1.1.1 - Production
SQL> set linesize 7777
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
1 1 1 52428800 1 NO CURRENT 313930 13-OCT-06
2 1 0 52428800 1 YES UNUSED 0
3 1 0 52428800 1 YES UNUSED 0

SQL> select * from v$recover_file;


no rows selected
SQL> select * from v$bachup;
select * from v$bachup
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 251373 12-OCT-06
2 NOT ACTIVE 251381 12-OCT-06
3 NOT ACTIVE 251388 12-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 ACTIVE 314014 13-OCT-06
3 NOT ACTIVE 251388 12-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 ACTIVE 314014 13-OCT-06
3 NOT ACTIVE 251388 12-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 ACTIVE 314014 13-OCT-06
3 NOT ACTIVE 251388 12-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 251396 12-OCT-06

SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 251396 12-OCT-06
SQL> /
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 NOT ACTIVE 314021 13-OCT-06
4 ACTIVE 314027 13-OCT-06
SQL> /

FILE# STATUS CHANGE# TIME


---------- ------------------ ---------- ---------
1 NOT ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 NOT ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 314027 13-OCT-06
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 118255568 bytes
Fixed Size 282576 bytes
Variable Size 62914560 bytes
Database Buffers 54525952 bytes
Redo Buffers 532480 bytes
Database mounted.
Database opened.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.

SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.

SQL> /
System altered.
SQL> /
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 118255568 bytes
Fixed Size 282576 bytes
Variable Size 62914560 bytes
Database Buffers 54525952 bytes
Redo Buffers 532480 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'E:\LOCAL\ORADATA\SYSTEM.DBF'
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
1 1 25 52428800 1 YES INACTIVE 314139 13-OCT-06
2 1 26 52428800 1 NO CURRENT 314140 13-OCT-06
3 1 24 52428800 1 YES INACTIVE 314138 13-OCT-06
SQL> select * from V$log_history;
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE#
---------- ---------- ---------- ---------- ------------- --------- ------------
1 603754350 1 1 313930 13-OCT-06 314115
2 603754360 1 2 314115 13-OCT-06 314116
3 603754369 1 3 314116 13-OCT-06 314118
4 603754369 1 4 314118 13-OCT-06 314119
5 603754374 1 5 314119 13-OCT-06 314120
6 603754374 1 6 314120 13-OCT-06 314121
7 603754379 1 7 314121 13-OCT-06 314122
8 603754379 1 8 314122 13-OCT-06 314123
9 603754385 1 9 314123 13-OCT-06 314124
10 603754385 1 10 314124 13-OCT-06 314125
11 603754390 1 11 314125 13-OCT-06 314126
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE#
---------- ---------- ---------- ---------- ------------- --------- ------------
12 603754390 1 12 314126 13-OCT-06 314127
13 603754395 1 13 314127 13-OCT-06 314128
14 603754395 1 14 314128 13-OCT-06 314129
15 603754401 1 15 314129 13-OCT-06 314130

16 603754401 1 16 314130 13-OCT-06 314131


17 603754406 1 17 314131 13-OCT-06 314132
18 603754406 1 18 314132 13-OCT-06 314133
19 603754411 1 19 314133 13-OCT-06 314134
20 603754412 1 20 314134 13-OCT-06 314135
21 603754417 1 21 314135 13-OCT-06 314136
22 603754417 1 22 314136 13-OCT-06 314137
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE#
---------- ---------- ---------- ---------- ------------- --------- ------------
23 603754422 1 23 314137 13-OCT-06 314138
24 603754422 1 24 314138 13-OCT-06 314139
25 603754445 1 25 314139 13-OCT-06 314140
25 rows selected.
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'E:\LOCAL\ORADATA\SYSTEM.DBF'
SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 ACTIVE 314007 13-OCT-06
2 NOT ACTIVE 314014 13-OCT-06
3 NOT ACTIVE 314021 13-OCT-06
4 NOT ACTIVE 314027 13-OCT-06
SQL> alter * from v$recover_file;
alter * from v$recover_file
*
ERROR at line 1:
ORA-00940: invalid ALTER command
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_ ERROR CHANGE# TIME
---------- ------- ------- ----------------- ---------- ---------
1 ONLINE ONLINE 314007 13-OCT-06
SQL> select * from v$log_history;
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE#
---------- ---------- ---------- ---------- ------------- --------- ------------
1 603754350 1 1 313930 13-OCT-06 314115
2 603754360 1 2 314115 13-OCT-06 314116
3 603754369 1 3 314116 13-OCT-06 314118
4 603754369 1 4 314118 13-OCT-06 314119
5 603754374 1 5 314119 13-OCT-06 314120
6 603754374 1 6 314120 13-OCT-06 314121

7 603754379 1 7 314121 13-OCT-06 314122


8 603754379 1 8 314122 13-OCT-06 314123
9 603754385 1 9 314123 13-OCT-06 314124
10 603754385 1 10 314124 13-OCT-06 314125
11 603754390 1 11 314125 13-OCT-06 314126
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE#
---------- ---------- ---------- ---------- ------------- --------- ------------
12 603754390 1 12 314126 13-OCT-06 314127
13 603754395 1 13 314127 13-OCT-06 314128
14 603754395 1 14 314128 13-OCT-06 314129
15 603754401 1 15 314129 13-OCT-06 314130
16 603754401 1 16 314130 13-OCT-06 314131
17 603754406 1 17 314131 13-OCT-06 314132
18 603754406 1 18 314132 13-OCT-06 314133
19 603754411 1 19 314133 13-OCT-06 314134
20 603754412 1 20 314134 13-OCT-06 314135
21 603754417 1 21 314135 13-OCT-06 314136
22 603754417 1 22 314136 13-OCT-06 314137
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE#
---------- ---------- ---------- ---------- ------------- --------- ------------
23 603754422 1 23 314137 13-OCT-06 314138
24 603754422 1 24 314138 13-OCT-06 314139
25 603754445 1 25 314139 13-OCT-06 314140
25 rows selected.
SQL> select 8 from v$datafile;
8
----------
8
8
8
8
SQL> select * from v$datafile;
FILE# CREATION_CHANGE# CREATION_ TS# RFILE# STATUS ENABLED CHECKPOINT_CHANGE#
CHECKPOIN UNRECOVERABLE_CHANGE# UNRECOVER LAST_CHANGE# LAST_TIME O
---------- ---------------- --------- ---------- ---------- ------- ---------- ------------------ ---------
1 4 06-OCT-06 0 1 SYSTEM READ WRITE 314150 13-OCT-06 0 314150 13-OCT-06
2 5827 06-OCT-06 1 2 ONLINE READ WRITE 314150 13-OCT-06 0 314150 13-OCT-06
3 6752 06-OCT-06 2 3 ONLINE READ WRITE 314150 13-OCT-06 0 314150 13-OCT-06
4 6774 06-OCT-06 3 4 ONLINE READ WRITE 314150 13-OCT-06 0 314150 13-OCT-06
SQL> select namr from V%archivelogs;
select namr from V%archivelogs
*
ERROR at line 1:
ORA-00911: invalid character
SQL> select * from V$archive_logs;

select * from V$archive_logs


*
ERROR at line 1:
ORA-01219: database not open: queries allowed on fixed tables/views only
SQL> select * from V$archive_log;
select * from V$archive_log
*
ERROR at line 1:
ORA-01219: database not open: queries allowed on fixed tables/views only
SQL> ed
Wrote file afiedt.buf
1* select * from V$archived_log
SQL> /
RECID STAMP NAME
---------- ---------- ------------------------------
1 603465266 C:\ORACLE\RDBMS\ARC00002.001
2 603465267 C:\ORACLE\RDBMS\ARC00003.001
3 603465268 C:\ORACLE\RDBMS\ARC00004.001
4 603491991 C:\ORACLE\RDBMS\ARC00005.001
5 603540234 C:\ORACLE\RDBMS\ARC00006.001
6 603668585 C:\ORACLE\RDBMS\ARC00007.001
7 603668586 C:\ORACLE\RDBMS\ARC00008.001
8 603668592 C:\ORACLE\RDBMS\ARC00009.001
9 603669141
10 603670713 C:\ORACLE\RDBMS\ARC00001.001
11 603670715 C:\ORACLE\RDBMS\ARC00002.001
RECID STAMP NAME
---------- ---------- ------------------------------
12 603670719 C:\ORACLE\RDBMS\ARC00003.001
13 603670720 C:\ORACLE\RDBMS\ARC00004.001
14 603670726 C:\ORACLE\RDBMS\ARC00005.001
15 603670728 C:\ORACLE\RDBMS\ARC00006.001
16 603670729 C:\ORACLE\RDBMS\ARC00007.001
17 603670733 C:\ORACLE\RDBMS\ARC00008.001
18 603670733 C:\ORACLE\RDBMS\ARC00009.001
19 603670737 C:\ORACLE\RDBMS\ARC00010.001
20 603670737 C:\ORACLE\RDBMS\ARC00011.001
21 603670742 C:\ORACLE\RDBMS\ARC00012.001
22 603670743 C:\ORACLE\RDBMS\ARC00013.001
RECID STAMP NAME
---------- ---------- ------------------------------
23 603670748 C:\ORACLE\RDBMS\ARC00014.001

24 603670748 C:\ORACLE\RDBMS\ARC00015.001
25 603670753 C:\ORACLE\RDBMS\ARC00016.001
26 603670753 C:\ORACLE\RDBMS\ARC00017.001
27 603670759 C:\ORACLE\RDBMS\ARC00018.001
28 603670759 C:\ORACLE\RDBMS\ARC00019.001
29 603670759 C:\ORACLE\RDBMS\ARC00020.001
30 603670764 C:\ORACLE\RDBMS\ARC00021.001
31 603670764 C:\ORACLE\RDBMS\ARC00022.001
32 603670769 C:\ORACLE\RDBMS\ARC00023.001
33 603670769 C:\ORACLE\RDBMS\ARC00024.001
RECID STAMP NAME
---------- ---------- ------------------------------
34 603671995 C:\ORACLE\RDBMS\ARC00025.001
35 603672005 C:\ORACLE\RDBMS\ARC00026.001
36 603672006 C:\ORACLE\RDBMS\ARC00027.001
37 603672011 C:\ORACLE\RDBMS\ARC00028.001
38 603672011 C:\ORACLE\RDBMS\ARC00029.001
39 603672016 C:\ORACLE\RDBMS\ARC00030.001
40 603672017 C:\ORACLE\RDBMS\ARC00031.001
41 603673030 C:\ORACLE\RDBMS\ARC00032.001
42 603721691 C:\ORACLE\RDBMS\ARC00033.001
43 603722884 C:\ORACLE\RDBMS\ARC00034.001
44 603722886 C:\ORACLE\RDBMS\ARC00035.001
RECID STAMP NAME
---------- ---------- ------------------------------
45 603722890 C:\ORACLE\RDBMS\ARC00036.001
46 603722890 C:\ORACLE\RDBMS\ARC00037.001
47 603726744 C:\ORACLE\RDBMS\ARC00038.001
48 603726745 C:\ORACLE\RDBMS\ARC00039.001
49 603726747 C:\ORACLE\RDBMS\ARC00040.001
50 603728810 C:\ORACLE\RDBMS\ARC00041.001
51 603737242 C:\ORACLE\RDBMS\ARC00042.001
52 603737242 C:\ORACLE\RDBMS\ARC00043.001
53 603737242 C:\ORACLE\RDBMS\ARC00044.001
54 603737243 C:\ORACLE\RDBMS\ARC00045.001
55 603737244 C:\ORACLE\RDBMS\ARC00046.001
RECID STAMP NAME
---------- ---------- ------------------------------
56 603737244 C:\ORACLE\RDBMS\ARC00047.001
57 603737246 C:\ORACLE\RDBMS\ARC00048.001
58 603737251 C:\ORACLE\RDBMS\ARC00049.001
59 603737252 C:\ORACLE\RDBMS\ARC00050.001
60 603737253 C:\ORACLE\RDBMS\ARC00051.001
61 603737255 C:\ORACLE\RDBMS\ARC00052.001
62 603738258 C:\ORACLE\RDBMS\ARC00053.001
63 603738298 C:\ORACLE\RDBMS\ARC00054.001
64 603738303 C:\ORACLE\RDBMS\ARC00055.001

65 603738303 C:\ORACLE\RDBMS\ARC00056.001
66 603738303 C:\ORACLE\RDBMS\ARC00057.001
RECID STAMP NAME
---------- ---------- ------------------------------
67 603738308 C:\ORACLE\RDBMS\ARC00058.001
68 603738309 C:\ORACLE\RDBMS\ARC00059.001
69 603738314 C:\ORACLE\RDBMS\ARC00060.001
70 603738314 C:\ORACLE\RDBMS\ARC00061.001
71 603738314 C:\ORACLE\RDBMS\ARC00062.001
72 603738319 C:\ORACLE\RDBMS\ARC00063.001
73 603738325 C:\ORACLE\RDBMS\ARC00064.001
74 603739549 C:\ORACLE\RDBMS\ARC00065.001
75 603739882 C:\ORACLE\RDBMS\ARC00001.001
76 603739883 C:\ORACLE\RDBMS\ARC00002.001
77 603739883 C:\ORACLE\RDBMS\ARC00003.001
RECID STAMP NAME
---------- ---------- ------------------------------
78 603739884 C:\ORACLE\RDBMS\ARC00004.001
79 603739884 C:\ORACLE\RDBMS\ARC00005.001
80 603739890 C:\ORACLE\RDBMS\ARC00006.001
81 603739890 C:\ORACLE\RDBMS\ARC00007.001
82 603739896 C:\ORACLE\RDBMS\ARC00008.001
83 603749449 C:\ORACLE\RDBMS\ARC00009.001
84 603749779 C:\ORACLE\RDBMS\ARC00010.001
85 603749780 C:\ORACLE\RDBMS\ARC00011.001
86 603749784 C:\ORACLE\RDBMS\ARC00012.001
87 603749785 C:\ORACLE\RDBMS\ARC00013.001
88 603749791 C:\ORACLE\RDBMS\ARC00014.001
RECID STAMP NAME
---------- ---------- ------------------------------
89 603749795 C:\ORACLE\RDBMS\ARC00015.001
90 603749796 C:\ORACLE\RDBMS\ARC00016.001
91 603749796 C:\ORACLE\RDBMS\ARC00017.001
92 603749797 C:\ORACLE\RDBMS\ARC00018.001
93 603749803 C:\ORACLE\RDBMS\ARC00019.001
94 603749804 C:\ORACLE\RDBMS\ARC00020.001
95 603749809 C:\ORACLE\RDBMS\ARC00021.001
96 603749810 C:\ORACLE\RDBMS\ARC00022.001
97 603751702 C:\ORACLE\RDBMS\ARC00023.001
98 603751703 C:\ORACLE\RDBMS\ARC00024.001
99 603751707 C:\ORACLE\RDBMS\ARC00025.001
RECID STAMP NAME
---------- ---------- ------------------------------
100 603751707 C:\ORACLE\RDBMS\ARC00026.001

101 603751712 C:\ORACLE\RDBMS\ARC00027.001


102 603751712 C:\ORACLE\RDBMS\ARC00028.001
103 603751718 C:\ORACLE\RDBMS\ARC00029.001
104 603751718 C:\ORACLE\RDBMS\ARC00030.001
105 603751723 C:\ORACLE\RDBMS\ARC00031.001
106 603751723 C:\ORACLE\RDBMS\ARC00032.001
107 603751728 C:\ORACLE\RDBMS\ARC00033.001
108 603751728 C:\ORACLE\RDBMS\ARC00034.001
109 603751733 C:\ORACLE\RDBMS\ARC00035.001
110 603751733 C:\ORACLE\RDBMS\ARC00036.001
RECID STAMP NAME
---------- ---------- ------------------------------
111 603751739 C:\ORACLE\RDBMS\ARC00037.001
112 603751753 C:\ORACLE\RDBMS\ARC00038.001
113 603752151
114 603754350 C:\ORACLE\RDBMS\ARC00001.001
115 603754360 C:\ORACLE\RDBMS\ARC00002.001
116 603754369 C:\ORACLE\RDBMS\ARC00003.001
117 603754369 C:\ORACLE\RDBMS\ARC00004.001
118 603754374 C:\ORACLE\RDBMS\ARC00005.001
119 603754374 C:\ORACLE\RDBMS\ARC00006.001
120 603754379 C:\ORACLE\RDBMS\ARC00007.001
121 603754379 C:\ORACLE\RDBMS\ARC00008.001
RECID STAMP NAME
---------- ---------- ------------------------------
122 603754385 C:\ORACLE\RDBMS\ARC00009.001
123 603754385 C:\ORACLE\RDBMS\ARC00010.001
124 603754390 C:\ORACLE\RDBMS\ARC00011.001
125 603754390 C:\ORACLE\RDBMS\ARC00012.001
126 603754395 C:\ORACLE\RDBMS\ARC00013.001
127 603754395 C:\ORACLE\RDBMS\ARC00014.001
128 603754401 C:\ORACLE\RDBMS\ARC00015.001
129 603754401 C:\ORACLE\RDBMS\ARC00016.001
130 603754406 C:\ORACLE\RDBMS\ARC00017.001
131 603754406 C:\ORACLE\RDBMS\ARC00018.001
132 603754412 C:\ORACLE\RDBMS\ARC00019.001
RECID STAMP NAME
---------- ---------- ------------------------------
133 603754412 C:\ORACLE\RDBMS\ARC00020.001
134 603754417 C:\ORACLE\RDBMS\ARC00021.001
135 603754417 C:\ORACLE\RDBMS\ARC00022.001
136 603754422 C:\ORACLE\RDBMS\ARC00023.001
137 603754422 C:\ORACLE\RDBMS\ARC00024.001
138 603754445 C:\ORACLE\RDBMS\ARC00025.001
138 rows selected.

SQL> show parameter log_Archive_form


NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_format                    string      ARC%S.%T

Recoveryfrommissingorcorruptedredologgroup
Recovery from missing or corrupted redo log group:
Case 1: A multiplexed copy of the missing log is available.
If a redo log is missing, it should be restored from a multiplexed copy, if possible. Here's an example,
where I attempt to start up from SQL*Plus when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLE_DATA\LOGS\ORCL\REDO03.LOG'
SQL>
To fix this we simply copy REDO03.LOG from its multiplexed location on E: to the above location on D:.
SQL> alter database open;
Database altered.
SQL>
That's it - the database is open for use.

Case 2: Only a backup copy of the redo log file is available


If a redo log is missing, it should be restored from a cold backup (if a redo log backup is available in the
Sunday cold backup). Here's an example, where I attempt to start up from SQL*Plus when a redo log is
missing:
SQL> startup
ORACLE instance started.

Total System Global Area 122755896 bytes


Fixed Size 453432 bytes
Variable Size 67108864 bytes
Database Buffers 54525952 bytes
Redo Buffers 667648 bytes
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'C:\O\ORADATA\REDO01.LOG'

SQL>
To fix this we simply copy REDO01.LOG from the cold backup and then clear the unarchived log group:
SQL> alter database clear unarchived logfile group 1;
SQL> alter database open;
Database altered.
SQL>
That's it - the database is open for use.

Case 3: All redo log files, or any one redo log file, are missing and we have no backup copy of the redo log
and no multiplexed redo log member.
If all or some redo logs are missing, an incomplete recovery is required. Here's an example, where I attempt to
start up from SQL*Plus when a redo log is missing:
SQL> startup
ORACLE instance started.

Total System Global Area 122755896 bytes


Fixed Size 453432 bytes
Variable Size 67108864 bytes
Database Buffers 54525952 bytes
Redo Buffers 667648 bytes
Database mounted.

ORA-00313: open failed for members of log group 1 of thread 1


ORA-00312: online log 1 thread 1: 'C:\O\ORADATA\REDO01.LOG'

SQL> recover database until cancel;


Media recovery complete.

SQL> alter database open resetlogs;

Database altered.

Introduction
In general, backup and recovery refers to the various strategies and procedures involved in protecting
your database against data loss and reconstructing the data should that loss occur. The reconstruction of
data is achieved through media recovery, which refers to the various operations involved in restoring,
rolling forward, and rolling back a backup of database files.
Backups Concept:
A backup is a copy of data. This copy can include important parts of the database such as the control file
and datafiles. A backup is a safeguard against unexpected data loss and application errors. If you lose the
original data, then you can reconstruct it by using a backup.
Backups are divided into physical backups and logical backups. Physical backups, which are the primary
concern in a backup and recovery strategy, are copies of physical database files. You can make physical
backups with either the Recovery Manager (RMAN) utility or operating system utilities. In contrast, logical
backups contain logical data (for example, tables and stored procedures) extracted with the Oracle Export
utility and stored in a binary file. You can use logical backups to supplement physical backups.
Consistent backup
A consistent backup exhibits the following three properties:
All headers of datafiles that belong to writable tablespaces have the same checkpoint SCN.
These datafiles don't have any changes past this checkpoint SCN; that is, they are not fuzzy.
Lastly, the SCNs of the datafile headers match the checkpoint information in the control files.
Inconsistent backup
An inconsistent backup is (almost by definition) one in which at least one of the mentioned properties is not
exhibited. That is, some files contain changes that were made after the files were checkpointed. A
recovery is needed in order to make the backup consistent. An inconsistent backup is created by a hot
backup, or when a backup is made after the database has crashed (or been shut down abort).
Restore

A restore is, if you want, the opposite of a backup: the backed up files (that is, the backup) are copied back
to their original locations. Usually, a file is restored after a media failure. However, there are other
situations where a file is restored, for example a point-in-time recovery.
Types of recovery
There are four types of recovery:
Datafile media recovery (also called media recovery)
Crash recovery
Instance recovery
Disaster recovery
Recovery Concept:
To restore a physical backup of a datafile or control file is to reconstruct it and make it available to the
Oracle database server. To recover a restored datafile is to update it by applying archived redo logs and
online redo logs, that is, records of changes made to the database after the backup was taken. If you use
RMAN, then you can also recover restored datafiles with incremental backups, which are backups of a
datafile that contain only blocks that changed after a previous incremental backup.
After the necessary files are restored, media recovery must be initiated by the user. Media recovery can
use both archived redo logs and online redo logs to recover the datafiles. If you use SQL*Plus, then you
can run the RECOVER command to perform recovery. If you use RMAN, then you run the RMAN
RECOVER command to perform recovery.
Unlike media recovery, Oracle performs crash recovery and instance recovery automatically after an
instance failure. Crash and instance recovery recover a database to its transaction-consistent state just
before instance failure. By definition, crash recovery is the recovery of a database in a single-instance
configuration or an Oracle Real Application Clusters configuration in which all instances have crashed.
In contrast, instance recovery is the recovery of one failed instance by a live instance in an Oracle Real
Application Clusters configuration.
Crash and instance recovery involve two distinct operations: rolling forward the current, online datafiles by
applying both committed and uncommitted transactions contained in online redo records, and then rolling
back changes made in uncommitted transactions to their original state. Because crash and instance
recovery are automatic, this manual will not discuss these operations.
Database Instance Failure
Database instance failure occurs when a problem prevents an Oracle database instance from continuing
to run. An instance failure can result from a hardware problem, such as a power outage, or a software
problem, such as an operating system crash. Instance failure also results when you issue a SHUTDOWN
ABORT or STARTUP FORCE statement.
Mechanics of Instance and Crash Recovery
When one or more instances fail, Oracle automatically recovers the lost changes associated with the
instance or instances. Crash or instance recovery consists of the following steps:
1. Rolling forward to recover data that has not been recorded in the datafiles, yet has been recorded in
the online redo log, including changes to undo blocks. This phase is called cache recovery.
2. Opening the database. Instead of waiting for all transactions to be rolled back before making the
database available, Oracle allows the database to be opened as soon as cache recovery is complete. Any

data that is not locked by unrecovered transactions is immediately available.


3. Marking all transactions systemwide that were active at the time of failure as DEAD and marking the
rollback or undo segments containing these transactions as PARTLY AVAILABLE.
4. Rolling back dead transactions as part of SMON recovery. This phase is called transaction recovery.
5. Resolving any pending distributed transactions undergoing a two-phase commit at the time of the
instance failure.
6. As new transactions encounter rows locked by dead transactions, they can automatically roll back the
dead transaction to release the locks. If you are using Fast-Start Recovery, then only the data block is
immediately rolled back, as opposed to the entire transaction.

RMANBackupTerminology
Oracle provides a tool for Database backup and restore operation is called RMAN.
RMAN is a client/server application that uses database server sessions to perform backup and recovery.
It stores metadata about its operations in the control file of the target database and, optionally, in a
recovery catalog schema in an Oracle database.
Difference between RMAN and Traditional backup methods
RMAN is Oracle's backup and recovery utility. With RMAN, backups become as easy as:
BACKUP DATABASE;
RMAN reduces the complexity of backup and recovery. RMAN can determine what needs to be backed
up or restored.
Why should we use RMAN?
Ability to perform incremental backups.Ability to recover one block of a datafile.
Ability to perform the backup and restore with parallelization.
Ability to automatically delete archived redo logs after they are backed up.
Ability to automatically backup the control file and the SPFILE.
Ability to restart a failed backup without having to start from the beginning.
Ability to verify the integrity of the backup.
Ability to test the restore process without having to actually perform the restore.
Comparison of RMAN Automated and User-Managed Procedures
Important:
By using operating system commands for User-Managed Backup and Recovery, a DBA manually keeps
track of all database files and backups. But RMAN performs these same tasks automatically.
Understanding the RMAN Architecture
An oracle RMAN comprises of RMAN EXECUTABLE This could be present and fired even through client
side, TARGET DATABASE (This is the database which needs to be backed up) and RECOVERY
CATALOG (Recovery catalog is optional otherwise backup details are stored in target database control
file .)
About the RMAN Repository
The RMAN repository is a set of metadata that RMAN uses to store information about the target database
and its backup and recovery operations. RMAN stores information about:

Backup sets and pieces


Image copies (including archived redo logs)
Proxy copies
The target database schema
Persistent configuration settings
If you start RMAN without specifying either CATALOG or NOCATALOG on the command line, then RMAN
makes no connection to a repository. If you run a command that requires the repository, and if no
CONNECT CATALOG command has been issued yet, then RMAN automatically connects in the default
NOCATALOG mode. After that point, the CONNECT CATALOG command is not valid in the session.
Types of Database Connections
You can connect to the following types of databases.
Target database
RMAN connects you to the target database with the SYSDBA privilege. If you do not have this privilege,
then the connection fails.
Recovery catalog database
This database is optional: you can also use RMAN with the default NOCATALOG option.
Auxiliary database
You can connect to a standby database, duplicate database, or auxiliary instance (standby instance or
tablespace point-in-time recovery instance).
Note:
A SYSDBA privilege is not required when connecting to the recovery catalog. The only requirement is
that the RECOVERY_CATALOG_OWNER role be granted to the schema owner.
Using Basic RMAN Commands
After you have learned how to connect to a target database, you can immediately begin performing
backup and recovery operations. Use the examples in this section to go through a basic backup and
restore scenario using a test database. These examples assume the following:
The test database is in ARCHIVELOG mode.
You are running in the default NOCATALOG mode.
The RMAN executable is running on the same host as the test database.
Connecting to the Target Database
rman TARGET /
If the database is already mounted or open, then RMAN displays output similar to the following:
Recovery Manager: Release 9.2.0.0.0
connected to target database: PROD (DBID=1237603294)
Reporting the Current Schema of the Target Database
In this example, you generate a report describing the target datafiles. Run the report schema command
as follows:
RMAN> REPORT SCHEMA;
RMAN displays the datafiles currently in the target database.
Backing Up the Database

In this task, you back up the database to the default disk location. Because you do not specify the format
parameter in this example, RMAN assigns the backup a unique filename.
You can make two basic types of backups: full and incremental.
Making a Full Backup
Run the backup command at the RMAN prompt as follows to make a full backup of the datafiles, control
file, and current server parameter file (if the instance is started with a server parameter file) to the default
device type:
RMAN> BACKUP DATABASE;
Making an Incremental Backup
Incremental backups are a convenient way to conserve storage space because they back up only
database blocks that have changed. RMAN compares the current datafiles to a base backup, also called
a level 0 backup, to determine which blocks to back up.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
Backing Up Archived Logs
Typically, database administrators back up archived logs on disk to a third-party storage medium such as
tape. You can also back up archived logs to disk. In either case, you can delete the input logs
automatically after the backup completes. To back up all archived logs and delete the input logs (from the
primary archiving destination only), run the backup command at the RMAN prompt as follows:
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
Listing Backups and Copies
To list the backup sets and image copies that you have created, run the list command as follows:
RMAN> LIST BACKUP;
To list image copies, run the following command:
RMAN> LIST COPY;
Validating the Restore of a Backup
Check that you are able to restore the backups that you created without actually restoring them. Run the
RESTORE ... VALIDATE command as follows:
RMAN> RESTORE DATABASE VALIDATE;

DifferentTypeofRMANBackup
Full Backups
A full backup reads the entire file and copies all blocks into the backup set, only skipping datafile blocks
that have never been used.
About Incremental Backups
RMAN can create backups that contain only the blocks changed since a previous backup. You can use
RMAN to create incremental backups of datafiles, tablespaces, or the whole database.

How Incremental Backups Work


Each data block in a datafile contains a system change number (SCN), which is the SCN at which the
most recent change was made to the block. During an incremental backup, RMAN reads the SCN of each
data block in the input file and compares it to the checkpoint SCN of the parent incremental backup.
RMAN reads the entire file every time whether or not the blocks have been used.
The parent backup is the backup that RMAN uses for comparing the SCNs. If the current incremental is a
differential backup at level n, then the parent is the most recent incremental of level n or less. If the
current incremental is a cumulative backup at level n, then the parent is the most recent incremental of
level n-1 or less. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the
parent, then RMAN copies the block.
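You can see the checkpoint SCNs that drive this comparison. For example, the current checkpoint SCN of
each datafile is visible in V$DATAFILE_HEADER (a sketch; compare it with the INCREMENTAL_CHANGE#
values recorded in V$BACKUP_DATAFILE, shown later in this section):
SQL> select file#, checkpoint_change# from v$datafile_header;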
Multilevel Incremental Backups
RMAN can create multilevel incremental backups. Each incremental level is denoted by an integer, for
example, 0, 1, 2, and so forth. A level 0 incremental backup, which is the base for subsequent incremental
backups, copies all blocks containing data. The only difference between a level 0 backup and a full
backup is that a full backup is never included in an incremental strategy.
If no level 0 backup exists when you run a level 1 or higher backup, RMAN makes a level 0 backup
automatically to serve as the base.
The benefit of performing multilevel incremental backups is that RMAN does not back up all blocks all of
the time.
Differential Incremental Backups
In a differential level n incremental backup, RMAN backs up all blocks that have changed since the most
recent backup at level n or lower.
For example, in a differential level 2 backup, RMAN determines which level 2 or level 1 backup occurred
most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies
all blocks changed since the base level 0 backup. If no level 0 backup is available, RMAN makes a new
base level 0 backup for this file.
Case 1: if you want to implement incremental backup strategy as a DBA in your organization:
Use Command for incremental Level Backup
RMAN> backup incremental level 0 database tag="SUNDAY";
RMAN> backup incremental level 3 database tag="MONDAY";
RMAN> backup incremental level 3 database tag="TUESDAY";
RMAN> backup incremental level 3 database tag="WEDNESDAY";
RMAN> backup incremental level 2 database tag="THURSDAY";
RMAN> backup incremental level 3 database tag="FRIDAY";
RMAN> backup incremental level 3 database tag="SATURDAY";
Backup Example
You can view your incremental backup details by using the following query:
select incremental_level, incremental_change#, checkpoint_change#, blocks from v$backup_datafile;
Result of above Query:
INC_LEVEL  INC_CHANGE#  CHECKPOINT_CHANGE#  BLOCKS
        0            0              271365   59595
        3       271365              271369       2
        3       271369              271371       1
        3       271371              271374       2
        2       271365              271378       2
        3       271378              271380       1
        3       271380              271383       2
Cumulative Incremental Backups
RMAN provides an option to make cumulative incremental backups at level 1 or greater. In a cumulative
level n backup, RMAN backs up all the blocks used since the most recent backup at level n-1 or lower.
For example, in cumulative level 2 backups, RMAN determines which level 1 backup occurred most
recently and copies all blocks changed since that backup. If no level 1 backup is available, RMAN copies
all blocks changed since the base level 0 backup.
Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need
one incremental backup from any particular level. Cumulative backups require more space and time than
differential backups, however, because they duplicate the work done by previous backups at the same
level.
Case 1: if you want to implement Cumulative backup strategy as a DBA in your organization:
Use Command for Cumulative Level Backup
backup incremental level=0 database tag='base';
backup incremental level=2 cumulative database tag='monday';
backup incremental level=2 cumulative database tag='tuesday';
backup incremental level=2 cumulative database tag='wednesday';
backup incremental level=2 cumulative database tag='thursday';
backup incremental level=2 cumulative database tag='friday';
backup incremental level=2 cumulative database tag='saturday';
backup incremental level=1 cumulative database tag='weekly';
Incremental backup implementation
For each datafile, RMAN determines the incremental SCN by finding the backup with the highest
checkpoint SCN that belongs to the same incarnation of the datafile, matches the given file#, is an
incremental backup or copy at level N or less (if noncumulative) or at level N-1 or less (if cumulative),
and, in the case of a backup set, belongs to an available backup set.
Incremental Backup Strategy
You can implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a
cumulative level 1 backup is taken weekly, and a cumulative level 2 is taken daily. In this scheme, you
never have to apply more than a day's worth of redo for complete recovery. When deciding how often to
take full or level 0 backups, a good rule of thumb is to take a new level 0 whenever 50% or more of the
data has changed. If the rate of change to your database is predictable, then you can observe the size of

your incremental backups to determine when a new level 0 is appropriate. The following query displays
the number of blocks written to a backup set for each datafile with at least 50% of its blocks backed up:
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0 AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;
Compare the number of blocks in differential or cumulative backups to a base level 0 backup. For
example, if you only create level 1 cumulative backups, then when the most recent level 1 backup is
about half of the size of the base level 0 backup, take a new level 0.
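As a sketch, the monthly/weekly/daily scheme described above could be implemented with commands along
the following lines (the tags are illustrative; scheduling itself would be handled by cron, a job scheduler, or
similar):
RMAN> backup incremental level 0 database tag='monthly_level0';
RMAN> backup incremental level 1 cumulative database tag='weekly_level1';
RMAN> backup incremental level 2 cumulative database tag='daily_level2';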

Case1:Amultiplexedcopyofthemissinglogisavailable.
If a redo log is missing, it should be restored from a multiplexed copy, if possible. Here's an example,
where I attempt to startup from SQLPlus when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLE_DATA\LOGS\ORCL\REDO03.LOG'
SQL>
To fix this we simply copy REDO03.LOG from its multiplexed location on E: to the above
location on D:.
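For example, from the operating system prompt (the source path on E: below is illustrative; use the actual
location of your multiplexed member):
C:\> copy E:\ORACLE_DUP_DEST\LOGS\ORCL\REDO03.LOG D:\ORACLE_DATA\LOGS\ORCL\REDO03.LOG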
SQL> alter database open;
Database altered.
SQL>
That's it - the database is open for use.

Case2:OnlyAredologfilebackupcopyavailable(butRedo
Logisnotcurrentandarchived)
If a redo log is missing, it should be restored from a cold backup (if a redo log backup is available in the
Sunday cold backup), if possible. Here's an example, where I attempt to startup from SQLPlus when a
redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 122755896 bytes
Fixed Size 453432 bytes
Variable Size 67108864 bytes
Database Buffers 54525952 bytes
Redo Buffers 667648 bytes
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1

ORA-00312: online log 1 thread 1: 'C:\O\ORADATA\REDO01.LOG'


SQL>
To fix this we simply copy REDO01.LOG from Cold Backup.
SQL> alter database clear unarchived logfile group 1;
SQL> alter database open;
Database altered.
SQL>

Case3:IfmissingRedologisCURRENT
When a current redo log is lost, the transactions in the log file are lost before making it to the archived
logs. Since a DB startup can no longer perform a crash recovery (because the surviving online log
files are not sufficient to bring the database to a consistent state), an incomplete media recovery is the only
option.
We will need to restore the DB from a previous backup and restore to the point just before the lost redo
log file. The DB will need to be opened in RESETLOGS mode. There is some transaction loss in this
scenario.
Step 1: Restore Database (All control file, log file and data file) from Cold Backup
Step 2: Startup database with mount option.
SQL> startup mount
Step 3: Recover database using backup control file option.
SQL> recover database using backup controlfile;
Step 4: Cancel the recovery.
SQL> recover database using backup controlfile until cancel;
Step 5: Open the database with the resetlogs option:
SQL> alter database open resetlogs;

Case4:Allredologfileoranyoneredologfilemissingandwe
havenobackupcopyofredologfileornomultiplexingredolog
file.
If all redo logs, or some of them, are missing, here's an example where I attempt to startup from SQLPlus
when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 122755896 bytes
Fixed Size 453432 bytes
Variable Size 67108864 bytes
Database Buffers 54525952 bytes
Redo Buffers 667648 bytes
Database mounted.

ORA-00313: open failed for members of log group 1 of thread 1


ORA-00312: online log 1 thread 1: 'C:\O\ORADATA\REDO01.LOG'
SQL> recover database until cancel;
Media recovery complete.
SQL> alter database open resetlogs;
Database altered.

Case5:RedologisnotCURRENTandnotarchived,means
missingfileisinACTIVEstatus.
If the redo log is lost before it is archived, then the few transactions in that redo log are lost too.
Since a crash recovery is no longer possible, a media recovery is required.
The DB needs to be restored from a previous backup and recovered until the last available archive log is
applied. The reason I say crash recovery is not possible is that by definition a crash recovery should need
only online redo logs to recover the database to a consistent state. If a recovery needs a log file that is not
online (like archive logs) then the recovery is no longer a crash recovery. It becomes media recovery. In
our current case, since we are applying the archives and using the previous backup to restore, it is not a
crash recovery.
Also since we cannot recover database completely as some data is lost in the redo log files before being
archived, this becomes an incomplete recovery too. The DB needs to be opened up in RESETLOGS
mode.
The steps to recover are below:
Step 1: Restore the DB from previous backup and open the DB in mount state.
Step 2 : Recover database
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
Step 3: Open database with resetlog option
SQL> ALTER DATABASE OPEN RESETLOGS;

Case1:Amultiplexedcopyofthecontrolfileisavailable.
On startup Oracle must read the control file in order to find out where the datafiles and
online logs are located. Oracle expects to find control files at locations specified in the
CONTROL_FILE initialisation parameter. The instance will fail to mount the database if any
one of the control files are missing or corrupt. Here's an example:
SQL> startup
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
ORA-00205: error in identifying controlfile, check alert log for more info
SQL>
On checking the alert log, as suggested, we find the following:
ORA-00202: controlfile: 'e:\oracle_dup_dest\controlfile\ORCL\control02.ctl'
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 5447783)
The above corruption was introduced by manually editing the control file when the
database was closed.
The solution is simple, provided you have at least one uncorrupted control file - replace the
corrupted control file with a copy using operating system commands. Remember to rename
the copied file. The database should now start up without any problems.

Case2:IfAllControlfileslossorcorruptedandyouhaveno
backup
If all control files are lost or corrupted and you have no backup (and you have not taken a control file
backup using the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command), then we will have to
restore the cold backup and recover.
Step 1 Shutdown the database with the abort option
SQL> shutdown abort
Step 2 Restore all files (datafiles, control files and redo log files) from the cold backup
Step 3 Startup the database with the mount option
SQL> startup mount
Step 4 Recover the database
SQL> recover database using backup controlfile;
SQL> recover database using backup controlfile until cancel;
Step 5 Open the database
SQL> alter database open resetlogs;

Case3:IfAllControlfilelossorcorruptedandyouhavecontrol
filebackupbyusingalterdatabasebackupcontrolfiletotrace
command.
If all control files are lost or corrupted and you have a control file backup created with the ALTER
DATABASE BACKUP CONTROLFILE TO TRACE command, then start the database with the nomount
option, recreate the control file from the trace-file script, recover the database, and open it.
Step 1 Shutdown database with abort options
SQL> shutdown abort
Step 2 Startup the database with nomount option
SQL> startup nomount
Step 3 Create the controlfile from the trace-file backup script (see the sketch after these steps)
Step 4 Recover the database
SQL> recover database using backup controlfile;
SQL> recover database using backup controlfile until cancel;

Step 5 Open the database
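For reference, the trace script produced by ALTER DATABASE BACKUP CONTROLFILE TO TRACE contains a
CREATE CONTROLFILE statement that is run in step 3 while the instance is in nomount state. A trimmed
sketch is shown below; the database name, file names and sizes are illustrative and must match your own
environment:
SQL> startup nomount
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
  MAXLOGFILES 16
  MAXDATAFILES 100
  LOGFILE
    GROUP 1 'D:\ORACLE_DATA\LOGS\ORCL\REDO01.LOG' SIZE 100M,
    GROUP 2 'D:\ORACLE_DATA\LOGS\ORCL\REDO02.LOG' SIZE 100M
  DATAFILE
    'D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF',
    'D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF'
  CHARACTER SET WE8ISO8859P1;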

Case1:Recoveryfromcorruptedormissingdatafile
This scenario deals with a situation where a datafile has gone missing, or is corrupted
beyond repair. For concreteness, we look at a case where a datafile is missing. Below is a
transcript of an SQL Plus session that attempts to open a database with a missing datafile
(typed commands in bold, lines in italics are my comments, all other lines are feedback from
SQL Plus):
--open SQL Plus from the command line without
--logging on to database
C:\>sqlplus /nolog
SQL*Plus: Release 9.2.0.4.0 - Production on Tue Jan 25 14:52:41 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connect to the idle Oracle process as a privileged user and start up instance
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF'
SQL>
The error message tells us that file# 4 is missing. Note that although the startup command
has failed, the database is in the mount state.
Step 1: Check data file Status.
SQL> select * from v$backup;
     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 NOT ACTIVE                  0
         2 NOT ACTIVE                  0
         3 NOT ACTIVE                  0
         4 FILE NOT FOUND              0
         5 NOT ACTIVE                  0
Step 2: Restore datafile from cold backup.

Step 3: Check recover file status:


SQL> select * from V$recover_file;
     FILE# ONLINE  ONLINE_STATUS ERROR         CHANGE# TIME
---------- ------- ------------- --------- ----------- ---------
         4 ONLINE  ONLINE                       529340 14-APR-11

Step 4: Recover datafile


SQL> recover datafile 4;
Step 5: Open the database.
SQL> alter database open;

Case2:Recoverformcorruptedormissingdatafile(When
databaseOpen)
Error At run time:
SQL> conn / as sysdba
Connected.
SQL> select * from scott.emp;
select * from scott.emp
*
ERROR at line 1:
ORA-01116: error in opening database file 4
ORA-01110: data file 4: '/home/oracle/DB/INDIAN/users01.dbf'
ORA-27041: unable to open file
Intel SVR4 UNIX Error: 2: No such file or directory
Additional information: 3
Step 1: Check data file Status.
SQL> select * from v$backup;
     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 NOT ACTIVE                  0
         2 NOT ACTIVE                  0
         3 NOT ACTIVE                  0
         4 FILE NOT FOUND              0
         5 NOT ACTIVE                  0
Step 2: Offline the missing datafile
SQL> alter database datafile 4 offline;
Step 3: Restore the datafile from backup, then recover it
SQL> recover datafile 4;
Step 4: Online the datafile
SQL> alter database datafile 4 online;

Case3:lostdatafileandwehavenobackup(Whendatabase
open)
Step 1 Offline the datafile
SQL> alter database datafile '<datafile_name>' offline;
Step 2 Create the datafile
SQL> alter database create datafile '<datafile_name>' as '<datafile_name>' size <size>;
Step 3 Recover the datafile
SQL> recover datafile '<datafile_name>';
Step 4 Online the datafile
SQL> alter database datafile '<datafile_name>' online;

Case4:LostdatafileandNobackup(Whendatabaseabnormal
terminated)
Step 1 Startup the database with mount option
SQL> startup mount
Step 2 Recreate the datafile
SQL> alter database create datafile '<datafile_name>' as '<datafile_name>' size <size>;
Step 3 Online the datafile
SQL> alter database datafile '<datafile_name>' online;
Step 4 Now open the database
SQL> alter database open;

Case5:Ifalldatabaselostandwehavenobackupofnewly
createddatafile.
Step 1 Restore database and mount the database
Step 2 Recover the database
SQL> recover database using backup controlfile;
Step 3 Check file status
SQL> select file#,name,status from v$datafile;

Step 4 Create the datafile
SQL> alter database create datafile '<datafile_name>' as '<datafile_name>' size <size>;
Step 5 Recover the database
SQL> recover automatic database using backup controlfile until cancel;

RecoveryfromDisaster
Disaster Recovery
How to perform a disaster recovery of an Oracle server using Manual Backup Method
1. Pre-requisites:
The following are the pre-requisites to fully recover an Oracle database server in the event a
disaster occurs:
A FULL Oracle database backup (all data file, control file and redo log file) using copy
command. When making this backup, make sure the Oracle database is shut down. This
backup set will contain a FULL CLOSED Oracle Database backup. A FULL Oracle database
backup should be performed every time any changes are made to the physical and/or logical
structure of the Oracle database and forms the base for recovering the database server to a
fully working order.
Archive log file backup up to the time of the server failure.
A control file backup up to the time of the server failure.
Scenario
Your database is running in archivelog mode. Every Sunday you take a full/cold backup of the database
(all datafiles, control files and redo log files), and every day from Monday to Saturday you take only an
archive log file backup. If your database server is destroyed on Saturday, how will you recover the data
up to Saturday?
Steps
1. Build the server
You need a server to host the database, so the first step is to acquire or build the new
machine. This is not strictly a DBA task, so we won't delve into details here. The main point
to keep in mind is that the replacement server should, as far as possible, be identical to the
old one. In particular, pay attention to the following areas:
Disk layout and capacity: Ideally the server should have the same number of disks as the
original. This avoids messy renaming of files during recovery. Obviously, the new disks
should also have enough space to hold all software and data that was on the original server.
Operating system, service pack and patches: The operating system environment should be
the same as the original, right up to service pack and patch level.
Memory: The new server must have enough memory to cater to Oracle and operating
system / other software requirements. Oracle memory structures (Shared pool, db buffer

caches etc) will be sized identically to the original database instance. Use of the backup
server parameter file will ensure this.
2. Install Oracle Software
Now we get to the meat of the database recovery process. The next step is to install Oracle
software on the machine. The following points should be kept in mind when installing the
software:
Install the same version of Oracle as was on the destroyed server. The version number
should match right down to the patch level, so this may be a multi-step process involving
installation followed by the application of one or more patchsets and patches.
Do not create a new database at this stage.
Create a listener using the Network Configuration Assistant. Ensure that it has the same
name and listening ports as the original listener. Relevant listener configuration information
can be found in the backed up listener.ora file.
3. Create directory structure for database files
After software installation is completed, create all directories required for datafiles, (online
and archived) logs, control files and backups. All directory paths should match those on the
original server. This, though not mandatory, saves additional steps associated with renaming
files during recovery.
Don't worry if you do not know where the database files should be located. You can obtain
the required information from the backup spfile and control file at a later stage. Continue
reading - we'll come back to this later.
4. Create Oracle service
An Oracle service must exist before a database is created. The service is created using
the oradim utility, which must be run from the command line. The following commands show
how to create and modify a service (comments in italics, typed commands in bold):
--create a new service with auto startup
C:\>oradim -new -sid ORCL -intpwd ORCL -startmode a
Unfortunately oradim does not give any feedback, but you can check that the service exists
via the Services administrative panel. The service has been configured to start automatically
when the computer is powered up. Note that oradim offers options to delete, startup and
shutdown a service. See the documentation for details.
5. Restore backup from tape
The next step is to get your backup from tape on to disk.
6. Restore and recover database
If an Oracle database server experienced a disaster such as a hard disk failure, use this
procedure to recover the server and the Oracle databases:
Shutdown database
SQL> SHUTDOWN IMMEDIATE

Restore all datafiles and redo log files from the cold backup, and restore all archive log files taken
between the cold backup and the disaster.
Restore the most recent control file available as of the time of the disaster.
When the restore operation completes, move to the Oracle database server.
Start Server Manager (or SQL*Plus), connect as internal, and start the database, but only mount it, by
typing:
SQL> STARTUP MOUNT
When the database is mounted, type:
RECOVER DATABASE USING BACKUP CONTROLFILE
Note: Oracle will respond to this command by returning the following message, suggesting a
log sequence to apply.
ORA-00279: Change 36579 generated at needed for thread 1
ORA-00289: Suggestion : \Oracle_Home\Oradata\\%SID%T00036579.ARC
ORA-00280: {=Suggested | filename | AUTO | FROM logsource | CANCEL}
At the prompt, type:
AUTO
Then press Enter.
This will automatically apply all archived log sequences required to recover the database
(assuming all archived redo logs are available in the location specified in the init.ora
parameter and that the format corresponds to the format specified).
It is possible that a final non-archived log sequence is requested to complete the recovery.
This will only hold one System Change Number (SCN) and no transactions relating to the
database, up to, and including the time of the FULL ONLINE Oracle backup. If this is the case,
the following message will be returned by Oracle:
ORA-00308: cannot open archived log
'E:\ORACLE\ORADATA\KIMSTAD\ARCHIVE\KIMSTADT00036949.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
To finish the recovery, stay in server manager with the database mounted, and type:
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE
Then press Enter.
When Oracle requests this final sequence again, type:

CANCEL
Then press Enter.
Oracle will return the following message:
Media recovery canceled
The media recovery of the database is complete.
To open the database and to synchronize the log sequence, type:
ALTER DATABASE OPEN RESETLOGS
Then press Enter.
The Oracle database server is now restored to full working order up to the time of the latest
full online Oracle backup.

Case1:Recoverfromcorruptedormissingdatafile
This scenario deals with a situation where a datafile has gone missing or been corrupted and this is
detected at the time of opening the database.
This scenario deals with a situation where a datafile has gone missing, or is corrupted
beyond repair. For concreteness, we look at a case where a datafile is missing. Below is a
transcript of an SQL Plus session that attempts to open a database with a missing datafile
(typed commands in bold, lines in italics are my comments, all other lines are feedback from
SQL Plus):
--open SQL Plus from the command line without
--logging on to database
C:\>sqlplus /nolog
SQL*Plus: Release 9.2.0.4.0 - Production on Tue Jan 25 14:52:41 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
--Connect to the idle Oracle process as a privileged user and start up instance
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF'
SQL>
The error message tells us that file# 4 is missing. Note that although the startup command
has failed, the database is in the mount state. Thus, the database control file, which is also
the RMAN repository can be accessed by the instance and by RMAN. We now recover the
missing file using RMAN. The transcript of the recovery session is reproduced below (bold
lines are typed commands, comments in italics, the rest is feedback from RMAN):
--logon to RMAN
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--restore missing datafile
RMAN> restore datafile 4;
--recover restored datafile - RMAN applies all logs automatically
RMAN> recover datafile 4;
media recovery complete
--open database for general use
RMAN> alter database open;
database opened
RMAN>

Case2:Recoverfromcorruptedormissingdatafile(When
databasealreadyopen)
This scenario deals with a situation where a datafile has gone missing or been corrupted while the database
is already open.
If the database is already open when datafile corruption is detected, you can recover the datafile without
shutting down the database. The only additional step is to take the affected datafile (or its tablespace)
offline before starting recovery. In this example, recovery is performed at the datafile level. The commands are:
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (DBID=1507972899)
--offline affected datafile
RMAN> sql 'alter database datafile 4 offline';

--recover the offlined datafile


RMAN> recover datafile 4;
--online the recovered datafile
RMAN> sql 'alter database datafile 4 online';
RMAN>
Here we have used the SQL command, which allows us to execute arbitrary SQL from within RMAN.

Case3:lostdatafileandwehavenobackup(Whendatabase
open)
Step 1 Offline the datafile
SQL> alter database datafile '<datafile_name>' offline;
Step 2 Create the datafile
SQL> alter database create datafile '<datafile_name>' as '<datafile_name>' size <size>;
Step 3 Recover the datafile
SQL> recover datafile '<datafile_name>';
Step 4 Online the datafile
SQL> alter database datafile '<datafile_name>' online;

Case4:LostdatafileandNobackup(Whendatabaseabnormal
terminated)
Step 1 Startup the database with mount option
SQL> startup mount
Step 2 Recreate the datafile
SQL> alter database create datafile '<datafile_name>' as '<datafile_name>' size <size>;
Step 3 Online the datafile
SQL> alter database datafile '<datafile_name>' online;
Step 4 Now open the database
SQL> alter database open;

Case5:Ifalldatabaselostandwehavenobackupofnewly
createddatafile.
Step 1 Restore database and mount the database
Step 2 Recover the database
SQL> recover database using backup controlfile;
Step 3 Check file status
SQL> select file#,name,status from v$datafile;
Step 4 Create the datafile
SQL> alter database create datafile '<datafile_name>' as '<datafile_name>' size <size>;
Step 5 Recover the database
SQL> recover automatic database using backup controlfile until cancel;

Case2:Allcontrolfileslost
What if you lose all your control files? In that case you have no option but to use a backup
control file. The recovery needs to be performed from within RMAN, and requires that all logs
(archived and current online logs) since the last backup are available. The logs are required
because all datafiles must also be restored from backup. The database will then have to be
recovered up to the time the control files went missing. This can only be done if all
intervening logs are available. Here's an annotated transcript of a recovery session (as
usual, lines in bold are commands to be typed, lines in italics are explanatory comments,
other lines are RMAN feedback):
Connect to RMAN:
C:\>rman
Recovery Manager: Release 9.0.1.1.1 Production
(c) Copyright 2001 Oracle Corporation. All rights reserved.
RMAN> set dbid 4102753520
executing command: SET DBID
set DBID - get this from the name of the controlfile autobackup. For example, if the autobackup
name is CTL_SP_BAK_C-1507972899-20050124-00, then the DBID is 1507972899. This step will
not be required if the instance is
RMAN> connect target sys/change_on_install
connected to target database: (not mounted)
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: ORCL (not mounted)
RMAN> restore controlfile from autobackup;

Finished restore at 26/JAN/05


-- Now that control files have been restored, the instance can mount the database.
RMAN> mount database;
database mounted
-- All datafiles must be restored, since the controlfile is older than the current
-- datafiles. Datafile restore must be followed by recovery up to the current log.
RMAN> restore database;
--Database must be recovered because all datafiles have been restored from backup
RMAN> recover database;
media recovery complete
Finished recover at 26/JAN/05
-- Recovery completed. The database must be opened with RESETLOGS
-- because a backup control file was used. Can also use
-- "alter database open resetlogs" instead.
RMAN> open resetlogs database;
database opened
Several points are worth emphasizing.
1. Recovery using a backup controlfile should be done only if a current control file is
unavailable.
2. All datafiles must be restored from backup. This means the database will need to be
recovered using archived and online redo logs. These MUST be available for recovery until
the time of failure.
3. As with any database recovery involving RESETLOGS, take a fresh backup immediately.
4. Technically the above is an example of complete recovery - since all committed transactions
were recovered. However, some references consider this to be incomplete recovery because
the database log sequence had to be reset.
5. After recovery using a backup controlfile, all temporary files associated with locally managed
tablespaces are no longer available. You can check that this is so by querying the view
V$TEMPFILE - no rows will be returned. Therefore tempfiles must be added (or recreated)
before the database is made available for general use. In the case at hand, the tempfile
already exists so we merely add it to the temporary tablespace. This can be done using
SQL*Plus or any tool of your choice:
SQL> alter tablespace temp add tempfile 'D:\oracle_data\datafiles\ORCL\TEMP01.DBF';
Tablespace altered.
SQL>
Check that the file is available by querying v$TEMPFILE.
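A quick check from SQL*Plus (columns assumed from the standard V$TEMPFILE definition):
SQL> select name, status from v$tempfile;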

Case2:Allmembersofaloggrouplost
In this case an incomplete recovery is the best we can do. We will lose all transactions from
the missing log and all subsequent logs. We illustrate using the same example as above. The
error message indicates that members of log group 3 are missing. We don't have a copy of
this file, so we know that an incomplete recovery is required. The first step is to determine
how much can be recovered. In order to do this, we query the V$LOG view (when in the
mount state) to find the system change number (SCN) that we can recover to (Reminder: the
SCN is a monotonically increasing number that is incremented whenever a commit is issued)
--The database should be in the mount state for v$log access
SQL> select first_change# from v$log where group#=3;
FIRST_CHANGE#
-------------
       370255
SQL>
The FIRST_CHANGE# is the first SCN stamped in the missing log. This implies that the last
SCN stamped in the previous log is 370254 (FIRST_CHANGE#-1). This is the highest SCN that
we can recover to. In order to do the recovery we must first restore ALL datafiles to this SCN,
followed by recovery (also up to this SCN). This is an incomplete recovery, so we must open
the database resetlogs after we're done. Here's a transcript of the recovery session (typed
commands in bold, comments in italics, all other lines are
RMAN feedback):
C:\>rman target /
--Restore ENTIRE database to the determined SCN
RMAN> restore database until scn 370254;
--Recover the database up to the same SCN
RMAN> recover database until scn 370254;
--Open with RESETLOGS, since this is an incomplete recovery
RMAN> alter database open resetlogs;
RMAN>
The following points should be noted:
1. The entire database must be restored to the SCN that has been determined by querying
v$log.
2. All changes beyond that SCN are lost. This method of recovery should be used only if you
are sure that you cannot do better. Be sure to multiplex your redo logs, and (space
permitting) your archived logs!

3. The database must be opened with RESETLOGS, as a required log has not been applied. This
resets the log sequence, thereby rendering all prior backups worthless. Therefore, the first step
after opening a database RESETLOGS is to take a fresh backup. Note that the RESETLOGS option
must be used for any incomplete recovery.

Case1:Amultiplexedcopyofthemissinglogisavailable
If a redo log is missing, it should be restored from a multiplexed copy, if possible. Here's an
example, where I attempt to startup from SQLPLUS when a redo log is missing:
SQL> startup
ORACLE instance started.
Total System Global Area 131555128 bytes
Fixed Size 454456 bytes
Variable Size 88080384 bytes
Database Buffers 41943040 bytes
Redo Buffers 1077248 bytes
Database mounted.
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLE_DATA\LOGS\ORCL\REDO03A.LOG'
SQL>
To fix this we simply copy REDO03A.LOG from its multiplexed location on E: to the above
location on D:.
SQL> alter database open;
Database altered.
SQL>

RecoveryfromDisaster
Introduction:
This section deals with recovery from a disaster - i.e. a situation in which your database server has
been destroyed and has taken all your database files (control files, logs and data files) with it.
Obviously, recovery from a disaster of this nature is dependent on what you have in terms of
backups and hardware resources. We assume you have the following available after the disaster:
* A server with the same disk layout as the original.
* The last full hot backup on tape.
With the above items at hand, it is possible to recover all data up to the last full
backup. One can do better if subsequent archive logs (after the last backup) are
available. In our case these aren't available, since our only archive destination was
on the destroyed server. Oracle provides methods to achieve better data protection.
We will discuss some of these towards the end of the article.
Now on with the task at hand. The high-level steps involved in disaster recovery are:
Build replacement server.
Restore backup from tape.
Install database software.
Create Oracle service.
Restore and recover database.
Step: 1 Build the server
You need a server to host the database, so the first step is to acquire or build the new
machine. This is not strictly a DBA task, so we won't delve into details here. The main
point to keep in mind is that the replacement server should, as far as possible, be
identical to the old one. In particular, pay attention to the following areas:
* Ideally the server should have the same number of disks as the original. The new
disks should also have enough space to hold all software and data that was on the
original server.
* The operating system environment should be the same as the original, right up to
service pack and patch level.
* The new server must have enough memory to cater to Oracle and operating system
/ other software requirements. Oracle memory structures (Shared pool, db buffer
caches etc) will be sized identically to the original database instance. Use of the
backup server parameter file will ensure this.
Step: 2 Restore backup from tape
The next step is to get your backup from tape on to disk.
Step: 3 Install Oracle Software
The next step is to install Oracle software on the machine. The following points
should be kept in mind when installing the software:
* Install the same version of Oracle as was on the destroyed server. The version
number should match right down to the patch level, so this may be a multi-step
process involving installation followed by the application of one or more patch sets
and patches.
* Do not create a new database at this stage.
* Create a listener using the Network Configuration Assistant. Ensure that it has the
same name and listening ports as the original listener. Relevant listener configuration
information can be found in the backed up listener.ora file.
Step: 4 Create directory structure for database files
After software installation is completed, create all directories required for datafiles,
(online and archived) logs, control files and backups. All directory paths should match
those on the original server. Don't worry if you do not know where the database files
should be located. You can obtain the required information from the backup spfile and
control file at a later stage. Continue reading - we'll come back to this later.
Step: 5 Create Oracle service
An Oracle service must exist before a database is created. The service is created
using the oradim utility, which must be run from the command line. The following
commands show how to create and modify a service (comments in italics, typed
commands in bold):
--create a new service with auto startup
C:\>oradim -new -sid ORCL -intpwd ORCL -startmode a
Unfortunately oradim does not give any feedback, but you can check that the service
exists via the Services administrative panel. The service has been configured to start
automatically when the computer is powered up.
Step: 6 Restore and recover database
Now it is time to get down to the nuts and bolts of database recovery. There are
several steps, so we'll list them in order:


* Copy PASSWORD and TNSNAMES files from backup: The backed up password file and
tnsnames.ora files should be copied from the backup directory to the proper locations.
Default locations for the password and tnsnames files are ORACLE_HOME\database and
ORACLE_HOME\network\admin respectively.

* Set ORACLE_SID environment variable: ORACLE_SID should be set to the proper SID
name (ORCL in our case). This can be set either in the registry (registry key:
HKLM\Software\Oracle\HOME\ORACLE_SID) or from the system applet in the control
panel.

* Invoke RMAN and set the DBID: We invoke rman and connect to the target database
as usual. No login credentials are required since we connect from an OS account
belonging to ORA_DBA. Note that RMAN accepts a connection to the database
although the database is yet to be recovered. RMAN doesn't as yet "know" which
database we intend to connect to. We therefore need to identify the (to be restored)
database to RMAN. This is done through the database identifier (DBID). The DBID can
be figured out from the name of the controlfile backup. Example: if you use the
controlfile backup format, your controlfile backup name will be something like
"CTL_SP_BAK_C-1507972899-20050228-00". In this case the DBID is 1507972899.
Here's a transcript illustrating the process of setting the DBID:
C:\>rman
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> set dbid 1507972899
executing command: SET DBID
RMAN> connect target /
connected to target database (not started)
RMAN>
Restore spfile from backup: To restore the spfile, you first need to startup the
database in the nomount state. This starts up the database using a dummy
parameter file. After that you can restore the spfile from the backup (which has been
restored from tape). Finally you restart the database in nomount state. Here is an
example RMAN transcript for the foregoing procedure. Note the difference in SGA size
and components between the two startups:
RMAN> startup nomount
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'C:\ORACLE\ORA92\DATABASE\INITORCL.ORA'
trying to start the Oracle instance without parameter files ...
Oracle instance started
Total System Global Area 97590928 bytes
Fixed Size 454288 bytes
Variable Size 46137344 bytes
Database Buffers 50331648 bytes
Redo Buffers 667648 bytes
RMAN> restore spfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
Starting restore at 01/MAR/05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=9 devtype=DISK
channel ORA_DISK_1: autobackup found: e:\backup\CTL_SP_BAK_C-1507972899-20050228-00
channel ORA_DISK_1: SPFILE restore from autobackup complete
Finished restore at 01/MAR/05
RMAN> startup force nomount
Oracle instance started
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
Variable Size 763363328 bytes
Database Buffers 754974720 bytes
Redo Buffers 2142208 bytes
RMAN>
The instance is now started up with the correct initialization parameters.

We are now in a position to determine the locations of control file and archive
destination, as this information sits in the spfile. This is done via SQL Plus as follows:

C:\>sqlplus /nolog
SQL> connect / as sysdba
Connected.
SQL> show parameter control_file
SQL> show parameter log_archive_dest
The directories listed in the CONTROL_FILES and LOG_ARCHIVE_DEST_N parameters should be
created at this stage if they haven't been created earlier.

Restore control file from backup: The instance now "knows" where the control
files should be restored, as this is listed in the CONTROL_FILES initialization
parameter. Therefore, the next step is to restore these files from backup. Once the
control files are restored, the instance should be restarted in mount mode. A restart
is required because the instance must read the initialization parameter file in order to
determine the control file locations. At the end of this step RMAN also has its proper
configuration parameters, as these are stored in the control file.
Here is a RMAN session transcript showing the steps detailed here:
RMAN> restore controlfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
RMAN> shutdown

Oracle instance shut down
RMAN> exit
Recovery Manager complete.
C:\>rman target /
Recovery Manager: Release 9.2.0.4.0 Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database (not started)
RMAN> startup mount;

Oracle instance started
database mounted
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
Variable Size 763363328 bytes
Database Buffers 754974720 bytes
Redo Buffers 2142208 bytes
RMAN> show all;
using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'e:\backup\ctl_sp_bak_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFORCL.ORA'; # default
RMAN>

At this stage we can determine the locations of data files and redo logs if we
don't know where they should go. This is done from SQL*Plus as follows:
C:\>sqlplus /nolog
SQL> connect / as sysdba
Connected.
SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL>
The directories shown in the output should be created manually if this hasn't been done earlier.
Restore all datafiles: This is easy. Simply issue a "restore database" command from RMAN, and it
will do all the rest for you:

RMAN> restore database;
RMAN>
Recover database: The final step is to recover the database. Obviously recovery is
dependent on the available archived (and online) redo logs. Since we have lost our
database server and have no remote archive destination, we can recover only up to
the time of the backup. Further, since this is an incomplete recovery, we will have to
open the database with resetlogs. Here's a sample RMAN session illustrating this:
RMAN> recover database;
RMAN> alter database open resetlogs;
database opened
RMAN>
Note that RMAN automatically applies all available archive logs. It first applies the
backed up log and then searches for subsequent logs in the archive destination. This
opens the door for further recovery if the necessary logs are available. In our case,
however, we have no more redo so we open the database with resetlogs. The error
message above simply indicates that RMAN has searched, unsuccessfully, for
subsequent logs.

PointintimeRecovery
RMAN can perform recovery of the whole database to a specified past time, SCN, or log
sequence number. This type of recovery is sometimes called incomplete recovery because

it does not completely use all of the available redo. Incomplete recovery of the whole
database is also called database point-in-time recovery (DBPITR).
To recover the database until a specified time, SCN, or log sequence number:
Note:
The database must be closed to perform database point-in-time recovery.
If you are recovering to a time, then you should set the time format environment
variables before invoking RMAN.
The following are sample Globalization Support settings
NLS_LANG = american_america.us7ascii
NLS_DATE_FORMAT="Mon DD YYYY HH24:MI:SS"
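For example, in a Unix shell these could be set before starting RMAN as follows (a sketch; on Windows use
set instead of export):
$ export NLS_LANG=american_america.us7ascii
$ export NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'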

Step 1 After connecting to the target database and, optionally, the recovery catalog
database, ensure that the database is mounted. If the database is open, shut it down and
then mount it:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
Step 2 Determine the time, SCN, or log sequence that should end recovery. For example, if
you discover that a user accidentally dropped a tablespace at 9:02 a.m., then you can
recover to 9 a.m.--just before the drop occurred. You will lose all changes to the database
made after that time.
You can also examine the alert.log to find the SCN of an event and recover to a prior SCN.
Alternatively, you can determine the log sequence number that contains the recovery
termination SCN, and then recover through that log. For example, query V$LOG_HISTORY to
view the logs that you have archived.
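A sketch of such a query (columns are from the standard V$LOG_HISTORY definition):
SQL> select sequence#, first_change#, first_time from v$log_history order by sequence#;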
Note: If you want to know the exact date and time of the drop, you must have the audit trail feature enabled.
Step 3 Perform the following operations within a RUN command:
The following example performs an incomplete recovery until November 15 at 9 a.m.
RUN
{
SET UNTIL TIME 'Nov 15 2002 09:00:00';
# SET UNTIL SCN 1000;
# alternatively, specify SCN
# SET UNTIL SEQUENCE 9923; # alternatively, specify log sequence number
RESTORE DATABASE;
RECOVER DATABASE;
}
Step 4 If recovery was successful, then open the database and reset the online logs:
ALTER DATABASE OPEN RESETLOGS;

TimeBased,ChangeBased,orSCNbasedIncomplete
Recovery
Incomplete recovery uses a backup to produce a non-current version of the database. In
other words, you do not apply all of the redo records generated after the most recent
backup.
You usually perform incomplete recovery of the whole database in the following situations:

1. Media failure destroys some or all of the online redo logs.
2. A user error causes data loss; for example, a user by mistake drops a table.
3. You cannot perform complete recovery because an archived redo log is missing.
4. You lose your current control file and must use a backup control file to open the database.
To perform incomplete media recovery, you must restore all datafiles from backups created
prior to the time to which you want to recover and then open the database with the
RESETLOGS option when recovery completes. The RESETLOGS operation creates a new
incarnation of the database; in other words, a database with a new stream of log sequence
numbers starting with log sequence 1.
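After the database has been opened with RESETLOGS, the new incarnation can be verified from RMAN with:
RMAN> LIST INCARNATION;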
NOTE Start every RMAN incomplete recovery with the following commands:
$ sqlplus "/ as sysdba"
SQL> shutdown abort;
SQL> startup mount;
$ rman target /
--For time-based recovery, use these commands:
RMAN> restore database until time "to_date('07/12/05 12:0:00','MM/DD/YY HH24:MI:SS')";
RMAN> recover database until time "to_date('07/12/05 12:0:00','MM/DD/YY HH24:MI:SS')";
media recovery complete
SQL> alter database open resetlogs;
--For SCN-based recovery, use these commands:
RMAN> restore database until scn 1000;
RMAN> recover database until scn 1000;
media recovery complete.
SQL> alter database open resetlogs;
--For change-based (log sequence) recovery, use these commands:
RMAN> restore database until sequence 9923; --Archived log sequence number
RMAN> recover database until sequence 9923; --Archived log sequence number
media recovery complete.
SQL> alter database open resetlogs;

Case1:Ifundodatafilelostaftercleanlyshutdown
When you are trying to start database:
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1279120 bytes
Variable Size 71306096 bytes
Database Buffers 92274688 bytes
Redo Buffers 2912256 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: '/DB/ANUP/undotbs01.dbf'
Step 1 Shutdown immediate
Step 2 Set UNDO_MANAGEMENT=manual in init file.
Step 3 Mount the database in restricted mode.
SQL> STARTUP RESTRICT MOUNT
Step 4 Offline drops the lost undo data file.
SQL> alter database datafile '/DB/ANUP/undotbs01.dbf' offline drop;
Step 5 Open the database.
SQL> ALTER DATABASE OPEN
Step 6 Drop the undo tablespace which contains rollback segments to which the datafile
belonged.
SQL> drop tablespace undotbs1 including contents;
Step 7 Recreate the undo tablespace.
SQL> create undo tablespace undotbs2 datafile '/DB/ANUP/undotbs02.dbf' size 100m;

Step 8 Edit the parameter file setting:


UNDO_MANAGEMENT=AUTO and UNDO_TABLESPACE=undotbs2 (the undo tablespace created in Step 7)
Step 9 Shutdown and startup the database

Case2:THEDATABASEWASNOTCLEANLYSHUT
DOWN
This is the situation where the database was last shut down with the abort option or crashed. In this case, it is almost certain that
the rollback segments that had extents in the lost data file still contain active transactions. Therefore, the
file cannot be offlined or dropped. You must restore the lost data file from a backup and apply media
recovery to it.
When you are trying to start database:
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1279120 bytes
Variable Size 71306096 bytes
Database Buffers 92274688 bytes
Redo Buffers 2912256 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: '/DB/ANUP/undotbs02.dbf'
Step 1: Shutdown the database with immediate option
Step 2: Restore the lost file from a backup.
Step 3: Mount the database:
SQL> STARTUP MOUNT;
Step 4: Issue the following query:
SQL> SELECT FILE#, NAME, STATUS FROM V$DATAFILE;
Step 5: If the status of the file you just restored is "OFFLINE," you must online it before proceeding.
SQL> ALTER DATABASE DATAFILE '<datafile_name>' ONLINE;
Step 6: Recover the undo datafile
SQL> recover datafile '<datafile_name>';
Step 7: After recovery open the database.

SQL> alter database open;

Case3THEDATABASEisUP
If you have detected the loss of the rollback datafile and the database is still up and running,
DO NOT SHUT IT DOWN. In most cases, it is simpler to solve this problem with the database
up than with it down.
Two approaches are possible in this scenario:
A) The first approach involves creating a new UNDO (or rollback segment) tablespace,
altering the system to use the new one, and dropping the old one.
B) The other approach involves offlining the lost datafile, restoring it from backup, and then
applying media recovery to it to make it consistent with the rest of the database. This
method can only be used if the database is in ARCHIVELOG mode.
In general, approach IIA is simpler to try first, however, depending on activity and undo
usage, the drop of the older tablespace may result in error regarding active transactions.
There are a few things to try to get the older tablespace to drop, but if all fail, approach IIB
may be required.
Second Approach Steps:
Step 1 Offline the lost datafile.
SQL> ALTER DATABASE DATAFILE '<datafile name>' OFFLINE;
NOTE: Depending on the current amount of database activity, you may have to create
additional rollback segments in a different tablespace to keep the database going while you
take care of the problem.
Step 2 Restore the datafile from a backup.
Step 3 Issue the following query:
SQL> SELECT V1.GROUP#, MEMBER, SEQUENCE# FROM V$LOG V1, V$LOGFILE V2 WHERE
V1.GROUP# = V2.GROUP# ;
This will list all your online redolog files and their respective sequence numbers.
Step 4 Recover the datafile:
SQL> RECOVER DATAFILE '<datafile name>';
Step 5 Confirm each of the logs that you are prompted for until you receive the message
"Media recovery complete". If you are prompted for a non-existing archived log, Oracle
probably needs one or more of the online logs to proceed with the recovery. Compare the
sequence number referenced in the ORA-280 message with the sequence numbers of your

online logs. Then enter the full path name of one of the members of the redo group whose
sequence number matches the one you are being asked for. Keep entering online logs as
requested until you receive the message "Media recovery complete".
Step 6 Bring the datafile back online.
SQL> ALTER DATABASE DATAFILE '<datafile name>' ONLINE;
First Approach Steps:
Step 1 Create a new undo tablespace:
SQL> CREATE UNDO TABLESPACE UNDO_TBS2 DATAFILE '<datafile name>' SIZE <size>;
Step 2 Alter the system to use the new undo tablespace:
SQL> ALTER SYSTEM SET UNDO_TABLESPACE='UNDO_TBS2';
At this point, all new undo will be allocated in undo segments managed within the
UNDO_TBS2 tablespace.
Step 3 Try dropping older undo tablespace:
SQL> DROP TABLESPACE <old undo tablespace name> INCLUDING CONTENTS;
If this succeeds, you are done. If it returns an error, please see "Handling errors dropping
undo/rollback segments".

HANDLING ERRORS DROPPING UNDO/ROLLBACK SEGMENTS:
NOTE: Although the database uses automatic undo management and an undo tablespace, the following
procedure is the same. The only difference is that the segment names are assigned by
Oracle.
1. Check for any rollback segments online.
SQL> SELECT SEGMENT_NAME, STATUS FROM DBA_ROLLBACK_SEGS WHERE
TABLESPACE_NAME = '<tablespace name>';
If any of the rollbacks you tried to offline still has an "ONLINE" status, this is usually an
indication that this segment contains active transactions.
2. Check for active transactions with the following query:
SQL> SELECT SEGMENT_NAME, XACTS ACTIVE_TX, V.STATUS FROM V$ROLLSTAT V,
DBA_ROLLBACK_SEGS WHERE TABLESPACE_NAME = 'UNDOTBS1' AND SEGMENT_ID = USN;

If the above query returns no rows, it means all the rollbacks in the affected tablespace are
already offline. Repeat the query in step 1 to retrieve the names of the rollbacks that just
became offline and attempt to drop the undo tablespace or individual rollback segments as
described above.
If the above query returns one or more rows, they should show status "PENDING
OFFLINE". Next, check the ACTIVE_TX column for each rollback. If it has a value of 0, it
implies there are no pending transactions left in the rollback, and it should go offline
shortly. Repeat the query in step 1 a few more times until it shows the rollback segments
being offline and then attempt the drop again. Once the drop is successful, you are done.
If any of the "pending offline" rollbacks has a value of 1 or greater in the ACTIVE_TX column,
move on to step 3.
3. Force rollbacks with active transactions to go offline.
At this point, the only way to move forward is to have the "pending offline" rollbacks
released. The active transactions in these rollbacks must either be committed or rolled
back.
The following query shows which users have transactions assigned to which rollbacks:
SQL> SELECT S.SID, S.SERIAL#, S.USERNAME, R.NAME "ROLLBACK"
FROM V$SESSION S, V$TRANSACTION T, V$ROLLNAME R
WHERE R.NAME IN ('<rollback name 1>', ... , '<rollback name n>')
AND S.TADDR = T.ADDR AND T.XIDUSN = R.USN;
You may directly contact the users with transactions in the "pending offline" rollbacks and
ask them to commit (preferably) or rollback immediately. If that is not feasible, you can
force that to happen by killing their sessions. For each of the entries returned by the above
query, issue the statement:
SQL> ALTER SYSTEM KILL SESSION '<sid>, <serial#>';
Where <sid> and <serial#> are those returned by the previous query. After the sessions are killed, it may
take a few minutes before Oracle finishes rolling back and doing cleanup work. Go back to
step 1 and repeat the query in there periodically until all rollbacks in the affected tablespace
are offline and ready to be dropped.
If you are unable to drop the older undo tablespace or rollback segments, try to restore the
file from backup and recover it fully (approach B). Once the file is recovered, try to drop
the older undo tablespace. If this is not possible, contact Oracle Support Services.

Restoring and Recovering the Database to the Same Host with a Different Mount Point
Case 1: Our database has crashed due to disk corruption. All database files are inaccessible,
but the RMAN backup set location is accessible. In this case we need to
restore and recover the database to a new location.

Source Setup:
Database Mount Point is /DB/PROD
Archive Log location is /DB/arch/online
Backup set location is /DB/rman_bkp
Target Setup:
Database Mount Point is /DB/anup/datafile
Archive Log location is /DB/anup/archive
Backup set location is /DB/rman_bkp
Step 1 First we restore control file to new location and mount the database.
Change the control_files parameter in init file.
Example : control_files=/DB/anup/datafile/control01.ctl
Invoke RMAN utility and startup nomount database.
RMAN> set dbid <DBID of the source database>
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
Step 2 Restore Datafiles to new location.
run
{
set newname for datafile '/DB/PROD/system01.dbf' to '/DB/anup/datafile/system01.dbf';
set newname for datafile '/DB/PROD/undotbs01.dbf' to '/DB/anup/datafile/undotbs01.dbf';
set newname for datafile '/DB/PROD/sysaux01.dbf' to '/DB/anup/datafile/sysaux01.dbf';
set newname for datafile '/DB/PROD/users01.dbf' to '/DB/anup/datafile/users01.dbf';
restore database;
switch datafile all; # point control file to new filenames
}
Step 3 Restore Backup archive log file to new location.
run
{
set archivelog destination to '/DB/anup/archive';
restore archivelog all;
}
Step 4 Recover Database:
RMAN> recover database;
Step 5 Relocate Log file location.
SQL> alter database rename file '/DB/PROD/redo01.log' to '/DB/anup/datafile/redo01.log';

Database altered.
SQL> alter database rename file '/DB/PROD/redo02.log' to '/DB/anup/datafile/redo02.log';
Database altered.
SQL> alter database rename file '/DB/PROD/redo03.log' to '/DB/anup/datafile/redo03.log';
Step 6 Open database with reset options
RMAN> alter database open resetlogs;
Case 2: Our database has crashed due to disk corruption. All database files are inaccessible and
the original RMAN backup set location is also not accessible (the backup pieces are now available at a new
location). In this case we need to restore and recover the database to a new location.
Source Setup:
Datafile/Logfile/Controlfile Location is /DB/PROD
Archive Log location is /DB/arch/online
Backup set location /DB/rman_bkp
Target Setup:
Datafile/Logfile/Controlfile Location is /DB/anup/datafile
Archive Log location /DB/anup/archive
Backup set location /DB/anup/rman_bkp
Step 1 First we restore control file to new location and mount the database.
Change the control_files parameter in init file.
Example : control_files=/DB/anup/datafile/control01.ctl
Invoke RMAN utility and startup nomount database.
RMAN> set dbid <DBID of the source database>
RMAN> restore controlfile from '<controlfile autobackup piece in the new backup location>';
RMAN> alter database mount;
Step 2 Catalog your backuppiece.
RMAN> catalog backuppiece '/DB/anup/rman_bkp/TAG20080506T150716_421c355f_.bkp';
cataloged backuppiece
backup piece handle=/oradata2/o1_mf_nnndf_TAG20080506T150716_421c355f_.bkp
recid=33 stamp=65398295
RMAN> list backup;
Step 3 restore Datafiles to new location.

run
{
set newname for datafile '/DB/PROD/system01.dbf' to '/DB/anup/datafile/system01.dbf';
set newname for datafile '/DB/PROD/undotbs01.dbf' to '/DB/anup/datafile/undotbs01.dbf';
set newname for datafile '/DB/PROD/sysaux01.dbf' to '/DB/anup/datafile/sysaux01.dbf';
set newname for datafile '/DB/PROD/users01.dbf' to '/DB/anup/datafile/users01.dbf';
restore database;
switch datafile all; # point control file to new filenames
}
Step 4 Restore Backup archive log file to New location.
run
{
set archivelog destination to '/DB/anup/archive';
restore archivelog all;
}
Step 5 Recover Database:
RMAN> recover database;
Step 6 Relocate Log file location.
SQL> alter database rename file '/DB/PROD/redo01.log' to '/DB/anup/datafile/redo01.log';
Database altered.
SQL> alter database rename file '/DB/PROD/redo02.log' to '/DB/anup/datafile/redo02.log';
Database altered.
SQL> alter database rename file '/DB/PROD/redo03.log' to '/DB/anup/datafile/redo03.log';
Step 7 Open the database
RMAN> alter database open resetlogs;

Restore and Recover the Database to a New Host with the Same Directory Structure
In this case, the DBID of the restored database will be the same as that of the original database.
Source Configuration:
Host Name SUN1
DB Name PROD
Database Location /DB/PROD

Backup Location /RMAN_BKP


Archive Location /archive
Target Configuration:
Host Name SUN2
DB Name PROD
Database Location /DB/PROD
Backup Location /RMAN_BKP
Archive Location /archive

Step 1 Take a full backup of Source Database


RMAN> backup database plus archivelog;
RMAN> backup spfile; -- if you are using an spfile; otherwise take a backup of the pfile using an OS command.
Step 2 Transfer these backup pieces to target machine (/RMAN_BKP location)
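For example, a minimal sketch using scp (the oracle OS user and the host name SUN2 are taken from the scenario; adjust for your environment):
$ scp /RMAN_BKP/* oracle@SUN2:/RMAN_BKP/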
Step 3 Determine the DBID of source machine
SQL> select dbid from v$database;
DBID
----------
142618240
Step 4 Now perform task on target machine
First set ORACLE_SID, ORACLE_HOME and PATH then connect to rman
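A minimal sketch of this step (the ORACLE_HOME path is taken from the restore command below; adjust for your environment):
$ export ORACLE_SID=PROD
$ export ORACLE_HOME=/export/home/oracle/oracle/product/10.2.0/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ rman target /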
Step 5 Set DBID and restore spfile or pfile.
RMAN> set dbid 142618240
executing command: SET DBID
RMAN> startup nomount
Step 6 Restore pfile.
RMAN> restore spfile to pfile '/export/home/oracle/oracle/product/10.2.0/db_1/dbs/initPROD.ora' from
'/RMAN_BKP/c-142618240-20100927-01';
Step 7 Start the instance with pfile.
RMAN> STARTUP FORCE NOMOUNT
Step 8 Restore and Mount the Control file on Target Instance
RMAN> restore controlfile from '/RMAN_BKP/c-142618240-20100927-01';
RMAN> ALTER DATABASE MOUNT;
Step 9 Restore the database
RMAN> restore database;
Step 10 Recover the database

RMAN > recover database;


Step 11 Open the Database with the resetlogs option.
RMAN> alter database open resetlogs;

Restore and Recover Database to a New Host with a Different Directory Structure
In this case, the DBID of the restored database will be the same as that of the original database.
Source Configuration:
Host Name SUN1
DB Name PROD
Database Location /DB/PROD
Backup Location /RMAN_BKP
Archive Location /archive

Target Configuration:
Host Name SUN2
DB Name PROD
Database Location /INDIAN/PROD
Backup Location /INDIAN/RMAN_BKP
Archive Location /INDIAN/archive

Step 1 Take a full backup of Source Database


RMAN> backup database plus archivelog;
RMAN> backup spfile; -- if you are using an spfile; otherwise take a backup of the pfile using an OS
command.
Step 2 Transfer these backup pieces to target machine (/INDIAN/RMAN_BKP location)
Step 3 Determine the DBID of source machine
SQL> select dbid from v$database;
DBID
----------
142618240
Step 4 Now perform task on target machine
First set ORACLE_SID, ORACLE_HOME and PATH then connect to rman

Step 5 Set DBID and restore spfile or pfile.


RMAN> set dbid 142618240
executing command: SET DBID
RMAN> startup nomount
Step 6 Restore pfile.
RMAN> restore spfile to pfile
'/export/home/oracle/oracle/product/10.2.0/db_1/dbs/initPROD.ora' from
'/INDIAN/RMAN_BKP/c-142618240-20100927-01';
Step 7 After restoring the pfile from the spfile backup, shut down the instance.
RMAN> shutdown immediate
Step 8 Open the parameter file and edit the control_files parameter to point to the new location
(/INDIAN/PROD/)
Step 9 Start the instance with pfile.
RMAN> STARTUP FORCE NOMOUNT
Step 10 Restore and Mount the Control file on Target Instance
RMAN> restore controlfile from '/INDIAN/RMAN_BKP/c-142618240-20100927-01';
RMAN> ALTER DATABASE MOUNT;
Step 11 Catalog all the backup pieces
RMAN> catalog backuppiece '/INDIAN/RMAN_BKP/TAG20080506T150716_421c355f_.bkp';
...
RMAN> list backup;
Step 12 Restore the database to the new location (/INDIAN/PROD)
A) From SQL*Plus determine the data file and redo log file name.
SQL> SELECT FILE# AS "File/Grp#", NAME FROM V$DATAFILE
UNION
SELECT GROUP#,MEMBER FROM V$LOGFILE
B) Make a script by issuing SET NEWNAME and run.
run
{
set newname for datafile '/DB/PROD/system01.dbf' to '/INDIAN/PROD/system01.dbf';
set newname for datafile '/DB/PROD/undotbs01.dbf' to '/INDIAN/PROD/undotbs01.dbf';
set newname for datafile '/DB/PROD/sysaux01.dbf' to '/INDIAN/PROD/sysaux01.dbf';
set newname for datafile '/DB/PROD/users01.dbf' to '/INDIAN/PROD/users01.dbf';
set newname for datafile '/DB/PROD/example01.dbf' to '/INDIAN/PROD/example01.dbf';
set newname for datafile '/DB/PROD/RMAN.DBF' to '/INDIAN/PROD/RMAN.dbf';
set newname for datafile '/DB/PROD/a.sql' to '/INDIAN/PROD/a.sql';
restore database;
switch datafile all; # point control file to new filenames
}

Step 13 Restore Backup archive log file to new location.


run
{
set archivelog destination to '/INDIAN/archive';
restore archivelog all;
}
Step 14 Recover the database
RMAN > recover database;
Step 15 Relocate Log file location.
alter database rename file '/DB/PROD/redo01.log' to '/INDIAN/PROD/redo01.log';
Database altered.
alter database rename file '/DB/PROD/redo02.log' to '/INDIAN/PROD/redo02.log';
Database altered.
alter database rename file '/DB/PROD/redo03.log' to '/INDIAN/PROD/redo03.log';
Step 16 Open the Database with the resetlogs option.
RMAN> alter database open resetlogs;

Useful Queries
Query 1
To determine which databases are currently registered in the recovery
catalog:
SQL> SELECT * FROM RC_DATABASE;
Query 2
To determine which tablespaces are currently stored in the recovery catalog
for the target database:

SELECT DB_KEY, DBINC_KEY, DB_NAME, TS#, NAME,


CREATION_CHANGE#, CHANGE#, CREATION_TIME, CRE_DATE
FROM RC_TABLESPACE;
Query 3
To determine which scripts are currently stored in the recovery catalog for
the target database:
SELECT * FROM RC_STORED_SCRIPT;
Query 4
To list all backups of files in tablespace tbs_1 that were made since November 1st:
RMAN> list until time 'Nov 1 1996 00:00:00' backupset of tablespace tbs_1;
Query 5
To list all backups on device type 'sbt_tape':
RMAN> list device type 'sbt_tape' backupset of database;
Query 6
To list all database incarnations registered in the recovery catalog:
RMAN> list incarnation of database;
You can also use the list command to determine which copies and backups
can be deleted. For example, if a full backup of the database was created on
November 2, and it will not be necessary to recover the database to an
earlier point-in-time, then the backup sets listed in the following report can
be deleted:
RMAN> list until time 'Nov 1 1996 00:00:00' backupset of database;
Query 7
To report on all datafiles which need a new backup because they contain
unlogged changes that were made after the last full or incremental backup.
RMAN> report unrecoverable database;
Query 8
To report on all datafiles which need a new backup because 3 or more
incremental backups have been taken since the last full backup.
RMAN> report need backup incremental 3 database;

Query 9
To report on all datafiles in tablespace tbs_1 which need a new backup
because the last full or incremental backup was taken more than 5 days ago.
RMAN> report need backup days 5 tablespace tbs_1;

Recovery from Block Corruption
It is possible to recover corrupted blocks using RMAN backups. This is a somewhat exotic
scenario, but it can be useful in certain circumstances, as illustrated by the following
example. Here's the situation: a user connected to SQL*Plus gets a data block corruption
error when she queries a table. Here's a part of the session transcript:
Step 1 SQL> conn scott/anup123;
Connected.
Step 2 SQL> create table test(id number);
Table created.
Step 3 SQL> insert into test values(1);
1 row created.
SQL> commit;
Commit complete.
Step 4 SQL> select * from test;
ID
----------
1
Step 5 Check header block.
SQL> conn / as sysdba
Connected.
SQL> select header_block from dba_segments where segment_name='TEST';
HEADER_BLOCK
------------
395
SQL>
Step 6 use dd commands for block corruption.
dd of=/DB/ANUP/users01.dbf bs=8192 conv=notrunc seek=395<< EOF
> testing corruption
> EOF

0+1 records in
0+1 records out
Step 7 ALTER SYSTEM FLUSH BUFFER_CACHE;
System altered.
Step 8 select * from test;
select * from test
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 395)
ORA-01110: data file 4: '/DB/ANUP/users01.dbf'
Step 9 Now connect to the RMAN and recover the data block (not the whole datafile or
database) using BLOCKRECOVER command as follows:
RMAN> blockrecover datafile 4 block 395;
Starting blockrecover at 28-MAR-10
.

..
Finished blockrecover at 28-MAR-10
RMAN> exit
Connect to SQL*Plus and query the table:
SQL> SELECT * FROM test;
ID
----------
1

Recover Tablespace
To recover from a case of a dropped tablespace, the Tablespace Point In Time Recovery
(TSPITR) method cannot be used.
When you drop a tablespace, the controlfile will then no longer have any records of the
tablespace which has been dropped. Attempts to use the RMAN RECOVER TABLESPACE
command will return the RMAN error RMAN-06019 could not translate tablespace name as
shown below.
SQL> drop tablespace anup including contents and datafiles;
Tablespace dropped.
RMAN> restore tablespace anup;
Starting restore at 03-AUG-09
using target database control file instead of recovery catalog

allocated channel: ORA_SBT_TAPE_1


channel ORA_SBT_TAPE_1: sid=141 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version
5.5.1.0
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=140 devtype=DISK
RMAN-00571:
===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS
===============
RMAN-00571:
===========================================================
RMAN-03002: failure of restore command at 08/03/2009 11:54:11
RMAN-20202: tablespace not found in the recovery catalog
So to recover from a dropped tablespace, we have two options:

Do a point in time recovery of the whole database until the time the tablespace was
dropped.

Create a clone of the database from a valid backup, export the required tables from the
tablespace which has been dropped, recreate the tablespace and then import the tables from the
clone.
The first option will require an outage of the entire database, and the entire database will be rolled
back in time in order to recover the tablespace. The second option can be performed online, but we
will need to factor in the disk space requirements to create a clone of the database from which
the tablespace has been dropped.
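As a rough sketch of the second option, once the clone is available the affected tables can be exported from it and imported back into the recreated tablespace. The schema, table names, and connect strings below are hypothetical examples, not part of the original scenario:
$ exp scott/password@clone tables=emp,dept file=anup_tables.dmp
$ imp scott/password@prod tables=emp,dept file=anup_tables.dmp ignore=y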
How to find objects in a Tablespace
SELECT * FROM DBA_SEGMENTS
WHERE TABLESPACE_NAME = 'tablespace_here';

Recycle Bin (Part 4)
A recycle bin contains all the dropped database objects and their dependent objects.
Note:

We can drop it immediately and permanently with purge option (DROP TABLE emp PURGE)
We can view the dropped objects in the recycle bin from two dictionary views:
USER_RECYCLEBIN lists all dropped objects owned by the current user.
DBA_RECYCLEBIN lists all dropped objects system-wide.

Viewing the original name of dropped objects?


SQL> SELECT object_name as recycle_name, original_name, object_type FROM recyclebin;
How to remove table from recycle bin?
SQL> purge table "BIN$0+ktoVChEmXgNAAADiUEHQ==$0";
How to purge recycle bin?
SQL> purge recyclebin;
How to purge all objects from the recycle bin?
SQL> purge dba_recyclebin;
How to purge all objects from tablespace in the recycle bin?
SQL> purge tablespace users;

Monitoring Flashback Database (Part 3)
How to estimate the size of the Flash Recovery Area for Flashback Logs?
SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;
This query returns an estimate of the disk space needed for the current flashback retention target.
The estimate is based on the database workload since Flashback Database was enabled.
How to determine the earliest point in time to which you can actually rewind your
database?
SQL>SELECT OLDEST_FLASHBACK_SCN,OLDEST_FLASHBACK_TIME FROM
V$FLASHBACK_DATABASE_LOG;
Monitor logging in the Flashback Database logs?
SQL>select begin_time, flashback_data, db_data, redo_data, ESTIMATED_FLASHBACK_SIZE from
v$flashback_database_stat;
Monitor the Flashback Database retention target?
select * from v$flashback_database_log;

Enable Flash Recovery Area / Flashback Database (Part 1)
Flashback functionalities provide fast and flexible data recovery.

Type of flashback recovery:


Flashback Database (We can revert the database to a past point in time)

Flashback Drop (Reverses the effects of a DROP TABLE statement)

Flashback Table (Reverses a table to its state at a previous point in time)

Flashback Query (We can specify a target time and then run queries, viewing results and recover
from an unwanted change)

Flashback Transaction Query (We can view changes made by a transaction during a period of
time.)
Requirement for Flashback:

Database must be in Archive log mode

Must have flash recovery area enable

Why must the database be in archive log mode? Because archive logs are used in the Flashback
operation.
Why is the flash recovery area required? Because flashback logs can only be stored in this area.
Important: In a RAC database, the flash recovery area must be stored in a clustered file system or in ASM.
How to Enable Flash Recovery Area?
Set two parameter db_recovery_file_dest and db_recovery_file_dest_size.
SQL> alter system set db_recovery_file_dest_size=2048m;
SQL> alter system set db_recovery_file_dest='<path>';
Note: db_recovery_file_dest_size must be set before db_recovery_file_dest.
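To verify the settings, you can query V$RECOVERY_FILE_DEST (a simple check, not required for enabling the feature):
SQL> SELECT name, space_limit, space_used FROM v$recovery_file_dest;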
How to Enable Flashback database?
Mount the database (not open), set parameter DB_FLASHBACK_RETENTION_TARGET and issue the
ALTER DATABASE FLASHBACK ON command.
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440;
SQL> ALTER DATABASE FLASHBACK ON;
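After enabling it, the database can be opened and the setting verified; a quick sanity check, not strictly required:
SQL> ALTER DATABASE OPEN;
SQL> SELECT flashback_on FROM v$database;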
Note :

The default retention target is 1440 minutes (one day).


If we want to retain flashback logs to perform a 48 hour flashback, set the retention target to 2880
minutes (2 days x 24 hours/day x 60 minutes/hour)

By default, flashback logs are generated for all permanent tablespaces

We can reduce overhead by disabling flashback logging for specific tablespaces:


SQL> ALTER TABLESPACE users FLASHBACK OFF;
We can re-enable flashback logging for a tablespace later with this command:
SQL> ALTER TABLESPACE users FLASHBACK ON;
Important:

If we disable Flashback Database for a tablespace, then we must take its datafiles offline before
running FLASHBACK DATABASE.

We can enable Flashback Database not only on a primary database, but also on a standby
database.

Flashback Recovery Case Study (Part 2)
Prerequisites for Flashback recovery:

Must have FLASHBACK ANY TABLE system privilege or must have FLASHBACK object privilege
on the table.

Must have SELECT, INSERT, DELETE, and ALTER privileges on the table.
Row movement must be enabled on the table.
SQL> ALTER TABLE <table_name> ENABLE ROW MOVEMENT;
Flashback Drop:( Reverses the effects of a DROP TABLE statement)
Reverse the Drop table ->
SQL> FLASHBACK TABLE EMP TO BEFORE DROP;
Assign a new name to the restored table->
SQL> FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO <new_table_name>;

Flashback Table: (Reverses a table to its state at a previous point in time)


Case:
At 01:00 PM, we discover that some rows/records are missing from the EMP table. These records were
present at 11:00 AM. Someone accidentally deleted the records from the EMP table between 11:00 AM and
01:00 PM. We can return the table to its state at 11:00 AM with Flashback Table.
Example:
FLASHBACK TABLE EMP TO TIMESTAMP TO_TIMESTAMP('2009-01-02 14:14:13','YYYY-MM-DD
HH24:MI:SS') ENABLE TRIGGERS;
OR
FLASHBACK TABLE EMP TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' minute);
Flashback Query: (We can specify a target time and then run queries, viewing results and recover from
an unwanted change)
Case:
At 02:00 PM, we discover that some records were deleted from the EMP table, and we know that at 9:30 AM
that data was correctly stored in the database. We can query the contents of the table as of a time
before the deletion to find out what data has been lost and, if appropriate, re-insert the lost data into the
database.
Example:
Retrieves the state of the records at 9:30 AM:
SQL> SELECT * FROM EMP AS OF TIMESTAMP TO_TIMESTAMP('2009-01-01 09:30:00', 'YYYY-MM-DD HH24:MI:SS');
Restoring the missing data to the table EMP:
INSERT INTO emp (SELECT * FROM emp AS OF TIMESTAMP TO_TIMESTAMP('2009-01-01 09:30:00',
'YYYY-MM-DD HH24:MI:SS'));

Flashback Transaction Query: (We can view changes made by a transaction during a period of time.)
Case:
At 03:00 PM, we discover that some records were deleted from the EMP table, and we know that at 12:00 PM
that data was correctly stored in the database. We can query the changes made to the table between
12:00 PM and 03:00 PM:
SQL>
SELECT commit_timestamp, logon_user FROM FLASHBACK_TRANSACTION_QUERY WHERE xid IN
(SELECT versions_xid FROM emp VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2003-04-04
14:00:00', 'YYYY-MM-DD HH24:MI:SS') AND TO_TIMESTAMP('2003-04-04 17:00:00', 'YYYY-MM-DD
HH24:MI:SS'));
Flashback database: (We can revert the database to a past point in time)
FLASHBACK DATABASE TO TIMESTAMP to_timestamp('2003-04-04 14:00:00', 'YYYY-MM-DD
HH24:MI:SS');
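As a minimal sketch of the surrounding steps: Flashback Database must be run with the database mounted (not open), and the database is then opened with RESETLOGS:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO TIMESTAMP to_timestamp('2003-04-04 14:00:00', 'YYYY-MM-DD HH24:MI:SS');
SQL> ALTER DATABASE OPEN RESETLOGS;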

HOW ORACLE WORKS?
An instance is currently running on the computer that is executing Oracle; this computer is called the database server.
A computer running an application (the local machine) runs the application in a user process.
The client application attempts to establish a connection to the server using the proper Net8 driver.
When the Oracle server detects the connection request from the client, it checks the client's authentication; if
authentication passes, the Oracle server creates a (dedicated) server process on behalf of the user process.
When the user executes a SQL statement and commits the transaction. For example, the user changes a
name in a row of a table. The server process receives the statement and checks the shared pool for any
shared SQL area that contains an identical SQL statement. If a shared SQL area is found, the server
process checks the user's access privileges to the requested data and the previously existing shared SQL
area is used to process the statement; if not, a new shared SQL area is allocated for the statement so that
it can be parsed and processed. The server process retrieves any necessary data values from the actual
datafile or those stored in the system global area. The server process modifies data block in the system
global area. The DBWn process writes modified blocks permanently to disk when doing so is efficient.
Because the transaction committed, the LGWR process immediately records the transaction in the online
redo log file. If the transaction is successful, the server process sends a message across the network to
the application. If it is not successful, an appropriate error message is transmitted. Throughout this entire
procedure, the other background processes run, watching for conditions that require intervention.

Basics of Oracle Architecture
As an Oracle DBA, you must understand the concepts of Oracle architecture clearly. It is a basic
requirement before you can manage your database. In this article, I will try to share my
knowledge about it. I hope it is useful for you.
What is An Oracle Database?
Basically, there are two main components of an Oracle database: the instance and the database itself. An
instance consists of some memory structures (SGA) and the background processes.

Figure 1 will show you the relationship.

Figure 1. Two main components of Oracle database


Instance
As covered above, an instance consists of memory structures and background processes. The
memory structures consist of the System Global Area (SGA) and the Program Global Area (PGA). On the other
hand, the mandatory background processes are Database Writer (DBWn), Log Writer (LGWR),
Checkpoint (CKPT), System Monitor (SMON), and Process Monitor (PMON); other optional
background processes are Archiver (ARCn), Recoverer (RECO), etc.
Figure 2 will illustrate the relationship for those components on an instance.

Figure 2. The instance components


System Global Area
The SGA is the primary memory structure. This area is broken into a few parts: the Buffer Cache,
Shared Pool, Redo Log Buffer, Large Pool, and Java Pool.
Buffer Cache
The buffer cache stores copies of data blocks retrieved from the datafiles. That is, when a user
retrieves data from the database, the data is stored in the buffer cache. Its size can be manipulated via the
DB_CACHE_SIZE parameter in the init.ora initialization parameter file.
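For example, a hedged sketch of checking and changing it (the 256M value is only an illustration; when SGA_TARGET is set, Oracle sizes this component automatically):
SQL> SHOW PARAMETER db_cache_size
SQL> ALTER SYSTEM SET db_cache_size=256M;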
Shared Pool
The shared pool is broken into two smaller memory areas: the Library Cache and the Dictionary Cache. The library
cache stores information about the commonly used SQL and PL/SQL statements and is
managed by a Least Recently Used (LRU) algorithm; it also enables the sharing of those statements
among users. The dictionary cache, on the other hand, stores information about object definitions in
the database, such as columns, tables, indexes, users, privileges, etc.
The shared pool size can be set via SHARED_POOL_SIZE parameter in init.ora initialization parameter
file.
Redo Log Buffer
Each DML statement (insert, update, and delete) executed by users generates redo entries. What is
a redo entry? It is information about all data changes made by users. Redo entries are stored in the redo
log buffer before they are written to the redo log files. To manipulate the size of the redo log buffer, you can use
the LOG_BUFFER parameter in the init.ora initialization parameter file.
Large Pool

The large pool is an optional area of memory in the SGA. It is used to relieve the burden placed on the shared
pool. It is also used for I/O processes. The large pool size can be set with the LARGE_POOL_SIZE parameter
in the init.ora initialization parameter file.
Java Pool
As its name suggests, the Java pool is used to service parsing of Java commands. Its size can be set with the
JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.
Oracle Background Processes
Oracle background processes are the processes behind the scenes that work together with the memory structures.
DBWn
The Database Writer (DBWn) process writes data from the buffer cache into the datafiles. Historically, the
database writer was named DBWR. But since some Oracle versions allow us to have more than one
database writer, the name was changed to DBWn, where n is a number from 0 to 9.
LGWR
Log writer (LGWR) process is similar to DBWn. It writes the redo entries from redo log buffer into the redo
log files.
CKPT
Checkpoint (CKPT) is a process that signals DBWn to write data from the buffer cache into the datafiles.
It also updates the datafile and control file headers when a log file switch occurs.
SMON
The System Monitor (SMON) process recovers from a system crash or instance failure by applying the
entries in the redo log files to the datafiles.
PMON
The Process Monitor (PMON) process cleans up after failed processes by rolling back their
transactions and releasing other resources.
Database
We can break up the database into two main structures: logical structures and physical structures.
Logical Structures
The logical units are tablespace, segment, extent, and data block.
Figure 3 will illustrate the relationships between those units.

Figure 3. The relationships between the Oracle logical structures

Tablespace
A tablespace is a logical grouping of database objects. A database must have one or more tablespaces. In
Figure 3, we have three tablespaces: the SYSTEM tablespace, Tablespace 1, and Tablespace 2.
A tablespace is composed of one or more datafiles.
Segment
A tablespace is further broken into segments. A segment is used to store the same type of objects. That is,
every table in the database is stored in a specific segment (named a Data Segment) and every index in
the database is also stored in its own segment (named an Index Segment). The other segment types are the
Temporary Segment and the Rollback Segment.
Extent
A segment is further broken into extents. An extent consists of one or more data blocks. When a
database object grows, an extent is allocated. Unlike a tablespace or a segment, an extent
cannot be named.
Data Block
A data block is the smallest unit of storage in the Oracle database. The data block size is a specific
number of bytes; within a tablespace, every block has the same number of bytes.
Physical Structures
The physical structures are structures of an Oracle database (in this case the disk files) that are not
directly manipulated by users. The physical structure consists of datafiles, redo log files, and control files.
Datafiles
A datafile is a file that corresponds to a tablespace. One datafile can be used by only one tablespace, but
one tablespace can have more than one datafile.
Redo Log Files
Redo log files are the files that store the redo entries generated by DML statements. They can be used for
recovery processes.
Control Files
Control files are used to store information about physical structure of database, such as datafiles size and
location, redo log files location, etc.

Startingupadatabase
This article explains the procedures involved in starting an Oracle instance and database.
First Stage: Oracle engine start an Oracle Instance
When Oracle starts an instance, it reads the initialization parameter file to determine the values of
initialization parameters. Then, it allocates an SGA, which is a shared area of memory used for database
information, and creates background processes. At this point, no database is associated with these
memory structures and processes.

Second Stage: Mount the Database


To mount the database, the instance finds the database control files and opens them. Control files are
specified in the CONTROL_FILES initialization parameter in the parameter file used to start the instance.
Oracle then reads the control files to get the names of the database's datafiles and redo log files.
At this point, the database is still closed and is accessible only to the database administrator. The
database administrator can keep the database closed while completing specific maintenance operations.
However, the database is not yet available for normal operations.
Final Stage: Database open for normal operation
Opening a mounted database makes it available for normal database operations. Usually, a database
administrator opens the database to make it available for general use.
When you open the database, Oracle opens the online datafiles and online redo log files. If a tablespace
was offline when the database was previously shut down, the tablespace and its corresponding datafiles
will still be offline when you reopen the database.
If any of the datafiles or redo log files are not present when you attempt to open the database, then
Oracle returns an error. You must perform recovery on a backup of any damaged or missing files before
you can open the database.
Open a Database in Read-Only Mode
You can open any database in read-only mode to prevent its data from being modified by user
transactions. Read-only mode restricts database access to read-only transactions, which cannot write to
the datafiles or to the redo log files.
Disk writes to other files, such as control files, operating system audit trails, trace files, and alert files, can
continue in read-only mode. Temporary tablespaces for sort operations are not affected by the database
being open in read-only mode. However, you cannot take permanent tablespaces offline while a database
is open in read-only mode. Also, job queues are not available in read-only mode.
Read-only mode does not restrict database recovery or operations that change the database's state
without generating redo data. For example, in read-only mode:
* Datafiles can be taken offline and online
* Offline datafiles and tablespaces can be recovered
* The control file remains available for updates about the state of the database

Shutdownthedatabase
The three steps to shutting down a database and its associated instance are:
* Close the database.
* Unmount the database.
* Shut down the instance.

Close a Database
When you close a database, Oracle writes all database data and recovery data in the SGA to the datafiles
and redo log files, respectively. Next, Oracle closes all online datafiles and online redo log files. At this
point, the database is closed and inaccessible for normal operations. The control files remain open after a
database is closed but still mounted.

Close the Database by Terminating the Instance

In rare emergency situations, you can terminate the instance of an open database to close and
completely shut down the database instantaneously. This process is fast, because the operation of writing
all data in the buffers of the SGA to the datafiles and redo log files is skipped. The subsequent reopening
of the database requires recovery, which Oracle performs automatically.
Unmount a Database
After the database is closed, Oracle unmounts the database to disassociate it from the instance. At this
point, the instance remains in the memory of your computer.
After a database is unmounted, Oracle closes the control files of the database.
Shut Down an Instance
The final step in database shutdown is shutting down the instance. When you shut down an instance, the
SGA is removed from memory and the background processes are terminated.
Abnormal Instance Shutdown
In unusual circumstances, shutdown of an instance might not occur cleanly; all memory structures might
not be removed from memory or one of the background processes might not be terminated. When
remnants of a previous instance exist, a subsequent instance startup most likely will fail. In such
situations, the database administrator can force the new instance to start up by first removing the
remnants of the previous instance and then starting a new instance, or by issuing a SHUTDOWN ABORT
statement in Enterprise Manager.

ManaginganOracleInstance
When Oracle engine starts an instance, it reads the initialization parameter file to determine the values of
initialization parameters. Then, it allocates an SGA and creates background processes. At this point, no
database is associated with these memory structures and processes.
Type of initialization file:
Static (PFILE): a text file, modified manually with an OS editor.
Persistent (SPFILE): a binary file that cannot be modified directly; it is maintained by the server.

Initialization parameter file content:


* Instance parameter
* Name of the database
* Memory structure of the SGA
* Name and location of control file
* Information about undo segments
* Location of udump, bdump and cdump file
Creating an SPFILE:
CREATE SPFILE='<spfile name>.ora' FROM PFILE='<pfile name>.ora';
Note:
* Required SYSDBA Privilege.
* Execute before or after instance startup.

OracleBackgroundProcesses
An Oracle instance runs two types of processes
Server Process
Background Process

Before doing any work, a user must connect to an instance. When a user logs on to the Oracle server, the Oracle engine creates a
process called a server process. The server process communicates with the Oracle instance on behalf of the user
process.

Each background process is useful for a specific purpose and its role is well defined.

Background processes are invoked automatically when the instance is started.


Database Writer (DBWr)
Process Name: DBW0 through DBW9 and DBWa through DBWj
Max Processes: 20

This process writes the dirty buffers for the database buffer cache to data files. One database writer
process is sufficient for most systems; more can be configured if essential. The initialisation parameter,
DB_WRITER_PROCESSES, specifies the number of database writer processes to start.
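For example (the value 2 is only an illustration; DB_WRITER_PROCESSES is a static parameter, so the change takes effect after a restart):
SQL> ALTER SYSTEM SET db_writer_processes=2 SCOPE=SPFILE;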

The DBWn process writes dirty buffers to disk under the following conditions:
When a checkpoint is issued.
When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers.
Every 3 seconds.
When we place a normal or temporary tablespace offline or in read-only mode.
When we drop or truncate a table.
When we put a tablespace in backup mode.
Log Writer(LGWR)
Process Name: LGWR
Max Processes: 1

The log writer process writes data from the redo log buffers to the redo log files on disk.

The writer is activated under the following conditions:


* When a transaction is committed, a System Change Number (SCN) is generated and tagged to it. Log
writer puts a commit record in the redo log buffer and writes it to disk immediately along with the
transaction's redo entries.
* Every 3 seconds.
* When the redo log buffer is 1/3 full.
* When DBWn signals the writing of redo records to disk. All redo records associated with changes in the
block buffers must be written to disk first (the write-ahead protocol). While writing dirty buffers, if the
DBWn process finds that some redo information has not been written, it signals the LGWR to write the
information and waits until control is returned.
Log writer will write synchronously to the redo log groups in a circular fashion. If any damage is identified
with a redo log file, the log writer will log an error in the LGWR trace file and the system Alert Log.
Sometimes, when additional redo log buffer space is required, the LGWR will even write uncommitted
redo log entries to release the held buffers. LGWR can also use group commits (multiple committed
transaction's redo entries taken together) to write to redo logs when a database is undergoing heavy write
operations.
The log writer must always be running for an instance.
System Monitor

Process Name: SMON


Max Processes: 1

This process is responsible for instance recovery, if necessary, at instance startup. SMON also cleans up
temporary segments that are no longer in use. SMON wakes up about every 5 minutes to perform
housekeeping activities. SMON must always be running for an instance.
Process Monitor
Process Name: PMON
Max Processes: 1

This process is responsible for performing recovery if a user process fails. It will rollback uncommitted
transactions. PMON is also responsible for cleaning up the database buffer cache and freeing resources
that were allocated to a process. PMON also registers information about the instance and dispatcher
processes with network listener.
PMON wakes up every 3 seconds to perform housekeeping activities. PMON must always be running for
an instance.
Checkpoint Process
Process Name: CKPT
Max processes: 1

Checkpoint process signals the synchronization of all database files with the checkpoint information. It
ensures data consistency and faster database recovery in case of a crash.
CKPT ensures that all database changes present in the buffer cache at that point are written to the data
files, the actual writing is done by the Database Writer process. The datafile headers and the control files
are updated with the latest SCN (when the checkpoint occurred), this is done by the log writer process.
The CKPT process is invoked under the following conditions:

* When a log switch is done.
* When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT exists between
the incremental checkpoint and the tail of the log; this is in seconds.
* When the number of blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL
exists between the incremental checkpoint and the tail of the log; these are OS blocks.
* When the number of buffers specified by the initialization parameter FAST_START_IO_TARGET required to
perform roll-forward is reached.
* Oracle 9i onwards, when the time specified by the initialization parameter FAST_START_MTTR_TARGET is
reached; this is in seconds and specifies the time required for a crash recovery. The parameter
FAST_START_MTTR_TARGET replaces LOG_CHECKPOINT_INTERVAL and
FAST_START_IO_TARGET, but these parameters can still be used.
* When the ALTER SYSTEM SWITCH LOGFILE command is issued.
* When the ALTER SYSTEM CHECKPOINT command is issued.
Incremental Checkpoints initiate the writing of recovery information to datafile headers and controlfiles.
Database writer is not signaled to perform buffer cache flushing activity here.
Archiver
Process Name: ARC0 through ARC9
Max Processes: 10

The ARCn process is responsible for writing the online redo log files to the mentioned archive log
destination after a log switch has occurred. ARCn is present only if the database is running in archivelog
mode and automatic archiving is enabled. The log writer process is responsible for starting multiple ARCn
processes when the workload increases. Unless ARCn completes the copying of a redo log file, it is not
released to log writer for overwriting.
The number of Archiver processes that can be invoked initially is specified by the initialization parameter
LOG_ARCHIVE_MAX_PROCESSES. The actual number of Archiver processes in use may vary based
on the workload.
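For example (the value 4 is only an illustration; this parameter is dynamic):
SQL> ALTER SYSTEM SET log_archive_max_processes=4;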
Lock Monitor
Process Name: LMON
Max Processes: 1

Meant for Parallel server setups, Lock Monitor manages global locks and resources. It handles the
redistribution of instance locks whenever instances are started or shutdown. Lock Monitor also recovers
instance lock information prior to the instance recovery process. Lock Monitor co-ordinates with the
Process Monitor to recover dead processes that hold instance locks.
Lock processes
Process Name: LCK0 through LCK9
Max Processes: 10

Meant for Parallel server setups, the instance locks that are used to share resources between instances
are held by the lock processes.

Block Server Process


Process Name: BSP0 through BSP9
Max processes: 10

Meant for Parallel server setups, Block server Processes have to do with providing a consistent read
image of a buffer that is requested by a process of another instance, in certain circumstances.
Queue Monitor
Process Name: QMN0 through QMN9
Max Processes: 10

This is the advanced Queuing Time manager process. QMNn monitors the message queues. Failure of
QMNn process will not cause the instance to fail.
Event Monitor
Process Name: EMN0/EMON
Max Processes: 1

This process is also related to Advanced Queuing, and is meant for allowing a publish/subscribe style of
messaging between applications.
Recoverer
Process Name: RECO
Max processes: 1

Intended for distributed recovery. All in-doubt transactions are recovered by this process in the distributed
database setup. RECO will connect to the remote database to resolve pending transactions.
Job Queue Processes
Process Name: J000 through J999 (Originally called SNPn processes)
Max Processes: 1000

Job queue processes carry out batch processing. All scheduled jobs are executed by these processes.
The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum job processes that can
be run concurrently. If a job fails with some Oracle error, it is recorded in the alert file and a process trace
file is generated. Failure of the Job queue process will not cause the instance to fail.
Dispatcher
Process Name: Dnnn

Max Processes: -

Intended for Shared server setups (MTS). Dispatcher processes listen to and receive requests from
connected sessions and places them in the request queue for further processing. Dispatcher processes
also pickup outgoing responses from the result queue and transmit them back to the clients. Dnnn are
mediators between the client processes and the shared server processes. The maximum number of
Dispatcher process can be specified using the initialization parameter MAX_DISPATCHERS.
Shared Server Processes
Process Name: Snnn
Max Processes: -

Intended for Shared server setups (MTS). These processes pickup requests from the call request queue,
process them and then return the results to a result queue. The number of shared server processes to be
created at instance startup can be specified using the initialization parameter SHARED_SERVERS.
Parallel Execution Slaves
Process Name: Pnnn
Max Processes: -

These processes are used for parallel processing. It can be used for parallel execution of SQL statements
or recovery. The Maximum number of parallel processes that can be invoked is specified by the
initialization parameter PARALLEL_MAX_SERVERS.
Trace Writer
Process Name: TRWR
Max Processes: 1

Trace writer writes trace files from an Oracle internal tracing facility.
Input/Output Slaves
Process Name: Innn
Max Processes: -

These processes are used to simulate asynchronous I/O on platforms that do not support it. The
initialization parameter DBWR_IO_SLAVES is set for this purpose.
Wakeup Monitor Process
Process Name: WMON
Max Processes: -

This process was available in older versions of Oracle to alarm other processes that are suspended while
waiting for an event to occur. This process is obsolete and has been removed.
Conclusion
With every release of Oracle, new background processes have been added and some existing ones
modified. These processes are the key to the proper working of the database. Any issues related to
background processes should be monitored and analyzed from the trace files generated and the alert log.

Create Standalone 10g Database Manually
Step 1 Create a initSID.ora (Example: initTEST.ora) file in $ORACLE_HOME/dbs/ directory.
Example: $ORACLE_HOME/dbs/initTEST.ora
Put following entry in initTEST.ora file
##############################################################
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = (/<Destination>/control1.ctl, /<Destination>/control2.ctl, /<Destination>/control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
####################################################
Step 2 Create a password file
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora password=<password>
entries=5
Step 3 Set your ORACLE_SID
$ export ORACLE_SID=test
$ export ORACLE_HOME=/<Destination>
Step 4 Run the following sqlplus command to connect to the database and startup the instance.
$sqlplus '/ as sysdba'
SQL> startup nomount
Step 5 Create the Database. use following scripts.

create database test


logfile group 1 ('<Destination>/redo1.log') size 100M,
group 2 ('<Destination>/redo2.log') size 100M,
group 3 ('<Destination>/redo3.log') size 100M
character set WE8ISO8859P1
national character set utf8
datafile '<Destination>/system.dbf' size 500M autoextend on next 10M maxsize unlimited extent
management local
sysaux datafile '<Destination>/sysaux.dbf' size 100M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '<Destination>/undotbs1.dbf' size 100M
default temporary tablespace temp tempfile '<Destination>/temp01.dbf' size 100M;

Step 6 Run the scripts necessary to build views, synonyms, etc.:


CATALOG.SQL-- creates the views of data dictionary tables and the dynamic performance views.
CATPROC.SQL-- establishes the usage of PL/SQL functionality and creates many of the PL/SQL
Oracle supplied packages.
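Connected as SYSDBA, these scripts are typically run from $ORACLE_HOME/rdbms/admin, for example:
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql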

Create 10g OMF Database Manually
Step 1
Create a initSID.ora(Example: initTEST.ora) file in $ORACLE_HOME/dbs/ directory.
Example: $ORACLE_HOME/dbs/initTEST.ora
Put following entry in initTEST.ora file
##############################################################
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = (/<Destination>/control1.ctl, /<Destination>/control2.ctl, /<Destination>/control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test

db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = /<Put DB File Destination> #OMF
db_create_online_log_dest_1 = /<Put first redo and control file destination> #OMF
db_create_online_log_dest_2 = /<Put second redo and control file destination> #OMF
db_recovery_file_dest = /<put flash recovery area destination> #OMF
################################################################

Step 2 Create a password file


$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora password=<password>
entries=5

Step 3 Set your ORACLE_SID


export ORACLE_SID=test
export ORACLE_HOME=/<oracle home path>
Step 4 Run the following sqlplus command to connect to the database and startup the instance.
sqlplus '/ as sysdba'
SQL> startup nomount
Step 5 Create the database
create database test
character set WE8ISO8859P1
national character set utf8
undo tablespace undotbs1
default temporary tablespace temp;
Step 6 Run catalog and catproc

@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

Managing Data Files
What is a data file?
Data files are physical files of the OS that store the data of all logical structures in the database. At least one data file
must be created for each tablespace.

How to determine the number of datafiles?
At least one datafile is required for the SYSTEM tablespace. We can create separate datafiles for other
tablespaces. When we create a DATABASE, MAXDATAFILES may or may not be specified in the CREATE DATABASE
statement clause. Oracle assigns the db_files parameter a default value of 200. We can also specify the number of
datafiles in the init file.
When we start the Oracle instance, the DB_FILES initialization parameter reserves space in the SGA for datafile information,
up to the maximum number of datafiles. We can change the value of DB_FILES (by changing the
initialization parameter setting), but the new value does not take effect until you shut down and restart the
instance.
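For example, a hedged sketch of checking and raising the limit (the value 400 is only an illustration; DB_FILES is static, so a restart is required):
SQL> SHOW PARAMETER db_files
SQL> ALTER SYSTEM SET db_files=400 SCOPE=SPFILE;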

Important:

If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit.
Example: if the init parameter db_files is set to 2, then you cannot add more than 2 datafiles to your database.

If the value of DB_FILES is too high, memory is unnecessarily consumed.


When you issue CREATE DATABASE or CREATE CONTROLFILE statements,
the MAXDATAFILES parameter specifies an initial size for the datafile section of the control file. However, if you attempt to add a new file whose
number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the control file will expand
automatically so that the datafiles section can accommodate more files.

Note:
If you add new datafiles to a tablespace and do not fully specify the filenames, the database creates the
datafiles in the default database directory . Oracle recommends you always specify a fully qualified name
for a datafile. Unless you want to reuse existing files, make sure the new filenames do not conflict with
other files. Old files that have been previously dropped will be overwritten.

How to add a datafile to an existing tablespace?


alter tablespace <Tablespace_Name> add datafile /............../......./file01.dbf size 10m autoextend on;

How to resize the datafile?


alter database datafile '/............../......./file01.dbf' resize 100M;

How to bring datafile online and offline?

alter database datafile '/............../......./file01.dbf' online;


alter database datafile '/............../......./file01.dbf' offline;

How to rename a datafile in a single tablespace?


Step:1 Take the tablespace that contains the datafiles offline. The database must be open.
alter tablespace <Tablespace_Name> offline normal;
Step:2 Rename the datafiles using the operating system.
Step:3 Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the
filenames within the database.
alter tablespace <Tablespace_Name> rename datafile '/...../..../..../user.dbf' to '/..../..../.../users1.dbf';
Step 4: Back up the database. After making any structural changes to a database, always perform an
immediate and complete backup.

How to relocate a datafile in a single tablespace?


Step:1 Use the following query to find the specific file name or size.
select file_name,bytes from dba_data_files where tablespace_name='<tablespace_name>';

Step:2 Take the tablespace containing the datafiles offline:


alter tablespace <Tablespace_Name> offline normal;

Step:3 Copy the datafiles to their new locations and rename them using the operating system.
Step:4 Rename the datafiles within the database.
ALTER TABLESPACE <Tablespace_Name> RENAME DATAFILE
'/u02/oracle/rbdb1/users01.dbf', '/u02/oracle/rbdb1/users02.dbf'
TO '/u03/oracle/rbdb1/users01.dbf','/u04/oracle/rbdb1/users02.dbf';

Step:5 Back up the database. After making any structural changes to a database, always perform an
immediate and complete backup.

How to Rename and Relocate Datafiles in Multiple Tablespaces?


Step:1 Ensure that the database is mounted but closed.
Step:2 Copy the datafiles to be renamed to their new locations and new names, using the operating
system.

Step:3 Use ALTER DATABASE to rename the file pointers in the database control file.
ALTER DATABASE
RENAME FILE
'/u02/oracle/rbdb1/sort01.dbf',
'/u02/oracle/rbdb1/user3.dbf'
TO '/u02/oracle/rbdb1/temp01.dbf',
'/u02/oracle/rbdb1/users03.dbf';
Step:4 Back up the database. After making any structural changes to a database, always perform an
immediate and complete backup.

How to drop a datafile from a tablespace
Important : Oracle does not provide an interface for dropping datafiles in the same way you would drop a
schema object such as a table or a user.
Reasons why you want to remove a datafile from a tablespace:

You may have mistakenly added a file to a tablespace.

You may have made the file much larger than intended and now want to remove it.

You may be involved in a recovery scenario and the database won't start because a datafile is
missing.
Important : Once the DBA creates a datafile for a tablespace, the datafile cannot be removed. If you want
to do any critical operation like dropping datafiles, ensure you have a full backup of the database.
Step: 1 Determining how many datafiles make up a tablespace
To determine how many and which datafiles make up a tablespace, you can use the following query:
SELECT file_name, tablespace_name FROM dba_data_files WHERE tablespace_name ='<name of
tablespace>';
Case 1
If you have only one datafile in the tablespace and you want to remove it. You can simply drop the entire
tablespace using the following:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
The above command will remove the tablespace, the datafile, and the tablespace's contents from the
data dictionary.

Important : Oracle will not drop the physical datafile after the DROP TABLESPACE command. This
action needs to be performed at the operating system.
Case 2
If you have more than one datafile in the tablespace, and you want to remove all datafiles and do not
need the information contained in that tablespace, then use the same command as above:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Case 3
If you have more than one datafile in the tablespace and you want to remove only one or two (not all)
datafiles in the tablespace, or you want to keep the objects that reside in the other datafile(s) which are part
of this tablespace, then you must export all the objects inside the tablespace.
Step: 1 Gather information on the current datafiles within the tablespace by running the following query in
SQL*Plus:
SELECT file_name, tablespace_name FROM dba_data_files WHERE tablespace_name ='<name of
tablespace>';
Step: 2 You now need to identify which objects are inside the tablespace for the purpose of running an
export. To do this, run the following query:
SELECT owner, segment_name, segment_type FROM dba_segments WHERE
tablespace_name='<name of tablespace>'
Step : 3 Now, export all the objects that you wish to keep.
Step : 4 Once the export is done, issue the
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Step : 5 Delete the datafiles belonging to this tablespace using the operating system.
Step : 6 Recreate the tablespace with the datafile(s) desired, then import the objects into that tablespace.
Case : 4

If you do not want to follow any of these procedures, there are other things that can be done besides
dropping the tablespace.
If the reason you wanted to drop the file is because you mistakenly created the file of the wrong
size, then consider using the RESIZE command.
If you really added the datafile by mistake, and Oracle has not yet allocated any space within this
datafile, then you can use the ALTER DATABASE DATAFILE '<filename>' RESIZE <size>; command to make the file
smaller than 5 Oracle blocks. If the datafile is resized to smaller than 5 Oracle blocks, then it will never be
considered for extent allocation. At some later date, the tablespace can be rebuilt to exclude the incorrect
datafile.

Important : The ALTER DATABASE DATAFILE <datafile name> OFFLINE DROP command is not
meant to allow you to remove a datafile. What the command really means is that you are offlining the
datafile with the intention of dropping the tablespace.
Important : If you are running in archivelog mode, you can also use: ALTER DATABASE DATAFILE
<datafile name> OFFLINE; instead of OFFLINE DROP. Once the datafile is offline, Oracle no longer
attempts to access it, but it is still considered part of that tablespace. This datafile is marked only as
offline in the controlfile and there is no SCN comparison done between the controlfile and the datafile
during startup (This also allows you to startup a database with a non-critical datafile missing). The entry
for that datafile is not deleted from the controlfile to give us the opportunity to recover that datafile.
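As an illustration of that distinction, a minimal sketch of taking a datafile offline in ARCHIVELOG mode and bringing it back after media recovery (the path is only a placeholder):
SQL> ALTER DATABASE DATAFILE '/u01/oradata/prod/users02.dbf' OFFLINE;
SQL> RECOVER DATAFILE '/u01/oradata/prod/users02.dbf';
SQL> ALTER DATABASE DATAFILE '/u01/oradata/prod/users02.dbf' ONLINE;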

Managing Control Files
A control file is a small binary file that records the physical structure of the database with database name,
Names and locations of associated datafiles, online redo log files, timestamp of the database creation,
current log sequence number and Checkpoint information.
Note:

Without the control file, the database cannot be mounted.

You should create two or more copies of the control file during database creation.
Role of Control File:
When the database instance mounts, Oracle recognizes all files listed in the control file and opens them. Oracle writes
to and maintains all listed control files during database operation.
Important:

If you do not specify files for CONTROL_FILES before database creation, and you are not using
the Oracle Managed Files feature, Oracle creates a control file in the <DISK>:\ORACLE_HOME\DATABASE\
location and uses a default filename. The default name is operating system specific.

Every Oracle database should have at least two control files, each stored on a different disk. If a
control file is damaged due to a disk failure, the associated instance must be shut down.

Oracle writes to all filenames listed for the initialization parameter CONTROL_FILES in the
database's initialization parameter file.

The first file listed in the CONTROL_FILES parameter is the only file read by the Oracle database
server during database operation.

If any of the control files become unavailable during database operation, the instance becomes
inoperable and should be aborted.

How to create the control file at the time of database creation:


The initial control files of an Oracle database are created when you issue the CREATE DATABASE
statement. The names of the control files are specified by the CONTROL_FILES parameter in the
initialization parameter file used during database creation.
How to Create Additional Copies, Renaming, and Relocating Control Files
Step:1 Shut down the database.
Step:2 Copy an existing control file to a different location, using operating system commands.
Step:3 Edit the CONTROL_FILES parameter in the database's initialization parameter file to add the new
control file's name, or to change the existing control filename.
Step:4 Restart the database.
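A minimal sketch of the resulting CONTROL_FILES entry in the initialization parameter file (the paths are placeholders following the document's <DISK>:\Directory convention):
CONTROL_FILES = ('<DISK>:\Directory1\CONTROL01.CTL', '<DISK>:\Directory2\CONTROL02.CTL')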
When do you Create New Control Files?

All control files for the database have been permanently damaged and you do not have a control
file backup.

You want to change one of the permanent database parameter settings originally specified in the
CREATE DATABASE statement. These settings include the database's name and the following
parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and
MAXINSTANCES.

Steps for Creating New Control Files


Step:1 Make a list of all datafiles and online redo log files of the database.
SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'CONTROL_FILES';
Step:2 Shut down the database.
Step:3 Back up all datafiles and online redo log files of the database.
Step:4 Start up a new instance, but do not mount or open the database:

STARTUP NOMOUNT
Step:5 Create a new control file for the database using the CREATE CONTROLFILE statement.
Example:
CREATE CONTROLFILE REUSE DATABASE "<DB_NAME>" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 '<DISK>:\Directory\REDO01.LOG' SIZE 5024K,
GROUP 2 '<DISK>:\Directory\REDO02.LOG' SIZE 5024K,
GROUP 3 '<DISK>:\Directory\REDO03.LOG' SIZE 5024K
# STANDBY LOGFILE
DATAFILE
'<DISK>:\Directory\SYSTEM.DBF',
'<DISK>:\Directory\UNDOTBS.DBF'
CHARACTER SET WE8MSWIN1252
;
Step:6 Open the database using one of the following methods:

If you specified NORESETLOGS when creating the control file, use the following command: ALTER
DATABASE OPEN;

If you specified RESETLOGS when creating the control file, use the ALTER DATABASE
statement, indicating RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;

TIPS:
When creating a new control file, select the RESETLOGS option if you have lost any online redo log
groups in addition to control files. In this case, you will need to recover from the loss of the redo logs . You
must also specify the RESETLOGS option if you have renamed the database. Otherwise, select the
NORESETLOGS option.
Backing Up Control Files
Method 1:
Back up the control file to a binary file (duplicate of existing control file) using the following statement:
ALTER DATABASE BACKUP CONTROLFILE TO '<DISK>:\Directory\control.bkp';
Method 2:
Produce SQL statements that can later be used to re-create your control file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
How to retrieve information related to Control File:
V$DATABASE
Displays database information from the control file
V$CONTROLFILE
Lists the names of control files
V$CONTROLFILE_RECORD_SECTION
Displays information about control file record sections
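For example, a quick check of which control files the instance is currently using:
SQL> SELECT NAME FROM V$CONTROLFILE;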

Managing Redo Log Files
The redo log consists of two or more preallocated files that store all changes made to the database. Every
instance of an Oracle database has an associated online redo log to protect the database in case of an
instance failure.
Main points to consider before creating redo log files?

Members of the same group should be stored on separate disks so that no single disk failure can
cause LGWR and the database instance to fail.

Set the archive destination to a separate disk, other than the redo log members, to avoid contention
between LGWR and the archiver (ARCn).

With mirrored groups of online redo logs , all members of the same group must be the same size.
What are the parameters related to Redo log files?
Parameters related to redo log files are

MAXLOGFILES

MAXLOGMEMBERS
MAXLOGFILES and MAXLOGMEMBERS parameters are defined at database creation time. You can
increase these parameters by recreating the control file.

How do you create online Redo log group?


Alter database add logfile group <group Number> ('<DISK>:\Directory\<LOG_FILE_NAME>.log',
'<DISK>:\Directory\<LOG_FILE_NAME>.log') size 500K;

How to check the status of added redo log group?


Select * from v$log;
Interpretation:
Here you will observe that the status is UNUSED, which means that this redo log file is not being used by Oracle
yet. ARC is the archived column in v$log; it is YES by default when you create a redo log file. It changes
to NO once this file has been used by Oracle and the system is not in archive log mode. Sequence# 0 also
indicates that it is not being used yet.
How to create online redo log member ?
alter database add logfile member
'<DISK>:\Directory\<LOG_FILE_NAME>.log', '<DISK>:\Directory\<LOG_FILE_NAME>.log' to group
<GROUP NUMBER>;
How to rename and relocate online redo log members ?
Important: Take the backup before renaming and relocating.
Step:1 Shutdown the database .

Step:2 Startup the database as startup mount.


Step:3 Copy the desired redo log files to the new location. You can change the name of the redo log file in the
new location.
Step:4 Alter database rename file '<DISK>:\Directory\<LOG_FILE_NAME>.log' to
'<NEW_DISK>:\NewDirectory\<LOG_FILE_NAME>.log';
Step:5 Alter database open;
Step: 6 Shutdown the database normal and take the backup.
How to drop online redo log group?
Important:

You must have at least two online redo log groups.


You cannot drop an active online redo log group. If it is active, switch it with alter system switch logfile
before dropping it.
Also make sure that the online redo log group is archived (if archiving is enabled).
Syntax:
If you want to drop log group:
Alter database drop logfile group <GROUP_NUMBER>;
If you want to drop a logfile member:
Alter database drop logfile member <DISK>:\Directory\<LOG_FILE_NAME>.log;
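A short sketch of a typical sequence when dropping a group (group 3 is only an example); remember that the member files must be removed at the operating system level afterwards:
SQL> SELECT GROUP#, STATUS, ARCHIVED FROM V$LOG;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;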
How to Viewing Online Redo Log Information?
SELECT * FROM V$LOG;
GROUP# THREAD# SEQ   BYTES   MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- -------- ------------- ---------
     1       1 10605 1048576       1 YES ACTIVE        11515628 16-APR-00
     2       1 10606 1048576       1 NO  CURRENT       11517595 16-APR-00
     3       1 10603 1048576       1 YES INACTIVE      11511666 16-APR-00
     4       1 10604 1048576       1 YES INACTIVE      11513647 16-APR-00

SELECT * FROM V$LOGFILE;


GROUP# STATUS MEMBER
------ ------ ----------------------------------
     1        D:\ORANT\ORADATA\IDDB2\REDO04.LOG
     2        D:\ORANT\ORADATA\IDDB2\REDO03.LOG
     3        D:\ORANT\ORADATA\IDDB2\REDO02.LOG
     4        D:\ORANT\ORADATA\IDDB2\REDO01.LOG
If STATUS is blank for a member, then the file is in use.

Managing Temporary Tablespace
First we will discuss the use of a temporary tablespace. We use it to manage space for database sort
operations. For example, if we join two large tables, Oracle requires space for the sort operation because it
cannot always do the sorting in memory. This sort operation is done in the temporary tablespace.
We must assign a temporary tablespace to each user in the database; if we do not assign a temporary
tablespace to a user, Oracle allocates sort space in the SYSTEM tablespace by default.
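A minimal sketch of assigning a temporary tablespace to an existing user (the user and tablespace names are placeholders):
SQL> ALTER USER scott TEMPORARY TABLESPACE temp;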
Important:

A temporary tablespace cannot contain permanent objects and therefore does not need to be
backed up.

When we create a TEMPFILE, Oracle only writes to the header and last block of the file. This is
why it is much quicker to create a TEMPFILE than to create a normal database file.

TEMPFILEs are not recorded in the database's control file.


We cannot remove datafiles from a tablespace until we drop the entire tablespace, but we can
remove a TEMPFILE from the database:
SQL> ALTER DATABASE TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' DROP INCLUDING
DATAFILES;

Except for adding a tempfile, you cannot use the ALTER TABLESPACE statement for a locally
managed temporary tablespace (operations like rename, set to read only, recover, etc. will fail).
How to create Temporary Tablespaces?
CREATE TEMPORARY TABLESPACE temp
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' size 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;


For best performance, the UNIFORM SIZE must be a multiple of the SORT_AREA_SIZE parameter.
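To check the current setting before choosing a uniform extent size, a simple sketch:
SQL> SHOW PARAMETER sort_area_size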

How to define the Default Temporary Tablespace?


We can define a Default Temporary Tablespace at database creation time, or by issuing an "ALTER
DATABASE" statement:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
Important:

The default Default Temporary Tablespace is SYSTEM.

Each database can be assigned one and only one Default Temporary Tablespace.

Temporary Tablespace is automatically assigned to users.


Restriction:
The following restrictions apply to default temporary tablespaces:

The Default Temporary Tablespace must be of type TEMPORARY

The DEFAULT TEMPORARY TABLESPACE cannot be taken off-line

The DEFAULT TEMPORARY TABLESPACE cannot be dropped until you create another one.

How to see the default temporary tablespace for a database?


SELECT * FROM DATABASE_PROPERTIES where
PROPERTY_NAME='DEFAULT_TEMP_TABLESPACE';
How to Monitor Temporary Tablespaces and Sorting?
Use the following query to view temp file information:
Select * from dba_temp_files; or Select * from v$tempfile;
Use the following query to monitor temporary segments:
Select * from v$sort_segments; or Select * from v$sort_usage;
Use the following query for free space in the tablespace:
select TABLESPACE_NAME,BYTES_USED, BYTES_FREE from V$TEMP_SPACE_HEADER;

How to Drop / Recreate a Temporary Tablespace? (Method)


This should be performed during off hours with no users logged on performing work.
If you are working with a temporary tablespace that is NOT the default temporary tablespace for the
database, this process is very simple. Simply drop and recreate the temporary tablespace:
Step:1 Drop the Tablespace
DROP TABLESPACE temp;
Tablespace dropped.
Step: 2 Create new temporary tablespace.
CREATE TEMPORARY TABLESPACE TEMP
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' SIZE 500M REUSE
AUTOEXTEND ON NEXT 100M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
How to Drop / Recreate the Default Temporary Tablespace? (Method)
You will know fairly quickly if the tablespace is a default temporary tablespace when you are greeted with
the following exception:
DROP TABLESPACE temp;
drop tablespace temp
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace
Step: 1 Create another temporary tablespace.
CREATE TEMPORARY TABLESPACE temp2
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf'SIZE 5M REUSE
AUTOEXTEND ON NEXT 1M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

Tablespace created.
Step: 2 Make default tablespace.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
Database altered.
Step: 3 Drop the old default temporary tablespace.
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.
Most Important:
You do not need to assign a temporary tablespace while creating a database user. The Temporary
Tablespace is automatically assigned. The name of the temporary tablespace is determined by the
DEFAULT_TEMP_TABLESPACE property in the data dictionary view
DATABASE_PROPERTIES.
Example:
Step:1 Create database user
create user test identified by test default TABLESPACE users;
User created.
Step: 2 View information
SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE FROM
DBA_USERS WHERE USERNAME='TEST';
USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE
-------- ------------------ ---------------------
TEST     USERS              TEMP


NOTE: Temporary Tablespace TEMP is automatically assigned to the user TEST.
Certain Restrictions:

The default temporary tablespace cannot be dropped.

The default temporary tablespace cannot be taken offline.

Managing UNDO TABLESPACE
Before a commit, Oracle Database keeps records of the actions of a transaction because Oracle needs this
information to roll back or undo the changes.
What are the main init.ora Parameters for Automatic Undo Management?
UNDO_MANAGEMENT:
The default value for this parameter is MANUAL. If you want to set the database in an automated mode,
set this value to AUTO. (UNDO_MANAGEMENT = AUTO)
UNDO_TABLESPACE:
UNDO_TABLESPACE defines the tablespaces that are to be used as Undo Tablespaces. If no value is
specified, Oracle will use the system rollback segment to startup. This value is dynamic and can be
changed online (UNDO_TABLESPACE = <Tablespace_Name>)
UNDO_RETENTION:
The default value for this parameter is 900 seconds. This value specifies the amount of time undo is kept in
the tablespace. This applies to both committed and uncommitted transactions, since the Flashback Query
feature introduced in Oracle needs this information to create a read-consistent copy of the data in
the past.
UNDO_SUPPRESS_ERRORS:
The default value is FALSE. Set this to TRUE to suppress the errors generated when manual undo management
SQL operations are issued in automatic undo management mode.
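A minimal init.ora sketch for automatic undo management (the tablespace name UNDOTBS1 is only a placeholder):
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=UNDOTBS1
UNDO_RETENTION=900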
How to Create UNDO Tablespaces?
An UNDO tablespace can be created at database creation time or can be added to an existing
database using the CREATE UNDO TABLESPACE command.
Scripts at the time of Database creation:
CREATE DATABASE <DB_NAME>
MAXINSTANCES 1
MAXLOGHISTORY 1

MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
DATAFILE '<DISK>:\Directory\<FILE_NAME>.DBF' SIZE 204800K REUSE
AUTOEXTEND ON NEXT 20480K MAXSIZE 32767M
UNDO TABLESPACE "<UNDO_TABLESPACE_NAME>"
DATAFILE '<DISK>:\DIRECTORY\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON NEXT 1024K MAXSIZE 32767M
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K,
GROUP 2 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K,
GROUP 3 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K;
Scripts after creating Database:
CREATE UNDO TABLESPACE "<UNDO_TABLESPACE_NAME>"
DATAFILE '<DISK>:\DIRECTORY\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON;
How to Drop an Undo Tablespace?
You cannot drop an active undo tablespace; an undo tablespace can only be dropped if it is not
currently used by any instance. Use the DROP TABLESPACE statement to drop an undo tablespace;
all contents of the undo tablespace are removed.
Example:
DROP TABLESPACE <UNDO_TABLESPACE_NAME> including contents;
How to Switch Undo Tablespaces?

We can switch from one undo tablespace to another. Because
the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM
SET statement can be used to assign a new undo tablespace.
Step 1: Create another UNDO TABLESPACE
CREATE UNDO TABLESPACE "<ANOTHER_UNDO_TABLESPACE>"
DATAFILE '<DISK>:\Directory\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON;
Step 2: Switch to the new undo tablespace:
alter system set UNDO_TABLESPACE=<UNDO_TABLESPACE>;
Step 3: Drop old UNDO TABLESPACE
drop tablespace <UNDO_TABLESPACE> including contents;
IMPORTANT:
The database is online while the switch operation is performed, and user transactions can be executed
while this command is being executed. When the switch operation completes successfully, all
transactions started after the switch operation began are assigned to transaction tables in the new undo
tablespace.
The switch operation does not wait for transactions in the old undo tablespace to commit. If there are any
pending transactions in the old undo tablespace, the old undo tablespace enters a PENDING
OFFLINE mode (status). In this mode, existing transactions can continue to execute, but undo records for
new user transactions cannot be stored in this undo tablespace.
An undo tablespace can exist in this PENDING OFFLINE mode, even after the switch operation
completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance, nor
can it be dropped. Eventually, after all active transactions have committed, the undo tablespace
automatically goes from the PENDING OFFLINE mode to the OFFLINE mode. From then on, the undo
tablespace is available for other instances (in an Oracle Real Application Cluster environment).
If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), the current undo tablespace
will be switched out without switching in any other undo tablespace. This can be used, for example, to
unassign an undo tablespace in the event that you want to revert to manual undo management mode.
The following example unassigns the current undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = '';
How to Monitor Undo Space?

The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo space in the
current instance. Statistics are available for undo space consumption, transaction concurrency, and length
of queries in the instance.
The following example shows the results of a query on the V$UNDOSTAT view.
SELECT BEGIN_TIME, END_TIME, UNDOTSN, UNDOBLKS, TXNCOUNT,
MAXCONCURRENCY AS "MAXCON" FROM V$UNDOSTAT;
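A commonly used sizing sketch built on V$UNDOSTAT, multiplying the undo retention by the observed undo block rate and block size (treat the result only as a rough estimate of the undo space needed to honor UNDO_RETENTION):
SELECT (UR * (UPS * DBS)) AS "Undo bytes needed"
FROM (SELECT VALUE AS UR FROM V$PARAMETER WHERE NAME = 'undo_retention'),
     (SELECT SUM(UNDOBLKS)/SUM((END_TIME - BEGIN_TIME) * 86400) AS UPS FROM V$UNDOSTAT),
     (SELECT VALUE AS DBS FROM V$PARAMETER WHERE NAME = 'db_block_size');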

Important Query related to Tablespace
How to retrieve tablespace default storage Parameters?
SELECT TABLESPACE_NAME "TABLESPACE",
INITIAL_EXTENT "INITIAL_EXT",
NEXT_EXTENT "NEXT_EXT",
MIN_EXTENTS "MIN_EXT",
MAX_EXTENTS "MAX_EXT",
PCT_INCREASE
FROM DBA_TABLESPACES;
TABLESPACE INITIAL_EXT NEXT_EXT MIN_EXT MAX_EXT PCT_INCREASE
---------- ----------- -------- ------- ------- ------------
RBS            1048576  1048576       2      40            0
SYSTEM          106496   106496       1      99            1
TEMP            106496   106496       1      99            0
TESTTBS          57344    16384       2      10            1
USERS            57344    57344       1      99            1

How to retrieve information about tablespaces and their associated datafiles?

SELECT FILE_NAME, BLOCKS, TABLESPACE_NAME


FROM DBA_DATA_FILES;
FILE_NAME                       BLOCKS TABLESPACE_NAME
------------------------------- ------ ---------------
/U02/ORACLE/IDDB3/RBS01.DBF       1536 RBS
/U02/ORACLE/IDDB3/SYSTEM01.DBF    6586 SYSTEM
/U02/ORACLE/IDDB3/TEMP01.DBF      6400 TEMP
/U02/ORACLE/IDDB3/TESTTBS01.DBF   6400 TESTTBS
/U02/ORACLE/IDDB3/USERS01.DBF      384 USERS

How to retrieve Statistics for Free Space (Extents) of Each Tablespace?

SELECT TABLESPACE_NAME "TABLESPACE", FILE_ID,
COUNT(*) "PIECES",
MAX(blocks) "MAXIMUM",
MIN(blocks) "MINIMUM",
AVG(blocks) "AVERAGE",
SUM(blocks) "TOTAL"
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME, FILE_ID;
TABLESPACE FILE_ID PIECES MAXIMUM MINIMUM AVERAGE TOTAL
---------- ------- ------ ------- ------- ------- -----
RBS              2      1     955     955     955   955
SYSTEM           1      1     119     119     119   119
TEMP             4      1    6399    6399    6399  6399
TESTTBS          5      5    6364       3    1278  6390
USERS            3      1     363     363     363   363

PIECES shows the number of free space extents in the tablespace file, MAXIMUM and MINIMUM show the
largest and smallest contiguous area of space in database blocks, AVERAGE shows the average size in
blocks of a free space extent, and TOTAL shows the amount of free space in each tablespace file in
blocks. This query is useful when you are going to create a new object or you know that a segment is
about to extend, and you want to make sure that there is enough space in the containing tablespace.

Managing Tablespace
A tablespace is a logical storage unit. We say logical because a tablespace is not visible in the
file system; Oracle stores data physically in datafiles. A tablespace consists of one or more datafiles.
Type of tablespace?
System Tablespace

Created with the database

Required in all database

Contain the data dictionary


Non System Tablespace:

Separate undo, temporary, application data and application index segments; control the amount
of space allocated to the users' objects
Enable more flexibility in database administration
How to Create Tablespace?
CREATE TABLESPACE "tablespace name"
DATAFILE clause SIZE ... REUSE
MINIMUM EXTENT (this ensures that every used extent size in the tablespace is a multiple of the
integer)
BLOCKSIZE
LOGGING | NOLOGGING (Logging: By default tablespace have all changes written to redo, Nologging :
tablespace do not have all changes written to redo)
ONLINE | OFFLINE (OFFLINE: tablespace unavailable immediately after creation)
PERMANENT | TEMPORARY (Permanent: tablespace can used to hold permanent object, temporary:
tablespace can used to hold temp object)
EXTENT MANAGEMENT clause
Example:
CREATE TABLESPACE "USER1"
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 10m REUSE
BLOCKSIZE 8192
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL
How to manage space in Tablespace?
A tablespace allocates space in extents.
Locally managed tablespace:
The extents are managed within the tablespace via bitmaps. In a locally managed tablespace, all
extent information is stored in the datafile headers, and the data dictionary tables are not used to store this
information. The advantage of a locally managed tablespace is that no data dictionary DML is generated,
which reduces contention on the data dictionary tables, and no undo is generated when space allocation
or deallocation occurs.

Extent Management [Local | Dictionary]


The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and DEFAULT
STORAGE are not valid for segments stored in locally managed tablespaces.
To create a locally managed tablespace, you specify LOCAL in the extent management clause of the
CREATE TABLESPACE statement. You then have two options. You can have Oracle manage extents for
you automatically with the AUTOALLOCATE option, or you can specify that the tablespace is managed
with uniform extents of a specific size (UNIFORM SIZE).
If the tablespace is expected to contain objects of varying sizes requiring different extent sizes and having
many extents, then AUTOALLOCATE is the best choice.
If you do not specify either AUTOALLOCATE or UNIFORM with the LOCAL parameter, then
AUTOALLOCATE is the default.
Dictionary Managed tablespace
When we declare a tablespace as Dictionary Managed, the data dictionary manages the extents. The
Oracle server updates the appropriate tables (sys.fet$ and sys.uet$) in the data dictionary whenever an
extent is allocated or deallocated.
How to Create a Locally Managed Tablespace?
The following statement creates a locally managed tablespace named USERS, where AUTOALLOCATE
causes Oracle to automatically manage extent size.
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Alternatively, this tablespace could be created specifying the UNIFORM clause. In this example, a 512K
extent size is specified. Each 512K extent (which is equivalent to 64 Oracle blocks of 8K) is represented
by a bit in the bitmap for this file.
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;
How to Create a Dictionary Managed Tablespace?
The following is an example of creating a DICTIONARY managed tablespace in Oracle9i:
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT DICTIONARY

DEFAULT STORAGE (
INITIAL 64K
NEXT 64K
MINEXTENTS 2
MAXEXTENTS 121
PCTINCREASE 0);
What is Segment Space Management Options?
Two choices exist for segment-space management: manual (the default) and auto.
Manual: This is the default option. This option uses free lists for managing free space within segments. What
are free lists? Free lists are lists of data blocks that have space available for inserting new rows.
Auto: This option uses bitmaps for managing free space within segments. This is typically
called automatic segment-space management.
Example:
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 10M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT
ONLINE;
How to Convert between LMT and DMT Tablespace?
The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily convert between LMT and DMT
mode. Look at these examples:
SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
PL/SQL procedure successfully completed.
SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');
PL/SQL procedure successfully completed.

Creating Standby Database through RMAN
You can use the Recovery Manager DUPLICATE TARGET DATABASE FOR STANDBY command to
create a standby database.
RMAN automates the following steps of the creation procedure:

Restores the standby control file.

Restores the primary datafile backups and copies.

Optionally, RMAN recovers the standby database (after the control file has been mounted) up to
the specified time or to the latest archived redo log generated.

RMAN leaves the database mounted so that the user can activate it, place it in manual or
managed recovery mode, or open it in read-only mode.
After the standby database is created, RMAN can back up the standby database and archived redo logs
as part of your backup strategy. These standby backups are fully interchangeable with primary backups.
In other words, you can restore a backup of a standby datafile to the primary database, and you can
restore a backup of a primary datafile to the standby database.
Step-by-Step Standby Database Configuration:
Step1: Configure the Listener on the Production Server and the Standby Server.

TIPS: You should create the Listener (Standby) by using the Net Configuration Assistant on the
Standby Server.

TIPS: This assumes a Listener is already configured with the PROD name on the Primary Node. If a Listener is not
configured on the Primary Node, you should create one by using the Net Configuration Assistant on the
Primary Server.
Step2: Configure TNSNAMES.ORA on the Production Server and the Standby Server. Add the following
TNSNAMES.ORA entries on the Production Database and the Standby Database:
# Connection string for Primary Instance.
PROD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = Production IP)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PROD)
)

)
# Connecting string for Standby Instance
STANDBY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = STANDBY IP)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PROD)
)
)
Step3: Put your production database in Archive Log mode if your database is not already running in Archive Log
mode, and add the following entries to the init.ora file on the Production Server.
LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_DEST_1='LOCATION=C:\oracle\database\archive MANDATORY REOPEN=30'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY REOPEN=300'
LOG_ARCHIVE_DEST_STATE_1=enable
LOG_ARCHIVE_DEST_STATE_2=enable
LOG_ARCHIVE_FORMAT=ARC%S.arc
REMOTE_ARCHIVE_ENABLE=true
STANDBY_FILE_MANAGEMENT=AUTO
STANDBY_ARCHIVE_DEST = 'C:\standby\archive '
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
Step4 : Configure RMAN in Production Instance if not configured earlier.
Example:

1. CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO


'/home/oraprod/rmanbkp/%F';
2. CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/home/oraprod/rmanbkp/%U.bak'
MAXPIECESIZE 4G;
3. CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '/home/oraprod/rmanbkp/%U.bak'
MAXPIECESIZE 4G;
Step5 : Take a full valid backup of the Production instance
RMAN> backup database plus archivelog;
Step6 : Go to the Standby machine and create the service for the standby instance.
Step7 : Create a standby controlfile on the Production machine.
RMAN> backup current controlfile for standby format='c:\rman_backup\stby_cfile.%U';
Step8 : Record the last log sequence
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
           100
Step8 : Backup new archive logs
RMAN>backup archivelog all;
Step9 : Make the RMAN Backups Available to Standby Server
Step10 : Ensure that the same directory structure is set up on both systems.
Step11 : Create init.ora file for standby database.
Copy init.ora file from Production Server to Stand by Server in Database folder in oracle home directory
and add following entries:
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST_1 = 'LOCATION=c:\oracle\database\archive MANDATORY'
LOG_ARCHIVE_FORMAT = arch%s.arc
REMOTE_ARCHIVE_ENABLE = true
STANDBY_FILE_MANAGEMENT = AUTO
LOG_ARCHIVE_MIN_SUCCEED_DEST=1
STANDBY_ARCHIVE_DEST = 'C:\standby\archive '

fal_server = FAL
fal_client = STANDBY
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
Note: Although most of the initialization parameter settings in the text initialization parameter file that you
copied from the primary system are also appropriate for the physical standby database, some
modifications need to be made.
Step: 12 Start Physical standby database
Start up the stand by database using following commands
C:\>set oracle_sid=PROD
C:\>sqlplus /nolog
SQL> conn sys/prod as sysdba
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
Step13 : Go to the Standby server and connect RMAN
Run the following:
CMD> rman target sys/change_on_install@prod_conn_string
RMAN > connect auxiliary sys/change_on_install
Step14 : The following RUN block can be used to fully duplicate the target database from the latest full
backup. This will create the standby database:
run {
# Set the last log sequence number
set until sequence = 100 thread = 1;
# Allocate the channel for the duplicate work
allocate auxiliary channel ch1 type disk;
# Duplicate the database to ORA920
duplicate target database for standby dorecover nofilenamecheck ;
}
RMAN> exit
Step15 : Put the Standby in Managed recovery Mode
On the standby database, run the following:
SQL> sqlplus "/ as sysdba"
SQL> recover standby database;

SQL> alter database recover managed standby database disconnect;


Database altered.
Error and Solution:
Error 1: RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
Solution: RMAN> change archivelog all crosscheck;

Standby Database Maintenance
Cancel/Stop Managed Standby Recovery
While connected to the standby database follow following steps:

ALTER DATABASE SET STANDBY DATABASE UNPROTECTED;

RECOVER MANAGED STANDBY DATABASE CANCEL;

ALTER DATABASE OPEN READ ONLY;


The database can subsequently be switched back to recovery mode as follows:
Start-up managed recovery on standby database

CONNECT / AS SYSDBA

SHUTDOWN IMMEDIATE

STARTUP NOMOUNT

ALTER DATABASE MOUNT STANDBY DATABASE;

RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Switchover and Failover
Database Switchover
A database can be in one of two mutually exclusive modes (primary or standby). These roles can be
altered at runtime without loss of data or resetting of redo logs. This process is known as a Switchover
and can be performed using the following statements:
While connected to the primary database, issue the following commands:

CONNECT / AS SYSDBA

ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

SHUTDOWN IMMEDIATE;

STARTUP NOMOUNT

ALTER DATABASE MOUNT STANDBY DATABASE;


ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM
SESSION;
Now the original Primary database is in Standby mode and waiting for the new Primary database to
activate, which is done while connected to the standby database (not the original primary)

CONNECT / AS SYSDBA

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

SHUTDOWN IMMEDIATE;

STARTUP
This process has no effect on alternative standby locations. The process of converting the instances back
to their original roles is known as a Switchback. The switchback is accomplished by performing another
switchover.
Database Failover
A graceful database failover occurs when a failover causes a standby database to be converted
to a primary database:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

ALTER DATABASE ACTIVATE STANDBY DATABASE;


This process will recover all or some of the application data using the standby redo logs, therefore
avoiding reinstantiation of other standby databases. If completed successfully, only the old primary database
will need to be reinstantiated as a standby database.

Standby Diagnosis Query for Primary Node
Query 1: protection_level should match the protection_mode after the next log switch
select name,database_role role,log_mode, protection_mode,protection_level from v$database;
NAME ROLE    LOG_MODE   PROTECTION_MODE     PROTECTION_LEVEL
---- ------- ---------- ------------------- -------------------
TEST PRIMARY ARCHIVELOG MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE

1 row selected.
Query 2: ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the archiver failed to
archive a log last time, but will try again within 5 minutes. LOG_SWITCH_WAIT is the ARCHIVE
LOG/CLEAR LOG/CHECKPOINT event that log switching is waiting for. Note that if ALTER SYSTEM SWITCH
LOGFILE is hung, but there is room in the current online redo log, then the value is NULL.

select instance_name,host_name,version,archiver,log_switch_wait from v$instance;


INSTANCE_NAME HOST_NAME   VERSION   ARCHIVER LOG_SWITCH_
------------- ----------- --------- -------- -----------
TEST          flex-suntdb 9.2.0.5.0 STARTED

1 row selected.
Query 3: This query gives us information about catpatch (the CATPROC component).
select version, modified, status from dba_registry where comp_id = 'CATPROC';
VERSION   MODIFIED             STATUS
--------- -------------------- ------
9.2.0.5.0 19-NOV-2004 10:12:27 VALID

1 row selected.
Query 4: Force logging is not mandatory but is recommended. Supplemental logging must be enabled if
the standby associated with this primary is a logical standby. During normal operations it is acceptable for
SWITCHOVER_STATUS to be SESSIONS ACTIVE or TO STANDBY.
select force_logging,remote_archive,supplemental_log_data_pk,supplemental_log_data_ui,
switchover_status,dataguard_broker from v$database;
FORCE_LOGGING REMOTE_ARCHIVE SUP SUP SWITCHOVER_STATUS DATAGUARD_BROKER
------------- -------------- --- --- ----------------- ----------------
NO            ENABLED        NO  NO  SESSIONS ACTIVE   DISABLED

1 row selected.
Query 5: This query produces a list of all archive destinations. It shows if they are enabled, what process
is servicing that destination, if the destination is local or remote, and if remote what the current mount ID
is.
select dest_id "ID",destination,status,target,schedule,process,mountid mid from v$archive_dest
order by dest_id;
ID DESTINATION        STATUS TARGET  SCHEDULE PROCESS MID
-- ------------------ ------ ------- -------- ------- ---
 1 /applprod/archprod VALID  PRIMARY ACTIVE   ARCH      0
 2 STANDBY            VALID  STANDBY ACTIVE   ARCH      0
........
10 rows selected.
Query 6: This select will give further detail on the destinations as to what options have been set. Register
indicates whether or not the archived redo log is registered in the remote destination control file.
select dest_id "ID",archiver,transmit_mode,affirm,async_blocks async, net_timeout
net_time,delay_mins delay,reopen_secs reopen, register,binding from v$archive_dest order by
dest_id;
ID ARCHIVER TRANSMIT_MOD AFF ASYNC NET_TIME DELAY REOPEN REG BINDING
-- -------- ------------ --- ----- -------- ----- ------ --- ---------
 1 ARCH     SYNCHRONOUS  NO                          300 YES MANDATORY
 2 ARCH     SYNCHRONOUS  NO                          300 YES OPTIONAL
...
10 rows selected.
Query 7: The following select will show any errors that occurred the last time an attempt was made to archive to the
destination. If ERROR is blank and STATUS is VALID then the archive completed correctly.
select dest_id,status,error from v$archive_dest;
DEST_ID STATUS   ERROR
------- -------- --------
      1 VALID
      2 VALID
      3 INACTIVE
.........
10 rows selected.
Query 8: The query below will determine if any error conditions have been reached by querying the
v$dataguard_status view (view only available in 9.2.0 and above):

select message, timestamp from v$dataguard_status where severity in ('Error','Fatal') order by


timestamp;
no rows selected
Query 9: The following query will determine the current sequence number and the last sequence
archived. If you are remotely archiving using the LGWR process then the archived sequence should be
one higher than the current sequence. If remotely archiving using the ARCH process then the archived
sequence should be equal to the current sequence. The applied sequence information is updated at log
switch time.
select ads.dest_id,max(sequence#) "Current Sequence", max(log_sequence) "Last Archived"
from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads where
ad.dest_id=al.dest_id and al.dest_id=ads.dest_id group by ads.dest_id;
DEST_ID Current Sequence Last Archived
------- ---------------- -------------
      1              233           233
      2              233           233

2 rows selected.
Query 10: The following select will attempt to gather as much information as possible from the standby.
SRLs are not supported with Logical Standby until Version 10.1.
select dest_id id,database_mode db_mode,recovery_mode,
protection_mode,standby_logfile_count "SRLs", standby_logfile_active ACTIVE, archived_seq#
from v$archive_dest_status;
ID DB_MODE         RECOVER PROTECTION_MODE     SRLs ACTIVE ARCHIVED_SEQ#
-- --------------- ------- ------------------- ---- ------ -------------
 1 OPEN            IDLE    MAXIMUM PERFORMANCE                        233
 2 MOUNTED-STANDBY IDLE    MAXIMUM PERFORMANCE                        233
...
10 rows selected.
Query 11: Query v$managed_standby to see the status of processes involved in the shipping redo on
this system. Does not include processes needed to apply redo.

select process,status,client_process,sequence# from v$managed_standby;


PROCESS STATUS  CLIENT_P SEQUENCE#
------- ------- -------- ---------
ARCH    CLOSING ARCH           233
ARCH    CLOSING ARCH           232

2 rows selected.
Query 12: The following query is run on the primary to see if SRL's have been created in preparation for
switchover.
select group#,sequence#,bytes from v$standby_log;
no rows selected
Query 13: The above SRL's should match in number and in size with the ORL's returned below:
select group#,thread#,sequence#,bytes,archived,status from v$log;

Standby Diagnosis Query for Standby Node
Query 1: ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the archiver failed to
archive a log last time, but will try again within 5 minutes. LOG_SWITCH_WAIT is the ARCHIVE
LOG/CLEAR LOG/CHECKPOINT event that log switching is waiting for. Note that if ALTER SYSTEM SWITCH
LOGFILE is hung, but there is room in the current online redo log, then the value is NULL.
select instance_name,host_name,version,archiver,log_switch_wait from v$instance;
INSTANCE_NAME HOST_NAME  VERSION   ARCHIVER LOG_SWITCH_
------------- ---------- --------- -------- -----------
TEST          flex-sprod 9.2.0.5.0 STARTED

1 row selected.
Query 2: The following select will give us the generic information about how this standby is set up. The
database_role should be standby as that is what this script is intended to be run on. If protection_level is
different from protection_mode then for some reason the mode listed in protection_mode experienced a
need to downgrade. Once the error condition has been corrected the protection_level should match the
protection_mode after the next log switch.
select name,database_role,log_mode,controlfile_type,protection_mode,protection_level from
v$database;

Query 3: Force logging is not mandatory but is recommended. Supplemental logging should be enabled
on the standby if a logical standby is in the configuration. During normal operations it is acceptable for
SWITCHOVER_STATUS to be SESSIONS ACTIVE or NOT ALLOWED.
select force_logging,remote_archive,supplemental_log_data_pk,supplemental_log_data_ui,
switchover_status,dataguard_broker from v$database;
FORCE_LOGGING REMOTE_ARCHIVE SUP SUP SWITCHOVER_STATUS DATAGUARD_BROKER
------------- -------------- --- --- ----------------- ----------------
NO            ENABLED        NO  NO  SESSIONS ACTIVE   DISABLED

1 row selected.
Query 4: This query produces a list of all archive destinations and shows if they are enabled, what
process is servicing that destination, if the destination is local or remote, and if remote what the current
mount ID is. For a physical standby we should have at least one remote destination that points to the
primary, but it should be deferred.
select dest_id "ID",destination,status,target, archiver,schedule,process,mountid from
v$archive_dest;
Query 5: If the protection mode of the standby is set to anything higher than max performance, then we
need to make sure the remote destination that points to the primary is set with the correct options, or else we
will have issues during switchover.
select
dest_id,process,transmit_mode,async_blocks,net_timeout,delay_mins,reopen_secs,register,binding from
v$archive_dest;
Query 6: The following select will show any errors that occurred the last time an attempt was made to archive to the
destination. If ERROR is blank and STATUS is VALID then the archive completed correctly.
select dest_id,status,error from v$archive_dest;
Query 7: Determine if any error conditions have been reached by querying the v$dataguard_status view
(view only available in 9.2.0 and above):
select message, timestamp from v$dataguard_status where severity in ('Error','Fatal') order by
timestamp;
Query 8: The following query is run to get the status of the SRLs on the standby. If the primary is
archiving with the LGWR process and SRLs are present (in the correct number and size) then we should
see a group# active.
select group#,sequence#,bytes,used,archived,status from v$standby_log;

Query 9: The above SRL's should match in number and in size with the ORL's returned below:
select group#,thread#,sequence#,bytes,archived,status from v$log;
Query 10: Query v$managed_standby to see the status of processes involved in the configuration.
select process,status,client_process,sequence#,block#,active_agents,known_agents from
v$managed_standby;
Query 11: Verify that the last sequence# received and the last sequence# applied to standby database.
select max(al.sequence#) "Last Seq Recieved", max(lh.sequence#) "Last Seq Applied" from
v$archived_log al, v$log_history lh;
Query 12: The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap
that is currently blocking redo apply from continuing. After resolving the identified gap and starting redo
apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the
next gap sequence, if there is one.
select * from v$archive_gap;
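If a gap is reported, the usual remedy is to copy the missing archived logs from the primary and register them on the standby before restarting redo apply; a hedged sketch (the file name and path are only placeholders):
SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '/standby/archive/ARC00123.arc';
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;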

Create Standby Instance in 10gR2
Create Standby instance and Configure Data Guard on 10gR2
Step1: Configure the Listener on the Production Server and the Standby Server.
TIPS: You should create the Listener (Standby) by using the Net Configuration Assistant on the
Standby Server.
TIPS: This assumes a Listener is already configured with the PROD name on the Primary Node. If a Listener is not
configured on the Primary Node, you should create one by using the Net Configuration
Assistant on the Primary Server.
Step2: Configure TNSNAMES.ORA on the Production Server and the Standby Server. Add the following
TNSNAMES.ORA entries on the Production Database and the Standby Database:
# Connection string for Primary Instance.
PROD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = Production IP)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PROD)
)
)
# Connecting string for Standby Instance
STANDBY =

(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = STANDBY IP)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PROD)
)
)
Step3: Put your production database in Archive Log mode if your database is not already running in
Archive Log mode.
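A minimal sketch of enabling ARCHIVELOG mode (assumes the database can be briefly shut down):
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;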
Step4: Add the following entries to the init.ora file on the Production Server.
LOG_ARCHIVE_DEST_1='LOCATION=/oracle/database/archive MANDATORY REOPEN=30'
LOG_ARCHIVE_DEST_2='SERVICE=STANDBY REOPEN=300'
STANDBY_FILE_MANAGEMENT=AUTO
# db_file_name_convert: do not need; same directory structure
# log_file_name_convert: do not need; same directory structure
Step 5: Shut down the database (shutdown immediate) and copy the datafiles and redo log files to the standby location.
Step 6: Restart the Production Database
SQL> startup;
Step10 : Create a control file for the Standby Database. Issue the following command on the
production database to create the control file for the standby database.
SQL> Alter database create standby controlfile as '/oracle/controlfile_standby.ctl';
Database altered.
Step11: Create the init.ora file for the standby database.
Copy the init.ora file from the Production Server to the Standby Server into the database folder under the Oracle
home directory and add the following entries:
STANDBY_FILE_MANAGEMENT = AUTO
LOG_ARCHIVE_MIN_SUCCEED_DEST=1
STANDBY_ARCHIVE_DEST = '/standby/archive '
fal_server = FAL
fal_client = STANDBY
db_unique_name=standby

# db_file_name_convert: do not need; same directory structure


# log_file_name_convert: do not need; same directory structure
Step 12: Copy the standby control file to the standby server and modify the CONTROL_FILES
parameter in the standby init file.

Step: 13 Start Physical standby database


Start up the stand by database using following commands
SQL> conn sys/prod as sysdba
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
SQL> alter database mount standby database;
Database altered.
Step: 14 Initiate log apply services. The example includes the DISCONNECT FROM SESSION
option so that log apply services run in a background session.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT
FROM SESSION;
Database altered.
Now your Standby Instance has been created successfully.

Configure Data Guard in 10gR2
Configure Data Guard
Step 1 Check the parameter DG_BROKER_START on the primary and standby side. If the value shows FALSE then
set it to TRUE.
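For example, a simple sketch of checking and enabling the broker (DG_BROKER_START is dynamic, so no restart is needed):
SQL> SHOW PARAMETER dg_broker_start
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;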
Step 2 Define a service for the listener:
At Standby instance:

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = \oracle\product\10.2.0\db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = standby_DGMGRL)

(ORACLE_HOME = \oracle\product\10.2.0\db_1)
(SID_NAME = standby)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = oprsms)(PORT = 1521))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)
At Primary Instance:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = \oracle\product\10.2.0\db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = prod_DGMGRL)
(ORACLE_HOME = \oracle\product\10.2.0\db_1)
(SID_NAME = prod)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = primery hostname)(PORT = 1521))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)
Note: GLOBAL_DBNAME should be set to <db_unique_name>_DGMGRL in
listener.ora on all instances of both primary and standby. This is important; otherwise you'll
get a TNS-12154 error during the switchover operation.
Step 3 Now to create the data guard broker configurations:
$ dgmgrl
DGMGRL> connect /
Connected.

DGMGRL> create configuration 'TEST'


> as primary database is 'prod'
> connect identifier is prod;
Configuration "TEST" created with primary database "prod"
DGMGRL>
DGMGRL> show configuration
Configuration
Name: TEST
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
prod- Primary database
Current status for "TEST":
DISABLED
Step 4 Now add standby DB to the data broker configuration:
DGMGRL> add database 'standby' as
> connect identifier is standby
> maintained as physical;
Database "standby" added
DGMGRL>
DGMGRL> show configuration
Configuration
Name: TEST
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
prod- Primary database
standby - Physical standby database
Current status for "TEST":
DISABLED
Step 5 Now enable the configurations:
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration
Configuration

Name: TEST
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
prod- Primary database
standby- Physical standby database
Current status for "TEST":
DGMGRL> show database verbose standby
Now your data guard has been configured successfully.

Difference between Physical and Logical Standby Database
Physical Standby Database:
A physical standby database is an exact copy of the primary database. Oracle uses the primary database's
archived log files to recover the physical standby database.
We can open a physical standby database in read only mode, but at the time of read only mode, the
received logs will not be applied.
When the logs are applied, the database is not accessible.
Logical Standby Database:
A logical standby database is opened in read/write mode, even while redo is being applied. That is,
it can be used to generate reports and the like. It is indeed a fully functional database.

Create a logical standby database in 10gR2
Step: 1 For a logical standby we first have to create a PHYSICAL STANDBY DATABASE and make
sure the physical standby database is working properly.
Step 2 Stop Redo Apply on the Physical Standby Database
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Step 3 Set Parameters for Logical Standby in Primary

LOG_ARCHIVE_DEST_1= 'LOCATION=/data01/archive/online/
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prod'

LOG_ARCHIVE_DEST_3= 'LOCATION=/data01/archive/standby/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=prod'

Note: LOG_ARCHIVE_DEST_3 only takes effect when the primary database is transitioned to
the logical standby role.
Step 4 Build a Dictionary in the Redo Data on Primary Database
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
Step 5 Convert to a Logical Standby Database (execute the below command on the Standby Site)
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY <db_name>;
For db_name, specify a database name to identify the new logical standby database. If you are
using an spfile for the standby, then the command will update the DB_NAME parameter; otherwise it will
issue a message reminding you to set the DB_NAME parameter after shutting
down the database.
Step 6 Create a New Password File for Logical Standby Database
$ ORAPWD FILE=<password file name> PASSWORD=<password> ENTRIES=<max entries>

Because the conversion process changes the database name (that was originally set with
the DB_NAME initialization parameter) for the logical standby database, you must re-create the
password file
Step 7 Shutdown and Startup Logical Standby Database in Mount Stage
SQL> SHUTDOWN;
SQL> STARTUP MOUNT;

Step 8 Adjust Initialization Parameter on Logical Standby Database

LOG_ARCHIVE_DEST_1= 'LOCATION=/u01/arch/online/
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=standby'
LOG_ARCHIVE_DEST_2= 'SERVICE=prim1 LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod'
LOG_ARCHIVE_DEST_3= 'LOCATION=/u01/arch/standby/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=standby'

Step 9 Open the Logical Standby Database


SQL> ALTER DATABASE OPEN RESETLOGS;

Step 10 Start Logical Apply on Standby

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

If you request the IMMEDIATE option, you have to create standby redo logs on the standby site. A good
idea is to create them on all instances where role reversal may take place and a higher protection
mode is required.
Let's test that the logical standby is working properly; check which archived logs are registered on the standby:
SQL> select SEQUENCE#, FIRST_TIME, NEXT_TIME, DICT_BEGIN, DICT_END, APPLIED from dba_logstdby_log order by SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIME DIC DIC APPLIED
--------- --------- --------- --- --- -------
       12 29-APR-09 29-APR-09 YES YES CURRENT
       13 29-APR-09 29-APR-09 NO  NO  CURRENT
       14 29-APR-09 29-APR-09 NO  NO  CURRENT
       15 29-APR-09 29-APR-09 NO  NO  CURRENT

Make a log switch on the primary:

SQL> alter system switch logfile;
System altered.

To see what the logical standby is doing, we can query the coordinator state:
SQL> select name, value from v$logstdby_stats where name='coordinator state';

NAME                 VALUE
-------------------- --------------------
coordinator state    IDLE

or all participants (coordinator, reader, builder, preparer, applier):

SQL> select type, high_scn, status from v$logstdby;

TYPE                 HIGH_SCN   STATUS
-------------------- ---------- ------------------------------
COORDINATOR          161732     ORA-16116: no work available
READER               161732     ORA-16240: Waiting for logfile (thread# 1, sequence# 17)
BUILDER              161569     ORA-16116: no work available
PREPARER             161568     ORA-16116: no work available
ANALYZER             161061     ORA-16116: no work available
APPLIER              160773     ORA-16116: no work available
APPLIER              161028     ORA-16116: no work available
APPLIER                         ORA-16116: no work available
APPLIER                         ORA-16116: no work available
APPLIER                         ORA-16116: no work available
10 rows selected.

You can query v$logstdby_progress for the last applied SCN on the standby site:
SQL> select applied_scn, latest_scn from v$logstdby_progress;

APPLIED_SCN LATEST_SCN
----------- ----------
     161731     161731
What is Cloning?
Cloning is a process to create an exact copy of an Oracle database without using
export/import. This method is used by DBAs to refresh test or development environments
from production.
In the cloning process, we copy the datafiles from the production database and use them to
create the test database. The test database is then a mirror image of production in
all respects except, of course, for the database name, which we will change.
We can create the clone instance/database on the production server or on a different server,
but my recommendation is not to do it on the production server.
Why is Export/Import not suitable for Cloning?
The export/import process is very bulky: if you use export/import you have to refresh the
data frequently, and the import takes a long time if the database is of any significant size. You
can, of course, resort to exporting and importing tablespaces if they are self-contained and
you are running version 8i or greater, Enterprise Edition. A faster alternative is to clone the
database.
Method behind the Cloning?
The method used here is actually a backup/recovery of the production database on the test
server. We just recreate the controlfile on the test machine using a new database name and
do a recovery. The datafiles from the production database can be from a hot backup, a cold
backup or an RMAN backup. If the database was open during backup (hot or RMAN), you will
need all the archivelogs since the time the backup started to recover the database to its
present state (that is, do a complete recovery). Throughout the article, I will refer to
environment variables by their UNIX notation (example $ORACLE_HOME). Replace this with
the Windows equivalent if you are using NT/2000 (example $ORACLE_HOME becomes
%ORACLE_HOME%).
Reason for Cloning?
In every Oracle development and production environment there will come a need to
transport the entire database from one physical machine to another. The copy may be
used for development, production testing, beta testing, etc., but rest assured that this need
will arise and management will ask you to perform this task quickly.
Steps involved in cloning
1. Relocating an Oracle database to another machine, or relocating a datafile to a different
mount point if you are cloning on the same production machine.
2. Moving the Oracle database to new storage media.
3. Renaming the Oracle database.
Difference between Cloning and Refreshing?
A cloning process includes a copy of the Oracle Home (binaries) as well as a backup of the database
files, used to prepare the instance on another server.
A database refresh is often referred to as a database clone; however, in a refresh we do not clone the
Oracle Home, only the database.
The difference between cloning and refreshing is therefore that the clone process includes both the
Oracle Home and the database, whereas the refresh process includes only the database.

Database Cloning to a Different Host through RMAN
Assumed Source database names:
Host: SUN1
Primary Database SID: PROD
Data file mount Point: /DB/PROD
Assuming Target Database:

Host: SUN2
Primary Database SID: CLONE
Data file mount Point: /DB/CLONE
Step 1 Create the init file for the clone instance.
Set the following parameters in the init file:
DB_NAME=CLONE (changed from PROD)
DB_FILE_NAME_CONVERT  <----required if the clone uses a different mount point for the datafiles---->
LOG_FILE_NAME_CONVERT <----required if the clone uses a different mount point for the redo logs---->
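Based on the mount points assumed above (/DB/PROD on the source, /DB/CLONE on the clone), a sketch of those parameters might look like this; adjust the paths to your layout:
DB_NAME=CLONE
DB_FILE_NAME_CONVERT=('/DB/PROD/','/DB/CLONE/')
LOG_FILE_NAME_CONVERT=('/DB/PROD/','/DB/CLONE/')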
Step 2 Create the mount point for the RMAN backup set and restore the RMAN backup on the target server.
Note: The RMAN backup set mount point must be the same as on the source.
Step 3 Start the clone instance with the NOMOUNT option.
$ export ORACLE_SID=CLONE
$ sqlplus /nolog
SQL> conn / as sysdba
SQL> startup nomount
Step 4 Invoke RMAN.
$ rman target sys/anup123@prod auxiliary /
Step 5 Execute the command below at the RMAN prompt.
If the clone uses the same mount points as the source:
RMAN> duplicate target database to CLONE nofilenamecheck;
If the clone uses different mount points (with DB_FILE_NAME_CONVERT/LOG_FILE_NAME_CONVERT set):
RMAN> duplicate target database to CLONE;

Database Cloning to the Same Host through RMAN
Assumed Source database names:
Host: SUN1
Primary Database SID: PROD
Data file mount Point: /DB/PROD
Assuming Target Database:
Host: SUN1
Primary Database SID: CLONE
Data file mount Point: /DB/CLONE
Step 1 Take an RMAN backup of the production database.
++++++++++++++ Backup Scripts+++++++++++++++++++
run
{
allocate channel d1 type disk MAXPIECESIZE 5G;
allocate channel d2 type disk MAXPIECESIZE 5G;

backup full tag full_db format '/export/home/oracle/db_%t_%s_p%p'(database);


backup format '/export/home/oracle/ctl_%U' current controlfile;
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP format '/export/home/oracle/Arch_%U' (ARCHIVELOG ALL);
release channel d1;
release channel d2;
}
++++++++++++++End Backup Script+++++++++++++++++
Step 2 If your primary database is using an spfile, create a pfile.
SQL> create pfile from spfile;
Note: This will generate the initPROD.ora file.
Step 3 Rename initPROD.ora to initCLONE.ora and update the parameters below:

DB_NAME=CLONE (changed from PROD)
CONTROL_FILES=/DB/CLONE
DB_FILE_NAME_CONVERT=(/DB/PROD/,/DB/CLONE/)
LOG_FILE_NAME_CONVERT=(/DB/PROD/,/DB/CLONE/)
Step 4 Start the clone instance with the NOMOUNT option.
$ export ORACLE_SID=CLONE
$ sqlplus /nolog
SQL> conn / as sysdba
SQL> startup nomount
Step 5 Invoke RMAN.
$ rman target sys/anup123@prod auxiliary /
Step 6 Execute the command below at the RMAN prompt.
RMAN> duplicate target database to CLONE;

Database Cloning to the Same Host through Cold Backup
Source database Details:
Host: SUN1
Database SID=PROD
Data file Mount Point = /DB/PROD
Target (Clone) Database Details:

Host: SUN1
Database SID=CLONE
Data file Mount Point: /DB/CLONE
Step 1 Create the controlfile script on the primary instance.
SQL> alter database backup controlfile to trace;
Note: The above command generates a create controlfile script in the USER_DUMP_DEST
directory.
Step 2 Edit the create controlfile script.
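The key edits are to change REUSE DATABASE "PROD" NORESETLOGS to SET DATABASE "CLONE" RESETLOGS and to point the logfile and datafile entries at the clone's mount point. A sketch of the edited script, with hypothetical file names and sizes, is shown below:
CREATE CONTROLFILE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 100
LOGFILE
    GROUP 1 '/DB/CLONE/redo01.log' SIZE 50M,
    GROUP 2 '/DB/CLONE/redo02.log' SIZE 50M
DATAFILE
    '/DB/CLONE/system01.dbf',
    '/DB/CLONE/sysaux01.dbf',
    '/DB/CLONE/undotbs01.dbf',
    '/DB/CLONE/users01.dbf'
CHARACTER SET WE8ISO8859P1;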
Step 3 If your primary database is using an spfile, create a pfile.
SQL> create pfile from spfile;
Note: This will generate the initPROD.ora file.
Step 4 Rename initPROD.ora to initCLONE.ora and update the DB_NAME and CONTROL_FILES
parameters.
Step 5 Set ORACLE_SID=CLONE in the terminal and execute the create controlfile script.
# export ORACLE_SID=CLONE
# sqlplus /nolog
SQL> conn / as sysdba
SQL> @
Step 6 Open the database with the RESETLOGS option.
SQL> alter database open resetlogs;

Oracle RAC Log Directory
Each component in the CRS stack has its respective directories created under the CRS home:
The Cluster Ready Services Daemon (crsd) Log Files
$CRS_HOME/log/hostname/crsd
Oracle Cluster Registry (OCR) Log Files
$CRS_HOME/log/hostname/client
Cluster Synchronization Services (CSS) Log Files
$CRS_HOME/log/hostname/cssd
Event Manager (EVM) Log Files
$CRS_HOME/log/hostname/evmd

RACG Log Files


$ORACLE_HOME/log/hostname/racg

Managing OCR and Voting Disk
OCR:
The Oracle Cluster Registry (OCR) stores cluster configuration information and database configuration
information, such as the cluster node list, the cluster database instance-to-node mapping, and CRS
application resource profiles.
The OCR location is specified during CRS installation. The ocr.loc file indicates the OCR device location;
it is located in /etc/oracle on Linux systems and in /var/opt/oracle on Solaris systems.
We create the OCR on shared disk storage that must be accessible to all cluster nodes.
The OCSSd daemon manages the configuration information in the OCR and maintains the changes to
the cluster in the registry.
How to check the health of the OCR? Use the ocrcheck utility.
How to take a backup of the OCR? There are two methods: the first uses automatically generated
physical OCR backups, and the second uses manually created logical OCR export files.
Automatically:
Oracle automatically takes a backup of the OCR to a default location every four hours.
Oracle always retains the last three backup copies of the OCR.
The default location is $CRS_HOME/cdata/cluster_name, where cluster_name is the name of your cluster.
We can change the default OCR backup location by using the ocrconfig command (Example: $ ocrconfig
-backuploc <new_location>).
The CRSD process also creates and retains an OCR backup for each full day and at the end of
each week.
We cannot customize the backup frequency or the number of files that Oracle retains.
Manually:
We can take an export backup of the OCR after making changes by using the ocrconfig command
(Example: ocrconfig -export <file_name>).
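For instance, a manual export might look like the following; the path is a hypothetical placeholder:
# ocrconfig -export /backup/ocr/ocr_export_20090429.dmp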
How to Recover the OCR?
In the event of a failure, before you attempt to restore the OCR, ensure that the OCR is unavailable.
Run the following command to check the status of the OCR: ocrcheck
If this command does not display the message 'Device/File integrity check succeeded' for at least one
copy of the OCR, then both the primary OCR and the OCR mirror have failed. You must restore the OCR
from a backup.

Restoring the OCR from Automatically Generated OCR Backups:


Step 1 Identify the available OCR backups using the ocrconfig command:
# ocrconfig -showbackup
Step 2 Review the contents of the backup using the following ocrdump command, where file_name is the
name of the OCR backup file:
$ ocrdump -backupfile file_name
Step 3 As the root user, stop Oracle Clusterware on all the nodes in your Oracle RAC cluster by
executing the following command:
# crsctl stop crs
Step 4 Repeat this command on each node in your Oracle RAC cluster.
Step 5 As the root user, restore the OCR by applying an OCR backup file.
# ocrconfig -restore file_name
Step 6 As the root user, restart Oracle Clusterware on all the nodes in your cluster by restarting each
node, or by running the following command:
# crsctl start crs
Repeat this command on each node in your Oracle RAC cluster.
Step 7 Use the Cluster Verify Utility (CVU) to verify the OCR integrity. Run the following command, where
the -n all argument retrieves a list of all the cluster nodes that are configured as part of your cluster:
$ cluvfy comp ocr -n all [-verbose]
Recovering the OCR from an OCR Export File:
We use the ocrconfig -import command to restore the OCR.
Step 1 Log in as the root user and stop Oracle Clusterware on all nodes.
Step 2 Restore the OCR data by importing the contents of the OCR export file using the following
command:
ocrconfig -import <file_name>
Step 3 Start Oracle Clusterware on all nodes.
crsctl start crs
Step 4 Use the CVU to verify the OCR integrity.
cluvfy comp ocr -n all [-verbose]
How to Add an OCR Location
You can add an OCR location after an upgrade or after completing the Oracle RAC installation. If you

already mirror the OCR, then you do not need to add an OCR location; Oracle Clusterware automatically
manages two OCRs when you configure normal redundancy for the OCR. Oracle RAC environments do
not support more than two OCRs, a primary OCR and a secondary OCR.
Run the following command to add an OCR location:
ocrconfig -replace ocr destination_file
Run the following command to add an OCR mirror location:
ocrconfig -replace ocrmirror destination_file
How to Replace an OCR
If you need to change the location of an existing OCR, or change the location of a failed OCR to the
location of a working one, you can use the following procedure as long as one OCR file remains online.
Step 1 Use the OCRCHECK utility to verify that a copy of the OCR other than the one you are going to
replace is online, using the following command:
ocrcheck
Step 2 Verify that Oracle Clusterware is running on the node on which you are going to perform the
replace operation, using the following command:
crsctl check crs
Step 3 Run the following command to replace the OCR:
ocrconfig -replace ocr destination_file
Run the following command to replace an OCR mirror location:
ocrconfig -replace ocrmirror destination_file
How to Repair an Oracle Cluster Registry Configuration on a Local Node
You may need to repair an OCR configuration on a particular node if your OCR configuration changes
while that node is stopped. For example, you may need to repair the OCR on a node that was shut down
while you were adding, replacing, or removing an OCR. To repair an OCR configuration, run the following
command on the node on which you have stopped the Oracle Clusterware daemon:
ocrconfig -repair ocrmirror device_name
How to Remove an Oracle Cluster Registry
To remove an OCR location, at least one OCR must be online. You can remove an OCR location to
reduce OCR-related overhead or to stop mirroring your OCR because you moved the OCR to a
redundant storage system, such as a redundant array of independent disks (RAID).
To remove an OCR location from your Oracle RAC environment:
Step 1 Use the OCRCHECK utility to ensure that at least one OCR other than the OCR that you are
removing is online.
ocrcheck

Step 2 Run the following command on any node in the cluster to remove one copy of the OCR:
ocrconfig -replace ocr
This command updates the OCR configuration on all the nodes on which Oracle Clusterware is
running.
Voting Disks
The voting disk records node membership information.
A node must be able to access more than half of the voting disks at any time.
Backing up Voting Disks
The node membership information does not usually change; you do not need to back up the voting disk
every day. However, back up the voting disks at the following times:
After installation
After adding nodes to or deleting nodes from the cluster
After performing voting disk add or delete operations
How to take backup?
dd if=/dev/rdsk/c0d1s1 of=/tmp/voting.dmp
When you use the dd command for making backups of the voting disk, the backup can be performed
while the Cluster Ready Services (CRS) process is active; you do not need to stop the crsd.bin process
before taking a backup of the voting disk.
Recovering Voting Disks
If a voting disk is damaged, and no longer usable by Oracle Clusterware, you can recover the voting disk
if you have a backup file. Run the following command to recover a voting disk where backup_file_name is
the name of the voting disk backup file and voting_disk_name is the name of the active voting disk:
dd if=backup_file_name of=voting_disk_name
Adding and Removing Voting Disks
You can dynamically add and remove voting disks after installing Oracle RAC. Do this using the following
commands, where path is the fully qualified path of the additional voting disk. Run the following command
as the root user to add a voting disk:
crsctl add css votedisk path
Run the following command as the root user to remove a voting disk:
crsctl delete css votedisk path
To list the currently configured voting disks, run:
crsctl query css votedisk

Administering Cluster Ready Services (CRS)
We use the Cluster Control Utility (CRSCTL) to perform administrative operations on Oracle Clusterware.
It is located in $CRS_HOME/bin and must be executed by the root user.
1. To check the current state of all oracle clusterware daemon:
$ ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
2. You can also check the state of individual oracle clusterware daemon:
$ ./crsctl check cssd
CSS appears healthy
$ ./crsctl check crsd
CRS appears healthy
$ ./crsctl check evmd
EVM appears healthy
3. To start oracle clusterware
$ ./crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
4. To stop oracle clusterware
$ ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
5. To disable oracle clusterware:
# ./crsctl disable crs
6. To enable oracle clusterware:
$ ./crsctl enable crs
7. To list the module for debugging in CSS

$ ./crsctl lsmodules css
The following are the CSS modules:
CSSD
COMMCRS
COMMNS
8. CRS_STAT: It reports the current state of resources configured in the OCR.
$ ./crs_stat -t
Name           Type        Target  State   Host
ora.C1.inst    application ONLINE  ONLINE  rac1
ora.C2.inst    application ONLINE  ONLINE  rac2
ora.AC1.srv    application ONLINE  ONLINE  rac1
ora.RAC.abc.cs application ONLINE  ONLINE  rac1
ora.RAC.db     application ONLINE  ONLINE  rac2
ora.AC1.srv    application ONLINE  ONLINE  rac1
ora.ice2.cs    application ONLINE  ONLINE  rac1
ora.AC1.srv    application ONLINE  ONLINE  rac1
9. CRS_STOP: This command used to stop resource or cluster member.
$ ./crs_stop ora.rac1.ons
Attempting to stop `ora.rac1.ons` on member `rac1`
Stop of `ora.rac1.ons` on member `rac1` succeeded.
10. CRS_START: This command used to start resource or cluster member.
$ ./crs_start ora.rac1.ons
Attempting to start `ora.rac1.ons` on member `rac1`
Start of `ora.rac1.ons` on member `rac1` succeeded.
11. OCRCHECK : It verifies the integrity of the OCR.
$ ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 5237072
Used space (kbytes) : 9360
Available space (kbytes) : 5227712
ID : 794527192
Device/File Name : /apps/oracle/oradata/ocr
Device/File integrity check succeeded
Cluster registry integrity check succeeded
12 Check that all nodes have joined the cluster.
$./olsnodes

13 Oracle Interface Configuration Utility (oifcfg)
$ ./oifcfg getif
This command should return values for global public and global cluster_interconnect.
If the command does not return a value for global cluster_interconnect, enter the following commands to
delete and set the desired interface:
$ ./oifcfg delif global
$ ./oifcfg setif global <interface_name>/<subnet>:public
$ ./oifcfg setif global <interface_name>/<subnet>:cluster_interconnect
Cluster Name Check Utility
This utility prints the cluster name information:
$ ./cemutlo -n -w

Administering Services
The following tools are available for administering services:
DBCA
OEM
DBMS_SERVICE
Server Control Utility (SRVCTL)
Here we will discuss only the Server Control Utility (SRVCTL).
We can use SRVCTL to add, start, stop, enable, disable, and remove instances and
services.
Command Syntax:
srvctl add
The SRVCTL add command adds configuration information to the OCR.
Add database: - srvctl add database -d <db_name> -o <oracle_home>
Add instance: - srvctl add instance -d <db_name> -i <instance_name> -n <node_name>
Add nodeapps: - srvctl add nodeapps -n <node_name> -o <oracle_home> -A <vip_address>/255.255.255.0
Add asm:      - srvctl add asm -n <node_name> -i <asm_instance> -o <oracle_home>
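As a concrete sketch, registering a two-node database might look like this; the database name, instance names, node names, and Oracle home path are hypothetical:
$ srvctl add database -d RAC -o /u01/app/oracle/product/10.2.0/db_1
$ srvctl add instance -d RAC -i RAC1 -n rac1
$ srvctl add instance -d RAC -i RAC2 -n rac2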

srvctl config
The SRVCTL config command displays the configuration stored in the OCR.
Config database: - srvctl config database -d <db_name>
Config nodeapps: - srvctl config nodeapps -n <node_name>
Config asm:      - srvctl config asm -n <node_name>
Config listener: - srvctl config listener -n <node_name>

srvctl start
Start database: - srvctl start database -d <db_name> -o open
Start instance: - srvctl start instance -d <db_name> -i <instance_name>
Start nodeapps: - srvctl start nodeapps -n <node_name>
Start asm:      - srvctl start asm -n <node_name> -i <asm_instance>
Start listener: - srvctl start listener -n <node_name>

srvctl stop
Stop database: - srvctl stop database -d <db_name>
Stop instance: - srvctl stop instance -d <db_name> -i <instance_name>
Stop nodeapps: - srvctl stop nodeapps -n <node_name>
Stop asm:      - srvctl stop asm -n <node_name> -i <asm_instance>
Stop listener: - srvctl stop listener -n <node_name>
srvctl status
Status database: - srvctl status database -d <db_name> -v
Status instance: - srvctl status instance -d <db_name> -i <instance_name> -v
Status nodeapps: - srvctl status nodeapps -n <node_name>
Status asm:      - srvctl status asm -n <node_name>

srvctl remove
Remove database: - srvctl remove database -d <db_name>
Remove instance: - srvctl remove instance -d <db_name> -i <instance_name>
Remove nodeapps: - srvctl remove nodeapps -n <node_name>
Remove asm:      - srvctl remove asm -n <node_name> -i <asm_instance>
Remove listener: - srvctl remove listener -n node1 -l lsnr01

Alternate method to remove a listener by using crs_unregister:
1. $ crs_stat | grep NAME\= | grep lsnr
NAME=ora.rac1.LISTENER_RAC1.lsnr
NAME=ora.rac2.LISTENER_RAC2.lsnr
2. Then unregister the listener resources:
$ crs_unregister ora.rac1.LISTENER_RAC1.lsnr
$ crs_unregister ora.rac2.LISTENER_RAC2.lsnr

srvctl enable
Enable database: - srvctl enable database -d <db_name>
Enable instance: - srvctl enable instance -d <db_name> -i <instance_name>
Enable asm:      - srvctl enable asm -n <node_name> -i <asm_instance>

srvctl disable
Disable database: - srvctl disable database -d <db_name>
Disable instance: - srvctl disable instance -d <db_name> -i <instance_name>
Disable asm:      - srvctl disable asm -n <node_name> -i <asm_instance>

Managing UNDO, Temporary and Redo Logs in a RAC Environment
In the Oracle RAC environment, each instance stores transaction undo data in its dedicated
undo tablespace. We set the undo tablespace for each instance with the UNDO_TABLESPACE
parameter, and UNDO_MANAGEMENT must be the same across all the instances.
Example:
prod1.undo_tablespace=undo_tbs1
prod2.undo_tablespace=undo_tbs2
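A sketch of setting these per instance in an spfile, assuming instance names prod1 and prod2 and undo tablespaces that already exist:
SQL> alter system set undo_tablespace='UNDO_TBS1' scope=spfile sid='prod1';
SQL> alter system set undo_tablespace='UNDO_TBS2' scope=spfile sid='prod2';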
Managing Temporary Tablespace
In a RAC environment, a user will always use the same assigned temporary tablespace
irrespective of the instance being used. Each instance creates a temporary segment in the
temporary tablespace it is using. If an instance running a big sort operation requires a
larger temporary tablespace, it can reclaim the space used by other instances' temporary
segments in that tablespace.
Main points:
All instances share the same temporary tablespace.
The size should be at least equal to the maximum concurrent requirement of all the instances.
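For example, a shared temporary tablespace could be created and set as the database default as sketched below; the tablespace name, file path, and size are placeholders:
SQL> create temporary tablespace temp_rac tempfile '/DB/PROD/temp_rac01.dbf' size 2g;
SQL> alter database default temporary tablespace temp_rac;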
Administering Online Redo Logs
Each instance has exclusive write access to its own online redo log files. An instance can read
another instance's current online redo log files to perform instance recovery if that instance
has terminated abnormally. Online redo log files need to be located on a shared storage
device and cannot be on local node storage.
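Because each instance owns its own redo thread, new log groups are added per thread. A minimal sketch; the thread, group number, path, and size below are placeholders:
SQL> alter database add logfile thread 2 group 5 ('/DB/PROD/redo2_05.log') size 100m;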
How to Enable Archiving in the RAC Environment
Step 1 Log in to node 1.
Step 2 Set cluster_database=false in the parameter file.
Step 3 Shut down all the instances.
$ srvctl stop database -d <db_name>
Step 4 Mount the database.
SQL> startup mount
Step 5 Enable archiving.
SQL> alter database archivelog;
Step 6 Change cluster_database=true in the parameter file.
Step 7 Shut down the instance.
SQL> shutdown immediate
Step 8 Start all the instances.
$ srvctl start database -d <db_name>
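After the instances are back up, archiving can be verified from any instance; the output of the command below should report the database log mode as Archive Mode:
SQL> archive log list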
How to Enable Flashback in the RAC Environment
Step 1 Log in to node 1.
Step 2 Verify that the database is running in archivelog mode.
Step 3 Set the parameter cluster_database=false.
SQL> alter system set cluster_database=false scope=spfile sid='prod1';
Step 4 Set the parameters DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST.
SQL> alter system set DB_RECOVERY_FILE_DEST_SIZE=200M scope=spfile;
SQL> alter system set DB_RECOVERY_FILE_DEST='/dev/rdsk/c0d3s1' scope=spfile;
Step 5 Shut down all instances.
# srvctl stop database -d <db_name>
Step 6 Mount the database.
SQL> startup mount
Step 7 Enable flashback.
SQL> alter database flashback on;
Step 8 Set the parameter cluster_database=true.
SQL> alter system set cluster_database=true scope=spfile sid='prod1';
Step 9 Shut down the instance.
SQL> shutdown
Step 10 Start all instances.
$ srvctl start database -d <db_name>
