
Oracle Database 10g Migration to

Automatic Storage Management


An Oracle White Paper
August 2004

Executive Overview.......................................................................................... 3
Introduction ....................................................................................................... 3
Database Migration to ASM overview ........................................................... 3
Cold Migration .............................................................................................. 3
Hot Migration................................................................................................ 4
Database Migration – Detailed Steps ............................................................. 5
Assumptions .................................................................................................. 5
Choosing the correct method ..................................................................... 6
Cold Migration.......................................................................................... 6
Hot Migration ........................................................................................... 6
Cold Migration to ASM ............................................................................... 8
1. Cold Migration - Pre Migration ......................................................... 8
2. Cold Migration - First Outage – Recovery Area moves to ASM.. 9
3. Cold Migration - Database Backup ................................................. 11
4. Cold Migration - Second Outage – Data Area moves to ASM... 12
5. Cold Migration - Post Migration ..................................................... 15
Hot Migration to ASM............................................................................... 16
1. Hot Migration - Pre Migration......................................................... 16
2. Hot Migration - The Switch ............................................................. 18
3. Hot Migration - Post Migration....................................................... 20
Conclusion........................................................................................................ 21
Appendix 1: Migrating ASM Disk Groups Back to Original Database
Storage............................................................................................................... 22

Oracle Database 10g Migration to Automatic Storage Management Page 2



EXECUTIVE OVERVIEW
In Oracle Database 10g, storage management and provisioning for the database
have become much simpler with a new feature called Automatic Storage
Management (ASM). ASM provides file system and volume manager capabilities
built into the Oracle database kernel. With this capability, ASM simplifies storage
management tasks such as creating and laying out databases and managing disk
space. Because ASM allows disk management to be done using familiar
create/alter/drop SQL statements, DBAs do not need to learn a new skill set or
make crucial decisions about provisioning.
This white paper describes two methods of migrating the Oracle database from its
current storage to ASM.

INTRODUCTION
With the introduction of Oracle Database 10g, Oracle now provides Automatic
Storage Management (ASM), which is optimized for Oracle files. You may decide to
migrate to ASM with a piecemeal approach, allocating new Oracle data files into
ASM as the database grows. However, to receive the full benefits of ASM, such as
the ability to add or remove storage from the database configuration with
automated rebalancing of the data files and without downtime, the
entire database should be migrated to ASM. This white paper introduces two
methods to migrate an existing database completely from legacy storage to ASM.

DATABASE MIGRATION TO ASM OVERVIEW


There are two migration methods, cold migration and hot migration, described in
this white paper. Both methods use Recovery Manager (RMAN), because RMAN is
the utility that must be used to recover a database that is located on an ASM disk
group. The method to choose depends upon the amount of storage and processing
capacity available and the amount of downtime that can be tolerated.

Cold Migration
The cold migration method is used when there is insufficient unallocated disk space
available to hold a full copy of the database. This method consists of two phases:



the first phase migrates the flash recovery area, which contains recovery related files
such as archived redo logs and database backups. The second phase migrates the
data and redo log files. Both phases of the cold migration can be completed during
a single maintenance window or spread over the course of multiple days or even
weeks.

Hot Migration
The hot migration method is used when there is sufficient, unallocated disk space
that can be used for ASM. The amount of disk space required for hot migration
depends on the backup strategy and the amount of disk space used by disk-based
backups. The minimum amount of unallocated disk space required is equivalent to
at least the size of the database. This method consists of a preparation phase that
builds and configures the ASM storage while the database remains online followed
by a short outage phase that switches the database to the new ASM storage while
the database is offline.
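As an illustrative addition (not part of the original procedure), the minimum space requirement can be estimated from the data dictionary; the query below sums the data files, temp files, and online redo log members:

```sql
-- Rough estimate of the space a full copy of the database will need (MB)
SQL> select round(sum(bytes)/1024/1024) as required_mb
  2    from (select bytes from v$datafile
  3          union all select bytes from v$tempfile
  4          union all select bytes * members from v$log);
```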



DATABASE MIGRATION – DETAILED STEPS
The following pages describe the migration of an Oracle Database 10g database
from Unix file systems to ASM. The database is a single instance database residing
on Red Hat Linux, but the procedure is identical for users migrating from a Cluster
File System or raw partitions, irrespective of the operating system.¹

Assumptions
The following assumptions have been made about the environment.
The database area (init.ora:db_create_file_dest) for the database resides on the
Unix file system /oradata, and is striped and mirrored over 4 individual disks.
The recovery area (init.ora:db_recovery_file_dest) for the database resides on
the Unix file system /flash_recovery, and is striped and mirrored over 4
individual disks.
The database has two redo log members per group, one on the /oradata file
system and the other on the /flash_recovery file system.
The database has two control files, one on the /oradata file system and the other
on the /flash_recovery file system.
There are 8 additional disks available for the Hot Migration.
The following figure represents the disk layout before and after the migration is
completed.
[Figure: disk layout before and after the migration]

Note: For more information on ASM best practices and failure groups, please take
a look at the following links:
ASM on OTN
Oracle Database Administrator's Guide, Chapter 12

¹ The commands may differ from those described below depending upon the
operating system.



Choosing the correct method
Use the table below to assist in determining the available migration methods based
upon the amount of disk space required for the migration.

Disk Space Availability            Method           Outage Time
---------------------------------  ---------------  ----------------------------------
Not enough space to contain the    Cold Migration   First Outage: proportional to
database and recovery area                          storage reconfiguration and log
                                                    file member recreation.
                                                    Second Outage: proportional to
                                                    database restore and recovery.

Sufficient space to create a       Hot Migration    Only one outage: proportional to
second copy of the database                         database recovery from disk.
and recovery area

Cold Migration

Cold Migration is required when there is not enough disk space to make an ASM
disk group large enough to contain a copy of the database and recovery area files
during the course of the migration. This can be done using RMAN with two
methods: disk or tape.

Cold Migration using Disk

Cold migration using disk relies on the ability to create a full RMAN backup in the
Flash Recovery Area to reduce the time taken to restore the database during the
second outage.

Cold Migration using Tape

Cold migration using tape is the slowest method to migrate to ASM and requires
the use of tape devices to hold the backup of the database. The method described
below assumes that the default device type for backups has been set to tape using
the RMAN configure command.
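For reference, the default device type is switched to tape with the RMAN CONFIGURE command; any media-management (SBT) channel parameters are site-specific and are assumed to be configured already:

```sql
RMAN> configure default device type to sbt;
```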

Hot Migration

Hot Migration is possible when there is sufficient disk space available to create an
ASM disk group to contain a duplicate of the database during the course of the
migration.



Hot Migration to New Storage

Hot migration to new storage relies upon additional disk space being available on a
permanent basis. This method is ideal when new storage is being added to hold the
database and the current storage remains in place to hold disk backups.

Hot Migration with Current Storage

Hot migration with current storage relies upon the temporary presence of additional
storage. The method has additional steps, documented in Appendix 1 of this paper,
and is used when the additional disk space is available only for a limited time.
The ultimate goal is to migrate the database back to the original storage system
after it has been configured for ASM.



Cold Migration to ASM
There are five phases to the migration of a database from Unix File Systems to
ASM.
1. Cold Migration - Pre Migration
2. Cold Migration - First Outage – Recovery Area moves to ASM
3. Cold Migration - Database Backup
4. Cold Migration - Second Outage – Data Area moves to ASM
5. Cold Migration - Post Migration

1. Cold Migration - Pre Migration

During this phase of the migration, there is no outage to the database.


Create the ASM Instance init.ora file
Create an SPFILE for the ASM Instance
Start the ASM Instance
Save any backups to tape

Create the ASM Instance init.ora file

The ASM instance is created using a special init.ora file. A typical ASM init.ora file
is as follows:
*.instance_type='asm'
*.remote_login_passwordfile='SHARED'
*.large_pool_size=12M
*.asm_power_limit=10
*.background_dump_dest='/u01/app/oracle/admin/+ASM/bdump'
*.core_dump_dest='/u01/app/oracle/admin/+ASM/cdump'
*.user_dump_dest='/u01/app/oracle/admin/+ASM/udump'
*.asm_diskstring='/dev/raw/raw*'
Oracle defaults the ASM instance to an Oracle SID of '+ASM'.
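As a convenience, the pfile above can be generated with a small shell script; the /u01/app/oracle base path and the +ASM SID are assumptions carried over from the example and should be adapted locally:

```shell
#!/bin/sh
# Sketch: write the example ASM init.ora shown above.
# The ORACLE_BASE default and the +ASM SID are assumptions from the example.
ADMIN="${ORACLE_BASE:-/u01/app/oracle}/admin/+ASM"
PFILE="init+ASM.ora"

cat > "$PFILE" <<EOF
*.instance_type='asm'
*.remote_login_passwordfile='SHARED'
*.large_pool_size=12M
*.asm_power_limit=10
*.background_dump_dest='$ADMIN/bdump'
*.core_dump_dest='$ADMIN/cdump'
*.user_dump_dest='$ADMIN/udump'
*.asm_diskstring='/dev/raw/raw*'
EOF
```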

Create an SPFILE for the ASM Instance

It is considered best practice to create an spfile for the ASM Instance so that the
asm_diskgroups parameter is automatically updated when a new disk group is
created. With this parameter set, the proper ASM disk groups are mounted when
the ASM Instance is subsequently started.
$ export ORACLE_SID=+ASM
$ sqlplus "/ as sysdba"
Connected to an idle instance.

SQL> create spfile from pfile;

File created.

Start the ASM Instance

The ASM instance is started with the NOMOUNT option, in the same way that any
other Oracle instance may be started.
SQL> startup nomount



ASM instance started

Total System Global Area 100663296 bytes


Fixed Size 783672 bytes
Variable Size 99879624 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes

Save any backups to tape

Prior to the start of the first outage, any backups that need to be saved should be
copied to tape or to an alternate location. The following example assumes that
RMAN is being used to transfer the backups to tape.

Note: These backups will not be used to aid the migration of the database to ASM.
However, it is strongly recommended that these backups be saved to an alternate
location such as tape, because the contents of the current Recovery Area will be
destroyed during this migration.

Save current backup sets to tape
RMAN> backup device type sbt backupset all delete input;
Save current data file copies to tape
RMAN> backup device type sbt datafilecopy all delete input;
Save current archive log files to tape
RMAN> backup device type sbt archivelog all delete input;

2. Cold Migration - First Outage – Recovery Area moves to ASM

During this phase, the database must be shutdown so that the storage currently
used for the recovery area may be reformatted for use by ASM.
Clear the old Recovery Area
Prepare the disks for use by ASM
Change the permissions on the disk device file
Create the Recovery Area Disk Group
Prepare the Production Database to use the ASM disk group

Clear the old Recovery Area

Remove the redo log members from the /flash_recovery file system. Query
v$logfile for all members residing in the /flash_recovery file system, and then
drop each redo log member. This should be completed for both online and
standby redo log files.
SQL> alter database drop logfile member
'/flash_recovery/ORCL/onlinelog/o1_mf_1_0fpqygx6_.log';
Shutdown the database and then remount
RMAN> shutdown immediate;

RMAN> startup mount


Back up and remove any final archive log files that might have been generated.
RMAN> backup device type sbt archivelog all delete input;
Disable Flashback Database – if it is currently in use.
SQL> alter database flashback off;
Query the v$controlfile view, and redefine the control_files parameter in the
spfile so that the database will not reference the copy on the /flash_recovery
file system.
SQL> select name from v$controlfile;



NAME
-------------------------------------------------------------------
---------
/oradata/ORCL/controlfile/o1_mf_0fpqyfw7_.ctl
/flash_recovery/ORCL/controlfile/o1_mf_0fpqyg20_.ctl

SQL> alter system set control_files=


'/oradata/ORCL/controlfile/o1_mf_0fpqyfw7_.ctl' scope=spfile;
Shutdown the database
SQL> shutdown immediate;
Unmount the flash recovery area file system
# umount /flash_recovery

Prepare the disks for use by ASM

This step varies depending on the operating system. For example, on Linux, the
device that the original file system was built on must be removed, the RAID device
must be stopped, and then the disks must have a RAW device created over the
block device.
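As a sketch, the Linux steps might look like the following dry-run script, which only prints the commands it would run; the device names (/dev/md1, /dev/sdf1 through /dev/sdi1) are assumptions, and the raw(8) bindings would also need to be made persistent (for example in /etc/sysconfig/rawdevices):

```shell
#!/bin/sh
# Dry-run sketch of preparing disks for ASM on Linux.
# Device names below are assumptions; replace the echoes with the real
# commands once the devices have been verified.
prepare_disks() {
    echo "+ umount /flash_recovery"      # release the old file system
    echo "+ mdadm --stop /dev/md1"       # stop the software RAID device
    n=5
    for part in sdf1 sdg1 sdh1 sdi1; do  # one raw binding per disk partition
        echo "+ raw /dev/raw/raw$n /dev/$part"
        n=$((n + 1))
    done
}
prepare_disks
```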

Change the permissions on the disk device file

The ASM Instance runs as the Oracle user and therefore the permissions of the
disk device files for all the disks that will be used by ASM should be changed so
that the ASM instance has write access.
# chown oracle:dba /dev/raw/raw[5-8]
# chmod 640 /dev/raw/raw[5-8]

Create the Recovery Area Disk Group

Using the disk device files created previously, the ASM disk group for the Recovery
Area can be built. Assuming the Recovery Area is to be built on 4 disks with two
failure groups, the command would be:
SQL> create diskgroup RECOVERY_AREA normal redundancy
failgroup controller1 disk '/dev/raw/raw5','/dev/raw/raw6'
failgroup controller2 disk '/dev/raw/raw7','/dev/raw/raw8';

Diskgroup created.

Prepare the Production Database to use the ASM disk group

With the creation of the ASM disk group, the Oracle database instance must be
instructed to use the ASM disk group for recovery related files.
The Oracle database instance must be mounted to allow the changes to proceed
SQL> startup mount
Change the db_recovery_file_dest parameter to point to the
RECOVERY_AREA disk group.

Note: When an ASM disk group is used in an Oracle database, the disk group
name is prefixed by a "+" sign.

SQL> alter system set db_recovery_file_dest='+RECOVERY_AREA'
scope=both;
Re-enable flashback database if required
SQL> alter database flashback on;
Re-establish the redo logfile members back into the database.
SQL> alter database add logfile member '+RECOVERY_AREA' to group 1;

SQL> alter database add logfile member '+RECOVERY_AREA' to group 2;

SQL> alter database add logfile member '+RECOVERY_AREA' to group 3;



Re-establish the second control file back into the database
SQL> select name from v$controlfile;

NAME
-------------------------------------------------------------------
--------
/oradata/ORCL/controlfile/o1_mf_0fpqyfw7_.ctl

SQL> alter system set control_files=


'/oradata/ORCL/controlfile/mycontrol.ctl',
'+RECOVERY_AREA/mycontrol.ctl' scope=spfile;

SQL> shutdown

SQL> startup nomount;

RMAN> restore controlfile from


'/oradata/ORCL/controlfile/o1_mf_0fpqyfw7_.ctl';

RMAN> alter database mount;


Open the database
SQL> alter database open;
This concludes the first outage. The database is now available, and recovery related
files will be written to the ASM disk group.
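As an optional sanity check (an addition to the original procedure), the new recovery destination can be confirmed before proceeding:

```sql
SQL> show parameter db_recovery_file_dest
SQL> select name, space_limit from v$recovery_file_dest;
```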

3. Cold Migration - Database Backup

The database is available and the Oracle instance is using the original storage for
the database area and the ASM disk group (+RECOVERY_AREA) for flash recovery area
files. We can now prepare for the second phase of the migration.
Enable optimized incremental backups
Make the initial database backup to the ASM Disk Group
Remove the redo logfile member
Create an incremental backup of the database to the ASM Disk Group

Enable optimized incremental backups

Oracle 10g introduced optimized incremental backups via the use of the block
change-tracking file. If block change tracking has not been enabled previously on
the database, then it should be enabled for the duration of the ASM migration. The
use of the block change-tracking file will reduce the time that the final incremental
backups take to run.
SQL> alter database enable block change tracking;
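If desired, the state and location of the change-tracking file can be verified afterwards (a check added here for illustration):

```sql
SQL> select status, filename from v$block_change_tracking;
```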

Make the initial database backup to the ASM Disk Group

This phase will make the initial backup set of the Oracle database.

Note: If there is insufficient space in the recovery area for a full database backup,
this backup must be directed to a tape device. This backup, along with the level 1
backup taken below, will be restored during the migration of the database to ASM.

RMAN> backup incremental level 0 database tag 'ASM_Migration';

Remove the redo logfile member

Remove the redo log members from the /oradata file system. Query v$logfile for
all members residing in the /oradata file system, and drop each redo log member.
SQL> alter database drop logfile member
'/oradata/ORCL/onlinelog/o1_mf_3_0fpqyp56_.log';



Create an incremental backup of the database to the ASM Disk Group
It is advisable that this phase is
This phase will create an incremental backup of the database in the
repeated on a regular basis in the lead
RECOVERY_AREA ASM Disk Group.
up to the scheduled outage. When the
final transfer occurs, which requires Using RMAN create an incremental level 1 backup of the database, and restore
an outage to the database, the time for
the backup into the database copy.
the outage will be reduced if there is
RMAN> backup incremental level 1 database tag 'ASM_Migration';
less data needing to be recovered.
Ideally, this command should 4. Cold Migration - Second Outage – Data Area moves to ASM
complete just before the scheduled
outage commences.
This is the start of the second and longer outage phase.
Prepare the control file for ASM Disk Groups
Drop the temporary files that reside on the file system
Shutdown the database and unmount the file system
Prepare the disks for use by ASM
Migrate the Control File to the official locations on both ASM Disk Groups
Restore the database files from the backup
Restore the temporary files

Prepare the control file for ASM Disk Groups

Ensure that the directory structure exists in the new ASM Disk Group for the
control files.
SQL> alter database backup controlfile to '+RECOVERY_AREA';
Determine the value of the db_unique_name init.ora parameter.

Note: If DB_UNIQUE_NAME is not set, then it defaults to the value of the
DB_NAME parameter.

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------------
db_unique_name                       string      ORCL

SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------------
db_name                              string      ORCL
Now we must capture the current control file names.

Note: The control file that resides in the +RECOVERY_AREA disk group will be
referenced in a later step in this section.

SQL> select name from v$controlfile;

NAME
-------------------------------------------------------------------
/oradata/ORCL/controlfile/mycontrol.ctl
+RECOVERY_AREA/mycontrol.ctl

Finally, we must update the control_files parameter in the spfile.
SQL> alter system set control_files =
'+RECOVERY_AREA/mycontrol.ctl' scope=spfile;

Drop the temporary files that reside on the file system

First restart the database to a mount state


SQL> shutdown immediate;



SQL> startup mount;
Then, identify the current temporary files and their sizes.
SQL> select bytes, name from v$tempfile;

BYTES NAME
---------- --------------------------------------------------------
----
20971520 /oradata/ORCL/datafile/o1_mf_temp_0fpr0dbs_.tmp
Drop the temporary files
SQL> alter database tempfile
'/oradata/ORCL/datafile/o1_mf_temp_0fpr0dbs_.tmp' drop;

Shutdown the database and unmount the file system

We now need to shutdown the database in preparation for the creation of the
ASM DATA_AREA Disk Group
SQL> shutdown immediate
Unmount the Data Area file system
# umount /oradata

Prepare the disk for ASM Usage

Repeat Prepare the disks for use by ASM and Change the permissions on the
disk device file on page 10 for the new disks that will be added to the Data
Area Disk Group.
Create the Data Area Disk Group. Assuming that the Data Area is to be built
on the following 4 disks, with two failure groups:
SQL> create diskgroup DATA_AREA normal redundancy
failgroup controller1 disk '/dev/raw/raw1','/dev/raw/raw2'
failgroup controller2 disk '/dev/raw/raw3','/dev/raw/raw4';

Diskgroup created.

Migrate the Control File to the official locations on both ASM Disk Groups

Mount the database


SQL> startup mount
We must first ensure that the directory structure exists in the newly created ASM
Disk Group for the control files.
SQL> alter database backup controlfile to '+DATA_AREA';
Now we must update the control_files parameter in the spfile.

Note: Substitute {DB_UNIQUE_NAME} with the value previously identified. In
this example the DB_UNIQUE_NAME is ORCL.

SQL> alter system set control_files =
'+DATA_AREA/{DB_UNIQUE_NAME}/CONTROLFILE/mycontrol.ctl',
'+RECOVERY_AREA/{DB_UNIQUE_NAME}/CONTROLFILE/mycontrol.ctl'
scope=spfile;

Next, shut down the database cleanly.
SQL> shutdown immediate
Now start the instance with the NOMOUNT option in preparation for restoring
the control files.
SQL> startup nomount
Restore one of the original control files into the new control file locations.
RMAN> restore controlfile from '+RECOVERY_AREA/mycontrol.ctl';

Starting restore at 25-JUN-04


using target database controlfile instead of recovery catalog



allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=106 devtype=DISK

channel ORA_DISK_1: copied controlfile copy


output filename=+DATA_AREA/ORCL/controlfile/mycontrol.ctl
output filename=+RECOVERY_AREA/ORCL/controlfile/mycontrol.ctl
Finished restore at 25-JUN-04

Restore the database files from the backup

Update the configuration


SQL> alter system set db_create_file_dest='+DATA_AREA';
Bring the database to a mounted state
RMAN> alter database mount;
Restore the database to the DATA_AREA Disk Group.

Note: For each file in the database, issue the "set newname for datafile n to new;"
command.

RMAN> run
{
set newname for datafile 1 to new;
set newname for datafile 2 to new;
set newname for datafile 3 to new;
set newname for datafile 4 to new;
set newname for datafile 5 to new;
restore database;
switch datafile all;
recover database;
}
The Block Change Tracking file must now either be dropped, or relocated to the
ASM Disk Group
RMAN> sql "alter database disable block change tracking";

sql statement: alter database disable block change tracking

RMAN> sql "alter database enable block change tracking";

sql statement: alter database enable block change tracking


And the database can be opened.
RMAN> alter database open;

Restore the temporary files

The alert log will contain a message reporting that a number of temporary
tablespaces contain no files:

*********************************************************************
WARNING: The following temporary tablespaces contain no files.
         This condition can occur when a backup controlfile has
         been restored.  It may be necessary to add files to these
         tablespaces.  That can be done using the SQL statement:

         ALTER TABLESPACE <tablespace_name> ADD TEMPFILE

         Alternatively, if these temporary tablespaces are no longer
         needed, then they can be dropped.
           Empty temporary tablespace: TEMP
*********************************************************************
Add the temporary file to the necessary tablespaces
SQL> alter tablespace temp add tempfile;
Alternatively, if the temporary file size is to be controlled
SQL> alter tablespace temp add tempfile size 20480k;



Restore the redo log file members

At present, there is only one redo log file member per group that resides in the
ASM RECOVERY_AREA Disk Group.
Re-establish the second redo log member on the +DATA_AREA diskgroup for
all online and standby redo log groups.
SQL> alter database add logfile member '+DATA_AREA' to group 1;

5. Cold Migration - Post Migration

All that remains is to validate that all files have been moved to the ASM Disk
Groups. We can now query the database and ensure that all files reside in either the
DATA_AREA or RECOVERY_AREA ASM Disk Group.
SQL> select name from v$datafile
union
select name from v$tempfile
union
select member from v$logfile
union
select name from v$controlfile
union
select filename from v$block_change_tracking
union
select name from v$flashback_database_logfile;

NAME
-------------------------------------------------------------------
---------
+DATA_AREA/orcl/changetracking/ctf.262.1
+DATA_AREA/orcl/controlfile/mycontrol.ctl
+DATA_AREA/orcl/datafile/sysaux.260.1
+DATA_AREA/orcl/datafile/system.258.1
+DATA_AREA/orcl/datafile/undotbs1.259.1
+DATA_AREA/orcl/datafile/users.261.1
+DATA_AREA/orcl/onlinelog/group_1.264.1
+DATA_AREA/orcl/onlinelog/group_2.265.1
+DATA_AREA/orcl/onlinelog/group_3.266.1
+DATA_AREA/orcl/tempfile/temp.263.1
+RECOVERY_AREA/orcl/controlfile/mycontrol.ctl
+RECOVERY_AREA/orcl/flashback/log_1.256.1
+RECOVERY_AREA/orcl/onlinelog/group_1.257.1
+RECOVERY_AREA/orcl/onlinelog/group_2.258.1
+RECOVERY_AREA/orcl/onlinelog/group_3.259.



Hot Migration to ASM
There are three phases to the migration of a database from Unix File Systems to
ASM.
1. Hot Migration - Pre Migration
2. Hot Migration - The Switch
3. Hot Migration - Post Migration
Optionally, if the additional storage required by the hot migration needs to be
removed, perform Appendix 1: Migrating ASM Disk Groups Back to Original
Database Storage.

1. Hot Migration - Pre Migration

During this phase of the migration, there is no outage to the primary database.
Prepare the ASM Instance
Create the Data Area Disk Group
Create the Recovery Area Disk Group
Prepare the Production Database for ASM Disk Group usage
Migrate the current RMAN backups to the Recovery Area
Make the initial copy of the Oracle Datafiles
Migrate the Oracle Redo Log and Standby Redo Log files to ASM Disk Groups
Migrate the tempfiles to ASM Disk Groups
Refresh the previous copy of the Oracle Datafiles

Prepare the ASM Instance

Repeat “Prepare the disks for use by ASM” through “Start the ASM Instance”
commencing on page 10 for the new disks that will be added to the Data
Area Disk Group.

Create the Data Area Disk Group

Note: The new disk devices being added to the system are /dev/raw/raw9
through /dev/raw/raw16.

Assuming that the Data Area is to be built on the following 4 disks, with two
failure groups:
SQL> create diskgroup DATA_AREA normal redundancy
failgroup controller1 disk '/dev/raw/raw9','/dev/raw/raw10'
failgroup controller2 disk '/dev/raw/raw11','/dev/raw/raw12';

Diskgroup created.

Create the Recovery Area Disk Group

Assuming that the Recovery Area is to be built on the following 4 disks, with
two failure groups:
SQL> create diskgroup RECOVERY_AREA normal redundancy
failgroup controller1 disk '/dev/raw/raw13','/dev/raw/raw14'
failgroup controller2 disk '/dev/raw/raw15','/dev/raw/raw16';

Diskgroup created.

Prepare the Production Database for ASM Disk Group usage

The next phase is to advise the production database that ASM Disk Groups should
be used for all new data files as well as all recovery area usage.
Change the db_create_file_dest init.ora parameter to point to the
DATA_AREA diskgroup.
SQL> alter system set db_create_file_dest='+DATA_AREA' scope=both;
Change the db_recovery_file_dest init.ora parameter to point to the
RECOVERY_AREA diskgroup.
SQL> alter system set db_recovery_file_dest='+RECOVERY_AREA'
scope=both;

Migrate the current RMAN backups to the Recovery Area

This phase will migrate all the current RMAN backups in the recovery area to the
ASM Disk Group.
Move current backup sets to the ASM disk groups
RMAN> backup backupset all delete input;
Move current data file copies to the ASM disk groups
RMAN> backup as copy datafilecopy all delete input;
Move current archive log files
RMAN> backup as copy archivelog all delete input;
If Database Block Change Tracking has been enabled previously, the file must
be recreated in the ASM Disk Groups.

Note: The Block Change Tracking file cannot be moved to the ASM Disk Group,
which means that all Level 1 backups taken after the Block Change Tracking file
has been recreated will not be able to exploit it. In order to exploit the Block
Change Tracking file, a new level 0 backup must be taken.

SQL> alter database disable block change tracking;

Database altered.

SQL> alter database enable block change tracking;

Database altered.

If Database Block Change Tracking has not been enabled previously on the
database, then it must be enabled for the duration of the ASM migration.
SQL> alter database enable block change tracking;

Database altered.

Make the initial copy of the Oracle Datafiles

This phase will make copies of all the Oracle datafiles into the DATA_AREA
ASM Disk Group.
Using RMAN, back up the database using the 'AS COPY' syntax.
RMAN> backup device type disk incremental level 0 as copy tag
'ASM_Migration' database format '+DATA_AREA';

Migrate the Oracle Redo Log and Standby Redo Log files to ASM Disk Groups

This phase will move the Oracle redo log files and Oracle standby redo log files to
ASM Disk Groups. How the new redo log files are added depends upon the
number of redo log members that the control file can support. The following
example assumes a maximum of 3 redo log members per group.
Identify the current redo log members
SQL> select member from v$logfile;

MEMBER
-------------------------------------------------------------------
---------
/oradata/ORCL/onlinelog/o1_mf_1_0fs38tdh_.log
/flash_recovery/ORCL/onlinelog/o1_mf_1_0fs38tyq_.log
/oradata/ORCL/onlinelog/o1_mf_2_0fs38vmw_.log
/flash_recovery/ORCL/onlinelog/o1_mf_2_0fs393bj_.log
/oradata/ORCL/onlinelog/o1_mf_3_0fs3942r_.log
/flash_recovery/ORCL/onlinelog/o1_mf_3_0fs39c12_.log

For each redo log group:
Drop one of the two current redo log members.
SQL> alter database drop logfile member
'/flash_recovery/ORCL/onlinelog/o1_mf_1_0fs38tyq_.log';
Add the two new redo log members.
SQL> alter database add logfile member
'+DATA_AREA','+RECOVERY_AREA' to group 1;
And finally drop the other original redo log member.
SQL> alter database drop logfile member
'/oradata/ORCL/onlinelog/o1_mf_1_0fs38tdh_.log';

Note that since you cannot drop a member from the current logfile group, you must
switch logs at least once. Before a logfile member can be dropped, a new member
must have been initialized, so each logfile group must be switched into at a minimum.
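The same drop/add/drop sequence applies to every group, so it can be generated per group. The helper below is a sketch with the paper's disk group names hard-coded; the function name and argument order are assumptions.

```shell
# Sketch: emit the three statements needed to migrate one redo log group.
# Arguments: group number, the recovery-area member path, the data-area
# member path.
migrate_log_group_sql() {
  group="$1"
  recovery_member="$2"
  data_member="$3"
  echo "alter database drop logfile member '$recovery_member';"
  echo "alter database add logfile member '+DATA_AREA','+RECOVERY_AREA' to group $group;"
  echo "alter database drop logfile member '$data_member';"
}

migrate_log_group_sql 1 \
  /flash_recovery/ORCL/onlinelog/o1_mf_1_0fs38tyq_.log \
  /oradata/ORCL/onlinelog/o1_mf_1_0fs38tdh_.log
```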


Migrate the tempfiles to ASM Disk Groups

First identify the current temporary files and their sizes.
SQL> select bytes, name from v$tempfile;

     BYTES NAME
---------- -----------------------------------------------------------------
20971520 /oradata/ORCL/datafile/o1_mf_temp_0fs3bq8w_.tmp
Add the new temporary file
SQL> alter tablespace temp add tempfile size 20m;
Then remove the original temporary file
SQL> alter database tempfile
'/oradata/ORCL/datafile/o1_mf_temp_0fs3bq8w_.tmp' drop;
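The byte count reported by v$tempfile maps directly onto the add/drop pair above. A small sketch of that conversion (the helper name is hypothetical, and sizes are assumed to be whole megabytes):

```shell
# Sketch: given the byte count and name from v$tempfile, emit the matching
# 'add tempfile' / 'drop tempfile' pair. 1 MB = 1048576 bytes.
tempfile_swap_sql() {
  bytes="$1"
  name="$2"
  tablespace="$3"
  mb=$((bytes / 1048576))
  echo "alter tablespace $tablespace add tempfile size ${mb}m;"
  echo "alter database tempfile '$name' drop;"
}

tempfile_swap_sql 20971520 /oradata/ORCL/datafile/o1_mf_temp_0fs3bq8w_.tmp temp
```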

Refresh the previous copy of the Oracle Datafiles

This phase will refresh the copies of all the Oracle datafiles in the DATA_AREA
ASM Disk Group.
Using RMAN, create an incremental level 1 backup of the database, and restore
the backup into the database copy.
RMAN> run {
backup incremental level 1 for recover of copy with tag
'ASM_Migration' database;
recover copy of database with tag 'ASM_Migration';
}

It is advisable that this phase is repeated on a regular basis leading up to the
scheduled outage. When the final transfer occurs, which requires an outage to the
database, the time for the outage will be reduced if recovery is minimized. Ideally,
this command should be completed just before the scheduled outage commences.
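Because this refresh is meant to be repeated, it helps to generate the RMAN run block from a script so it can be scheduled (for example from cron). The helper below is illustrative; only the tag comes from the paper.

```shell
# Illustrative: emit the RMAN run block for one refresh cycle. In practice it
# could be scheduled as something like:
#   refresh_copy_cmds ASM_Migration | rman target /
refresh_copy_cmds() {
  tag="$1"
  echo "run {"
  echo "backup incremental level 1 for recover of copy with tag '$tag' database;"
  echo "recover copy of database with tag '$tag';"
  echo "}"
}

refresh_copy_cmds ASM_Migration
```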

2. Hot Migration - The Switch

This is the start of the outage phase, which should be kept to a minimum. The
steps are:
Prepare the control_file from ASM Disk Groups
Switching the database data files to ASM Disk Groups
Move the Flashback Database Log Files

Prepare the control_file from ASM Disk Groups

We must first ensure that the directory structure exists in the new ASM Disk
Groups for the control files.
SQL> alter database backup controlfile to '+DATA_AREA';
SQL> alter database backup controlfile to '+RECOVERY_AREA';
We must now determine the value of the db_unique_name init.ora parameter.
Note that if DB_UNIQUE_NAME is not set, it defaults to the value of the
DB_NAME parameter.
SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      ORCL

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      ORCL
SQL>
Now we must update the control_file parameter in the spfile
SQL> alter system set control_files =
'+DATA_AREA/{DB_UNIQUE_NAME}/CONTROLFILE/mycontrol.ctl',
'+RECOVERY_AREA/{DB_UNIQUE_NAME}/CONTROLFILE/mycontrol.ctl'
scope=spfile;
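The fallback rule described above (db_unique_name defaults to db_name when unset) can be captured in a one-line shell default. The helper below is a sketch using the paper's example disk groups and control file name; the function itself is an assumption.

```shell
# Sketch of the naming rule: db_unique_name defaults to db_name when it is
# not set. Builds the control_files parameter value for the example layout.
control_files_value() {
  db_name="$1"
  db_unique_name="$2"
  name="${db_unique_name:-$db_name}"   # fall back to db_name when unset/empty
  echo "+DATA_AREA/$name/CONTROLFILE/mycontrol.ctl,+RECOVERY_AREA/$name/CONTROLFILE/mycontrol.ctl"
}

control_files_value ORCL ""
```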
Finally, we must capture the current control file names.
SQL> select name from v$controlfile;

NAME
-------------------------------------------------------------------
/oradata/ORCL/controlfile/o1_mf_0fs38sx3_.ctl
/flash_recovery/ORCL/controlfile/o1_mf_0fs38t2w_.ctl

Switching the database data files to ASM Disk Groups

The first step is to shut down the database cleanly.


RMAN> shutdown immediate;
Now we need to nomount the instance in preparation for restoring the control
files.
RMAN> startup nomount
Restore one of the original control files into the new control file locations.
RMAN> restore controlfile from
'/oradata/ORCL/controlfile/o1_mf_0fs38sx3_.ctl';

Starting restore at 25-JUN-04

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=270 devtype=DISK
channel ORA_DISK_1: copied controlfile copy
output filename=+DATA_AREA/orcl/controlfile/mycontrol.ctl
output filename=+RECOVERY_AREA/orcl/controlfile/mycontrol.ctl
Finished restore at 25-JUN-04
Now we need to mount the database
RMAN> alter database mount;
Switch the database to use the backup datafiles copies created previously.
RMAN> switch database to copy;



Since there is a degree of change between the time the last incremental backup
was taken and the time the database was shut down, we must recover the datafile
copies.
RMAN> recover database;
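The whole switch sequence can be kept in one RMAN command file so the outage steps are run in a fixed order. The generator below is a sketch; the function name and the idea of saving its output to a command file are assumptions, not the paper's method.

```shell
# Sketch: emit the RMAN commands for the switch phase in order, given the
# path of one of the original control files. The output could be saved and
# run as a single RMAN command file.
switch_to_asm_cmds() {
  ctl="$1"
  echo "shutdown immediate;"
  echo "startup nomount;"
  echo "restore controlfile from '$ctl';"
  echo "alter database mount;"
  echo "switch database to copy;"
  echo "recover database;"
}

switch_to_asm_cmds /oradata/ORCL/controlfile/o1_mf_0fs38sx3_.ctl
```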

Move the Flashback Database Log Files

We must now disable and re-enable Flashback Database so that the flashback log
files are recreated in the ASM Recovery Area disk group.
RMAN> sql "alter database flashback off";

sql statement: alter database flashback off

RMAN> sql "alter database flashback on";

sql statement: alter database flashback on


And then we can open the database
RMAN> alter database open;

3. Hot Migration - Post Migration

All that remains is to remove the block change tracking file and to validate that all
files have been moved to the ASM Disk Groups.
If Block Change Tracking was enabled solely for the purpose of the migration, it
should now be disabled.
SQL> alter database disable block change tracking;
We can now query the database and ensure that all files reside in either the
DATA_AREA or RECOVERY_AREA ASM Disk Group.
SQL> select name from v$controlfile
union
select name from v$datafile
union
select name from v$tempfile
union
select member from v$logfile
union
select filename from v$block_change_tracking
union
select name from v$flashback_database_logfile;

NAME
-------------------------------------------------------------------
+DATA_AREA/orcl/changetracking/ctf.256.1
+DATA_AREA/orcl/controlfile/mycontrol.ctl
+DATA_AREA/orcl/datafile/sysaux.259.1
+DATA_AREA/orcl/datafile/system.257.1
+DATA_AREA/orcl/datafile/undotbs1.258.1
+DATA_AREA/orcl/datafile/users.260.1
+DATA_AREA/orcl/onlinelog/group_1.263.1
+DATA_AREA/orcl/onlinelog/group_2.264.1
+DATA_AREA/orcl/onlinelog/group_3.265.1
+DATA_AREA/orcl/tempfile/temp.266.1
+RECOVERY_AREA/orcl/controlfile/mycontrol.ctl
+RECOVERY_AREA/orcl/flashback/log_1.276.1
+RECOVERY_AREA/orcl/onlinelog/group_1.265.1
+RECOVERY_AREA/orcl/onlinelog/group_2.266.1
+RECOVERY_AREA/orcl/onlinelog/group_3.267.1
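This validation lends itself to a scripted check: every name returned by the union query must begin with '+', the marker for an ASM disk group. The helper below is a sketch (the function name is an assumption); in practice the query output would be spooled and piped into it.

```shell
# Sketch: verify every file name (one per line on stdin) resides in an ASM
# disk group, i.e. begins with '+'. Returns non-zero on the first offender.
all_in_asm() {
  while IFS= read -r f; do
    case "$f" in
      +*) ;;                              # ASM file name: OK
      *)  echo "not in ASM: $f" >&2; return 1 ;;
    esac
  done
  return 0
}
```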



CONCLUSION
This white paper has presented two methods for moving an existing Oracle
Database 10g database from a traditional file system configuration into Oracle's
Automatic Storage Management. The choice between them depends upon the
amount of storage available to the customer and the amount of downtime the
customer can tolerate.



APPENDIX 1: MIGRATING ASM DISK GROUPS BACK TO ORIGINAL
DATABASE STORAGE
Following the completion of the Hot Migration to Automatic Storage
Management, the database and recovery area reside on the new storage. If it is
necessary to move the database back to the original storage, the following sequence
of events can be performed online. This might be the case if storage was borrowed
to accomplish the migration and must now be returned.
Login to the ASM Instance
$ export ORACLE_SID=+ASM
$ sqlplus "/as sysdba"
Identify the failure group and disk name of the disks to be removed.
SQL> select failgroup, name from v$asm_disk;

FAILGROUP NAME
----------------------------- ------------------------------
CONTROLLER2 RECOVERY_AREA_0003
CONTROLLER2 RECOVERY_AREA_0002
CONTROLLER1 RECOVERY_AREA_0001
CONTROLLER1 RECOVERY_AREA_0000
CONTROLLER2 DATA_AREA_0003
CONTROLLER2 DATA_AREA_0002
CONTROLLER1 DATA_AREA_0001
CONTROLLER1 DATA_AREA_0000
Now we can modify the disk group, remove the temporary storage and add the
original storage back into the ASM Disk Group. Before the disks can be
added into the appropriate disk group, the original storage must be formatted
for use by ASM.
SQL> alter diskgroup data_area
drop disk data_area_0000, data_area_0001, data_area_0002,
data_area_0003
add failgroup controller1 disk '/dev/raw/raw1','/dev/raw/raw2'
failgroup controller2 disk '/dev/raw/raw3','/dev/raw/raw4';

Diskgroup altered.

SQL> alter diskgroup recovery_area
drop disk recovery_area_0000, recovery_area_0001,
recovery_area_0002, recovery_area_0003
add failgroup controller1 disk '/dev/raw/raw5','/dev/raw/raw6'
failgroup controller2 disk '/dev/raw/raw7','/dev/raw/raw8';

Diskgroup altered.
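Since the same drop/add shape is issued once per disk group, the statement can be assembled from its parts. The helper below is a sketch with the paper's failure group names hard-coded; disk lists are passed as already-quoted, comma-separated strings, and the function itself is an assumption.

```shell
# Sketch: build the combined drop/add statement for one disk group.
# Arguments: disk group, disks to drop, controller1 disks, controller2 disks.
drop_add_disks_sql() {
  dg="$1"; drops="$2"; fg1_disks="$3"; fg2_disks="$4"
  echo "alter diskgroup $dg drop disk $drops add failgroup controller1 disk $fg1_disks failgroup controller2 disk $fg2_disks;"
}

drop_add_disks_sql data_area \
  "data_area_0000, data_area_0001, data_area_0002, data_area_0003" \
  "'/dev/raw/raw1','/dev/raw/raw2'" \
  "'/dev/raw/raw3','/dev/raw/raw4'"
```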
To check the status of the rebalance operation, which occurs in the background,
query the v$asm_operation view.
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL RUN     10     10    72     2114     1254           1
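Rather than re-running the query by hand, the wait can be scripted as a polling loop. This is a sketch, not the paper's procedure: the check is passed in as a command name so the loop itself has no Oracle dependency; in practice it would wrap a sqlplus -s query against v$asm_operation.

```shell
# Sketch: poll until a check command produces no output (i.e. no rows
# remain in v$asm_operation), sleeping between polls.
wait_for_rebalance() {
  check="$1"
  interval="${2:-30}"
  while [ -n "$("$check")" ]; do
    sleep "$interval"
  done
}
```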
Once the rebalance operation has been completed, indicated by no rows being
returned from the v$asm_operation view, query the v$asm_disk view again to
verify that the disk groups are using the original disks.
SQL> select path, failgroup, name from v$asm_disk;

PATH            FAILGROUP       NAME
--------------- --------------- --------------------
/dev/raw/raw9
/dev/raw/raw10
/dev/raw/raw11
/dev/raw/raw12
/dev/raw/raw13
/dev/raw/raw14
/dev/raw/raw15
/dev/raw/raw16
/dev/raw/raw8 CONTROLLER2 RECOVERY_AREA_0007
/dev/raw/raw7 CONTROLLER2 RECOVERY_AREA_0006
/dev/raw/raw6 CONTROLLER1 RECOVERY_AREA_0005
/dev/raw/raw5 CONTROLLER1 RECOVERY_AREA_0004
/dev/raw/raw4 CONTROLLER2 DATA_AREA_0007
/dev/raw/raw3 CONTROLLER2 DATA_AREA_0006
/dev/raw/raw2 CONTROLLER1 DATA_AREA_0005
/dev/raw/raw1 CONTROLLER1 DATA_AREA_0004
The temporary storage may now be removed from the system.



Oracle Database 10g Migration to Automatic Storage Management
August 2004
Author: Andrew Babb
Contributing Authors: Ray Dutcher, Susan Kornberg, Lawrence To, Doug Utzig, James Viscusi, Pradeep Bhat, Tammy Bednar, Paul Manning

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
www.oracle.com

Oracle is a registered trademark of Oracle Corporation. Various
product and service names referenced herein may be trademarks
of Oracle Corporation. All other product and service names
mentioned may be trademarks of their respective owners.

Copyright © 2001 Oracle Corporation


All rights reserved.