
Oracle 2-node RAC upgrade from 10.2.0.1 to 10.2.0.4
****************************************************

NOTE 1: The patch set p6810189_10204_Linux-x86.zip is the same for both the Clusterware and the RAC database upgrade.

(A) Upgrade Clusterware software (CRS_HOME):
===============================================

>> 1. Download and extract the patch set installation software:
------------------------------------------------------------------
The Oracle Clusterware software must be at the same or newer level as the Oracle software in the Oracle RAC home; therefore, you should always upgrade Oracle Clusterware before you upgrade Oracle RAC. This patch set does not allow you to upgrade Oracle RAC before Oracle Clusterware. I strongly advise you to read the patch set release notes or Doc ID 316900.1 before upgrading.

Download the 10.2.0.4 patch set (patch number p6810189_10204_Linux-x86.zip; P8202632_10205_LINUX.zip for 10.2.0.5) and extract the installation archive to a directory that is not the Oracle home directory or located under the Oracle home directory. Enter the following command to unzip and extract the installation files:

$ unzip p6810189_10204_Linux-x86

>> 2. Manage your Time Zone data before the upgrade:
------------------------------------------------------
The 10.2.0.4 patch set includes an update of the Oracle time zone definitions to version 4. This version of the time zone definitions includes the 2007 changes to daylight saving time in the USA and other updates. From 9i onwards, Oracle has two data types whose stored data can be affected by an update of the RDBMS DST (Daylight Saving Time) definitions: TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ) and TIMESTAMP WITH TIME ZONE (TSTZ). If you have TZ data stored in your database, you need to go through the following steps to ensure the integrity of your data during the database upgrade.

Check which time zone file version you are currently using:

SQL> select version from v$timezone_file;

   VERSION
----------
         2

If this returns 4, you may simply proceed with the upgrade even if you have TZ data. If it returns a value higher than 4, see MetaLink Note 553812.1. If it returns a value lower than 4, perform the following steps (according to MetaLink Note 553812.1): download utltzpv4.sql and run it.

SQL> @utltzpv4.sql
DROP TABLE sys.sys_tzuv2_temptab CASCADE CONSTRAINTS
*
ERROR at line 1:
ORA-00942: table or view does not exist

Table created.

DROP TABLE sys.sys_tzuv2_affected_regions CASCADE CONSTRAINTS
*
ERROR at line 1:
ORA-00942: table or view does not exist

Table created.

Your current timezone version is 2!
Do a select * from sys.sys_tzuv2_temptab; to see if any TIMEZONE data is affected
by version 4 transition rules.
Any table with YES in the nested_tab column (last column) needs a manual check as
these are nested tables.

PL/SQL procedure successfully completed.

Commit complete.

/* Once the script finishes successfully, execute the following query */

column table_owner format a4
column column_name format a18

SQL> select * from sys_tzuv2_temptab;

TABL TABLE_NAME                     COLUMN_NAME          ROWCOUNT NES
---- ------------------------------ ------------------ ---------- ---
SYS  SCHEDULER$_JOB                 LAST_ENABLED_TIME           3
SYS  SCHEDULER$_JOB                 LAST_END_DATE               1
SYS  SCHEDULER$_JOB                 LAST_START_DATE             1
SYS  SCHEDULER$_JOB                 NEXT_RUN_DATE               1
SYS  SCHEDULER$_JOB                 START_DATE                  1
SYS  SCHEDULER$_JOB_RUN_DETAILS     REQ_START_DATE             11
SYS  SCHEDULER$_JOB_RUN_DETAILS     START_DATE                 11

7 rows selected.

If the query returns no rows, there is nothing that needs to be done; just proceed with the upgrade. If it returns columns that contain TZ data which may be affected by the upgrade, see MetaLink Note 553812.1.

As I understand it, MetaLink Note 553812.1 states that if you see only SYS-owned SCHEDULER objects you can ignore them and proceed. If user data or user-created jobs show up here, you need to take a backup of that data before the upgrade and restore it afterwards, and remove any user-created jobs and re-create them after the upgrade.

As per the output above, we only have SYS-owned objects, so we can proceed from here without any further action.
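Had the check returned user-created scheduler jobs instead, a quick way to take stock of them before the upgrade is to query DBA_SCHEDULER_JOBS for non-SYS owners (a hedged sketch, not part of the original note; adjust the owner filter to your environment):

SQL> select owner, job_name, start_date, repeat_interval
     from dba_scheduler_jobs
     where owner not in ('SYS', 'SYSTEM');

Keep the output (and the job definitions) handy so those jobs can be dropped before the upgrade and re-created afterwards, as the note suggests.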

>> 3. Clusterware Patchset Installation:
--------------------------------------------
From Oracle 9i onwards we have two ways of applying a patch set:

Rolling Upgrade (no downtime)
Non-Rolling Upgrade (complete downtime)

> Rolling Upgrade (no downtime) for clusterware: by default, Oracle supports rolling upgrade for Oracle Clusterware.

Shut down the Oracle instance on node 1
Apply the patch to the Oracle instance on node 1
Start the Oracle instance on node 1
Shut down the Oracle instance on node 2
Apply the patch to the Oracle instance on node 2
Start the Oracle instance on node 2

--> The following steps are performed on one node only.

In my case I am going to start with the first (master) node, node1.

Shut down all processes in the Oracle home on the node that might be accessing a database, for example Oracle Enterprise Manager Database Control or iSQL*Plus:

$ emctl stop dbconsole
$ isqlplusctl stop

racdb1@node1{~}$ crsctl query crs softwareversion
CRS software version on node [node1] is [10.2.0.1.0]
node1-> crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.1.0]

racdb1@node1{~}$ export PATH=$PATH:$ORACLE_HOME/OPatch
racdb1@node1{~}$ opatch lsinventory
Invoking OPatch 10.2.0.1.0
Oracle interim Patch Installer version 10.2.0.1.0
Copyright (c) 2005, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/10.2.0/db_1
Central Inventory : /u01/app/oracle/oraInventory
   from           : /u01/app/oracle/product/10.2.0/db_1/oraInst.loc
OPatch version    : 10.2.0.1.0
OUI version       : 10.2.0.1.0
OUI location      : /u01/app/oracle/product/10.2.0/db_1/oui
Log file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/opatch-30_Mar_08_16-45-06-GMT+05_Fri.log
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 10g                                      10.2.0.1.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
Rac system comprising of multiple nodes
  Local node  = node1
  Remote node = node2
--------------------------------------------------------------------------------
OPatch succeeded.
racdb1@node1{~}$

Now, stop the service, database instance, ASM and nodeapps on the first node (node1):

racdb1@node1{~}$ srvctl stop instance -d racdb -i racdb1
racdb1@node1{~}$ srvctl stop asm -n node1
racdb1@node1{~}$ srvctl stop nodeapps -n node1

NOTE: If the database resource is running on instance 1, relocate the database to the second instance:

racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$ crs_relocate ora.racdb.db
Attempting to stop `ora.racdb.db` on member `node1`
Stop of `ora.racdb.db` on member `node1` succeeded.
Attempting to start `ora.racdb.db` on member `node2`
Start of `ora.racdb.db` on member `node2` succeeded.
racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$

racdb1@node1{~}$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    OFFLINE   OFFLINE
ora....E1.lsnr application    OFFLINE   OFFLINE
ora.node1.gsd  application    OFFLINE   OFFLINE
ora.node1.ons  application    OFFLINE   OFFLINE
ora.node1.vip  application    OFFLINE   OFFLINE
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
ora.racdb.db   application    ONLINE    ONLINE    node2
ora....acdb.cs application    ONLINE    ONLINE    node1
ora....db1.srv application    ONLINE    ONLINE    node2
ora....acdb.cs application    ONLINE    ONLINE    node2
ora....db2.srv application    ONLINE    ONLINE    node2
ora....b1.inst application    OFFLINE   OFFLINE
ora....b2.inst application    ONLINE    ONLINE    node2
racdb1@node1{~}$

NOTE: Do not stop the CRS daemons; they will be brought down later.

racdb1@node1{~}$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
racdb1@node1{~}$

>> 4. Files/Directories that need to be backed up before the upgrade:
------------------------------------------------------------------
As per Note 754369.1, Oracle recommends that you consider using operating system (OS) level backup tools and strategies whenever possible to back up the whole node, for faster restore and recovery of the Oracle installation to a previous consistent state. When an OS-level backup of the whole node is not an option due to time or space constraints (or otherwise), you can back up the following list of files:

init.* scripts (init.cssd, init.crsd, etc.): these scripts are used to start the daemons (the CRS stack); on Linux they are located under the /etc/init.d directory.

inittab file: this is where the init.* scripts are registered so that Oracle Clusterware is started by init.

Control files: these files are used to control some aspects of Oracle Clusterware, such as enabling/disabling it. They are also known as the SCLS_SRC files and are located under /etc/oracle or /var/opt/oracle.

Example for Linux:

/etc/init.d/init.cssd
/etc/init.d/init.crs
/etc/init.d/init.crsd
/etc/init.d/init.evmd
/etc/oracle
/etc/inittab

Use cp/tar to back up the above files.
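As a rough sketch of that OS-level copy (run as root; the destination directory /home/oracle/crs_script_backup is only an assumed example, and the file list should match whatever actually exists on your platform):

# mkdir -p /home/oracle/crs_script_backup
# cp -p /etc/init.d/init.cssd /etc/init.d/init.crs /etc/init.d/init.crsd /etc/init.d/init.evmd /home/oracle/crs_script_backup/
# cp -p /etc/inittab /home/oracle/crs_script_backup/
# tar czf /home/oracle/crs_script_backup/etc_oracle.tar.gz /etc/oracle

This is nothing Oracle-specific; it simply keeps the original scripts around so they can be put back if the patched copies ever need to be rolled back.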

Oracle recommends that you create a backup of the Oracle inventory, CRS_HOME, ORACLE_HOME and the Oracle 10g database before you install the patch set. Please ensure that you create a backup of these components on all nodes before applying the patch.

Copy the Oracle homes on all nodes:

# cd /u01/app/oracle/product/10.2.0/
crs_1  asm  db_1  oraInventory
# cp -Rp /home/oracle/oraInventory /home/oracle/old_oraInventory
# tar czf /home/oracle/OraCRSHomebkp.tar.gz crs_1
# tar czf /home/oracle/OraDBHomebkp.tar.gz db_1

Now, back up the clusterware components. The clusterware components are the OCR and the voting disk(s). Use the dd OS command to back up the voting disk/OCR files if they are placed on raw storage devices.

Take a backup of the OCR:

ocrcheck
ocrconfig -showbackup
racdb1@node1{~}$ ocrconfig -export /home/oracle/OcrExpBkp.dump

Take a backup of the voting disk:

$ crsctl query css votedisk
racdb1@node1{/home/oracle}$ crsctl query css votedisk
 0.     0    /dev/raw/raw6
located 1 votedisk(s).

Ex: $ dd if=votedisk_name of=backup_votedisk_name bs=4k

racdb1@node1{/home/oracle}$ dd if=/dev/raw/raw6 of=/home/oracle/VoteDiskBkp bs=4k
52220+0 records in
52220+0 records out
213893120 bytes (214 MB) copied, 78.0538 seconds, 2.7 MB/s
racdb1@node1{/home/oracle}$

Now, finally, back up the database using RMAN:

racdb2@node2{~}$ rman target /
RMAN> backup database format '/home/oracle/RMN-Full-Bkp_%U' plus archivelog;
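Should these clusterware backups ever be needed, the restore is essentially the reverse. The sketch below is not part of the original procedure; both commands are run as root with the clusterware stopped on all nodes, and the file names are the backups taken above:

# ocrconfig -import /home/oracle/OcrExpBkp.dump
# dd if=/home/oracle/VoteDiskBkp of=/dev/raw/raw6 bs=4k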

>> 5. Unzip the patch (p6810189_10204_Linux-x86.zip) and invoke runInstaller:
-------------------------------------------------------------------------------
This runInstaller will install Oracle Database 10g Release 2 Patch Set 3 (10.2.0.4). The patch set documentation says the patch can be applied as a rolling upgrade, but when you invoke runInstaller for the cluster upgrade you do not get the option to select one of the two nodes (in a 2-node cluster): both nodes are automatically selected and grayed out, and we cannot change that. On the node-selection screen both nodes are grayed out and there is no option to select a single node (my initial assumption was that a rolling upgrade would be applied node by node, with a node-selection option). Click Next to move ahead.

Once the installer completes, it displays the message below and prompts you to run the following two scripts as root (on each node, one at a time).

--> Run as root:

##################
Note: The following instructions are displayed on the Oracle Universal Installer screen:

To complete the installation of this patch set, perform the following tasks on each node:

On node1:

a. Log in as the root user and enter the following command to shut down the Oracle Clusterware:
# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs

b. Run the root102.sh script to automatically start the Oracle Clusterware on the patched node:
# /u01/app/oracle/product/10.2.0/crs_1/install/root102.sh
####################

/u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
/u01/app/oracle/product/10.2.0/crs_1/install/root102.sh

racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$ ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$ ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$

racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/install}$ ./root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0/crs_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10204 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/install}$

NOTE: The above script automatically starts the clusterware on node1 (and the nodeapps, ASM and instance, if they are enabled for startup). Notice that the database resource has also been relocated back to node1 and its instance is up and running, so the same steps can now be performed for the second instance. See below:

racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    OFFLINE
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
ora.racdb.db   application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    UNKNOWN   node2
ora....db1.srv application    ONLINE    ONLINE    node2
ora....acdb.cs application    ONLINE    ONLINE    node2
ora....db2.srv application    ONLINE    ONLINE    node2
ora....b1.inst application    ONLINE    ONLINE    node1
ora....b2.inst application    ONLINE    UNKNOWN   node2
racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$

NOTE: DO NOT FORGET to relocate back to node1 the services that were moved to node2 when instance 1 (racdb1) went down in the earlier steps. If you do not relocate the services before stopping the resources on node2 to run root102.sh, users will not be able to connect to those services.

Perform the steps below on node1:

srvctl relocate service -d racdb -s finance_racdb -i racdb2 -t racdb1
srvctl relocate service -d racdb -s hr_racdb -i racdb2 -t racdb1

node1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
ora.racdb.db   application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node2
ora....db1.srv application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node2
ora....db2.srv application    ONLINE    ONLINE    node1
ora....b1.inst application    ONLINE    ONLINE    node1
ora....b2.inst application    ONLINE    ONLINE    node2
node1->
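If you prefer srvctl to crs_stat for this particular check, the service placement can also be verified directly (a hedged example using this setup's database and service names):

racdb1@node1{~}$ srvctl status service -d racdb
racdb1@node1{~}$ srvctl status service -d racdb -s finance_racdb

Each command reports the instance(s) on which the service is currently running.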

NOTE: The clusterware software version on node1 has now changed to 10.2.0.4, but the active version will change only after the second node has been upgraded. See below:

racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$ ./crsctl query crs softwareversion
CRS software version on node [node1] is [10.2.0.4.0]
racdb1@node1{/u01/app/oracle/product/10.2.0/crs_1/bin}$ ./crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.1.0]

Now is the time to repeat the same scripts on the second node (node2). Before that, shut down the resources, i.e. stop the service, database instance, ASM and nodeapps on node2:

racdb2@node2{/u01/app/oracle/product/10.2.0/crs_1/bin}$ srvctl stop instance -d racdb -i racdb2
racdb2@node2{/u01/app/oracle/product/10.2.0/crs_1/bin}$ srvctl stop asm -n node2
racdb2@node2{/u01/app/oracle/product/10.2.0/crs_1/bin}$ srvctl stop nodeapps -n node2

node2-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    OFFLINE   OFFLINE
ora....E2.lsnr application    OFFLINE   OFFLINE
ora.node2.gsd  application    OFFLINE   OFFLINE
ora.node2.ons  application    OFFLINE   OFFLINE
ora.node2.vip  application    OFFLINE   OFFLINE
ora.racdb.db   application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node2
ora....db1.srv application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node2
ora....db2.srv application    ONLINE    ONLINE    node1
ora....b1.inst application    ONLINE    ONLINE    node1
ora....b2.inst application    OFFLINE   OFFLINE
node2->

Now:

a. Log in as the root user and enter the following command to shut down the Oracle Clusterware:
# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs

node2-> crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
node2->

b. Run the root102.sh script to automatically start the Oracle Clusterware on the patched node:
# /u01/app/oracle/product/10.2.0/crs_1/install/root102.sh

node2-> ./root102.sh
Creating pre-patch directory for saving pre-patch clusterware files

Completed patching clusterware files to /u01/app/oracle/product/10.2.0/crs_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
.
10204 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
node2->

Check below to confirm the same:

node2-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
ora.racdb.db   application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node1
ora....db1.srv application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node1
ora....db2.srv application    ONLINE    ONLINE    node1
ora....b1.inst application    ONLINE    ONLINE    node1
ora....b2.inst application    ONLINE    ONLINE    node2
node2->

Now, relocate the hr_racdb service back to instance 2, as instance 2 is the preferred instance for this service:

node2-> srvctl relocate service -d racdb -s hr_racdb -i racdb1 -t racdb2

Confirm the same:

node2-> crs_stat -t

Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
ora.racdb.db   application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node1
ora....db1.srv application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node1
ora....db2.srv application    ONLINE    ONLINE    node2
ora....b1.inst application    ONLINE    ONLINE    node1
ora....b2.inst application    ONLINE    ONLINE    node2
node2->

Run the two commands below to confirm that CRS has been updated:

node1-> crsctl query crs softwareversion
CRS software version on node [node1] is [10.2.0.4.0]
node1-> crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.4.0]
node1->

You are done here!! Enjoy!!!

(B) Upgrade the RDBMS software, which also contains the ASM home (ORACLE_HOME):
===========================================================================
Perform the steps below on node1:

$ emctl stop dbconsole
$ isqlplusctl stop
$ srvctl stop service -d racdb
$ srvctl stop database -d racdb
$ srvctl stop asm -n node1
$ srvctl stop asm -n node2
$ srvctl stop nodeapps -n node1
$ srvctl stop nodeapps -n node2
$ srvctl stop listener -n node1
$ srvctl stop listener -n node2
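Next, launch the patch set installer against the database home. The sketch below is only illustrative: <patch_staging> stands for wherever you unzipped p6810189_10204_Linux-x86.zip in step (A)1, and Disk1 is the directory the patch set normally extracts into; in OUI, select /u01/app/oracle/product/10.2.0/db_1 as the Oracle home to be patched.

$ cd <patch_staging>/Disk1
$ ./runInstaller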

At the end of the installation, the installer will prompt you to perform the following steps as root:

##############
Run the script below on both nodes:

node1-> /u01/app/oracle/product/10.2.0/db_1/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
node1->
##################

Now, confirm that the patch set has been applied successfully to the ORACLE_HOME:

racdb1@node1{~}$ export PATH=$PATH:$ORACLE_HOME/OPatch
racdb1@node1{~}$ opatch lsinventory -all
Invoking OPatch 10.2.0.4.2
Oracle Interim Patch Installer version 10.2.0.4.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/10.2.0/db_1
Central Inventory : /u01/app/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 10.2.0.4.2
OUI version       : 10.2.0.4.0
OUI location      : /u01/app/oracle/product/10.2.0/db_1/oui
Log file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/opatch2013-03-11_13-34-53PM.log
Lsinventory Output file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/lsinv/lsinventory2013-03-11_13-34-53PM.txt
--------------------------------------------------------------------------------
List of Oracle Homes:
  Name          Location
  CRS_HOME      /u01/app/oracle/product/10.2.0/crs_1
  ORACLE_HOME   /u01/app/oracle/product/10.2.0/db_1
Installed Top-level Products (2):
Oracle Database 10g                                      10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                10.2.0.4.0
There are 2 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
Rac system comprising of multiple nodes
  Local node  = node1
  Remote node = node2

--------------------------------------------------------------------------------
OPatch succeeded.
racdb1@node1{~}$

At this time all the resources will be offline:

node1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    OFFLINE   OFFLINE
ora....E1.lsnr application    OFFLINE   OFFLINE
ora.node1.gsd  application    OFFLINE   OFFLINE
ora.node1.ons  application    OFFLINE   OFFLINE
ora.node1.vip  application    OFFLINE   OFFLINE
ora....SM2.asm application    OFFLINE   OFFLINE
ora....E2.lsnr application    OFFLINE   OFFLINE
ora.node2.gsd  application    OFFLINE   OFFLINE
ora.node2.ons  application    OFFLINE   OFFLINE
ora.node2.vip  application    OFFLINE   OFFLINE
ora.racdb.db   application    OFFLINE   OFFLINE
ora....acdb.cs application    OFFLINE   OFFLINE
ora....db1.srv application    OFFLINE   OFFLINE
ora....acdb.cs application    OFFLINE   OFFLINE
ora....db2.srv application    OFFLINE   OFFLINE
ora....b1.inst application    OFFLINE   OFFLINE
ora....b2.inst application    OFFLINE   OFFLINE
node1->

(C) Manually Upgrading a RAC Database:
================================================
After you install the patch set, you must perform the following steps on every database associated with the upgraded Oracle home.

Perform the steps below on node1:

racdb1@node1{~}$ srvctl start listener -n node1
racdb1@node1{~}$ srvctl start listener -n node2
racdb1@node1{~}$ srvctl start nodeapps -n node1
racdb1@node1{~}$ srvctl start nodeapps -n node2
racdb1@node1{~}$ srvctl start asm -n node1
racdb1@node1{~}$ srvctl start asm -n node2

DO NOT START the database instance and database services yet.

node1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
ora.racdb.db   application    OFFLINE   OFFLINE
ora....acdb.cs application    OFFLINE   OFFLINE
ora....db1.srv application    OFFLINE   OFFLINE
ora....acdb.cs application    OFFLINE   OFFLINE
ora....db2.srv application    OFFLINE   OFFLINE
ora....b1.inst application    OFFLINE   OFFLINE
ora....b2.inst application    OFFLINE   OFFLINE
node1->

node1-> ps -ef | grep pmon
oracle    5611     1  0 13:50 ?        00:00:00 asm_pmon_+ASM1
oracle    6492 16895  0 13:51 pts/1    00:00:00 grep pmon

NOTE: In order to update the database catalog, we set the CLUSTER_DATABASE parameter to FALSE so that only one instance is started. You only have to do this from the instance you will work in, but the whole catalog upgrade procedure must be done for EVERY database in the RAC cluster.

node1-> sqlplus / as sysdba
SQL> startup nomount
SQL> alter system set cluster_database=false scope=spfile;

System altered.

SQL> shutdown immediate

NOTE: Before running the database catalog upgrade script, we start the instance in upgrade mode and, as a good practice, run the upgrade information script to check that everything is fine for the catalog upgrade.

node1-> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Mar 11 13:57:17 2013
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
Connected to an idle instance.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area  218103808 bytes
Fixed Size                  1266680 bytes
Variable Size             104860680 bytes
Database Buffers          109051904 bytes
Redo Buffers                2924544 bytes
Database mounted.
Database opened.

SQL> @?/rdbms/admin/utlu102i.sql
Oracle Database 10.2 Upgrade Information Utility 03-11-2013 13:58:39
.
**********************************************************************
Database:
**********************************************************************
--> name:       RACDB
--> version:    10.2.0.1.0
--> compatible: 10.2.0.1.0
--> blocksize:  8192
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 486 MB
.... AUTOEXTEND additional space required: 6 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 402 MB
.... AUTOEXTEND additional space required: 377 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 253 MB
.... AUTOEXTEND additional space required: 13 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 58 MB
.... AUTOEXTEND additional space required: 38 MB
.
**********************************************************************
Update Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No obsolete parameters found. No changes are required
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views            [upgrade]  VALID
--> Oracle Packages and Types       [upgrade]  VALID
--> JServer JAVA Virtual Machine    [upgrade]  VALID
--> Oracle XDK for Java             [upgrade]  VALID
--> Oracle Java Packages            [upgrade]  VALID
--> Oracle Text                     [upgrade]  VALID
--> Oracle XML Database             [upgrade]  VALID
--> Real Application Clusters       [upgrade]  VALID
--> Oracle Workspace Manager        [upgrade]  VALID
--> Oracle Data Mining              [upgrade]  VALID
--> OLAP Analytic Workspace         [upgrade]  VALID
--> OLAP Catalog                    [upgrade]  VALID
--> Oracle OLAP API                 [upgrade]  VALID
--> Oracle interMedia               [upgrade]  VALID
--> Spatial                         [upgrade]  VALID
--> Expression Filter               [upgrade]  VALID
--> EM Repository                   [upgrade]  VALID
--> Rule Manager                    [upgrade]  VALID
.
PL/SQL procedure successfully completed.

SQL>

NOTE: Next we execute the catalog upgrade script. Remember, you have to execute this script for every database in your RAC cluster, but only in one instance per database.

Now, proceed further as follows:

SQL> spool patch.log
SQL> @?/rdbms/admin/catupgrd.sql
SQL> spool off

As stated in the final message of the catupgrd.sql script, check that all the components are present, have a VALID status and the proper version, and, just to be sure, check the patch.log file for ORA- error messages.

At this point we have almost finished applying the 10.2.0.4 patch set; we now recompile and look for invalid objects by running the script below:

SQL> shutdown immediate
SQL> exit
node1-> sqlplus '/ as sysdba'
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql

Now, confirm the component versions with the following query:

SQL> column COMP_NAME format a50
SQL> column VERSION format a15
SQL> SELECT COMP_NAME, VERSION, STATUS FROM SYS.DBA_REGISTRY;

COMP_NAME                                          VERSION         STATUS
-------------------------------------------------- --------------- -----------
Oracle Enterprise Manager                          10.2.0.4.0      VALID
Spatial                                            10.2.0.4.0      VALID
Oracle interMedia                                  10.2.0.4.0      VALID
OLAP Catalog                                       10.2.0.4.0      VALID
Oracle XML Database                                10.2.0.4.0      VALID
Oracle Text                                        10.2.0.4.0      VALID
Oracle Expression Filter                           10.2.0.4.0      VALID
Oracle Rule Manager                                10.2.0.4.0      VALID
Oracle Workspace Manager                           10.2.0.4.3      VALID
Oracle Data Mining                                 10.2.0.4.0      VALID
Oracle Database Catalog Views                      10.2.0.4.0      VALID
Oracle Database Packages and Types                 10.2.0.4.0      VALID
JServer JAVA Virtual Machine                       10.2.0.4.0      VALID
Oracle XDK                                         10.2.0.4.0      VALID
Oracle Database Java Packages                      10.2.0.4.0      VALID
OLAP Analytic Workspace                            10.2.0.4.0      VALID
Oracle OLAP API                                    10.2.0.4.0      VALID
Oracle Real Application Clusters                   10.2.0.4.0      VALID

18 rows selected.

SQL>
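As an extra sanity check after utlrp.sql (not part of the original write-up, just a common follow-up), count any remaining invalid objects; ideally this returns zero, or only objects that were already invalid before the upgrade:

SQL> select count(*) from dba_objects where status = 'INVALID';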

If everything looks fine, we just have to restore the CLUSTER_DATABASE parameter value to TRUE and start the RAC database and services:

SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=spfile;
SQL> shutdown immediate;
SQL> exit

racdb1@node1{~}$ srvctl start database -d racdb
(or start the instances one at a time with srvctl start instance -d racdb -i racdb1 and -i racdb2)

node1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2
ora.racdb.db   application    ONLINE    ONLINE    node2
ora....acdb.cs application    ONLINE    ONLINE    node1
ora....db1.srv application    ONLINE    ONLINE    node1
ora....acdb.cs application    ONLINE    ONLINE    node2
ora....db2.srv application    ONLINE    ONLINE    node2
ora....b1.inst application    ONLINE    ONLINE    node1
ora....b2.inst application    ONLINE    ONLINE    node2
node1->

Confirm that you are all done:
-----------------------------------
node1-> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Mar 11 15:15:35 2013
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Real Application Clusters, Oracle Label Security, OLAP,
Data Mining Scoring Engine and Real Application Testing options

SQL>
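As one final hedged check using this setup's names, srvctl can confirm that both instances and the services are back up:

racdb1@node1{~}$ srvctl status database -d racdb
racdb1@node1{~}$ srvctl status service -d racdb

The first command lists each instance and the node it is running on; the second shows which instances the finance_racdb and hr_racdb services are running on.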

node1-> opatch lsinventory -all
Invoking OPatch 10.2.0.4.2
Oracle Interim Patch Installer version 10.2.0.4.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/10.2.0/db_1
Central Inventory : /u01/app/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 10.2.0.4.2
OUI version       : 10.2.0.4.0
OUI location      : /u01/app/oracle/product/10.2.0/db_1/oui
Log file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/opatch2013-03-11_15-16-38PM.log
Lsinventory Output file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/lsinv/lsinventory2013-03-11_15-16-38PM.txt
--------------------------------------------------------------------------------
List of Oracle Homes:
  Name          Location
  CRS_HOME      /u01/app/oracle/product/10.2.0/crs_1
  ORACLE_HOME   /u01/app/oracle/product/10.2.0/db_1
Installed Top-level Products (2):
Oracle Database 10g                                      10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                10.2.0.4.0
There are 2 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
Rac system comprising of multiple nodes
  Local node  = node1
  Remote node = node2
--------------------------------------------------------------------------------
OPatch succeeded.
node1->

So, finally you are all set to go from here with your 2-node 10.2.0.4 clustered database. Enjoy!!!!!!

NOTE: If you want to apply an interim patch (one-off patch or PSU), please follow the link below:
http://cronostech.blogspot.in/2011/12/patching-oracle-rac-database.html#commentform

***********************************
