======================================================================

DBA Stop Steps


======================================================================
DBE STEPS BEFORE PATCHING
==========================
a) Check the databases, services and cluster status on the server.

$ sudo su - oracle

$ pmon
$ srvctl status service -d <database_name> (for all databases)
$ asm
$ crsctl check cluster -all (for all nodes in the same cluster)
$ crsctl check cluster (for local/current node)
$ crsctl stat res -t
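The "(for all databases)" note above can be scripted. A minimal sketch,
assuming a hypothetical DB_LIST variable holding the database names on this
cluster; it only echoes the commands as a dry run, so replace the echo with
the real srvctl call once the list is confirmed:

```shell
# Sketch only: DB_LIST is a placeholder for the databases on this cluster.
DB_LIST="FUNCIA OTHERDB"
for db in $DB_LIST; do
  # dry run: print the command instead of executing it
  echo "srvctl status service -d $db"
done
```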

b) Check the listeners running on the cluster

$ lmon
or
$ ps -eaf | grep tns

c) Blackout OEM using GUI Mode

NOTE: Make sure to set an END time for the blackout based on the
outage window.

d) Stop OEM Agent

$ sudo su - oracle
$ cd $AGENT_HOME/bin
$ ./emctl status agent
$ ./emctl stop agent
$ ./emctl status agent

e) Blackout DBMON

$ export PATH=$PATH:/var/opt/OV/bin/instrumentation
$ sudo /var/opt/OV/bin/instrumentation/dbspicol OFF
$ sudo mv /var/opt/OV/dbspi/db_mon.cfg /var/opt/OV/dbspi/db_mon.cfg.keep
$ sudo mv /var/opt/OV/dbspi/dbmon-event-ASM.cfg /var/opt/OV/dbspi/dbmon-event-ASM.cfg.keep
$ sudo mv /var/opt/OV/dbspi/dbmon-event-CRS.cfg /var/opt/OV/dbspi/dbmon-event-CRS.cfg.keep
f) Comment out ORACLE's crontab lines on the server

# Comment out these lines only if a database or archive-log backup is
# configured in cron.
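A hedged sketch of the comment-out step: the patterns (rman, backup, arch)
are assumptions and must be adjusted to the actual job names on this server.
It works on a saved copy so the original crontab can be restored after
patching:

```shell
# Keep a copy of the current crontab to restore later (ignore errors if empty).
crontab -l > /tmp/oracle.cron.keep 2>/dev/null || true
# Prefix backup-related, not-yet-commented lines with '#'.
sed -e '/rman\|backup\|arch/ s/^[^#]/#&/' /tmp/oracle.cron.keep > /tmp/oracle.cron.new
# Review /tmp/oracle.cron.new, then load it with:  crontab /tmp/oracle.cron.new
```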

g) Stop/relocate GoldenGate processes and singleton services.

a. Check the currently running databases

$ ps -ef | grep smon

b. Set the environment and check the service status on the server

$ srvctl status service -d <database_name>

c. If you find any singleton service(s), relocate them to the next
available node.

$ srvctl relocate service -d <database_name> -s <singleton_service> -c <current_node> -n <new_node>
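Singleton services can be spotted from the srvctl output. A sketch, assuming
the usual "Service <name> is running on instance(s) <list>" line format
(verify against your version's output); STATUS below is sample text, and in
practice you would pipe the real srvctl command through the same awk:

```shell
# Sample srvctl output; replace with:
#   srvctl status service -d <database_name> | awk '...'
STATUS='Service funci_apt_s is running on instance(s) FUNCIA1,FUNCIA2
Service funci_brnd is running on instance(s) FUNCIA1'
echo "$STATUS" |
awk '/is running on instance/ {
       n = split($NF, a, ",");          # count instances in the last field
       if (n == 1) print "singleton:", $2
     }'
```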

Note: If XAG is configured, relocate the GG service to the next
available node; there is no need to stop GG on the current node and
restart it on another node in the same cluster.
Example:
$ sudo su - oracle
$ oracle@safhli03:~ $ ps -ef |grep smon
oracle 21174 1 0 Feb18 ? 00:00:00 asm_smon_+ASM1
root 21268 1 4 Feb18 ? 04:04:38
/u01/app/12.1.0.2/grid/bin/osysmond.bin
oracle 26121 20181 0 01:35 pts/1 00:00:00 grep smon
oracle 29152 1 0 Feb18 ? 00:00:23 ora_smon_FUNCIA1
oracle@safhli03:~ $ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
oracle@safhli03:~ $ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.DECOM.dg
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.FLASH.dg
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.GOLDENGATE.dg
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.GRID1.dg
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.GRID2.dg
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.REDO1.dg
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.net1.network
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
ora.ons
ONLINE ONLINE safhli03 STABLE
ONLINE ONLINE safhli04 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.funcia.funci_apt_s.svc
1 ONLINE ONLINE safhli03 STABLE
2 ONLINE ONLINE safhli04 STABLE
ora.funcia.funci_as.svc
1 ONLINE ONLINE safhli03 STABLE
2 ONLINE ONLINE safhli04 STABLE
ora.funcia.funci_brnd.svc
1 ONLINE ONLINE safhli03 STABLE
2 ONLINE ONLINE safhli04 STABLE
1 ONLINE ONLINE safhli04 STABLE
ora.safhli03.vip
1 ONLINE ONLINE safhli03 STABLE
ora.safhli04.vip
1 ONLINE ONLINE safhli04 STABLE
ora.scan1.vip
1 ONLINE ONLINE safhli03 STABLE
ora.scan2.vip
1 ONLINE ONLINE safhli04 STABLE
procwatcher
1 ONLINE ONLINE safhli04 STABLE
2 ONLINE ONLINE safhli03 STABLE
xag.safhli03_gg-vip.vip
1 ONLINE ONLINE safhli04 STABLE
xag.safhli03_gg.goldengate
1 ONLINE ONLINE safhli04 STABLE
--------------------------------------------------------------------------------

Check status:

$ /u01/app/12.1.0.2/grid/bin/agctl status goldengate safhli03_gg

Relocate GG to another node (may take a few minutes):

$ /u01/app/12.1.0.2/grid/bin/agctl relocate goldengate safhli03_gg --node <Node2>
d. If a VIP is configured for GG high availability but no XAG service
exists: stop the GG processes, stop MGR, then relocate the GG VIP to
another available node in the same cluster. After relocating the VIP,
start MGR and the GG processes on the new node. To stop/start the GG
processes and manager, see step (h) below.
e. If neither a VIP nor an XAG service is configured for GG high
availability, go to step (h) to stop/start the GG processes and MGR.
h) Stop GG on current node and start GG on next available node

# Stop GG on the current node
GGSCI> stop er *
GGSCI> stop MGR
# Go to another node on the same cluster
# Start GG on next node.
GGSCI> start MGR
GGSCI> start er *
NOTE: If this is part of a VCS cluster, check with UE which node will be
the failover target; no database shutdown is done because the databases
will be failed over to the other node.
i) Verify GG processes are running on node after restarting

Example:
$ sudo su - gger
GGSCI> info all
Program     Status   Group      Lag at Chkpt  Time Since Chkpt
MANAGER RUNNING
REPLICAT RUNNING RAVSP3FC 00:00:44 00:00:00
REPLICAT RUNNING RAVSP4FC 00:00:49 00:00:00
REPLICAT RUNNING RFUNCPCD 00:00:00 00:00:04
REPLICAT RUNNING RFUNCPCF 00:00:00 00:00:04
REPLICAT RUNNING RFUNCPCG 00:00:00 00:00:05
REPLICAT RUNNING RFUNCPCH 00:00:00 00:00:06
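The verification above can be automated. A sketch that checks every GG
process reports RUNNING; INFO_ALL holds sample "info all" output, and in
practice you would capture the real output (for example by piping
"info all" into ggsci):

```shell
# Sample "info all" output; in production capture the real thing.
INFO_ALL='MANAGER RUNNING
REPLICAT RUNNING RFUNCPCD 00:00:00 00:00:04
REPLICAT RUNNING RFUNCPCF 00:00:00 00:00:04'
echo "$INFO_ALL" |
awk '$1 ~ /^(MANAGER|EXTRACT|REPLICAT)$/ && $2 != "RUNNING" {
       print "NOT RUNNING: " $1 " " $3; bad = 1
     }
     END { if (bad) exit 1; print "all GG processes RUNNING" }'
```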
j) Unmount the ACFS device

If required, unmount /gger. If you do not unmount it, the
"crsctl stop crs" in the next step will take care of it.

k) Disable nodeapps (applicable to versions below 11g)

$ srvctl stop nodeapps -n <node_name>

l) Stop and disable CRS

$ sudo crsctl stop crs


$ sudo crsctl disable crs

m) Stop oswatcher

$ sudo su - oracle
$ nohup /dbscripts/oracle/osw/stopOSWbb.sh
n) Stop/kill any RMAN backups that are running


o) Confirm with the UE team that they can apply the patch.


DBE STEPS AFTER PATCHING
=========================
a) Start oswatcher

$ sudo su - oracle
$ nohup /dbscripts/oracle/osw/startOSWbb.sh

b) Enable and Start CRS

$ sudo crsctl check cluster


$ sudo crsctl enable crs
$ sudo crsctl start crs
$ sudo crsctl check cluster
$ sudo crsctl stat res -t

c) Enable nodeapps (applicable to versions below 11g)

$ srvctl start nodeapps -n <node_name>

d) Verify the ACFS device is mounted

# Mount the /gger mount point if you unmounted it earlier and
# clusterware did not auto-mount it on startup.

e) Re-enable RMAN jobs if stopped

If any RMAN backup was stopped, start it manually after the patching
activity completes.

f) Relocate singleton database services back again. Check their
previous state and move them back to exactly the same state:

$ srvctl relocate service -d <database_name> -s <singleton_service> -c <current_node> -n <new_node>

g) Verify Listener is running


$ lmon
$ ps -eaf | grep tns

h) Verify the databases and services running in the cluster

$ srvctl status service -d <database_name> (for each database)

i) Start/relocate GoldenGate replication processes back again.

# Stop GG on node 2
GGSCI> stop er *
GGSCI> stop MGR
# Go back to the first node (where patching completed) on the same cluster
# Start GG
GGSCI> start MGR
GGSCI> start er *

j) Uncomment ORACLE's crontab lines

# Uncomment the crontab lines you commented out before patching on
# this node.

k) Remove DBMON blackout

$ export PATH=$PATH:/var/opt/OV/bin/instrumentation
$ sudo /var/opt/OV/bin/instrumentation/dbspicol ON

$ sudo mv /var/opt/OV/dbspi/db_mon.cfg.keep /var/opt/OV/dbspi/db_mon.cfg
$ sudo mv /var/opt/OV/dbspi/dbmon-event-ASM.cfg.keep /var/opt/OV/dbspi/dbmon-event-ASM.cfg
$ sudo mv /var/opt/OV/dbspi/dbmon-event-CRS.cfg.keep /var/opt/OV/dbspi/dbmon-event-CRS.cfg
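The three mv commands can be generalized into a loop that restores every
*.keep file renamed during the blackout. A sketch: DBSPI_DIR defaults to the
production directory but can be pointed at a scratch directory for a dry
run, and sudo must be added back for the real path:

```shell
DBSPI_DIR=${DBSPI_DIR:-/var/opt/OV/dbspi}
for f in "$DBSPI_DIR"/*.keep; do
  [ -e "$f" ] || continue            # no .keep files left (glob did not match)
  mv "$f" "${f%.keep}"               # drop the .keep suffix (use sudo in production)
done
```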

l) Remove OEM blackout using GUI


m) Start OEM Agent (if it does not start automatically after the
server reboots)

$ sudo su - oracle
$ cd $AGENT_HOME/bin
$ ./emctl start agent
$ ./emctl status agent
n) Check cluster and resource status

$ sudo su - oracle
$ asm
$ crsctl check cluster -all (for all nodes in the cluster)
$ crsctl check cluster (for local/current node)
$ crsctl stat res -t

o) Perform database checkouts

Check that all DB/GG resources, RMAN jobs, and mount points are up and
running on the cluster.
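The checkout list can be kept as a small script so nothing is skipped. A
dry-run sketch; the command list is an assumption based on the resources
named in this runbook, and each echo should be replaced with the real
command once the environment names are confirmed:

```shell
# Dry-run checkout list: prints each check instead of executing it.
for check in \
  "crsctl stat res -t" \
  "srvctl status database -d <database_name>" \
  "df -h /gger" \
  "ps -ef | grep pmon"; do
  echo "CHECKOUT: $check"
done
```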

======================================================================
End of DBA Stop Steps
======================================================================