
ACTIVE/PASSIVE FAILOVER CLUSTER 12c
CONFIGURATION FOR ORACLE DATABASES


(Marko Šutić, March 2014, v1.0)
Table of Contents
I. Disclaimer
II. Introduction
III. Operating System Configuration
IV. Shared Storage Configuration
V. Install Grid Infrastructure 12cR1
VI. Install 12cR1 Oracle EE Software
VII. Create 12cR1 EE Database
VIII. Configure Active/Passive clustering for single instance database using dedicated VIP for listeners
IX. Add 11gR2 EE Oracle software and database to cluster
X. Configure Active/Passive clustering for new database
XI. Failover Tests
XII. Configure Active/Passive clustering for single instance database using SCAN listeners

I. DISCLAIMER
Before I start I want to note that this configuration was created and tested only in a test environment. Please
do not blindly configure your production environment using this document. I don't know whether it will work or
what problems you could experience. This configuration is not supported by Oracle. I don't accept
responsibility for any loss or damage arising from the use of this document.

II. INTRODUCTION
Oracle offers several solutions to ensure high availability of the database, like RAC, RAC One Node or
Data Guard. These solutions are very popular, stable and robust, but not free. On the other hand, Oracle
Clusterware can be used to protect any resource free of charge if you have valid Oracle Unbreakable
Linux support and at least one of the servers in the cluster licensed for Oracle Database (SE or EE).
Oracle Clusterware 12cR1, part of 12cR1 Grid Infrastructure, is a powerful and stable cluster manager
usually used in combination with RAC.
My goal is to use this powerful software to create an Active/Passive failover cluster to protect one or more
single instance databases. To accomplish this I will use custom scripts to manage resources.
If you decide to run a similar configuration in a production environment you will have Oracle support for
the operating system, Grid Infrastructure software and RDBMS, but you won't have support for the custom
scripts used for managing resources.

III. OPERATING SYSTEM CONFIGURATION


This configuration was created using VirtualBox 4.3.10. You can find many guides on how to install an
operating system for Oracle RAC, so I will skip the OS installation. For this configuration I used the guide from:
http://www.lab128.com/rac12_installation_using_vb/article_text.html.
OS: Oracle Enterprise Linux 6.4
Node1
Name: cluster1
IP public: 192.168.56.71
IP private: 192.168.10.1
Node2
Name: cluster2
IP public: 192.168.56.72
IP private: 192.168.10.2

Run the Oracle RDBMS Server 12cR1 pre-install RPM package "oracle-rdbms-server-12cR1-preinstall" to
configure your operating system.
Firewall is disabled and ASMLib installed and configured.
NTP is deconfigured; the Oracle Cluster Time Synchronization Service (ctssd) will synchronize time
between the nodes.
Run the following on both nodes to deconfigure NTP:
# service ntpd stop
Shutting down ntpd:                                        [FAILED]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
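
Once Grid Infrastructure is installed (section V) you can verify that CTSS took over time synchronization; the CRS-4701 message below is what I'd expect in active mode:
# crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.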

Create on both nodes the directory structure where Oracle software will be installed:
mkdir -p /u01/app/12.1.0/grid_1
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

Edit "/etc/hosts" on both nodes and add following information:


# Private
192.168.10.1
192.168.10.2

cluster1-priv.localdomain
cluster2-priv.localdomain

cluster1-priv
cluster2-priv

# Public
192.168.56.71
192.168.56.72

cluster1.localdomain
cluster2.localdomain

cluster1
cluster2

# Virtual
192.168.56.81
192.168.56.82

cluster1-vip.localdomain
cluster2-vip.localdomain

cluster1-vip
cluster2-vip

# SCAN
192.168.56.91
192.168.56.92
192.168.56.93

clus-scan.localdomain
clus-scan.localdomain
clus-scan.localdomain

clus-scan
clus-scan
clus-scan

Oracle strongly recommends that you don't configure SCAN VIP addresses in the /etc/hosts file; in
production use round-robin DNS. As this is a test environment and I don't have a DNS server running,
I won't follow the Oracle recommendation.
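
For reference, with round-robin DNS the SCAN name would simply be three A records. A minimal sketch of the zone entries (BIND syntax, assuming a zone for localdomain):

clus-scan.localdomain.    IN A    192.168.56.91
clus-scan.localdomain.    IN A    192.168.56.92
clus-scan.localdomain.    IN A    192.168.56.93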

Install the operating system package "cvuqdisk". Without "cvuqdisk" the Cluster Verification Utility cannot
discover shared disks.
On both nodes:
# cd /media/sf_SharedFolderVM/grid/rpm
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
# rpm -Uvh cvuqdisk*
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]

IV. SHARED STORAGE CONFIGURATION


Oracle Clusterware files include the voting disks, used to monitor cluster node status, and the Oracle Cluster
Registry (OCR), which contains configuration information about the cluster. The voting disks and OCR are
shared files and you must put them on shared storage.
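
After the Grid Infrastructure installation (section V) you can verify where these files ended up with the standard Clusterware tools:

# ocrcheck
# crsctl query css votedisk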

Shut down both VMs and create shareable disks.


In "cluster1" storage settings click on "Controller:SATA" and select "Add Hard Disk".

Select "Create new disk".

Choose VDI (VirtualBox Disk Image).

Fixed size.

Specify file location and size.

Create another shareable disk which will be used for FRA ASM diskgroup.

To make these disks shareable, open the Virtual Media Manager.

Select disks and click "Modify".


Then select "Shareable" attribute for both disks.

Now you can attach disks to "Cluster2" node.


Click "Cluster2" -> Settings -> Storage -> Controller: SATA -> Add Hard Disk
Choose existing disk.

Add both disks.

You should see something like this.

Now we need to partition the new disks.


Start both VMs.
List disks:
# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc

The new disks are "/dev/sdb" and "/dev/sdc".

As a "root" user just on one node partition disks using fdisk.

# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF
disklabel
Building a new DOS disklabel with disk identifier 0x713f1a5d.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1044, default 1044):
Using default value 1044
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF
disklabel
Building a new DOS disklabel with disk identifier 0xa5594e81.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Create ASMLib disks "DATADISK1" and "FRADISK1" on one node only.


# oracleasm createdisk DATADISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
# oracleasm createdisk FRADISK1 /dev/sdc1
Writing disk header: done
Instantiating disk: done

Reboot the second node, then check on both nodes that the disks are visible to ASM.
[root@cluster1 ~]# oracleasm listdisks
DATADISK1
FRADISK1
[root@cluster2 ~]# oracleasm listdisks
DATADISK1
FRADISK1
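
For future disk additions, a reboot isn't strictly required; rescanning ASMLib on the second node should pick up new disk headers (assuming the default ASMLib setup):

[root@cluster2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...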

V. INSTALL GRID INFRASTRUCTURE 12CR1


Run installation as "oracle" user.
$ cd /media/sf_SharedFolderVM/grid/
$ ./runInstaller

Skip software updates -> Next

Install and Configure Oracle Grid Infrastructure for a Cluster -> Next

Configure a Standard cluster -> Next

Advanced Installation -> Next

Select language -> Next

Enter Cluster Name, SCAN Name, SCAN Port and uncheck Configure GNS -> Next

Add second node to cluster.

Click "SSH connectivity..." and enter password for OS user "oracle".

Click "Setup" to configure SSH connectivity.

Click "Test" to test SSH connectivity.

Specify which interfaces are Public and Private, then click Next.

Select No for "Configure Grid Infrastructure Management Repository" -> Next

Click Yes.

Use Standard ASM for storage -> Next

Select Change Discovery Path...

Enter "/dev/oracleasm/disks" then OK.

Select "/dev/oracleasm/disks/DATADISK1" and for Redundancy choose External -> Next

Enter password for ASM instance.

Click Yes to ignore warning for weak password.

Do not use intelligent Platform Management Interface (IPMI) -> Next

Select operating system groups -> Next

Ignore warning and click Yes.

Specify Oracle base and Software location.

Specify Oracle Inventory directory -> Next

Choose "Automatically run configuration scripts" and enter password for "root" user -> Next

Chose "Ignore All" to ignore these warnings -> Next

Click "Yes" to ignore warning.

Check summary, save response file if you want and then click "Install".

Progress...

Click "Yes" to execute configuration scripts as "root".

It is expected that the Oracle Cluster Verification Utility fails here, due to low host RAM and the DATADISK1
device group mismatch.

You can ignore these errors.


# less /u01/app/oraInventory/logs/installActions2014-03-27_08-57-31PM.log | grep ERROR
INFO: INFO: ERROR: [Result.addErrorDescription:607] PRVF-7530 : Sufficient physical
memory is not available on node "cluster2" [Required physical memory = 4GB (4194304.0KB)]
INFO: INFO: ERROR: [Result.addErrorDescription:607] PRVF-7530 : Sufficient physical
memory is not available on node "cluster1" [Required physical memory = 4GB (4194304.0KB)]
INFO: ERRORMSG(cluster2): PRVF-7530 : Sufficient physical memory is not available on
node "cluster2" [Required physical memory = 4GB (4194304.0KB)]
INFO: ERRORMSG(cluster1): PRVF-7530 : Sufficient physical memory is not available on
node "cluster1" [Required physical memory = 4GB (4194304.0KB)]
INFO: INFO: ERROR: [Result.addErrorDescription:607] PRVF-9992 : Group of device
"/dev/oracleasm/disks/DATADISK1" did not match the expected group. [Expected = "dba";
Found = "oinstall"] on nodes: [cluster1, cluster2]
INFO: INFO: ERROR: [Result.addErrorDescription:607] PRVF-9802 : Attempt to get udev
information from node "cluster1" failed
INFO: INFO: ERROR: [Result.addErrorDescription:607] PRVF-9802 : Attempt to get udev
information from node "cluster2" failed
INFO: ERRORMSG(cluster2): PRVF-9992 : Group of device "/dev/oracleasm/disks/DATADISK1"
did not match the expected group. [Expected = "dba"; Found = "oinstall"] on nodes:
[cluster1, cluster2]
INFO: ERRORMSG(cluster2): PRVF-9802 : Attempt to get udev information from node
"cluster2" failed
INFO: ERRORMSG(cluster1): PRVF-9992 : Group of device "/dev/oracleasm/disks/DATADISK1"
did not match the expected group. [Expected = "dba"; Found = "oinstall"] on nodes:
[cluster1, cluster2]
INFO: ERRORMSG(cluster1): PRVF-9802 : Attempt to get udev information from node
"cluster1" failed
...

Click "Next" to complete installation.

Click "Yes" to continue.

Close

Check the cluster status as the "root" user.


# cd /u01/app/12.1.0/grid_1/bin/
# ./crsctl stat res -t
-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1          ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER_SCAN2.lsnr
      1          ONLINE  ONLINE       cluster1                STABLE
ora.LISTENER_SCAN3.lsnr
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster1                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
ora.scan1.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.scan2.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.scan3.vip
      1          ONLINE  ONLINE       cluster1                STABLE
-------------------------------------------------------------------------------

Create FRA disk group using ASM configuration assistant as "oracle" user.

[oracle@cluster1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@cluster1 ~]$ asmca

Click Create.

Specify the Disk Group Name, select External(None) redundancy and select the disk -> OK.

Exit

VI. INSTALL 12CR1 ORACLE EE SOFTWARE


Create owner for Database Software as "root" user on both nodes.
# useradd -m -u 502 -g oinstall -G dba,vboxsf -d /home/orcl12 -s /bin/bash orcl12
# passwd orcl12
Changing password for user orcl12.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

Add to "/etc/security/limits.conf":
orcl12    soft    nproc     2047
orcl12    hard    nproc    16384
orcl12    soft    nofile    1024
orcl12    hard    nofile   65536

Create directories.
# mkdir -p /u01/app/orcl12
# chown -R orcl12:oinstall /u01/app/orcl12
# chmod -R 775 /u01/app/orcl12

Add environment variables to the "orcl12" user's .bash_profile on both nodes:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs
export ORACLE_BASE=/u01/app/orcl12
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/dbhome_1
export ORACLE_PATH=$ORACLE_HOME/rdbms/admin
export ORACLE_SID=test12

export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export NLS_DATE_FORMAT="DD.MM.YYYY HH24:MI:SS"

Login as "orcl12" OS user and start runInstaller.


$ cd /media/sf_SharedFolderVM/database/
$ ./runInstaller

Skip security updates.

Click "Yes" to ignore warning.

Skip software updates -> Next

Install database software only -> Next

Choose "Oracle Real Application Clusters database installation" -> Next

Select both nodes and click SSH connectivity....

For SSH connectivity enter OS user password and click setup.

SSH connectivity established.

Test and Next.

Choose languages -> Next

Choose Enterprise Edition -> Next.

Specify Oracle base and software location.

Specify Operating System privileged groups.

Specify Ignore All -> Next

Click Yes to continue.

Save Response File if you want and click Install.

Progress.

Run script as root on both nodes and then click OK.

Finish

VII. CREATE 12CR1 EE DATABASE

I will use DBCA to create the database on the first node only, as this will be a single instance database (not RAC).

Create Database -> Next

Advanced Mode -> Next

Choose "Oracle Single Instance Database" and "Custom Database" -> Next

Enter Oracle SID -> Next

Uncheck "Configure Enterprise Manager (EM) Database Express" -> Next

Enter password and ignore warning for weak passwords -> Next

We will create the listener later, so leave these checkboxes unchecked -> Next

Specify location for database and recovery files -> Next

Uncheck all database components -> Next

Specify Memory Parameters

Specify "Character Sets" -> Next

Create Database -> Next

Pre Requisite Checks Passed, check Summary -> Finish

Progress

Finished

VIII. Configure Active/Passive clustering for single instance database using dedicated VIP for listeners

I created SCAN addresses because the SCAN concept is an essential part of the Oracle Grid Infrastructure
installation: OUI will not let you continue with the installation until you have provided a suitable SCAN
resolution.
As I don't need the SCAN concept for this configuration, I will remove the SCAN listeners and IP addresses.
This step isn't essential for the configuration to work; if you want, you can leave the SCAN listeners and IPs in place.

# ./crsctl stat res -t


-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1          ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER_SCAN2.lsnr
      1          ONLINE  ONLINE       cluster1                STABLE
ora.LISTENER_SCAN3.lsnr
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster1                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
ora.scan1.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.scan2.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.scan3.vip
      1          ONLINE  ONLINE       cluster1                STABLE
-------------------------------------------------------------------------------

I have three SCAN listeners and three SCAN IP addresses which I want to remove.
Stop SCAN
# ./srvctl stop scan -f

Check
[root@cluster1 bin]# ./crsctl stat res -t
-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1          OFFLINE OFFLINE      cluster2                STABLE
ora.LISTENER_SCAN2.lsnr
      1          OFFLINE OFFLINE      cluster1                STABLE
ora.LISTENER_SCAN3.lsnr
      1          OFFLINE OFFLINE      cluster1                STABLE
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster1                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
ora.scan1.vip
      1          OFFLINE OFFLINE      cluster2                STABLE
ora.scan2.vip
      1          OFFLINE OFFLINE      cluster1                STABLE
ora.scan3.vip
      1          OFFLINE OFFLINE      cluster1                STABLE
-------------------------------------------------------------------------------

Remove the default SCAN listener and remove SCAN


# ./srvctl remove scan_listener
Remove scan listener? (y/[n]) y
# ./srvctl remove scan
Remove the scan? (y/[n]) y
Status
You can see that the SCAN listeners and SCAN IP addresses have been removed.
# ./crsctl stat res -t
-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.FRA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                Started,STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster1                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
-------------------------------------------------------------------------------

Now we can remove the SCAN addresses from "/etc/hosts" as the "root" user on both nodes.
# vi /etc/hosts
(remove)
# SCAN
192.168.56.91    clus-scan.localdomain        clus-scan
192.168.56.92    clus-scan.localdomain        clus-scan
192.168.56.93    clus-scan.localdomain        clus-scan

In listener.ora, remove the SCAN listener configuration as the "oracle" user.

$ cd /u01/app/12.1.0/grid_1/network/admin/
$ vi listener.ora
(remove these lines from listener.ora)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2 = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3 = OFF
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2 = OFF    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
LISTENER_SCAN3 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN3))
    )
  )
LISTENER_SCAN2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN2))
    )
  )
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )

Add a dedicated VIP and listener for the database.

This VIP will be used in the listener configuration.
I will use the IP "192.168.56.51" for my database, so add this address to the /etc/hosts file as "root" on both nodes.
# vi /etc/hosts
(add)
192.168.56.51    test12-vip

1) Run this on one node only as "root" to create a new CRS resource for the VIP name.
# ./appvipcfg create -network=1 -ip=192.168.56.51 -vipname=test12.vip -user=root
Production Copyright 2007, 2008, Oracle.All rights reserved
2014-03-28 11:07:20: Creating Resource Type
2014-03-28 11:07:20: Executing /u01/app/12.1.0/grid_1/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /u01/app/12.1.0/grid_1/crs/template/appvip.type
2014-03-28 11:07:20: Executing cmd: /u01/app/12.1.0/grid_1/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /u01/app/12.1.0/grid_1/crs/template/appvip.type
2014-03-28 11:07:20: Create the Resource
2014-03-28 11:07:20: Executing /u01/app/12.1.0/grid_1/bin/crsctl add resource test12.vip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.56.51,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x',HOSTING_MEMBERS=cluster1.localdomain,APPSVIP_FAILBACK="
2014-03-28 11:07:20: Executing cmd: /u01/app/12.1.0/grid_1/bin/crsctl add resource test12.vip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.56.51,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x',HOSTING_MEMBERS=cluster1.localdomain,APPSVIP_FAILBACK="

# ./crsctl setperm resource test12.vip -u user:oracle:r-x

Test start and relocate resource:


# ./crsctl start resource test12.vip
CRS-2672: Attempting to start 'test12.vip' on 'cluster2'
CRS-2676: Start of 'test12.vip' on 'cluster2' succeeded
# ./crsctl relocate resource test12.vip
CRS-2673: Attempting to stop 'test12.vip' on 'cluster2'
CRS-2677: Stop of 'test12.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test12.vip' on 'cluster1'
CRS-2676: Start of 'test12.vip' on 'cluster1' succeeded
# ping test12-vip
PING test12-vip (192.168.56.51) 56(84) bytes of data.
64 bytes from test12-vip (192.168.56.51): icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from test12-vip (192.168.56.51): icmp_seq=2 ttl=64 time=0.029 ms
64 bytes from test12-vip (192.168.56.51): icmp_seq=3 ttl=64 time=0.029 ms
2) In the ORACLE_HOME of the "test12" database, create entries in listener.ora and tnsnames.ora on both nodes
as the "orcl12" user.
$ cd /u01/app/orcl12/product/12.1.0/dbhome_1/network/admin
$ vi listener.ora
LISTENER_TEST12 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = test12-vip)(PORT = 1521))
)
)
$ scp listener.ora
orcl12@cluster2:/u01/app/orcl12/product/12.1.0/dbhome_1/network/admin/listener.ora

$ vi tnsnames.ora
TEST12 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = test12-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = test12)
)
)
LISTENER_TEST12 = (ADDRESS = (PROTOCOL = TCP)(HOST = test12-vip)(PORT = 1521))
$ scp tnsnames.ora
orcl12@cluster2:/u01/app/orcl12/product/12.1.0/dbhome_1/network/admin/tnsnames.ora

3) Create "act_listener.pl" script into $GRID_HOME/crs/public directory on both nodes as "oracle" user.
$ cd /u01/app/12.1.0/grid_1/crs/public/
$ vi act_listener.pl
#!/usr/bin/perl
#
# NOTES
#    Edit the perl installation directory as appropriate.
#    Place this file in <CRS_HOME>/crs/public/
#
$ORACLE_HOME = "$ENV{_CRS_ORACLE_HOME}";
$ORA_LISTENER_NAME = "$ENV{_CRS_ORA_LISTENER_NAME}";

if ($#ARGV != 0) {
    print "usage: start|stop|check|clean required\n";
    exit;
}
$command = $ARGV[0];

# start listener
if ($command eq "start") {
    system ("
    ORACLE_HOME=$ORACLE_HOME
    export ORACLE_HOME
    ORA_LISTENER_NAME=$ORA_LISTENER_NAME
    export ORA_LISTENER_NAME
    # export TNS_ADMIN=$ORACLE_HOME/network/admin   # optionally set TNS_ADMIN here
    $ORACLE_HOME/bin/lsnrctl start $ORA_LISTENER_NAME");
}

# stop listener
if ($command eq "stop") {
    system ("
    ORACLE_HOME=$ORACLE_HOME
    export ORACLE_HOME
    ORA_LISTENER_NAME=$ORA_LISTENER_NAME
    export ORA_LISTENER_NAME
    # export TNS_ADMIN=$ORACLE_HOME/network/admin   # optionally set TNS_ADMIN here
    $ORACLE_HOME/bin/lsnrctl stop $ORA_LISTENER_NAME");
}

# check listener - compare the expected tnslsnr command line with the running process
if ($command eq "check") {
    check_listener();
}

sub check_listener {
    my($check_proc_listener,$process_listener) = @_;
    $process_listener = "$ORACLE_HOME/bin/tnslsnr $ORA_LISTENER_NAME -inherit";
    $check_proc_listener = qx(ps -ae -o args | grep -w "tnslsnr $ORA_LISTENER_NAME" | grep -v grep | head -n 1);
    chomp($check_proc_listener);
    if ($process_listener eq $check_proc_listener) {
        exit 0;
    } else {
        exit 1;
    }
}

# clean listener - kill the listener process
if ($command eq "clean") {
    my $kill_proc = qx(ps -aef | grep -w "tnslsnr $ORA_LISTENER_NAME" | grep -v grep | head -n 1 | awk '{print \$2}' | xargs kill -9);
    exit 0;
}

Grant execute permissions.


$ chmod ug+x act_listener.pl
Copy the file to the other node.
$ scp act_listener.pl oracle@cluster2:/u01/app/12.1.0/grid_1/crs/public
act_listener.pl

Test script
$ export _CRS_ORACLE_HOME=/u01/app/orcl12/product/12.1.0/dbhome_1
$ export _CRS_ORA_LISTENER_NAME=LISTENER_TEST12
$ perl act_listener.pl start
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 28-MAR-2014 12:32:00
Copyright (c) 1991, 2013, Oracle.

All rights reserved.

Starting /u01/app/orcl12/product/12.1.0/dbhome_1/bin/tnslsnr: please wait...


TNSLSNR for Linux: Version 12.1.0.1.0 - Production
System parameter file is
/u01/app/orcl12/product/12.1.0/dbhome_1/network/admin/listener.ora
Log messages written to
/u01/app/orcl12/diag/tnslsnr/cluster1/listener_test12/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.51)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=test12-vip)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_TEST12
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                28-MAR-2014 12:32:01
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/orcl12/product/12.1.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/orcl12/diag/tnslsnr/cluster1/listener_test12/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.51)(PORT=1521)))
The listener supports no services
The command completed successfully

$ perl act_listener.pl stop


LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 28-MAR-2014 12:32:16
Copyright (c) 1991, 2013, Oracle.

All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=test12-vip)(PORT=1521)))
The command completed successfully
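
You can also exercise the "check" action; by the script's logic it returns exit status 0 while the listener process runs and 1 when it is down, so with the listener just stopped I'd expect:

$ perl act_listener.pl check; echo $?
1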

4) Create new cluster_resource type "custom_listener" as "oracle" user.


$ ./crsctl add type custom_listener -basetype cluster_resource -attr
"ATTRIBUTE=ORACLE_HOME,TYPE=string"
$ ./crsctl modify type custom_listener -attr
"ATTRIBUTE=ACTION_SCRIPT,TYPE=string,DEFAULT_VALUE=/u01/app/12.1.0/grid_1/crs/
public/act_listener.pl"
$ ./crsctl modify type custom_listener -attr
"ATTRIBUTE=ORA_LISTENER_NAME,TYPE=string,DEFAULT_VALUE=NULL"

Cluster-aware resource types are aware of the cluster environment and are subject to cardinality and
cross-server switchover and failover.
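
You can review the attributes the new type inherited from cluster_resource by printing its full profile:

$ ./crsctl stat type custom_listener -f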

5) Create a new CRS resource of the custom_listener type for the "LISTENER_TEST12" listener as the "oracle" user.
$ ./crsctl add resource test12.listener -type custom_listener \
  -attr "PLACEMENT=favored,HOSTING_MEMBERS='cluster1 cluster2',CHECK_INTERVAL=30,RESTART_ATTEMPTS=2, \
  START_DEPENDENCIES=hard(test12.vip),STOP_DEPENDENCIES=hard(test12.vip), \
  ORACLE_HOME=/u01/app/orcl12/product/12.1.0/dbhome_1,ORA_LISTENER_NAME=LISTENER_TEST12"

6) Set "local_listener" Oracle instance parameter for "test12" database


SQL> alter system set local_listener='LISTENER_TEST12' scope=both;
SQL> alter system register;
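
To confirm the instance registered with the dedicated listener, check the listener services (the listener resource must be running; the test12 service should now be listed):

$ lsnrctl services LISTENER_TEST12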

7) Tests
Status of new "test12.listener" resource.
$ ./crsctl stat res test12.listener
NAME=test12.listener
TYPE=custom_listener
TARGET=OFFLINE
STATE=OFFLINE

Start listener.
$ ./crsctl start res test12.listener
CRS-2672: Attempting to start 'test12.listener' on 'cluster1'
CRS-2676: Start of 'test12.listener' on 'cluster1' succeeded

Relocate listener.
$ ./crsctl relocate res test12.listener -f
CRS-2673: Attempting to stop 'test12.listener' on 'cluster1'
CRS-2677: Stop of 'test12.listener' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test12.vip' on 'cluster1'
CRS-2677: Stop of 'test12.vip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test12.vip' on 'cluster2'
CRS-2676: Start of 'test12.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test12.listener' on 'cluster2'
CRS-2676: Start of 'test12.listener' on 'cluster2' succeeded

Clusterware status.
$ ./crsctl stat res -t
-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                Started,STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster2                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
test12.listener
      1          ONLINE  ONLINE       cluster2                STABLE
test12.vip
      1          ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------

Add resource for single instance database "test12"


1) Create "action_script_db.sh" file in "/u01/app/12.1.0/grid_1/crs/public" directory on both nodes:
#!/bin/sh

export ORA_CRS_HOME=$_CRS_ORA_CRS_HOME
export ORACLE_HOME=$_CRS_ORACLE_HOME
export ORACLE_SID=$_CRS_ORACLE_SID
export SPFILE=$_CRS_SPFILE
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

check_dbstatus() {
OUTPUT=`$ORACLE_HOME/bin/sqlplus -s /nolog<<EOF
connect / as sysdba
set echo off define off heading off pagesize 0
SET LINESIZE 100
COLUMN l_output FORMAT A100
SELECT i.status || ' ' || pa.value || '/' || 'ora_' || p.spid || '.aud' AS l_output
FROM v\\$session s,
v\\$process p,
v\\$parameter pa,
v\\$instance i
WHERE pa.name = 'audit_file_dest'
AND s.paddr = p.addr
AND s.sid = (select sid from v\\$mystat where rownum=1) and s.audsid =
sys_context('userenv','sessionid');
exit
EOF`
DBSTATUS=$(echo $OUTPUT | awk '{print $1}')
AUDITFILE=$(echo $OUTPUT | awk '{print $2}')
rm -f $AUDITFILE 2>/dev/null
if [ "$DBSTATUS" == "OPEN" ]
then
return 0
else
return 1
fi
}
case $1 in
'start')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
startup
EOF
check_dbstatus
RET=$?
;;
'stop')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
shutdown immediate
EOF
NUM=`ps -ef | grep -i smon_${ORACLE_SID} | grep -v grep | wc -l`
if [ $NUM = 0 ]; then
RET=0

else
RET=1
fi
;;
'clean')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
shutdown abort
EOF
##for i in `ps -ef | grep -i mon_${ORACLE_SID} | awk '{print $2}'`; do kill -9 $i; done
NUM=`ps -ef | grep -i smon_${ORACLE_SID} | grep -v grep | wc -l`
if [ $NUM = 0 ]; then
RET=0
else
RET=1
fi
;;
'check')
check_dbstatus
RET=$?
;;
*)
RET=0
;;
esac
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

Set execute permission.


$ chmod ug+x action_script_db.sh
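
Before registering the resource you can exercise the script by hand, exporting the _CRS_* variables it expects, the same pattern used earlier for act_listener.pl (a sketch; with the database up, "check" should return 0):

$ export _CRS_ORACLE_HOME=/u01/app/orcl12/product/12.1.0/dbhome_1
$ export _CRS_ORACLE_SID=test12
$ ./action_script_db.sh check; echo $?
0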

Copy script to second node.


$ scp action_script_db.sh oracle@cluster2:/u01/app/12.1.0/grid_1/crs/public
action_script_db.sh                           100% 1648     1.6KB/s   00:00

2) Create file with attributes for single instance database resource.


$ cd /u01/app/12.1.0/grid_1/crs/public/
$ vi test12_instance_db.res
TYPE=single_instance_db
DESCRIPTION=test12 Oracle Database Resource
ACL=owner:orcl12:rwx,pgrp:oinstall:r-x,other::r--
ACTION_SCRIPT=/u01/app/12.1.0/grid_1/crs/public/action_script_db.sh
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
PLACEMENT=restricted
ACTIVE_PLACEMENT=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
DEGREE=1
ENABLED=1
HOSTING_MEMBERS=cluster1 cluster2
LOGGING_LEVEL=1
RESTART_ATTEMPTS=1
START_DEPENDENCIES=hard(ora.DATA.dg,ora.FRA.dg,test12.listener) pullup(ora.DATA.dg,ora.FRA.dg,test12.listener)
START_TIMEOUT=0
STOP_DEPENDENCIES=hard(test12.listener,intermediate:ora.asm,shutdown:ora.DATA.dg,ora.FRA.dg)
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h
ORACLE_SID=test12
ORACLE_HOME=/u01/app/orcl12/product/12.1.0/dbhome_1
ORA_CRS_HOME=/u01/app/12.1.0/grid_1
SPFILE=+DATA/test12/spfiletest12.ora

3) Create a new resource type for the single instance database as the "oracle" user.

$ ./crsctl add type single_instance_db -basetype cluster_resource -attr "ATTRIBUTE=ORACLE_SID,TYPE=string,FLAGS=REQUIRED,ATTRIBUTE=ORACLE_HOME,TYPE=string,FLAGS=REQUIRED,ATTRIBUTE=SPFILE,TYPE=string,FLAGS=REQUIRED,ATTRIBUTE=ORA_CRS_HOME,TYPE=string,FLAGS=REQUIRED"

4) Add resource for "test12" database as "root"


./crsctl add resource test12.db -type single_instance_db -file
/u01/app/12.1.0/grid_1/crs/public/test12_instance_db.res
CRS-2510: Resource 'ora.FRA.dg' used in dependency 'hard' does not exist or is
not registered
CRS-2514: Dependency attribute specification 'hard' is invalid in resource
'test12.db'
CRS-4000: Command Add failed, or completed with errors.

The "ora.FRA.dg" diskgroup resource is registered with Clusterware when the diskgroup is mounted, so just dismount and mount the FRA diskgroup to overcome this error:


$ sqlplus / as sysasm
SQL> alter diskgroup FRA dismount;
Diskgroup altered.
SQL> alter diskgroup FRA mount;
Diskgroup altered.

Now execute the command above again:


# ./crsctl add resource test12.db -type single_instance_db -file
/u01/app/12.1.0/grid_1/crs/public/test12_instance_db.res
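
You can print the full resource profile to confirm the attributes were read from the file:

# ./crsctl stat res test12.db -p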

5) Create directories for trace files and audit dumps and copy parameter file to second node
$ mkdir -p /u01/app/orcl12/diag/rdbms/test12/test12/trace
$ mkdir -p /u01/app/orcl12/admin/test12/adump
$ scp /u01/app/orcl12/product/12.1.0/dbhome_1/dbs/inittest12.ora
orcl12@cluster2:$ORACLE_HOME/dbs

Add "/etc/oratab" entry for "test12" database to second node:


$ vi /etc/oratab
(add)
test12:/u01/app/orcl12/product/12.1.0/dbhome_1:N:
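
With the oratab entry in place the environment can be set on the second node the same way as shown earlier for ASM, for example:

[orcl12@cluster2 ~]$ . oraenv
ORACLE_SID = [test12] ? test12
The Oracle base has been set to /u01/app/orcl12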

6) Tests
Status of single instance database resource.
# ./crsctl stat res test12.db
NAME=test12.db
TYPE=single_instance_db
TARGET=OFFLINE
STATE=OFFLINE

Start "test12" instance on "cluster1" node.


# ./crsctl start res test12.db -n cluster1 -f
CRS-2673: Attempting to stop 'test12.listener' on 'cluster2'
CRS-2677: Stop of 'test12.listener' on 'cluster2' succeeded
CRS-2673: Attempting to stop 'test12.vip' on 'cluster2'
CRS-2677: Stop of 'test12.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test12.vip' on 'cluster1'
CRS-2676: Start of 'test12.vip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test12.listener' on 'cluster1'
CRS-2676: Start of 'test12.listener' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test12.db' on 'cluster1'
CRS-2676: Start of 'test12.db' on 'cluster1' succeeded

Virtual IP, listener and database instance are started on "cluster1" node.
# ./crsctl stat res -t
-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.FRA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  OFFLINE      cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                Started,STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster2                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
test12.db
      1          ONLINE  ONLINE       cluster1                STABLE
test12.listener
      1          ONLINE  ONLINE       cluster1                STABLE
test12.vip
      1          ONLINE  ONLINE       cluster1                STABLE
-------------------------------------------------------------------------------

Relocate database instance to another node.


# ./crsctl relocate resource test12.db -f
CRS-2673: Attempting to stop 'test12.db' on 'cluster1'
CRS-2677: Stop of 'test12.db' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test12.listener' on 'cluster1'
CRS-2677: Stop of 'test12.listener' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test12.vip' on 'cluster1'
CRS-2677: Stop of 'test12.vip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test12.vip' on 'cluster2'
CRS-2676: Start of 'test12.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test12.listener' on 'cluster2'
CRS-2676: Start of 'test12.listener' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test12.db' on 'cluster2'
CRS-2676: Start of 'test12.db' on 'cluster2' succeeded

# ./crsctl stat res -t


-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.FRA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                Started,STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster2                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
test12.db
      1          ONLINE  ONLINE       cluster2                STABLE
test12.listener
      1          ONLINE  ONLINE       cluster2                STABLE
test12.vip
      1          ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------

Oracle relocated the virtual IP, listener and database instance to the second node.

Failover tests with listener


Kill the listener process. With RESTART_ATTEMPTS=2, Clusterware restarts the listener locally up to two times; a further failure triggers failover to the other node.
# ps -ef|grep LISTENER_TEST12|grep -v grep
oracle   20206     1  0 20:23 ?   00:00:00 /u01/app/orcl12/product/12.1.0/dbhome_1/bin/tnslsnr LISTENER_TEST12 -inherit
# kill -9 20206

# ./crsctl stat res test12.listener -v


NAME=test12.listener
TYPE=custom_listener
LAST_SERVER=cluster2
STATE=ONLINE on cluster2
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=28
RESTART_COUNT=1
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=test12.listener 1 1
INCARNATION=9
LAST_RESTART=03/28/2014 20:28:03
LAST_STATE_CHANGE=03/28/2014 20:28:03
STATE_DETAILS=
INTERNAL_STATE=STABLE
INSTANCE_COUNT=1

Listener restarted; check the new PID and kill the process again.

# ps -ef|grep LISTENER_TEST12|grep -v grep
oracle   20648     1  0 20:28 ?   00:00:00 /u01/app/orcl12/product/12.1.0/dbhome_1/bin/tnslsnr LISTENER_TEST12 -inherit
# kill -9 20648

# ./crsctl stat res test12.listener -v


NAME=test12.listener
TYPE=custom_listener
LAST_SERVER=cluster2
STATE=ONLINE on cluster2
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=28
RESTART_COUNT=2
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=test12.listener 1 1
INCARNATION=9
LAST_RESTART=03/28/2014 20:28:03
LAST_STATE_CHANGE=03/28/2014 20:29:33
STATE_DETAILS=
INTERNAL_STATE=STARTING
INSTANCE_COUNT=1

# ps -ef|grep LISTENER_TEST12|grep -v grep
oracle   20782     1  0 20:29 ?   00:00:00 /u01/app/orcl12/product/12.1.0/dbhome_1/bin/tnslsnr LISTENER_TEST12 -inherit
# kill -9 20782

Clusterware moved all resources to the "healthy" host:


# ./crsctl stat res test12.listener -v
NAME=test12.listener
TYPE=custom_listener
LAST_SERVER=cluster1
STATE=ONLINE on cluster1
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=28
RESTART_COUNT=0
FAILURE_COUNT=1
FAILURE_HISTORY=1396035034:cluster2
ID=test12.listener 1 1
INCARNATION=11
LAST_RESTART=03/28/2014 20:31:16
LAST_STATE_CHANGE=03/28/2014 20:30:54
STATE_DETAILS=
INTERNAL_STATE=STABLE
INSTANCE_COUNT=1

# ./crsctl stat res -t


-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.FRA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                Started,STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster2                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
test12.db
      1          ONLINE  ONLINE       cluster1                STABLE
test12.listener
      1          ONLINE  ONLINE       cluster1                STABLE
test12.vip
      1          ONLINE  ONLINE       cluster1                STABLE
-------------------------------------------------------------------------------

Oracle relocated the virtual IP address, listener and database instance to the "cluster1" node.

Failover tests with database instance

# ./crsctl stat res -t


-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.FRA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                Started,STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster1                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
test12.db
      1          ONLINE  ONLINE       cluster1                STABLE
test12.listener
      1          ONLINE  ONLINE       cluster1                STABLE
test12.vip
      1          ONLINE  ONLINE       cluster1                STABLE
-------------------------------------------------------------------------------

Kill the SMON process to force a database crash.

# ps -ef|grep smon_test12|grep -v grep
orcl12    3958     1  0 09:29 ?   00:00:00 ora_smon_test12
# kill -9 3958

Oracle restarts the database instance after the crash.
# ./crsctl stat res -t
-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.FRA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                Started,STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster1                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
test12.db
      1          ONLINE  OFFLINE      cluster1                STARTING
test12.listener
      1          ONLINE  ONLINE       cluster1                STABLE
test12.vip
      1          ONLINE  ONLINE       cluster1                STABLE
-------------------------------------------------------------------------------

Kill the SMON process again to force another crash.

# ps -ef|grep smon_test12|grep -v grep
orcl12    8229     1  0 09:33 ?   00:00:00 ora_smon_test12
# kill -9 8229

The virtual IP, listener and database instance fail over to the second node, "cluster2".
# ./crsctl stat res -t
-------------------------------------------------------------------------------
Name             Target  State        Server                  State details
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.DATA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.FRA.dg
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.LISTENER.lsnr
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.asm
                 ONLINE  ONLINE       cluster1                Started,STABLE
                 ONLINE  ONLINE       cluster2                Started,STABLE
ora.net1.network
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
ora.ons
                 ONLINE  ONLINE       cluster1                STABLE
                 ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cluster1.vip
      1          ONLINE  ONLINE       cluster1                STABLE
ora.cluster2.vip
      1          ONLINE  ONLINE       cluster2                STABLE
ora.cvu
      1          ONLINE  ONLINE       cluster1                STABLE
ora.oc4j
      1          OFFLINE OFFLINE                              STABLE
test12.db
      1          ONLINE  OFFLINE      cluster2                STARTING
test12.listener
      1          ONLINE  ONLINE       cluster2                STABLE
test12.vip
      1          ONLINE  ONLINE       cluster2                STABLE
-------------------------------------------------------------------------------

IX. Add 11gR2 EE Oracle software and database to cluster

On both nodes create the user which will be the owner of the 11gR2 software.


# useradd -m -u 503 -g oinstall -G dba,vboxsf -d /home/orcl11 -s /bin/bash orcl11
# passwd orcl11
Changing password for user orcl11.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

On both nodes add to "/etc/security/limits.conf":


orcl11    soft    nproc     2047
orcl11    hard    nproc    16384
orcl11    soft    nofile    1024
orcl11    hard    nofile   65536

Add directories:
# mkdir -p /u01/app/orcl11
# chown -R orcl11:oinstall /u01/app/orcl11/
# chmod -R 775 /u01/app/orcl11

Add environment variables to the "orcl11" user's .bash_profile on both nodes:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs
export ORACLE_BASE=/u01/app/orcl11
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_PATH=$ORACLE_HOME/rdbms/admin
export ORACLE_SID=test11
export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export NLS_DATE_FORMAT="DD.MM.YYYY HH24:MI:SS"

Login as "orcl11" OS user and start runInstaller.


$ cd /media/sf_SharedFolderVM/database/
$ ./runInstaller

Skip "Configure Security Updates" -> Next

Ignore warning and click "Yes"

For Installation Option select "Install Database Software Only" -> Next

Select both nodes to install software -> Next

Click SSH Connectivity, enter password for OS user "orcl11" and "Setup" connectivity

SSH Connectivity successfully established -> "OK" -> Next

Select language -> Next

Select "Enterprise Edition" then "Select Options"

Uncheck all components -> "OK" -> Next

Specify location for Oracle software -> Next

Select "Privilaged Operating System Groups" -> Next

Check "Prerequisite Checks" repot and install missing packages


I have missing packages installed (higher version) so I'll ignore warning -> Next

Summary -> Finish

Progress

Run "Configuration Scripts" as "root" user on both nodes -> OK

Installation complete -> Close

Create database creation scripts using Database Configuration Assistant


$ dbca
Choose "Oracle single instance database" -> Next

"Create Database" -> Next

Select "Custom Database" -> Next

Enter SID and Global Database Name -> Next

Uncheck "Configure Enterprise Manager"

Uncheck "Enable automatic maintenance tasks" -> Next

Enter administrative password -> Next

Ignore warning for weak password -> "Yes"

Select ASM for database files and "+DATA" disk group for database files -> Next

Select "+FRA" ASM disk group for fast recovery files -> Next

Uncheck Components

Under "Standard Database Components" uncheck Oracle JVM and Oracle APEX -> OK -> Next

Set Memory parameters

Database Character Set -> Next

Check Database Storage Configuration -> Next

Select "Generate Database Creation Scripts" and enter location to store scripts -> Finish

Summary -> OK

Scripts generated -> "No"

Edit "postDBCreation.sql" script and remove lines to register database with Clusterware. Later we will
create resources for vip, listener and database instance.
$ cd /u01/app/orcl11/admin/test11/scripts
$ vi postDBCreation.sql
(delete)
host /u01/app/orcl11/product/11.2.0/dbhome_1/bin/srvctl enable database -d test11;
host /u01/app/orcl11/product/11.2.0/dbhome_1/bin/srvctl start database -d test11;

From "test11.sql" delete:


$ cd /u01/app/orcl11/admin/test11/scripts
$ vi test11.sql
(delete)
host /u01/app/orcl11/product/11.2.0/dbhome_1/bin/srvctl add database -d
test11 -o /u01/app/orcl11/product/11.2.0/dbhome_1 -p
+DATA/test11/spfiletest11.ora -n test11 -x cluster1 -a DATA,FRA

Run script "test11.sql " to create database:


$ sqlplus /nolog
SQL> @/u01/app/orcl11/admin/test11/scripts/test11.sql

Create trace and audit directories and copy parameter file to another node:
$ mkdir -p /u01/app/orcl11/diag/rdbms/test11/test11/trace
$ mkdir -p /u01/app/orcl11/admin/test11/adump
$ scp inittest11.ora
orcl11@cluster2:/u01/app/orcl11/product/11.2.0/dbhome_1/dbs
inittest11.ora

Optionally add database to "/etc/oratab" on both nodes:


$ vi /etc/oratab
(add)
test11:/u01/app/orcl11/product/11.2.0/dbhome_1:N:

Create listener.ora and tnsnames.ora in ORACLE_HOME/network/admin on both nodes.


$ cd /u01/app/orcl11/product/11.2.0/dbhome_1/network/admin/
$ vi listener.ora
LISTENER_TEST11 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test11-vip)(PORT = 1522))
    )
  )

$ vi tnsnames.ora
TEST11 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = test11-vip)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = test11)
    )
  )

LISTENER_TEST11 = (ADDRESS = (PROTOCOL = TCP)(HOST = test11-vip)(PORT = 1522))

$ scp listener.ora tnsnames.ora orcl11@cluster2:/u01/app/orcl11/product/11.2.0/dbhome_1/network/admin

X. Configure Active/Passive clustering for new database

1) In "/etc/hosts" add entry for new VIP address on both nodes


# vi /etc/hosts
(add)
192.168.56.52   test11-vip

2) Create new VIP resource as "root" user on one node.

# ./appvipcfg create -network=1 -ip=192.168.56.52 -vipname=test11.vip -user=root


Production Copyright 2007, 2008, Oracle. All rights reserved
2014-03-29 13:52:30: Creating Resource Type
2014-03-29 13:52:30: Executing /u01/app/12.1.0/grid_1/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /u01/app/12.1.0/grid_1/crs/template/appvip.type
2014-03-29 13:52:30: Executing cmd: /u01/app/12.1.0/grid_1/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /u01/app/12.1.0/grid_1/crs/template/appvip.type
2014-03-29 13:52:30: Command output:
>  CRS-2728: A resource type with the name 'app.appvip_net1.type' is already registered
>  CRS-4000: Command Add failed, or completed with errors.
>End Command output
CRS-2728: A resource type with the name 'app.appvip_net1.type' is already registered
CRS-4000: Command Add failed, or completed with errors.
2014-03-29 13:52:30: Create the Resource
2014-03-29 13:52:30: Executing /u01/app/12.1.0/grid_1/bin/crsctl add resource test11.vip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.56.52,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x',HOSTING_MEMBERS=cluster1.localdomain,APPSVIP_FAILBACK="
2014-03-29 13:52:30: Executing cmd: /u01/app/12.1.0/grid_1/bin/crsctl add resource test11.vip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.56.52,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x',HOSTING_MEMBERS=cluster1.localdomain,APPSVIP_FAILBACK="

The CRS-2728 messages are expected here: the "app.appvip_net1.type" resource type was already registered when the first VIP resource was created, so only the new "test11.vip" resource is added.

# ./crsctl setperm resource test11.vip -u user:oracle:r-x

Tests for "test11.vip" virtual IP address.


# ./crsctl start resource test11.vip
CRS-2672: Attempting to start 'test11.vip' on 'cluster2'
CRS-2676: Start of 'test11.vip' on 'cluster2' succeeded
# ./crsctl relocate resource test11.vip
CRS-2673: Attempting to stop 'test11.vip' on 'cluster2'
CRS-2677: Stop of 'test11.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test11.vip' on 'cluster1'
CRS-2676: Start of 'test11.vip' on 'cluster1' succeeded

# ping 192.168.56.52
PING 192.168.56.52 (192.168.56.52) 56(84) bytes of data.
64 bytes from 192.168.56.52: icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from 192.168.56.52: icmp_seq=2 ttl=64 time=0.026 ms
^C
--- 192.168.56.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1353ms
rtt min/avg/max/mdev = 0.023/0.024/0.026/0.005 ms

Create resource for new listener on one node as "oracle" user


$ ./crsctl add resource test11.listener -type custom_listener \
-attr "PLACEMENT=favored,HOSTING_MEMBERS='cluster1 cluster2',CHECK_INTERVAL=30,RESTART_ATTEMPTS=2, \
START_DEPENDENCIES=hard(test11.vip),STOP_DEPENDENCIES=hard(test11.vip), \
ORACLE_HOME=/u01/app/orcl11/product/11.2.0/dbhome_1,ORA_LISTENER_NAME=LISTENER_TEST11"

Tests for test11.listener resource.


$ ./crsctl start res test11.listener -f
CRS-2672: Attempting to start 'test11.listener' on 'cluster1'
CRS-2676: Start of 'test11.listener' on 'cluster1' succeeded
$ ./crsctl relocate res test11.listener -f
CRS-2673: Attempting to stop 'test11.listener' on 'cluster1'
CRS-2677: Stop of 'test11.listener' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test11.vip' on 'cluster1'
CRS-2677: Stop of 'test11.vip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test11.vip' on 'cluster2'
CRS-2676: Start of 'test11.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test11.listener' on 'cluster2'
CRS-2676: Start of 'test11.listener' on 'cluster2' succeeded
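
With the resource online, you can optionally verify the listener directly on the node that currently hosts it; for example, with the "orcl11" environment set:

$ lsnrctl status LISTENER_TEST11

It should show the endpoint on "test11-vip" port 1522.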

Set "local_listener" Oracle instance parameter for "test11" database


SQL> alter system set local_listener='LISTENER_TEST11' scope=both;
SQL> alter system register;
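
To confirm the registration took effect, a quick check on the hosting node (assuming the "orcl11" environment is set):

SQL> show parameter local_listener
$ lsnrctl services LISTENER_TEST11

The services summary should list service "test11" with a READY handler.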

Create resource for "test11" database


Create parameter file with attributes for "test11" database
$ cd /u01/app/12.1.0/grid_1/crs/public/
$ vi test11_instance_db.res
TYPE=single_instance_db
DESCRIPTION=test11 Oracle Database Resource
ACL=owner:orcl11:rwx,pgrp:oinstall:r-x,other::r--
ACTION_SCRIPT=/u01/app/12.1.0/grid_1/crs/public/action_script_db.sh
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
PLACEMENT=restricted
ACTIVE_PLACEMENT=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
DEGREE=1
ENABLED=1
HOSTING_MEMBERS=cluster1 cluster2
LOGGING_LEVEL=1
RESTART_ATTEMPTS=1
START_DEPENDENCIES=hard(ora.DATA.dg,ora.FRA.dg,test11.listener) pullup(ora.DATA.dg,ora.FRA.dg,test11.listener)
START_TIMEOUT=0
STOP_DEPENDENCIES=hard(test11.listener,intermediate:ora.asm,shutdown:ora.DATA.dg,ora.FRA.dg)
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h
ORACLE_SID=test11
ORACLE_HOME=/u01/app/orcl11/product/11.2.0/dbhome_1
ORA_CRS_HOME=/u01/app/12.1.0/grid_1
SPFILE=+DATA/test11/spfiletest11.ora

Add single instance resource for test11 database as "root"


# ./crsctl add resource test11.db -type single_instance_db -file /u01/app/12.1.0/grid_1/crs/public/test11_instance_db.res
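
To double-check which attributes were actually registered, you can print the resource profile; for example:

# ./crsctl stat res test11.db -p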

Tests for "test11.db" single instance database resource.


Start database instance.
$ ./crsctl start resource test11.db -n cluster1 -f
CRS-2672: Attempting to start 'test11.db' on 'cluster1'
CRS-2676: Start of 'test11.db' on 'cluster1' succeeded

Relocate database instance to "cluster2" node.


$ ./crsctl relocate resource test11.db -f
CRS-2673: Attempting to stop 'test11.db' on 'cluster1'
CRS-2677: Stop of 'test11.db' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test11.listener' on 'cluster1'
CRS-2677: Stop of 'test11.listener' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test11.vip' on 'cluster1'
CRS-2677: Stop of 'test11.vip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test11.vip' on 'cluster2'
CRS-2676: Start of 'test11.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test11.listener' on 'cluster2'
CRS-2676: Start of 'test11.listener' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test11.db' on 'cluster2'
CRS-2676: Start of 'test11.db' on 'cluster2' succeeded

Start test12 instance on "cluster1" node to have both databases running.


$ ./crsctl start resource test12.db -n cluster1 -f
CRS-2672: Attempting to start 'test12.vip' on 'cluster1'
CRS-2676: Start of 'test12.vip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test12.listener' on 'cluster1'
CRS-2676: Start of 'test12.listener' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test12.db' on 'cluster1'
CRS-2676: Start of 'test12.db' on 'cluster1' succeeded

Check the status of the cluster.


$ ./crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

I now have two databases running in my Active/Passive failover cluster.

XI. FAILOVER TESTS


Server Crash failover test
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

Simulate crash of the node "cluster2".


Cluster2 -> Close -> Power Off


After a few seconds the cluster will detect that the second node is missing, and all resources will be transferred to the
"cluster1" node.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  INTERMEDIATE cluster1                 FAILED OVER,STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

Start "cluster2" node.

When Oracle Clusterware realizes that the "cluster2" node is up again, it will add the node back to the cluster.
[cssd(2718)]CRS-1601:CSSD Reconfiguration complete. Active nodes are cluster1 cluster2.
2014-03-30 09:09:11.293:
[crsd(2876)]CRS-2772:Server 'cluster2' has been assigned to pool 'Free'.
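
These messages come from the Clusterware alert log. Assuming the default 12.1.0.1 location under the Grid home, you can follow the reconfiguration live with, for example:

# tail -f /u01/app/12.1.0/grid_1/log/cluster1/alertcluster1.log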

Check status.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

Connectivity Failover tests


What will happen if I kill the private network interface on the "cluster1" node?
Cluster1 -> Settings

Click "Network" -> Adapter 2 -> Uncheck "Cable Connected"

After Oracle Clusterware realized there was a problem with the interconnect, it evicted the "cluster2" node
from the cluster and transferred all resources to the "cluster1" node.

[cssd(2752)]CRS-1612:Network communication with node cluster2 (2) missing for 50% of timeout interval. Removal of this node from cluster in 14.610 seconds
2014-03-30 09:36:29.206:
[cssd(2752)]CRS-1611:Network communication with node cluster2 (2) missing for 75% of timeout interval. Removal of this node from cluster in 6.610 seconds
2014-03-30 09:36:33.206:
[cssd(2752)]CRS-1610:Network communication with node cluster2 (2) missing for 90% of timeout interval. Removal of this node from cluster in 2.610 seconds
2014-03-30 09:36:35.819:
[cssd(2752)]CRS-1607:Node cluster2 is being evicted in cluster incarnation 291634377; details at (:CSSNM00007:) in /u01/app/12.1.0/grid_1/log/cluster1/cssd/ocssd.log.
2014-03-30 09:36:39.553:
[cssd(2752)]CRS-1625:Node cluster2, number 2, was manually shut down
2014-03-30 09:36:39.625:
[cssd(2752)]CRS-1601:CSSD Reconfiguration complete. Active nodes are cluster1.
2014-03-30 09:36:39.719:
[crsd(2914)]CRS-5504:Node down event reported for node 'cluster2'.
2014-03-30 09:36:42.639:
[crsd(2914)]CRS-2773:Server 'cluster2' has been removed from pool 'Free'.

# ./crsctl stat res -t


--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  INTERMEDIATE cluster1                 FAILED OVER,STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

Restart CRS stack on "cluster2" node and check status of the cluster.
# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'cluster2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'cluster2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'cluster2'
CRS-2677: Stop of 'ora.mdnsd' on 'cluster2' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'cluster2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'cluster2'
CRS-2673: Attempting to stop 'ora.gipcd' on 'cluster2'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'cluster2'
CRS-2673: Attempting to stop 'ora.evmd' on 'cluster2'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'cluster2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'cluster2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'cluster2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cluster2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'cluster2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@cluster2 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

# ./crsctl stat res -t


--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

What will happen if I kill the public network interface on the "cluster1" node?


Cluster1 -> Settings -> Network -> Adapter 1 -> Uncheck "Cable Connected"

2014-03-30 09:52:46.652:
[crsd(2914)]CRS-2878:Failed to restart resource 'ora.net1.network'
2014-03-30 09:52:46.998:
[crsd(2914)]CRS-2769:Unable to failover resource 'ora.net1.network'.
2014-03-30 09:52:47.395:
[crsd(2914)]CRS-2771:Maximum restart attempts reached for resource 'test11.vip'; will not restart.
2014-03-30 09:52:47.444:
[crsd(2914)]CRS-2771:Maximum restart attempts reached for resource 'ora.cluster1.vip'; will not restart.
2014-03-30 09:52:47.486:
[crsd(2914)]CRS-2771:Maximum restart attempts reached for resource 'test12.vip'; will not restart.
2014-03-30 09:52:47.637:
[crsd(2914)]CRS-2769:Unable to failover resource 'ora.net1.network'.

Oracle Clusterware realized that something was wrong with the public network interface on the "cluster1" node
and relocated all resources to the "cluster2" node.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  OFFLINE      cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  OFFLINE      cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  OFFLINE      cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  INTERMEDIATE cluster2                 FAILED OVER,STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------

"Fix" public network interface.


Cluster1 -> Settings -> Network -> Adapter 1 -> Check "Cable Connected"
Node "cluster1" is again part of the cluster.

# ./crsctl stat res -t


--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------

What will happen if I kill both the public and private network interfaces on node "cluster1"?
Adapter 1 - Public network

Adapter 2 - Private network

Oracle Clusterware will stop all resources on "cluster1" and wait for DBA intervention.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER.lsnr
               ONLINE  OFFLINE      cluster1                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
ora.net1.network
               ONLINE  OFFLINE      cluster1                 STABLE
ora.ons
               ONLINE  OFFLINE      cluster1                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  OFFLINE                               STABLE
ora.cluster2.vip
      1        ONLINE  OFFLINE                               STABLE
ora.cvu
      1        ONLINE  OFFLINE                               STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  OFFLINE                               STABLE
test11.listener
      1        ONLINE  OFFLINE                               STABLE
test11.vip
      1        ONLINE  OFFLINE                               STABLE
test12.db
      1        ONLINE  OFFLINE                               STABLE
test12.listener
      1        ONLINE  OFFLINE                               STABLE
test12.vip
      1        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------
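
Before re-enabling the network adapters you could assess the stack from the isolated node; a quick sketch using standard crsctl checks:

# ./crsctl check crs
# ./crsctl check cluster -all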

Storage Failover Tests


# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------

Let's detach the shared storage device from the "cluster2" node.


We will disconnect "DATADISK1", which is on SATA Port 1.

C:\>cd "c:\Program Files\Oracle\VirtualBox"
>VBoxManage.exe storageattach cluster2 --storagectl "SATA" --port 1 --device 0 --medium none

OS log on "cluster2" node.


Mar 30 10:15:33 cluster2 kernel: ata4: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xe frozen
Mar 30 10:15:33 cluster2 kernel: ata4: irq_stat 0x80400040, connection status changed
Mar 30 10:15:33 cluster2 kernel: ata4: SError: { PHYRdyChg DevExch }
Mar 30 10:15:33 cluster2 kernel: ata4: hard resetting link
Mar 30 10:15:34 cluster2 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 30 10:15:39 cluster2 kernel: ata4: hard resetting link
Mar 30 10:15:39 cluster2 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 30 10:15:39 cluster2 kernel: ata4: limiting SATA link speed to 1.5 Gbps
Mar 30 10:15:44 cluster2 kernel: ata4: hard resetting link
Mar 30 10:15:45 cluster2 kernel: ata4: SATA link down (SStatus 0 SControl 310)
Mar 30 10:15:45 cluster2 kernel: ata4.00: disabled
Mar 30 10:15:45 cluster2 kernel: ata4: EH complete
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: rejecting I/O to offline device
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] killing request
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: rejecting I/O to offline device
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: rejecting I/O to offline device
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: rejecting I/O to offline device
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] Unhandled error code
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] CDB: Write(10): 2a 00 00 07 00 51 00 00 01 00
Mar 30 10:15:45 cluster2 kernel: end_request: I/O error, dev sdb, sector 458833
Mar 30 10:15:45 cluster2 kernel: ata4.00: detaching (SCSI 3:0:0:0)
Mar 30 10:15:45 cluster2 kernel: end_request: I/O error, dev sdb, sector 0
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] Synchronizing SCSI cache
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] Stopping disk
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] START_STOP FAILED
Mar 30 10:15:45 cluster2 kernel: sd 3:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK

Oracle relocated the resources to the "healthy" node "cluster1".


Node "cluster2" rebooted itself to avoid possible corruption or other problems at the disk level.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  INTERMEDIATE cluster1                 FAILED OVER,STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

"Fix" storage connection.

Shutdown "cluster1" and "cluster2" nodes to attach shareable storage to "cluster2" node again.
Use VBoxManage to attach storage.
C:\>cd "c:\Program Files\Oracle\VirtualBox"
C:\>VBoxManage.exe storageattach cluster2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium D:\VirtualneMasine\ClusterSharedDisks\DataDisk1.vdi --mtype shareable

Start nodes again and perform clusterware check.
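
Because the shared disks are presented through ASMLib, you can also verify on "cluster2" that the reattached disk is visible again; for example:

# oracleasm scandisks
# oracleasm listdisks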

# ./crsctl stat res -t


--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

# ./crsctl relocate res test11.db -f


CRS-2673: Attempting to stop 'test11.db' on 'cluster1'
CRS-2677: Stop of 'test11.db' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test11.listener' on 'cluster1'
CRS-2677: Stop of 'test11.listener' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test11.vip' on 'cluster1'
CRS-2677: Stop of 'test11.vip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test11.vip' on 'cluster2'
CRS-2676: Start of 'test11.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test11.listener' on 'cluster2'
CRS-2676: Start of 'test11.listener' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test11.db' on 'cluster2'
CRS-2676: Start of 'test11.db' on 'cluster2' succeeded

# ./crsctl stat res -t


--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

XII. Configure Active/Passive clustering for single instance database using SCAN listeners

Add SCAN IP addresses in "/etc/hosts" on both nodes:


(previously removed)
# vi /etc/hosts
(add)
# SCAN
192.168.56.91   clus-scan.localdomain   clus-scan
192.168.56.92   clus-scan.localdomain   clus-scan
192.168.56.93   clus-scan.localdomain   clus-scan

Add information about the SCAN listeners to listener.ora on the "cluster1" node.


(previously removed)
$ vi /u01/app/12.1.0/grid_1/network/admin/listener.ora
(add)
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3 = ON      # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2 = ON      # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON      # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3 = OFF   # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2 = OFF   # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF   # line added by Agent
LISTENER_SCAN3 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN3))
    )
  )
LISTENER_SCAN2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN2))
    )
  )
LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )

Restore initial Clusterware configuration from automatic OCR backup.


On both nodes stop clusterware.
# ./crsctl stop crs
Use "-f" flag if normal operation hangs or fails to stop clusterware.
# ./crsctl stop crs -f
Check for active clusterware processes on both nodes.
# ps -ef|grep d.bin|grep -v grep

As my OCR file is located in ASM, I must mount the ASM disks to restore the OCR.
On the "cluster1" node start Clusterware with the "-excl" flag to run it in exclusive mode.
# ./crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'cluster1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'cluster1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'cluster1'
CRS-2676: Start of 'ora.evmd' on 'cluster1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'cluster1'
CRS-2676: Start of 'ora.gpnpd' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cluster1'
CRS-2672: Attempting to start 'ora.gipcd' on 'cluster1'
CRS-2676: Start of 'ora.cssdmonitor' on 'cluster1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cluster1'
CRS-2672: Attempting to start 'ora.diskmon' on 'cluster1'
CRS-2676: Start of 'ora.diskmon' on 'cluster1' succeeded
CRS-2676: Start of 'ora.cssd' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cluster1'
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'cluster1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cluster1'
CRS-2676: Start of 'ora.drivers.acfs' on 'cluster1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'cluster1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cluster1'
CRS-2676: Start of 'ora.asm' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cluster1'
CRS-2676: Start of 'ora.storage' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cluster1'
CRS-2676: Start of 'ora.crsd' on 'cluster1' succeeded

Stop crsd if running.


# ./crsctl stop resource ora.crsd -init
CRS-2673: Attempting to stop 'ora.crsd' on 'cluster1'
CRS-2677: Stop of 'ora.crsd' on 'cluster1' succeeded

Show OCR backups.


# ./ocrconfig -showbackup
cluster1   2014/03/29 13:43:13   /u01/app/12.1.0/grid_1/cdata/clus-cluster/backup00.ocr
cluster1   2014/03/29 13:43:13   /u01/app/12.1.0/grid_1/cdata/clus-cluster/day.ocr
cluster1   2014/03/29 13:43:13   /u01/app/12.1.0/grid_1/cdata/clus-cluster/week.ocr
cluster1   2014/03/27 22:02:13   /u01/app/12.1.0/grid_1/cdata/clus-cluster/backup_20140327_220825.ocr

Restore old OCR.


# ./ocrconfig -restore /u01/app/12.1.0/grid_1/cdata/clus-cluster/backup_20140327_220825.ocr
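
After the restore you can verify the integrity of the restored OCR; for example:

# ./ocrcheck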

Stop clusterware.
# ./crsctl stop crs

Start clusterware on both nodes.


# ./crsctl start crs

I now have the initial status of my Clusterware resources (with SCAN), without any added resources.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               OFFLINE OFFLINE      cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

Create single instance action script on both nodes.


$ cd /u01/app/12.1.0/grid_1/crs/public
$ vi single_instance_action_script.sh
(add)
#!/bin/sh
export ORA_CRS_HOME=$_CRS_ORA_CRS_HOME
export ORACLE_HOME=$_CRS_ORACLE_HOME
export ORACLE_SID=$_CRS_ORACLE_SID
export SPFILE=$_CRS_SPFILE
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

NODE_NAME=$(${ORA_CRS_HOME}/bin/olsnodes -l)
VIP_IP=$(${ORA_CRS_HOME}/bin/srvctl config vip -n $NODE_NAME | grep IPv4 | awk '{print $4}')

# Check instance status and clean up the audit file left by the check connection
check_dbstatus() {
OUTPUT=`$ORACLE_HOME/bin/sqlplus -s /nolog<<EOF
connect / as sysdba
set echo off define off heading off pagesize 0
SET LINESIZE 100
COLUMN l_output FORMAT A100
SELECT i.status || ' ' || pa.value || '/' || 'ora_' || p.spid || '.aud' AS l_output
FROM v\\$session s,
     v\\$process p,
     v\\$parameter pa,
     v\\$instance i
WHERE pa.name = 'audit_file_dest'
  AND s.paddr = p.addr
  AND s.sid = (select sid from v\\$mystat where rownum=1)
  AND s.audsid = sys_context('userenv','sessionid');
exit
EOF`
DBSTATUS=$(echo $OUTPUT | awk '{print $1}')
AUDITFILE=$(echo $OUTPUT | awk '{print $2}')
rm -f $AUDITFILE 2>/dev/null
if [ "$DBSTATUS" == "OPEN" ]
then
  return 0
else
  return 1
fi
}

case $1 in
'start')
  echo "spfile='${SPFILE}'" > /tmp/init${ORACLE_SID}.ora
  $ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
startup pfile=/tmp/init${ORACLE_SID}.ora
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=${VIP_IP})(PORT=1521))))' scope=memory;
alter system set remote_listener='clus-scan:1521' scope=memory;
EOF
  check_dbstatus
  RET=$?
  ;;
'stop')
  $ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
shutdown immediate
EOF
  NUM=`ps -ef | grep -i smon_${ORACLE_SID} | grep -v grep | wc -l`
  if [ $NUM = 0 ]; then
    RET=0
  else
    RET=1
  fi
  ;;
'clean')
  $ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
shutdown abort
EOF
  ##for i in `ps -ef | grep -i mon_${ORACLE_SID} | awk '{print $2}'`; do kill -9 $i; done
  NUM=`ps -ef | grep -i smon_${ORACLE_SID} | grep -v grep | wc -l`
  if [ $NUM = 0 ]; then
    RET=0
  else
    RET=1
  fi
  ;;
'check')
  check_dbstatus
  RET=$?
  ;;
*)
  RET=0
  ;;
esac

if [ $RET -eq 0 ]; then
  exit 0
else
  exit 1
fi
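
Before registering the resource, you can dry-run the action script by exporting the "_CRS_" attributes that scriptagent would normally pass to it; a sketch for "test11" (values taken from the resource file below):

$ export _CRS_ORA_CRS_HOME=/u01/app/12.1.0/grid_1
$ export _CRS_ORACLE_HOME=/u01/app/orcl11/product/11.2.0/dbhome_1
$ export _CRS_ORACLE_SID=test11
$ export _CRS_SPFILE=+DATA/test11/spfiletest11.ora
$ ./single_instance_action_script.sh check; echo $?

A return code of 0 means the instance is OPEN.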

I have specified the "remote_listener" parameter to register the database with the SCAN listeners. Note that the
"remote_listener" parameter must be set explicitly when the listeners run on a port other than 1521.

Set execute permission:
$ chmod ug+x single_instance_action_script.sh
Copy the script to the other node:
$ scp single_instance_action_script.sh oracle@cluster2:/u01/app/12.1.0/grid_1/crs/public

Create resource file for single instance database "test11".


$ vi test11_inst_db.res
(add)
TYPE=single_instance_db
DESCRIPTION=test11 Oracle Database Resource
ACL=owner:orcl11:rwx,pgrp:oinstall:r-x,other::r--
ACTION_SCRIPT=/u01/app/12.1.0/grid_1/crs/public/single_instance_action_script.sh
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
PLACEMENT=restricted
ACTIVE_PLACEMENT=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
DEGREE=1
ENABLED=1
HOSTING_MEMBERS=cluster1 cluster2
LOGGING_LEVEL=1
RESTART_ATTEMPTS=1
START_DEPENDENCIES=hard(ora.DATA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DATA.dg,ora.FRA.dg)
START_TIMEOUT=0
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.FRA.dg)
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h
ORACLE_SID=test11
ORACLE_HOME=/u01/app/orcl11/product/11.2.0/dbhome_1
ORA_CRS_HOME=/u01/app/12.1.0/grid_1
SPFILE=+DATA/test11/spfiletest11.ora

Add new resource type for single instance database.


$ ./crsctl add type single_instance_db -basetype cluster_resource -attr "ATTRIBUTE=ORACLE_SID,TYPE=string,FLAGS=REQUIRED,ATTRIBUTE=ORACLE_HOME,TYPE=string,FLAGS=REQUIRED,ATTRIBUTE=SPFILE,TYPE=string,FLAGS=REQUIRED,ATTRIBUTE=ORA_CRS_HOME,TYPE=string,FLAGS=REQUIRED"
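
You can verify the new type and its attribute definitions; for example:

$ ./crsctl stat type single_instance_db -f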

Add resource for "test11" single instance database as "root" user.


# ./crsctl add resource test11.db -type single_instance_db -file /u01/app/12.1.0/grid_1/crs/public/test11_inst_db.res

# ./crsctl start res test11.db


CRS-2672: Attempting to start 'test11.db' on 'cluster1'
CRS-2676: Start of 'test11.db' on 'cluster1' succeeded

$ ./crsctl stat res -t


--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               OFFLINE OFFLINE      cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               OFFLINE OFFLINE      cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------

Database instance "test11" is registered with SCAN listener.

$ lsnrctl status LISTENER_SCAN2


LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-MAR-2014 21:04:09

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN2
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                30-MAR-2014 20:51:10
Uptime                    0 days 0 hr. 12 min. 58 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/cluster1/listener_scan2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.92)(PORT=1521)))
Services Summary...
Service "test11" has 1 instance(s).
  Instance "test11", status READY, has 1 handler(s) for this service...
Service "test11XDB" has 1 instance(s).
  Instance "test11", status READY, has 1 handler(s) for this service...
The command completed successfully

Tnsnames.ora entry for "test11" instance would be:


TEST11 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clus-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = test11)
    )
  )

Add another instance "test12" to the cluster

Create resource file for "test12" instance.


$ vi /u01/app/12.1.0/grid_1/crs/public/test12_inst_db.res
(add)
TYPE=single_instance_db
DESCRIPTION=test12 Oracle Database Resource
ACL=owner:orcl12:rwx,pgrp:oinstall:r-x,other::r--
ACTION_SCRIPT=/u01/app/12.1.0/grid_1/crs/public/single_instance_action_script.sh
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
PLACEMENT=restricted
ACTIVE_PLACEMENT=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
DEGREE=1
ENABLED=1
HOSTING_MEMBERS=cluster1 cluster2
LOGGING_LEVEL=1
RESTART_ATTEMPTS=1
START_DEPENDENCIES=hard(ora.DATA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DATA.dg,ora.FRA.dg)
START_TIMEOUT=0
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.FRA.dg)
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h
ORACLE_SID=test12
ORACLE_HOME=/u01/app/orcl12/product/12.1.0/dbhome_1
ORA_CRS_HOME=/u01/app/12.1.0/grid_1
SPFILE=+DATA/test12/spfiletest12.ora

As "root" user add resource for "test12" single instance database.


# ./crsctl add resource test12.db -type single_instance_db -file /u01/app/12.1.0/grid_1/crs/public/test12_inst_db.res
# ./crsctl start res test12.db
CRS-2672: Attempting to start 'test12.db' on 'cluster1'
CRS-2676: Start of 'test12.db' on 'cluster1' succeeded

Tnsnames.ora to connect to "test12" database.


TEST12 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clus-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = test12)
    )
  )

Tests.
# ./crsctl start res test12.db -n cluster1
CRS-2672: Attempting to start 'test12.db' on 'cluster1'
CRS-2676: Start of 'test12.db' on 'cluster1' succeeded
# ./crsctl start res test11.db -n cluster1
CRS-2672: Attempting to start 'test11.db' on 'cluster1'
CRS-2676: Start of 'test11.db' on 'cluster1' succeeded

Both databases are registered with SCAN listener.


$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 31-MAR-2014 10:50:32

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                31-MAR-2014 09:43:40
Uptime                    0 days 1 hr. 6 min. 51 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/cluster1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.91)(PORT=1521)))
Services Summary...
Service "test11" has 1 instance(s).
  Instance "test11", status READY, has 1 handler(s) for this service...
Service "test11XDB" has 1 instance(s).
  Instance "test11", status READY, has 1 handler(s) for this service...
Service "test12" has 1 instance(s).
  Instance "test12", status READY, has 1 handler(s) for this service...
Service "test12XDB" has 1 instance(s).
  Instance "test12", status READY, has 1 handler(s) for this service...
The command completed successfully
$ tnsping test11
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = clus-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = test11)))
OK (30 msec)
$ sqlplus system@test11
SQL>

$ tnsping test12
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = clus-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = test12)))
OK (10 msec)
$ sqlplus system@test12
SQL>

Relocate database resources to second node.


# ./crsctl relocate res test11.db
CRS-2673: Attempting to stop 'test11.db' on 'cluster1'
CRS-2677: Stop of 'test11.db' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test11.db' on 'cluster2'
CRS-2676: Start of 'test11.db' on 'cluster2' succeeded
# ./crsctl relocate res test12.db
CRS-2673: Attempting to stop 'test12.db' on 'cluster1'
CRS-2677: Stop of 'test12.db' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test12.db' on 'cluster2'
CRS-2676: Start of 'test12.db' on 'cluster2' succeeded
# ./crsctl stat res -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------

$ tnsping test11
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = clus-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = test11)))
OK (0 msec)

$ sqlplus system/oracle@test11
SQL>

$ tnsping test12
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = clus-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = test12)))
OK (0 msec)
$ sqlplus system@test12
SQL>

REFERENCES
Special thanks to the Oracle bloggers who generously share their knowledge with the Oracle community.
In this document I've used information from the posts below.
Sergei Romanenko
http://www.lab128.com/rac12_installation_using_vb/article_text.html

Mathijs Bruggink
https://mathijsbruggink.wordpress.com/2013/07/04/thoughts-on-adding-a-single-instance-to-oracle-clusterware-grid-infra/

Krzysztof Lewandowski
http://dbaspot.wordpress.com/2013/06/09/action-script-for-clusterware-11g-activepassive-failover-configuration/
