I. DISCLAIMER
Before I start I want to note that this configuration was created and tested only in a test environment. Please
do not blindly configure your production environment using this document. I don't know whether it will work
or what problems you could experience. This configuration is not supported by Oracle, and I accept no
responsibility for any loss or damage arising from the use of this document.
II. INTRODUCTION
Oracle offers several solutions to ensure high availability of the database, such as RAC, RAC One Node or
Data Guard. These solutions are very popular, stable and robust, but not free. On the other hand, Oracle
Clusterware can be used to protect any resource free of charge if you have valid Oracle Unbreakable
Linux support and at least one of the servers in the cluster is licensed for Oracle Database (SE or EE).
Oracle Clusterware 12cR1, part of the 12cR1 Grid Infrastructure, is a powerful and stable cluster manager
usually used in combination with RAC.
My goal is to use this powerful software to create an Active/Passive failover cluster that protects one or more
single instance databases. To accomplish this I will use custom scripts to manage the resources.
If you decide to run a similar configuration in a production environment, you will have Oracle support for the
operating system, the Grid Infrastructure software and the RDBMS, but you won't have support for the custom
scripts used to manage the resources.
On both nodes, create the directory structure where the Oracle software will be installed:
# mkdir -p /u01/app/12.1.0/grid_1
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
cluster1-priv.localdomain   cluster1-priv
cluster2-priv.localdomain   cluster2-priv
# Public
192.168.56.71   cluster1.localdomain       cluster1
192.168.56.72   cluster2.localdomain       cluster2
# Virtual
192.168.56.81   cluster1-vip.localdomain   cluster1-vip
192.168.56.82   cluster2-vip.localdomain   cluster2-vip
# SCAN
192.168.56.91   clus-scan.localdomain      clus-scan
192.168.56.92   clus-scan.localdomain      clus-scan
192.168.56.93   clus-scan.localdomain      clus-scan
Oracle strongly recommends that you don't configure SCAN VIP addresses in the /etc/hosts file; in
production use round-robin DNS. As this is a test environment and I don't have a DNS server running, I won't
follow Oracle's recommendation.
Install the operating system package "cvuqdisk". Without "cvuqdisk" the Cluster Verification Utility cannot
discover shared disks.
On both nodes:
# cd /media/sf_SharedFolderVM/grid/rpm
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
# rpm -Uvh cvuqdisk*
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
Fixed size.
Create another shareable disk, which will be used for the FRA ASM disk group.
/dev/sda2
/dev/sdb
/dev/sdc
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x713f1a5d.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1044, default 1044):
Using default value 1044
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xa5594e81.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Reboot second node and on both nodes check if disks are visible to ASM.
[root@cluster1 ~]# oracleasm listdisks
DATADISK1
FRADISK1
[root@cluster2 ~]# oracleasm listdisks
DATADISK1
FRADISK1
Install and Configure Oracle Grid Infrastructure for a Cluster -> Next
Enter Cluster Name, SCAN Name, SCAN Port and uncheck Configure GNS -> Next
Click Yes.
Choose "Automatically run configuration scripts" and enter password for "root" user -> Next
Check summary, save response file if you want and then click "Install".
Progress...
It is expected for Oracle Cluster Verification Utility to fail due to low Host RAM and DATADISK1 group
mismatch.
Close
Create FRA disk group using ASM configuration assistant as "oracle" user.
Click Create.
Specify Disk Group Name, select External (None) redundancy and select the disk -> OK.
Exit
Add to "/etc/security/limits.conf":
orcl12   soft   nproc    2047
orcl12   hard   nproc    16384
orcl12   soft   nofile   1024
orcl12   hard   nofile   65536
Create directories.
# mkdir -p /u01/app/orcl12
# chown -R orcl12:oinstall /u01/app/orcl12
# chmod -R 775 /u01/app/orcl12
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User
export
export
export
export
export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export NLS_DATE_FORMAT="DD.MM.YYYY HH24:MI:SS"
Progress.
Finish
I will use DBCA to create the database on the first node only, as this will be a single instance database (not RAC).
Choose "Oracle Single Instance Database" and "Custom Database" -> Next
Enter password and ignore warning for weak passwords -> Next
We will create the listener later, so leave these checkboxes unchecked -> Next
Progress
Finished
I've created the SCAN addresses because the SCAN concept is an essential part of the Oracle Grid Infrastructure
installation; OUI will not let you continue with the installation until you have provided a suitable SCAN
resolution.
As I don't need SCAN for this configuration, I will remove the SCAN listeners and IP addresses.
This step isn't essential for the configuration to work, and if you want you can leave the SCAN listeners and IPs in place.
I have three SCAN listeners and three SCAN IP addresses which I want to remove.
Stop SCAN
# ./srvctl stop scan -f
Check
[root@cluster1 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        OFFLINE OFFLINE      cluster2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        OFFLINE OFFLINE      cluster1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        OFFLINE OFFLINE      cluster1                 STABLE
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        OFFLINE OFFLINE      cluster2                 STABLE
ora.scan2.vip
      1        OFFLINE OFFLINE      cluster1                 STABLE
ora.scan3.vip
      1        OFFLINE OFFLINE      cluster1                 STABLE
--------------------------------------------------------------------------------
Now we can remove SCAN addresses as "root" user from "/etc/hosts" on both nodes.
# vi /etc/hosts
(remove)
# SCAN
192.168.56.91   clus-scan.localdomain   clus-scan
192.168.56.92   clus-scan.localdomain   clus-scan
192.168.56.93   clus-scan.localdomain   clus-scan
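The SCAN listener and SCAN VIP cluster resources themselves are removed with "srvctl". The exact commands are not shown in the extracted text above, so the following is only a sketch of the standard srvctl syntax, run as "root" from the Grid home "bin" directory after the SCAN has been stopped:

```shell
# Sketch (assumed syntax): remove the three SCAN listener resources,
# then the three SCAN VIP resources.
./srvctl remove scan_listener -f
./srvctl remove scan -f
```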
192.168.56.51   test12-vip   # line added
1) Run this on one node only as "root" to create a new CRS resource for the VIP.
# ./appvipcfg create -network=1 -ip=192.168.56.51 -vipname=test12.vip -user=root
Production Copyright 2007, 2008, Oracle. All rights reserved
2014-03-28 11:07:20: Creating Resource Type
Executing /u01/app/12.1.0/grid_1/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /u01/app/12.1.0/grid_1/crs/template/appvip.type
2014-03-28 11:07:20: Executing cmd: /u01/app/12.1.0/grid_1/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /u01/app/12.1.0/grid_1/crs/template/appvip.type
2014-03-28 11:07:20: Create the Resource
2014-03-28 11:07:20: Executing /u01/app/12.1.0/grid_1/bin/crsctl add resource test12.vip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.56.51,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x',HOSTING_MEMBERS=cluster1.localdomain,APPSVIP_FAILBACK="
2014-03-28 11:07:20: Executing cmd: /u01/app/12.1.0/grid_1/bin/crsctl add resource test12.vip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.56.51,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x',HOSTING_MEMBERS=cluster1.localdomain,APPSVIP_FAILBACK="
2) Create a TNS alias for the new VIP in "tnsnames.ora" as "orcl12" user and copy the file to the second node.
$ vi tnsnames.ora
TEST12 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = test12-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = test12)
)
)
LISTENER_TEST12 = (ADDRESS = (PROTOCOL = TCP)(HOST = test12-vip)(PORT = 1521))
$ scp tnsnames.ora
orcl12@cluster2:/u01/app/orcl12/product/12.1.0/dbhome_1/network/admin/tnsnames.ora
3) Create "act_listener.pl" script into $GRID_HOME/crs/public directory on both nodes as "oracle" user.
$ cd /u01/app/12.1.0/grid_1/crs/public/
$ vi act_listener.pl
#!/usr/bin/perl
# NOTES
# Edit the perl installation directory as appropriate.
#
# Place this file in <CRS_HOME>/crs/public/
$ORACLE_HOME = "$ENV{_CRS_ORACLE_HOME}";
$ORA_LISTENER_NAME = "$ENV{_CRS_ORA_LISTENER_NAME}";
if ($#ARGV != 0 ) {
print "usage: start stop check required \n";
exit;
}
$command = $ARGV[0];
# start listener
if ($command eq "start") {
system ("
ORACLE_HOME=$ORACLE_HOME
export ORACLE_HOME
ORA_LISTENER_NAME=$ORA_LISTENER_NAME
export ORA_LISTENER_NAME
#
export TNS_ADMIN=$ORACLE_HOME/network/admin # optionally set TNS_ADMIN here
$ORACLE_HOME/bin/lsnrctl start $ORA_LISTENER_NAME");
}
# stop listener
if ($command eq "stop") {
system ("
ORACLE_HOME=$ORACLE_HOME
export ORACLE_HOME
ORA_LISTENER_NAME=$ORA_LISTENER_NAME
export ORA_LISTENER_NAME
#
export TNS_ADMIN=$ORACLE_HOME/network/admin # optionally set TNS_ADMIN here
$ORACLE_HOME/bin/lsnrctl stop $ORA_LISTENER_NAME");
}
# check listener
if ($command eq "check") {
check_listener();
}
sub check_listener {
my($check_proc_listener,$process_listener) = @_;
$process_listener = "$ORACLE_HOME/bin/tnslsnr $ORA_LISTENER_NAME -inherit";
$check_proc_listener = qx(ps -ae -o args | grep -w "tnslsnr $ORA_LISTENER_NAME"
| grep -v grep | head -n 1 );
chomp($check_proc_listener);
if ($process_listener eq $check_proc_listener) {
exit 0;
} else {
exit 1;
}
}
# clean listener
if ($command eq "clean") {
my $kill_proc = qx(ps -aef | grep -w "tnslsnr $ORA_LISTENER_NAME" | grep -v grep | head -n 1 | awk '{print \$2}' | xargs kill -9);
exit 0;
}
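The "check" entry point above boils down to one pattern: the resource counts as ONLINE only if its exact command line appears in the process table. A minimal standalone sketch of the same pattern in shell, using a throwaway "sleep" process instead of a real listener (the process name is an assumption purely for demonstration):

```shell
#!/bin/sh
# is_running NAME: succeed (exit 0) only if a process whose command line
# matches NAME is in the process table, excluding the grep itself --
# the same logic act_listener.pl applies to "tnslsnr <listener name>".
is_running() {
    ps -ae -o args | grep -w "$1" | grep -v grep | head -n 1 | grep -q .
}

sleep 3 &                       # start a throwaway process to check for
if is_running "sleep 3"; then
    echo "ONLINE"
else
    echo "OFFLINE"
fi
```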
Test script
$ export _CRS_ORACLE_HOME=/u01/app/orcl12/product/12.1.0/dbhome_1
$ export _CRS_ORA_LISTENER_NAME=LISTENER_TEST12
$ perl act_listener.pl start
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 28-MAR-2014 12:32:00
Copyright (c) 1991, 2013, Oracle.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=test12-vip)(PORT=1521)))
The command completed successfully
Cluster-aware resource types are aware of the cluster environment and are subject to cardinality and
cross-server switchover and failover.
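The "custom_listener" resource type used in the next step must be created first; that step is missing from the extracted text. A hypothetical sketch, assuming the type is based on "cluster_resource" and exposes the two attributes that "act_listener.pl" reads through its _CRS_* environment variables (the ACTION_SCRIPT default path is also an assumption):

```shell
# Hypothetical sketch: create the custom_listener type as "oracle" user.
# ORACLE_HOME and ORA_LISTENER_NAME match the _CRS_ORACLE_HOME and
# _CRS_ORA_LISTENER_NAME variables the action script reads.
$ ./crsctl add type custom_listener -basetype cluster_resource \
  -attr "ATTRIBUTE=ACTION_SCRIPT,TYPE=string,DEFAULT_VALUE=/u01/app/12.1.0/grid_1/crs/public/act_listener.pl" \
  -attr "ATTRIBUTE=ORACLE_HOME,TYPE=string" \
  -attr "ATTRIBUTE=ORA_LISTENER_NAME,TYPE=string"
```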
5) Create new crs resource of custom_listener type for "LISTENER_TEST12" listener as "oracle" user.
$ ./crsctl add resource test12.listener -type custom_listener \
-attr "PLACEMENT=favored,HOSTING_MEMBERS='cluster1 cluster2',CHECK_INTERVAL=30,RESTART_ATTEMPTS=2, \
START_DEPENDENCIES=hard(test12.vip),STOP_DEPENDENCIES=hard(test12.vip), \
ORACLE_HOME=/u01/app/orcl12/product/12.1.0/dbhome_1,ORA_LISTENER_NAME=LISTENER_TEST12"
7) Tests
Status of new "test12.listener" resource.
$ ./crsctl stat res test12.listener
NAME=test12.listener
TYPE=custom_listener
TARGET=OFFLINE
STATE=OFFLINE
Start listener.
$ ./crsctl start res test12.listener
CRS-2672: Attempting to start 'test12.listener' on 'cluster1'
CRS-2676: Start of 'test12.listener' on 'cluster1' succeeded
Relocate listener.
$ ./crsctl relocate res test12.listener -f
CRS-2673: Attempting to stop 'test12.listener' on 'cluster1'
CRS-2677: Stop of 'test12.listener' on 'cluster1' succeeded
CRS-2673: Attempting to stop 'test12.vip' on 'cluster1'
CRS-2677: Stop of 'test12.vip' on 'cluster1' succeeded
CRS-2672: Attempting to start 'test12.vip' on 'cluster2'
CRS-2676: Start of 'test12.vip' on 'cluster2' succeeded
CRS-2672: Attempting to start 'test12.listener' on 'cluster2'
CRS-2676: Start of 'test12.listener' on 'cluster2' succeeded
Clusterware status.
$ ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test12.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
ORA_CRS_HOME=$_CRS_ORA_CRS_HOME
ORACLE_HOME=$_CRS_ORACLE_HOME
ORACLE_SID=$_CRS_ORACLE_SID
SPFILE=$_CRS_SPFILE
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
check_dbstatus() {
OUTPUT=`$ORACLE_HOME/bin/sqlplus -s /nolog<<EOF
connect / as sysdba
set echo off define off heading off pagesize 0
SET LINESIZE 100
COLUMN l_output FORMAT A100
SELECT i.status || ' ' || pa.value || '/' || 'ora_' || p.spid || '.aud' AS l_output
FROM v\\$session s,
v\\$process p,
v\\$parameter pa,
v\\$instance i
WHERE pa.name = 'audit_file_dest'
AND s.paddr = p.addr
AND s.sid = (select sid from v\\$mystat where rownum=1) and s.audsid =
sys_context('userenv','sessionid');
exit
EOF`
DBSTATUS=$(echo $OUTPUT | awk '{print $1}')
AUDITFILE=$(echo $OUTPUT | awk '{print $2}')
rm -f $AUDITFILE 2>/dev/null
if [ "$DBSTATUS" == "OPEN" ]
then
return 0
else
return 1
fi
}
case $1 in
'start')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
startup
EOF
check_dbstatus
RET=$?
;;
'stop')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
shutdown immediate
EOF
NUM=`ps -ef | grep -i smon_${ORACLE_SID} | grep -v grep | wc -l`
if [ $NUM = 0 ]; then
RET=0
else
RET=1
fi
;;
'clean')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
connect / as sysdba
shutdown abort
EOF
## for i in `ps -ef | grep -i mon_${ORACLE_SID} | awk '{print $2}'`; do kill -9 $i; done
NUM=`ps -ef | grep -i smon_${ORACLE_SID} | grep -v grep | wc -l`
if [ $NUM = 0 ]; then
RET=0
else
RET=1
fi
;;
'check')
check_dbstatus
RET=$?
;;
*)
RET=0
;;
esac
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi
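Clusterware only looks at the exit code of an action script entry point: 0 means success (for "check", that the instance is OPEN), anything else means failure. A minimal standalone sketch of that contract, with a fake result line standing in for the real sqlplus output (the "OPEN ..." string and paths are assumptions):

```shell
#!/bin/sh
# Sketch of the action-script exit-code contract. The real script gets
# "<instance status> <audit file path>" on one line from sqlplus; here a
# hard-coded line stands in for it so the parsing logic can run anywhere.
check_dbstatus() {
    OUTPUT="OPEN /tmp/ora_1234.aud"              # assumption: fake sqlplus output
    DBSTATUS=$(echo "$OUTPUT" | awk '{print $1}')
    AUDITFILE=$(echo "$OUTPUT" | awk '{print $2}')
    rm -f "$AUDITFILE" 2>/dev/null               # same cleanup as the real script
    [ "$DBSTATUS" = "OPEN" ]                     # exit status 0 only when OPEN
}

db_action() {
    case "$1" in
        start|check) check_dbstatus ;;           # 0 = ONLINE, 1 = OFFLINE
        stop|clean)  return 0 ;;                 # sketch: always succeeds
        *)           return 0 ;;
    esac
}

db_action check && echo "check: database OPEN"
```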
3) Create new resource type for single instance database as "oracle" user
5) Create directories for trace files and audit dumps and copy parameter file to second node
$ mkdir -p /u01/app/orcl12/diag/rdbms/test12/test12/trace
$ mkdir -p /u01/app/orcl12/admin/test12/adump
$ scp /u01/app/orcl12/product/12.1.0/dbhome_1/dbs/inittest12.ora
orcl12@cluster2:$ORACLE_HOME/dbs
6) Tests
Status of single instance database resource.
# ./crsctl stat res test12.db
NAME=test12.db
TYPE=single_instance_db
TARGET=OFFLINE
STATE=OFFLINE
Virtual IP, listener and database instance are started on "cluster1" node.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  OFFLINE      cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
Oracle Clusterware relocated the virtual IP, listener and database instance to the second node.
Oracle Clusterware relocated the virtual IP address, listener and database instance back to the "cluster1" node.
Add to "/etc/security/limits.conf":
orcl11   soft   nproc    2047
orcl11   hard   nproc    16384
orcl11   soft   nofile   1024
orcl11   hard   nofile   65536
Add directories:
# mkdir -p /u01/app/orcl11
# chown -R orcl11:oinstall /u01/app/orcl11/
# chmod -R 775 /u01/app/orcl11
export ORACLE_SID=test11
export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export NLS_DATE_FORMAT="DD.MM.YYYY HH24:MI:SS"
For Installation Option select "Install Database Software Only" -> Next
Click SSH Connectivity, enter password for OS user "orcl11" and "Setup" connectivity
Progress
Select ASM for database files and "+DATA" disk group for database files -> Next
Select "+FRA" ASM disk group for fast recovery files -> Next
Uncheck Components
Under "Standard Database Components" uncheck Oracle JVM and Oracle APEX -> OK -> Next
Select "Generate Database Creation Scripts" and enter location to store scripts -> Finish
Summary -> OK
Edit the "postDBCreation.sql" script and remove the lines that register the database with Clusterware. Later we will
create our own resources for the VIP, listener and database instance.
$ cd /u01/app/orcl11/admin/test11/scripts
$ vi postDBCreation.sql
(delete)
host /u01/app/orcl11/product/11.2.0/dbhome_1/bin/srvctl enable database -d test11;
host /u01/app/orcl11/product/11.2.0/dbhome_1/bin/srvctl start database -d test11;
Create trace and audit directories and copy parameter file to another node:
$ mkdir -p /u01/app/orcl11/diag/rdbms/test11/test11/trace
$ mkdir -p /u01/app/orcl11/admin/test11/adump
$ scp inittest11.ora
orcl11@cluster2:/u01/app/orcl11/product/11.2.0/dbhome_1/dbs
$ vi tnsnames.ora
TEST11 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = test11-vip)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = test11)
)
)
LISTENER_TEST11 = (ADDRESS = (PROTOCOL = TCP)(HOST = test11-vip)(PORT = 1522))
192.168.56.52   test11-vip   # line added
# ping 192.168.56.52
PING 192.168.56.52 (192.168.56.52) 56(84) bytes of data.
64 bytes from 192.168.56.52: icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from 192.168.56.52: icmp_seq=2 ttl=64 time=0.026 ms
^C
--- 192.168.56.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1353ms
rtt min/avg/max/mdev = 0.023/0.024/0.026/0.005 ms
Power Off
After a few seconds the cluster will notice that the second node is missing. All resources will be failed over to
the "cluster1" node.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  INTERMEDIATE cluster1                 FAILED OVER,STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
When Oracle Clusterware realizes that the "cluster2" node is up again, it will add the node back to the cluster.
[cssd(2718)]CRS-1601:CSSD Reconfiguration complete. Active nodes are cluster1 cluster2.
2014-03-30 09:09:11.293:
[crsd(2876)]CRS-2772:Server 'cluster2' has been assigned to pool 'Free'.
Check status.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
After Oracle Clusterware realized that there was a problem with the interconnect, it removed the "cluster2" node
from the cluster and failed all resources over to the "cluster1" node.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  INTERMEDIATE cluster1                 FAILED OVER,STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
Restart CRS stack on "cluster2" node and check status of the cluster.
# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed
resources on 'cluster2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'cluster2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'cluster2'
CRS-2677: Stop of 'ora.mdnsd' on 'cluster2' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'cluster2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'cluster2'
CRS-2673: Attempting to stop 'ora.gipcd' on 'cluster2'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'cluster2'
CRS-2673: Attempting to stop 'ora.evmd' on 'cluster2'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'cluster2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'cluster2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'cluster2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cluster2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on
'cluster2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@cluster2 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
2014-03-30 09:52:46.652:
[crsd(2914)]CRS-2878:Failed to restart resource 'ora.net1.network'
2014-03-30 09:52:46.998:
[crsd(2914)]CRS-2769:Unable to failover resource 'ora.net1.network'.
2014-03-30 09:52:47.395:
[crsd(2914)]CRS-2771:Maximum restart attempts reached for resource
'test11.vip'; will not restart.
2014-03-30 09:52:47.444:
[crsd(2914)]CRS-2771:Maximum restart attempts reached for resource
'ora.cluster1.vip'; will not restart.
2014-03-30 09:52:47.486:
[crsd(2914)]CRS-2771:Maximum restart attempts reached for resource
'test12.vip'; will not restart.
2014-03-30 09:52:47.637:
[crsd(2914)]CRS-2769:Unable to failover resource 'ora.net1.network'.
Oracle Clusterware realized that something was wrong with the public network interface on the "cluster1" node
and relocated all resources to the "cluster2" node.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  OFFLINE      cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  OFFLINE      cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  OFFLINE      cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  INTERMEDIATE cluster2                 FAILED OVER,STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
What will happen if I now kill the public and private network interfaces on node "cluster1"?
Adapter1 - Public network
Oracle Clusterware will stop all resources on "cluster1" and wait for DBA intervention.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER.lsnr
               ONLINE  OFFLINE      cluster1                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
ora.net1.network
               ONLINE  OFFLINE      cluster1                 STABLE
ora.ons
               ONLINE  OFFLINE      cluster1                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  OFFLINE                               STABLE
ora.cluster2.vip
      1        ONLINE  OFFLINE                               STABLE
ora.cvu
      1        ONLINE  OFFLINE                               STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  OFFLINE                               STABLE
test11.listener
      1        ONLINE  OFFLINE                               STABLE
test11.vip
      1        ONLINE  OFFLINE                               STABLE
test12.db
      1        ONLINE  OFFLINE                               STABLE
test12.listener
      1        ONLINE  OFFLINE                               STABLE
test12.vip
      1        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------
Shut down the "cluster1" and "cluster2" nodes to attach the shareable storage to the "cluster2" node again.
Use VBoxManage to attach storage.
C:\>cd "c:\Program Files\Oracle\VirtualBox"
C:\>VBoxManage.exe storageattach cluster2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium D:\VirtualneMasine\ClusterSharedDisks\DataDisk1.vdi --mtype shareable
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.listener
      1        ONLINE  ONLINE       cluster1                 STABLE
test12.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
# SCAN (lines added)
192.168.56.91   clus-scan.localdomain   clus-scan
192.168.56.92   clus-scan.localdomain   clus-scan
192.168.56.93   clus-scan.localdomain   clus-scan
As my OCR file is located on ASM, I must mount the ASM disks to restore the OCR.
On the "cluster1" node, start the clusterware with the "-excl" flag to run it in exclusive mode.
# ./crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'cluster1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'cluster1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'cluster1'
CRS-2676: Start of 'ora.evmd' on 'cluster1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'cluster1'
CRS-2676: Start of 'ora.gpnpd' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cluster1'
CRS-2672: Attempting to start 'ora.gipcd' on 'cluster1'
CRS-2676: Start of 'ora.cssdmonitor' on 'cluster1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cluster1'
CRS-2672: Attempting to start 'ora.diskmon' on 'cluster1'
CRS-2676: Start of 'ora.diskmon' on 'cluster1' succeeded
CRS-2676: Start of 'ora.cssd' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cluster1'
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'cluster1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'cluster1'
CRS-2676: Start of 'ora.drivers.acfs' on 'cluster1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'cluster1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cluster1'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cluster1'
CRS-2676: Start of 'ora.asm' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cluster1'
CRS-2676: Start of 'ora.storage' on 'cluster1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cluster1'
CRS-2676: Start of 'ora.crsd' on 'cluster1' succeeded
/u01/app/12.1.0/grid_1/cdata/clus-
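With Clusterware running in exclusive mode and the ASM disk group mounted, the restore itself can be sketched as below. This is a sketch, not a transcript of my session: the backup file name is an assumption based on the default automatic OCR backup location under $GRID_HOME/cdata/&lt;cluster_name&gt;.

```shell
# Run as root from the Grid Infrastructure bin directory.
./ocrconfig -showbackup        # list the automatic OCR backups known to the cluster
# Restore from a chosen backup file (file name below is hypothetical)
./ocrconfig -restore /u01/app/12.1.0/grid_1/cdata/clus/backup00.ocr
./ocrcheck                     # verify OCR integrity after the restore
```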
Stop clusterware.
# ./crsctl stop crs
This is the initial status of my Clusterware resources (with SCAN), before any custom resources have been added.
# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               OFFLINE OFFLINE      cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
ORA_CRS_HOME=$_CRS_ORA_CRS_HOME
ORACLE_HOME=$_CRS_ORACLE_HOME
ORACLE_SID=$_CRS_ORACLE_SID
SPFILE=$_CRS_SPFILE
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
# Name of the node this script runs on, and the VIP address bound to it
NODE_NAME=$(${ORA_CRS_HOME}/bin/olsnodes -l)
VIP_IP=$(${ORA_CRS_HOME}/bin/srvctl config vip -n $NODE_NAME | grep IPv4 | awk '{print $4}')

# Returns 0 if the instance is OPEN, 1 otherwise.
# Also removes the audit file the SYSDBA check connection creates.
check_dbstatus() {
OUTPUT=`$ORACLE_HOME/bin/sqlplus -s /nolog<<EOF
connect / as sysdba
set echo off define off heading off pagesize 0
SET LINESIZE 100
COLUMN l_output FORMAT A100
SELECT i.status || ' ' || pa.value || '/' || 'ora_' || p.spid || '.aud' AS l_output
  FROM v\\$session s,
       v\\$process p,
       v\\$parameter pa,
       v\\$instance i
 WHERE pa.name = 'audit_file_dest'
   AND s.paddr = p.addr
   AND s.sid = (select sid from v\\$mystat where rownum = 1)
   AND s.audsid = sys_context('userenv','sessionid');
exit
EOF`
DBSTATUS=$(echo $OUTPUT | awk '{print $1}')
AUDITFILE=$(echo $OUTPUT | awk '{print $2}')
rm -f $AUDITFILE 2>/dev/null
if [ "$DBSTATUS" == "OPEN" ]
then
  return 0
else
  return 1
fi
}

case $1 in
'start')
echo "spfile='${SPFILE}'" > /tmp/init${ORACLE_SID}.ora
I have specified the "remote_listener" parameter to register the database with the SCAN listeners. You must set the "remote_listener" parameter explicitly for ports other than 1521.
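Setting the parameter can be sketched as below. The SCAN name clus-scan and port 1521 match this document's environment; adjust them to yours.

```shell
# Point the instance at the SCAN listeners so it cross-registers its services.
$ORACLE_HOME/bin/sqlplus -s / as sysdba <<EOF
ALTER SYSTEM SET remote_listener='clus-scan:1521' SCOPE=BOTH;
SHOW PARAMETER remote_listener
EOF
```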
Grant execute permission:
$ chmod ug+x single_instance_action_script.sh
Copy the script to the other node:
$ scp single_instance_action_script.sh oracle@cluster2:/u01/app/12.1.0/grid_1/crs/public
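With the action script in place on both nodes, registering a database resource against it looks roughly like this. This is a hedged sketch of typical `crsctl add resource` usage, not necessarily the exact command used for the resources shown later; the CHECK_INTERVAL and RESTART_ATTEMPTS values are illustrative assumptions.

```shell
# Register test11.db as a cluster resource driven by the custom action script.
./crsctl add resource test11.db -type cluster_resource \
  -attr "ACTION_SCRIPT=/u01/app/12.1.0/grid_1/crs/public/single_instance_action_script.sh,CHECK_INTERVAL=30,RESTART_ATTEMPTS=2"
```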
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.db
      1        ONLINE  ONLINE       cluster1                 STABLE
--------------------------------------------------------------------------------
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN2
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                30-MAR-2014 20:51:10
Uptime                    0 days 0 hr. 12 min. 58 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/cluster1/listener_scan2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.92)(PORT=1521)))
Services Summary...
Service "test11" has 1 instance(s).
  Instance "test11", status READY, has 1 handler(s) for this service...
Service "test11XDB" has 1 instance(s).
  Instance "test11", status READY, has 1 handler(s) for this service...
The command completed successfully
Tests.
# ./crsctl start res test12.db -n cluster1
CRS-2672: Attempting to start 'test12.db' on 'cluster1'
CRS-2676: Start of 'test12.db' on 'cluster1' succeeded
# ./crsctl start res test11.db -n cluster1
CRS-2672: Attempting to start 'test11.db' on 'cluster1'
CRS-2676: Start of 'test11.db' on 'cluster1' succeeded
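Beyond a plain start, the failover behaviour can be exercised by moving a resource to the other node. A sketch using the documented `crsctl relocate resource` command (the -f flag also relocates dependent resources such as the VIP and listener):

```shell
# Move the database resource (and its dependents) from cluster1 to cluster2.
./crsctl relocate resource test11.db -n cluster2 -f
# Confirm where it is now running.
./crsctl stat res test11.db -t
```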
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                31-MAR-2014 09:43:40
Uptime                    0 days 1 hr. 6 min. 51 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/cluster1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.91)(PORT=1521)))
Services Summary...
Service "test11" has 1 instance(s).
  Instance "test11", status READY, has 1 handler(s) for this service...
Service "test11XDB" has 1 instance(s).
  Instance "test11", status READY, has 1 handler(s) for this service...
Service "test12" has 1 instance(s).
  Instance "test12", status READY, has 1 handler(s) for this service...
Service "test12XDB" has 1 instance(s).
  Instance "test12", status READY, has 1 handler(s) for this service...
The command completed successfully
$ tnsping test11
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =
clus-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED)
(SERVICE_NAME = test11)))
OK (30 msec)
$ sqlplus system@test11
SQL>
$ tnsping test12
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =
clus-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED)
(SERVICE_NAME = test12)))
OK (10 msec)
$ sqlplus system@test12
SQL>
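To confirm which node is actually serving the connection, the instance itself can be asked. A sketch (credentials match the system/oracle pair used elsewhere in this document; host_name should report the node currently hosting the resource):

```shell
# Ask the instance where it is running.
$ORACLE_HOME/bin/sqlplus -s system/oracle@test11 <<EOF
SET HEADING OFF
SELECT instance_name || ' on ' || host_name FROM v\$instance;
EOF
```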
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.asm
               ONLINE  ONLINE       cluster1                 Started,STABLE
               ONLINE  ONLINE       cluster2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
ora.ons
               ONLINE  ONLINE       cluster1                 STABLE
               ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster1.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.cluster2.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cluster2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       cluster1                 STABLE
test11.db
      1        ONLINE  ONLINE       cluster2                 STABLE
test12.db
      1        ONLINE  ONLINE       cluster2                 STABLE
--------------------------------------------------------------------------------
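Checking long status listings like the one above by eye gets tedious. The non-tabular output of `crsctl stat res` (NAME=/STATE= lines, without -t) is easier to script against. A minimal sketch; the sample text stands in for live crsctl output so the parsing logic is reproducible without a cluster:

```shell
# Parse NAME=/STATE= pairs as produced by `crsctl stat res <name>` without -t.
sample='NAME=test11.db
STATE=ONLINE on cluster2
NAME=test12.db
STATE=ONLINE on cluster2'
echo "$sample" | awk -F= '/^NAME=/ {name=$2} /^STATE=/ {print name": "$2}'
# prints one "resource: state" line per resource
```

On a live cluster the pipeline would read from `crsctl stat res` directly instead of the sample variable.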
$ tnsping test11
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =
clus-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED)
(SERVICE_NAME = test11)))
OK (0 msec)
$ sqlplus system/oracle@test11
SQL>
$ tnsping test12
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =
clus-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED)
(SERVICE_NAME = test12)))
OK (0 msec)
$ sqlplus system@test12
SQL>
REFERENCES
Special thanks to the Oracle bloggers who generously share their knowledge with the Oracle community.
In this document I've used information from the posts below.
Sergei Romanenko
http://www.lab128.com/rac12_installation_using_vb/article_text.html
Mathijs Bruggink
https://mathijsbruggink.wordpress.com/2013/07/04/thoughts-on-adding-a-single-instance-to-oracle-clusterware-grid-infra/
Krzysztof Lewandowski
http://dbaspot.wordpress.com/2013/06/09/action-script-for-clusterware-11g-active-passive-failover-configuration/