
Installing Oracle Database 11g Release 2 with Oracle RAC using multipath to configure ASM disks


Betty Lee
IBM Oracle International Competency Center
October 2011

© Copyright IBM Corporation, 2011. All Rights Reserved.


All trademarks or registered trademarks mentioned herein are the property of their respective holders.
Table of Contents
Abstract........................................................................................................................................1
Prerequisites ...............................................................................................................................1
Introduction .................................................................................................................................1
Oracle Database 11g Release 2 (11.2.0.2) new features ....................................................................... 1
About Oracle Real Application Clusters 11g Release 2 (11.2.0.2).......................................................... 1
About IBM BladeCenter ........................................................................................................................... 2
About IBM Storwize V7000...................................................................................................................... 2
Hardware requirements ..............................................................................................................3
Oracle Real Application Clusters requirements....................................................................................... 3
Server CPU ....................................................................................................................... 4
Server memory .................................................................................................................. 4
Network ............................................................................................................................. 4
Shared storage .................................................................................................................. 5
High availability considerations................................................................................................................ 6
Software requirements ...............................................................................................................6
Operating system..................................................................................................................................... 6
Oracle Database 11g Release 2 (11.2.0.2) ............................................................................................. 7
Automatic Storage Management ............................................................................................................. 7
Configuring the system environment .......................................................................................7
BIOS......................................................................................................................................................... 7
Remote system management.................................................................................................................. 7
Installing Linux operating systems........................................................................................................... 8
Configuring multipath devices and udev.................................................................................................. 9
Installing Oracle Grid Infrastructure 11.2.0.2 .........................................................................14
Pre-Installation tasks.............................................................................................................................. 14
Configuring kernel parameters ........................................................................................ 14
Creating users and groups .............................................................................................. 14
Setting shell limits for the grid and oracle software owner .............................................. 15
Setting the time on Cluster Nodes................................................................................... 16
Setting up network files ................................................................................................... 16
Configuring SSH on all cluster nodes.............................................................................. 17
Configuring ASM Disks ................................................................................................... 17
Running Cluster Verification Utility (CVU)....................................................................... 17
Performing Oracle Clusterware installation and Automatic Storage Management installation ............. 20
Performing post-installation tasks ..........................................................................................32
Installing Oracle Database 11g Release 2 (11.2.0.2) ..............................................................33
Pre-Installation tasks.............................................................................................................................. 33
Running Cluster Verification Utility .................................................................................. 33
Preparing Oracle home and its path................................................................................ 35
Performing database installation ........................................................................................................... 36
Post-installation tasks ............................................................................................................................ 49

Summary....................................................................................................................................50
References.................................................................................................................................51
Oracle documentation............................................................................................................................ 51
IBM documentation ................................................................................................................................ 51
IBM and Oracle Web sites ..................................................................................................................... 51
About the author .......................................................................................................................52
Appendix A: Sample configuration..........................................................................53
BladeCenter and V7000 ........................................................................................................................ 53
Appendix B: Oracle ASM Configuration Assistant (ASMCA)................................................54
Creating Oracle ASM Disk Groups ........................................................................................................ 54
Appendix C: Oracle Enterprise Manager 11g .........................................................................58
Appendix D: Adding and removing storage disks to and from ASM instances ..................59
Adding storage disks to ASM instances ................................................................................................ 64
Logging into ASM instances............................................................................................ 64
Query ASM instances...................................................................................................... 65
Adding disks to the appropriate disk groups ................................................................... 65
Removing storage disks from ASM instances ....................................................................................... 67
Logging into ASM instances............................................................................................ 67
Query ASM instances...................................................................................................... 67
Removing disks from the appropriate disk groups .......................................................... 68
Appendix E: Adding cluster nodes to Oracle Real Application Clusters ............................70
Appendix F: Deleting cluster nodes from Oracle Real Application Clusters.......................80
Trademarks and special notices..............................................................................................82

Abstract
The purpose of this paper is to assist those who are looking to implement Oracle Database 11g
Release 2 with Oracle Real Application Clusters (RAC) on Red Hat Enterprise Linux 5 (RHEL5)
running on IBM BladeCenter® servers and IBM System Storage™ products using a multipath
configuration instead of ASMLib. The information provided herein is based on experiences with
test environments at the IBM Oracle International Competency Center and on available
documentation from IBM, Oracle, and Red Hat.

Prerequisites
• Good knowledge of the Oracle Database installation process
• Knowledge of the Linux® operating system

Introduction
This paper will discuss the necessary steps to prepare for and successfully install Red Hat Enterprise
Linux 5 64-bit and Oracle Database 11g Release 2 Enterprise Edition with Oracle Real Application
Clusters 64-bit on IBM BladeCenter servers and IBM System Storage disks. The operating system
environment used for this exercise is the 2.6 kernel-based Red Hat Enterprise Linux 5 (RHEL5).
An implementation of Oracle Real Application Clusters 11g Release 2 consists of three main steps:
• Planning the hardware for Oracle Real Application Clusters implementation
• Configuring the servers and storage disk systems
• Installing and configuring the Oracle Clusterware and Oracle RAC database

NOTE: Based on the information in the My Oracle Support Note titled “Oracle ASMLIB - Support for Red
Hat Enterprise Linux 5 AS on IBM zSeries” [ID 821652.1], ASMLib will not be available in future Red Hat
releases. This paper documents the option of using udev rules and multipathing to configure ASM
disk permissions and device persistence.

Oracle Database 11g Release 2 (11.2.0.2) new features


There are many new features found in Oracle Database 11g Release 2 (11.2.0.2). They can be found in
Oracle Database New Features Guide 11g Release 2 (11.2).
For more information on Oracle Real Application Clusters 11g Release 2 (11.2.0.2) new features, please
refer to Oracle Real Application Clusters Installation Guide 11.2 for Linux and UNIX.

About Oracle Real Application Clusters 11g Release 2 (11.2.0.2)


Oracle Real Application Clusters (RAC) is an option of Oracle Database that allows a database to be
installed across multiple servers. According to Oracle, RAC uses the shared disk method of clustering
databases. Oracle processes running in each node access the same data residing on shared data disk

storage. First introduced with Oracle Database 9i, RAC provides high availability and flexible scalability.
If one of the clustered nodes fails, Oracle continues processing on the other nodes. If additional capacity
is needed, nodes can be added without taking down the cluster.
In Oracle Database 11g Release 2, Oracle provides Oracle Clusterware, which is designed specifically for
Oracle RAC. You do not need a third-party clusterware product to implement Oracle RAC. Since
storage is shared, the file system and volume management must be cluster-aware.
Starting with Oracle Database 11g Release 2, Oracle Clusterware files can be stored in Oracle ASM.
Oracle Clusterware and Oracle ASM are installed into a single home directory called grid home.

In Oracle Real Application Clusters 11.2.0.2, there is a new installation option called Oracle Real
Application Cluster One Node (Oracle RAC One Node). Oracle RAC One Node has only one instance of
RAC running on one node in a cluster. It is supported on all platforms where Oracle RAC is certified to
run.
For further information on Oracle RAC One Node, please refer to Oracle Real Application Clusters
Installation Guide 11.2 for Linux and UNIX.

About IBM BladeCenter


The unique IBM BladeCenter design addresses today’s customers’ most serious issues: space
constraints, efficient manageability, resiliency, and the physical environment, which includes cooling and
power. IBM BladeCenter servers take less time to install, require fewer resources to manage and
maintain, and cost less than traditional multi-server solutions. These blade servers are so compact and
easy to use that customers can increase system capacity by simply sliding an additional blade into the
integrated chassis; IBM Director can then auto-configure it, making it ready to use. Since the blades
share a common, integrated infrastructure for basic components such as power, system ports and fans,
power consumption and system complexity are reduced.
For more information about the IBM BladeCenter platform, please refer to the following web site:
http://www-03.ibm.com/systems/bladecenter/intel-based.html

About IBM Storwize V7000


The IBM Storwize V7000 Midrange Disk system is designed to provide fast, reliable and efficient
networked storage. It is easy to deploy and flexible for use with IBM System x® and BladeCenter
servers.
The V7000 delivers sophisticated enterprise-class storage function for small and midsize businesses. It
not only supports SSDs and a wide range of disk drives, it also adopts a new technology called Easy Tier,
which allows you to deploy SSDs confidently, effectively and economically by automatically and
dynamically moving only the appropriate data to the SSDs in the system, based on ongoing performance
monitoring. In essence, volumes in an SSD or HDD managed disk group are monitored and can be
managed automatically or manually by moving hot extents to SSD and cold extents to HDD.

Figure 1: IBM Storwize V7000

As shown in Figure 1, the IBM Storwize V7000 has a 2U rack-mount chassis with 24 2.5” drive bays
(model x24) or 12 3.5” drive bays (model x12). It is scalable up to 24 TB of physical storage per enclosure
using 2 TB near-line SAS disk drive modules, or up to 14 TB of physical storage per enclosure using
600 GB SAS disk drive modules.
For further information on IBM Storwize V7000, please refer to the following web site:
http://www-03.ibm.com/systems/storage/disk/storwize_v7000/index.html

Hardware requirements
Oracle Real Application Clusters requirements
An Oracle Real Application Clusters database environment consists of the following components:
1. Cluster nodes - 2 to n nodes or hosts, running Oracle Database server(s)
2. Network interconnect - a private network used for cluster communications and cache fusion
3. Shared storage - used to hold database system and data files and accessed by the cluster nodes
4. Production network - used by clients and application servers to access the database

Figure 2 below is an architecture diagram for Oracle RAC:

Figure 2: Oracle Real Application Clusters architecture (application servers and users on the production
network; cluster nodes sharing cache through Oracle Cache Fusion over a high-speed interconnect;
shared storage accessed through a Storage Area Network fabric)

For more information on Oracle RAC, please visit
http://www.oracle.com/technology/products/database/clustering/index.html.

Server CPU
There should be enough server CPU capacity, in terms of speed and number of CPUs, to handle the
workload. Generally speaking, there should be enough CPU capacity to keep average CPU
utilization at about 65%. This will allow the server to absorb peak activity more easily.

Server memory
An Oracle database may require a lot of memory, depending on the activity level of users and the
nature of the workload. As a rule of thumb, the server should have more memory than it actually
uses, because performance will be greatly degraded and heavy disk swapping may occur when there
is insufficient memory.
It is important to select servers that are available with the amount of memory required plus room for
growth. Memory utilization should be kept to around 75-85% of physical memory at most in a
production environment. Otherwise, heavy disk swapping may occur and server performance will
decrease.

Network
Servers in an Oracle RAC environment need at least two separate networks, a public network and a
private network. The public network is used for communication between the clients or
application servers and the database. The private network, sometimes referred to as the “network
interconnect”, is used for cluster node communication. It is used for monitoring the heartbeat of the
cluster and by Oracle RAC for Cache Fusion.

InfiniBand networking is supported with Oracle Database 11g.

Shared storage
Shared storage for Oracle RAC devices can be logical drives or LUNs from a Storage Area Network
(SAN) controller, or a Network File System (NFS) from a supported Network Attached Storage (NAS)
device. NAS has some advantages, but a SAN is recommended for higher performance.

Please refer to the following IBM web site for more information about IBM NAS offerings such as IBM
System Storage N3000 Express, N5000, N7000 and N series Gateway.
IBM System Storage N series: Overview - IBM Network Attached Storage

For SAN products, IBM offers enterprise disk systems such as DS6000™ and DS8000®, mid-range
disk systems such as DS3400, DS3500 Express, DS4000, DS5000 series, IBM XIV®, and SONAS.
Check to ensure the System Storage product you are using is supported with Oracle RAC
implementations. Third-party storage subsystems can also be used with BladeCenter servers. Please
refer to the third-party documentation or contact a third-party representative for product certification
information.
For more information on IBM System Storage product offerings, please visit
http://www-03.ibm.com/systems/storage/disk/index.html
For an Oracle RAC implementation, Oracle Database files may be located on shared storage using
the following options:
1. A Certified Cluster file system

It is a file system that may be accessed (read and write) by all members in a cluster at the same time,
with all cluster members having the same view of the file system. It allows all nodes in a cluster to
access a device concurrently via the standard file system interface. Oracle Cluster File System
Release 2 (OCFS2) is an example.
2. Oracle Automated Storage Management (ASM)

ASM is a simplified database storage management and provisioning system that provides file system
and volume management capabilities in the Oracle Database. It allows database administrators
(DBA) to reference disk groups instead of individual disks and files which ASM manages internally.
ASM is included in Oracle Database 11g and is designed to handle Oracle Database files, control
files and log files.
In Oracle Database 11g Release 2, Oracle Automatic Storage Management Cluster File System
(Oracle ACFS) is introduced. It is a multi-platform, scalable file system which supports database and
application files such as executables, database trace files, database alert logs, application reports,
BFILEs, and configuration files. However, it does not support any file that can be stored directly in
Oracle ASM, nor any files for the Oracle grid infrastructure home.
For more information on new features of Oracle ACFS on Oracle Database Release 11.2.0.2, please
refer to Oracle Database New Features Guide 11g Release 2 (11.2).

High availability considerations
High availability (HA) is a key requirement for many clients. From a hardware configuration standpoint,
this means eliminating single points of failure. IBM products are designed for high availability, with such
standard features as redundant power supplies and cooling fans, hot-swappable components, and so on.
For high availability environments, the following recommendations should also be taken into consideration
when selecting the server:
• Configure additional network interfaces and use IP bonding to combine at least two network
interfaces for each of the two Oracle RAC networks (a minimal bonding sketch follows this list). This
reduces the downtime due to a network interface card (NIC) failure or other network component
failure. Multi-port adapters provide network path redundancy; however, the adapter itself remains a
single point of failure, so redundant multi-port adapters are the best solution. In addition, the NICs
used for IP bonding should be on separate physical network cards and connected to different network
switches.
• There should be at least two Fibre Channel host bus adapters (HBAs) on each node to provide
redundant I/O paths to the storage subsystem. Multi-port HBAs and a Storage Area Network (SAN)
with redundant components such as SAN switches and cabling will provide higher availability of the
servers.
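
The following is a minimal sketch of active-backup NIC bonding on RHEL5. The device names, IP
address, and bonding options are illustrative assumptions and must be adapted to your environment.

# /etc/modprobe.conf: load the bonding driver for bond0
alias bond0 bonding
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (example private address)
DEVICE=bond0
IPADDR=10.10.10.1
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (repeat for the second slave NIC)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes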

A kernel crash utility should be configured on every node in the cluster. When a server crashes, a kernel
panic is raised and a kernel dump is saved. This core dump can then be used for further investigation
of the problem, which in turn saves problem resolution time.
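
On RHEL5, kdump is one such crash utility. A minimal sketch, assuming the crash kernel memory
reservation has already been added to the boot loader configuration:

# /boot/grub/grub.conf must reserve memory for the crash kernel, e.g.
#   kernel /vmlinuz-<version> ... crashkernel=128M@16M
# Then enable kdump at boot time and start it now
chkconfig kdump on
service kdump start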
Finally, an Oracle RAC implementation requires at least two network interfaces. However, up to five
network interfaces are recommended: two for the public network, two for the private network, and one
for administration and net dump. The more redundancy there is in the hardware architecture and
software components, the less downtime databases and applications will experience.

Software requirements
In an Oracle Real Application Clusters implementation, different kinds of software need to be downloaded
and installed on the cluster nodes. A few of them are optional; however, it is very beneficial to install
them and make use of them in the implementation.

Operating system
Red Hat Enterprise Linux 5 Update 7 is the operating system used in the tests described in this paper. It
can be downloaded from https://www.redhat.com/apps/download.
For the latest information regarding IBM hardware certification by Red Hat, please refer to:
https://hardware.redhat.com

Oracle Database 11g Release 2 (11.2.0.2)
Oracle Database 11g Release 2 (11.2.0.2) is the current release of the Oracle Database. It is certified on
IBM System x and BladeCenter with the following operating systems in both 32-bit and 64-bit:
• SUSE Linux Enterprise Server 11 (SLES-11) / SUSE Linux Enterprise Server 10 (SLES-10)
• Red Hat Enterprise Linux AS/ES 5 (RHEL5) / Red Hat Enterprise Linux AS/ES 4 (RHEL4)
• Oracle Enterprise Linux 5 (OEL5) / Oracle Enterprise Linux 4 (OEL4)

For the latest information on Oracle product certification, please visit My Oracle Support web site:
https://support.oracle.com/CSP/ui/flash.html
This software can be downloaded from the Oracle Technology Network (OTN) or the DVDs can be
requested from Oracle Support. Oracle RAC is a separately licensed option of Oracle Enterprise and
Standard Editions. For additional information on pricing, please refer to:
http://www.oracle.com/us/corporate/pricing/index.html

Automatic Storage Management


Automatic Storage Management (ASM) provides volume and cluster file system management where the
I/O subsystem is handled directly by the Oracle kernel. Oracle ASM maps each LUN as a
disk. Disks are then grouped together into disk groups. Each disk group can be segmented into one or
more failure groups. ASM automatically performs load balancing in parallel across all available disk drives to
prevent hot spots and maximize performance.
Starting with Oracle Database 11g Release 2, Oracle Clusterware OCR and voting disk files can be
stored in Oracle ASM.
There are two methods to configure ASM on Linux: one is ASM with ASMLib, and the other is ASM with
standard Linux I/O. ASM with the Linux device manager “udev” is employed to configure ASM on Linux
in this paper.

Configuring the system environment


BIOS
Be sure to upgrade the system BIOS and adapter BIOS to the latest levels. Look for the blade models on
http://www.ibm.com/support/us.

Remote system management


On the BladeCenter platform, the Management Module functions as a system-management processor
and a keyboard/video/mouse-multiplexing switch for the blade servers. It provides keyboard, video, and

mouse ports for a local console and a 10/100 Ethernet port which provides access to the system
management processor.

The system management processor communicates with other BladeCenter components, providing
functions such as:
• Status monitoring of the blade servers, switch modules, power modules and blower modules
• Blade server management and control, e.g. power/restart, upgrading firmware, switching the
keyboard/video/mouse, etc., in conjunction with the blade server service processors
• Switch module configuration, such as enabling/disabling external ports
• Remote console

Set up the Ethernet ports on BladeCenter Management Module and connect them to your management
Local Area Network (LAN). For information and instructions, please refer to IBM Redbook web site:

http://www.redbooks.ibm.com

Installing Linux operating systems


Installation of the operating systems will not be discussed in detail in this paper. For more details, please
refer to the operating system vendor documentation. The instructions for installation of different operating
systems for BladeCenter can be found at:
http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/docdisplay?lndocid=SITE-HELP05&brandind=5000020
Prior to installation, please make note of the following:
• Oracle recommends that you use the same operating system on all cluster nodes and the same
software configuration on each node of the cluster.
• Be sure to create sufficient swap space appropriate for the amount of physical memory on your
servers. Oracle recommends that the amount of swap space equal the amount of RAM. Swap
space should also be the same size on all cluster nodes (a quick check is shown after this list).
• It is strongly recommended, although not mandatory, that every node of the cluster have an
identical hardware configuration.
• Secure Shell (SSH) configuration is required for Oracle Clusterware and Oracle RAC.
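
For instance, memory and swap sizes can be compared quickly on each node with standard commands
(output will vary by system):

# Show physical memory and configured swap in megabytes
free -m
# List the active swap devices and their sizes
swapon -s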

Prior to installation, install the required OS packages; otherwise, Oracle Universal Installer will provide you
with the list of packages that you need to install before you can proceed.
Starting with Oracle Grid Infrastructure 11g Release 2, only the 64-bit packages are required.
The following packages will be checked for Oracle Real Application Clusters 11g Release 2 on RHEL 5.7
64-bit (32-bit versions are listed for your reference) when using the Cluster Verification Utility (the version
numbers of these packages are the minimum versions required):
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2

gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel-4.1.2
make-3.81
numactl-devel-0.9.8.x86_64
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
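
One way to check which of these packages are already installed is to query them in a single rpm
command; any missing package is reported as “not installed”:

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio \
    libaio-devel libgcc libstdc++ libstdc++-devel make numactl-devel \
    sysstat unixODBC unixODBC-devel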

Configuring multipath devices and udev


Oracle Clusterware 11.2.0.2 does not support raw and block devices for Oracle Clusterware files (OCR
and Voting disks). If you have single path to your shared LUNs, you need to configure udev for the
purpose of setting ownership, permissions and rules. In this section, since we have multiple paths to our
shared LUNs, multipath devices will be configured for Oracle ASM configuration and storage.
In order to prevent device name changes after failover or reboot, multipath needs to be set up correctly to
provide persistent device name.
Before beginning the procedures, please make sure that device-mapper-multipath-0.4.7-46.el5 package
has been installed in both nodes. This package provides scsi multipath support so that the device
mapper multipath will allow multiple I/O paths to single LUN.
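
For example, the package can be verified with rpm (output shown for illustration):

[root@icc-51 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.7-46.el5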
The clusterware devices used in this section were created on an IBM Storwize V7000 and are shared
among the cluster nodes. This paper does not cover the shared disk creation process.
Following are the steps to configure multipath devices on both nodes, which are icc-51 and icc-52 in this
example. Please note that every step listed below needs to be executed on both nodes.
1. Update /etc/scsi_id.config with the additional line options=-g. For instance, the content of the file
could be vendor="ATA",options=-p 0x80 followed by options=-g.
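
A minimal /etc/scsi_id.config reflecting that example would then look like:

vendor="ATA",options=-p 0x80
options=-g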
2. Check the partitions in the /proc/partitions table.
[root@icc-51 mapper]# cat /proc/partitions
major minor #blocks name
8 0 71687000 sda
8 1 104391 sda1
8 2 5116702 sda2
8 3 4096575 sda3
8 4 1 sda4

8 5 4096543 sda5
8 6 4096543 sda6
8 7 54171148 sda7
8 16 2097152 sdb
8 17 2095662 sdb1
8 32 2097152 sdc
8 33 2095662 sdc1
8 48 2097152 sdd
8 49 2095662 sdd1
8 64 10485760 sde
8 80 10485760 sdf
8 81 10485744 sdf1
8 96 10485760 sdg
8 97 10485744 sdg1
8 112 10485760 sdh
8 113 10485744 sdh1
8 128 2097152 sdi
8 144 2097152 sdj
8 160 2097152 sdk
8 176 52428800 sdl
8 192 2097152 sdm
8 208 2097152 sdn
8 224 2097152 sdo
8 240 10485760 sdp
65 0 10485760 sdq
65 16 10485760 sdr
65 32 10485760 sds
65 48 2097152 sdt
65 64 2097152 sdu
65 80 2097152 sdv
65 96 52428800 sdw
3. In order to give persistent naming to the Clusterware devices, we have to obtain the unique SCSI
identifiers of the devices. This can be accomplished by executing scsi_id -g -u -s
/block/<device>.
[root@icc-51]# scsi_id -g -u -s /block/sdc
3600507680282809ac80000000000004f

Record the unique SCSI identifiers of all the devices that are required for the ASM disk groups.
This unique SCSI identifier is also called a universally unique identifier (UUID). It is used by
Linux’s ext2/ext3 filesystems to identify partitions. The UUID of a shared LUN should be the same
when queried from any cluster node.
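
To collect the identifiers in one pass, a small loop such as the following can be used (a sketch; adjust
the device list to the shared LUNs in your environment):

# Print the UUID of each candidate shared device
for dev in sdb sdc sdd sde sdf sdg sdh; do
    echo "$dev: $(scsi_id -g -u -s /block/$dev)"
done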
4. Modify the multipath configuration file (/etc/multipath.conf) with the correct configuration in the
devnode_blacklist, defaults, multipaths and devices sections. If you put in the device WWID and
its alias for every multipath device, this should minimize errors during failover between the
nodes. Make sure you copy the latest multipath.conf to all other cluster nodes after you update
the file, so that all the cluster nodes are using the same copy of multipath.conf.
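
For example, to push the updated file from icc-51 to the second node:

[root@icc-51 ~]# scp /etc/multipath.conf root@icc-52:/etc/multipath.conf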

This is an example of /etc/multipath.conf in our environment.


# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated

# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
blacklist {
devnode "*"
}

## By default, devices with vendor = "IBM" and product = "S/390.*" are
## blacklisted. To enable multipathing on these devices, uncomment the
## following lines.
blacklist_exceptions {
devnode "^(sd)[b-z]*"
devnode "^(dm-)[0-9]*"
}

## Use user friendly names, instead of using WWIDs as names.
defaults {
vendor "IBM"
product "2145"
path_grouping_policy group_by_prio
prio_callout "/sbin/mpath_prio_alua /dev/%n"
features "1 queue_if_no_path"
path_checker tur
}
multipaths {
multipath {
wwid 3600507680282809ac80000000000004e
alias voting1
}
multipath {
wwid 3600507680282809ac80000000000004f
alias voting2
}
multipath {
wwid 3600507680282809ac800000000000050
alias voting3
}
multipath {
wwid 3600507680282809ac800000000000051
alias mpath12
}
multipath {
wwid 3600507680282809ac800000000000053
alias mpath14
}
multipath {
wwid 3600507680282809ac800000000000049
alias mpath7
}
multipath {
wwid 3600507680282809ac800000000000052
alias mpath13
}
multipath {
wwid 3600507680282809ac800000000000048
alias mpath6
}
multipath {
wwid 3600507680282809ac800000000000047
alias mpath5
}
multipath {
wwid 3600507680282809ac80000000000004d

alias mpath11
}
multipath {
wwid 3600507680282809ac800000000000046
alias mpath4
}
}
5. After configuring multipath, you can verify it by running the multipath commands. Before you
verify, you need to start the multipathd service and ensure the daemon starts at boot time
by using the chkconfig command.
[root@icc-51 scripts]# /etc/init.d/multipathd start
Starting multipathd: [ OK ]
[root@icc-51 scripts]# chkconfig multipathd on
[root@icc-51 scripts]# multipath -v2
[root@icc-51 scripts]# multipath -ll
ocr2 (3600507680282809ac800000000000045) dm-2 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:2 sdo 8:224 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:2 sdd 8:48 [active][ready]
ocr1 (3600507680282809ac800000000000044) dm-1 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 1:0:0:1 sdc 8:32 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:1 sdn 8:208 [active][ready]
voting (3600507680282809ac800000000000043) dm-0 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:0 sdm 8:192 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:0 sdb 8:16 [active][ready]
mpath9 (3600507680282809ac80000000000004b) dm-8 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:8 sdu 65:64 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:8 sdj 8:144 [active][ready]
mpath8 (3600507680282809ac80000000000004a) dm-7 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 1:0:0:7 sdi 8:128 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:7 sdt 65:48 [active][ready]
mpath7 (3600507680282809ac800000000000049) dm-6 IBM,2145
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:6 sds 65:32 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:6 sdh 8:112 [active][ready]
mpath6 (3600507680282809ac800000000000048) dm-5 IBM,2145
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 1:0:0:5 sdg 8:96 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:5 sdr 65:16 [active][ready]
mpath5 (3600507680282809ac800000000000047) dm-4 IBM,2145
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]

\_ 2:0:0:4 sdq 65:0 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:4 sdf 8:80 [active][ready]
mpath11 (3600507680282809ac80000000000004d) dm-10 IBM,2145
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:10 sdw 65:96 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:10 sdl 8:176 [active][ready]
mpath4 (3600507680282809ac800000000000046) dm-3 IBM,2145
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 1:0:0:3 sde 8:64 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:3 sdp 8:240 [active][ready]
mpath10 (3600507680282809ac80000000000004c) dm-9 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 1:0:0:9 sdk 8:160 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:9 sdv 65:80 [active][ready]

Since /dev/dm-N devices are not persistent across reboots, they should not be used. The
/dev/mapper devices, however, are persistent across reboots; they should be used to access and
interact with the multipathed devices.
6. Update /etc/rc.local to set the multipath device ownership and permissions. This is an example of
/etc/rc.local in our environment.
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

# OCR and Voting disks 11gR2
chown grid:oinstall /dev/mapper/voting*
chmod 0640 /dev/mapper/voting*

# multipath disks for Oracle 11gR2
chown oracle:oinstall /dev/mapper/mpath*
chmod 0660 /dev/mapper/mpath*
7. Execute the /etc/rc.local script to test the ownership and permission settings of the multipath
devices. In our example, any disks for the grid infrastructure are owned by the grid user and the
oinstall group, and disks for Oracle Database are owned by the oracle user and the oinstall
group. When you set your discovery path during ASM disk group creation, it should be
“/dev/mapper”. If Oracle ASM cannot discover any disks, you will need to check the permissions
of the disks.
[root@icc-51 mapper]# /etc/rc.local
[root@icc-51 mapper]# ls -ltr /dev/mapper
total 0
crw------- 1 root root 10, 63 Aug 9 11:01 control
brw-r----- 1 grid oinstall 253, 2 Aug 15 13:32 voting3
brw-r----- 1 grid oinstall 253, 1 Aug 15 13:32 voting2
brw-r----- 1 grid oinstall 253, 0 Aug 15 13:32 voting1
brw-r----- 1 grid oinstall 253, 11 Aug 16 15:11 voting3p1
brw-rw---- 1 oracle oinstall 253, 9 Aug 17 11:22 mpath14

brw-rw---- 1 oracle oinstall 253, 8 Aug 17 11:22 mpath13
brw-r----- 1 grid oinstall 253, 13 Aug 25 14:58 voting2p1
brw-rw---- 1 oracle oinstall 253, 5 Aug 25 15:06 mpath6
brw-rw---- 1 oracle oinstall 253, 6 Aug 25 15:07 mpath7
brw-rw---- 1 oracle oinstall 253, 7 Aug 25 15:07 mpath12
brw-rw---- 1 oracle oinstall 253, 10 Aug 25 15:07 mpath11
brw-rw---- 1 oracle oinstall 253, 4 Aug 25 15:07 mpath5
brw-r----- 1 grid oinstall 253, 12 Aug 25 15:07 voting1p1
brw-rw---- 1 oracle oinstall 253, 3 Aug 25 15:07 mpath4

At this point, we recommend rebooting all cluster nodes and checking that the LUNs under the
/dev/mapper directory have the correct permissions, groups and owners.
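
A quick post-reboot check on each node could look like the following, using the device aliases
configured above:

# Confirm ownership and permissions survived the reboot
ls -l /dev/mapper/voting* /dev/mapper/mpath*
# Confirm all paths are back and active
multipath -ll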

Installing Oracle Grid Infrastructure 11.2.0.2


Before installing Oracle Grid Infrastructure 11.2.0.2 on both servers, there are several important tasks
that need to be done on all of the cluster nodes.

Pre-Installation tasks

Configuring kernel parameters


Edit the /etc/sysctl.conf file to set up the kernel parameters for Oracle Database. If the current
values in the file are higher than the values listed below, you do not need to change them.
However, the range values must match exactly.
kernel.shmall = 2097152
kernel.shmmax = 1/2 of physical RAM. This would be the value 2147483648 for a 4 GB
RAM system.
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 512 * PROCESSES (for example 65536 for 128 processes)
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144

After making these changes, sysctl -p will enforce these values.
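
For example, to apply the new values and spot-check one of them (the value shown assumes the
4 GB example above):

[root@icc-51 ~]# sysctl -p
[root@icc-51 ~]# sysctl -n kernel.shmmax
2147483648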


If you do not set the kernel parameters correctly before installation, the Oracle Installer will create a
fixup script (runfixup.sh) that you can run as root when the prerequisite checks fail. This script will
then update the kernel parameters for you. Note that Oracle recommends that you do not change
the contents of the generated fixup script.

Creating users and groups


Two groups need to be created: dba and oinstall. The dba group is used for Oracle Database
authentication, and oinstall is the Oracle Inventory group. Please make sure that the group IDs are
the same on all cluster nodes. For instance, if the oinstall GID is 502 on node 1, the oinstall GID
must be 502 on node 2 and any other nodes in the cluster. This can be accomplished with the
groupadd command.

You can optionally create another user, besides oracle, for the grid infrastructure installation in order
to separate its administrative privileges from others. For instance, you can create a user ID grid for
the Oracle Clusterware installation and a user ID oracle for the database installation.
# useradd -u 1101 -g oinstall -G dba oracle
# useradd -u 1100 -g oinstall -G dba grid

As mentioned in the Oracle documentation, you cannot have separate Oracle Clusterware and Oracle
ASM installation owners. In this paper, the users oracle and grid have been created for simplicity.
Please make sure that the user IDs of oracle and grid are the same on all cluster nodes, and that
both users are in the dba group on all cluster nodes.
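
A simple way to confirm consistent IDs across the nodes of this example is to compare the id output
on each node:

# Run on each node (or over ssh) and compare the uid/gid values
id oracle
id grid
ssh icc-52 id oracle
ssh icc-52 id grid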
Create appropriate directories for oracle and grid installations and have appropriate ownership of the
directories. Set up a grid infrastructure home directory to be owned by user grid and group oinstall.
The Oracle grid infrastructure directory cannot be a subdirectory of the Oracle base directory.
[root@icc-51 etc]# groupadd -g 1000 oinstall
[root@icc-51 etc]# groupadd -g 1031 dba
[root@icc-51 etc]# mkdir -p /u01/app/11.2.0/grid
[root@icc-51 etc]# mkdir -p /u01/app/oracle
[root@icc-51 etc]# chown -R oracle:oinstall /u01
[root@icc-51 etc]# chmod -R 775 /u01/

In Oracle 11g Release 2, there are two separate ORACLE_HOME directories: one home for the
Oracle grid infrastructure, and the other home for the Oracle Real Application Clusters database. To
execute commands such as ASMCA for Oracle ASM configuration or DBCA for database
configuration, you will need to set the ORACLE_HOME environment variable to the home that owns
the tool: the grid infrastructure home for ASMCA, and the Oracle RAC database home for DBCA.
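
For example, using the directories created above (the database home path is an assumed default,
shown for illustration only):

# Grid infrastructure home, e.g. before running ASMCA
export ORACLE_HOME=/u01/app/11.2.0/grid

# Oracle RAC database home (assumed path), e.g. before running DBCA
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1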

Setting shell limits for the grid and oracle software owner
The file /etc/security/limits.conf needs to be modified to include the new limits for the users oracle
and grid.
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536

In order for these limits to take effect, the /etc/pam.d/login file needs to be edited to include the following line.
session required pam_limits.so

Finally, enable these limits when the user oracle logs into the server, in $HOME/.bash_profile if you
are using the Bash shell or in $HOME/.profile if you are using the Bourne or Korn shell.
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

Setting the time on Cluster Nodes
In an Oracle RAC environment, the date and time settings on all cluster nodes have to be
synchronized, either by the Oracle Cluster Time Synchronization Service (CTSS) or by the Network
Time Protocol (NTP). If you do not use NTP, Oracle will use CTSS to synchronize the internal clocks
of all cluster members.
If you are using NTP, you need to add the slewing option “-x” in /etc/sysconfig/ntpd.
[root@icc-51 grid installation]# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -x"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""

After that, you will need to restart ntpd.


[root@icc-51 grid installation]# service ntpd restart
Shutting down ntpd: [ OK ]
ntpd: Synchronizing with time server: [ OK ]
Starting ntpd: [ OK ]
[root@icc-51 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
+x2la01.hostigat 207.171.7.151 3 u 212 1024 377 81.100 -0.924 0.074
+ntp.sunflower.c 64.6.144.6 3 u 199 1024 377 70.648 -1.422 20.100
*clock.team-cymr 172.16.65.22 2 u 292 1024 377 80.314 -9.591 2.974
LOCAL(0) .LOCL. 10 l 17 64 377 0.000 0.000 0.001

When you install Oracle software on the system for the first time, a file called oraInst.loc will be
created under the /etc directory. The file provides the location of the Oracle inventory directory
and the name of the Oracle Inventory group.
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

If a previous inventory directory exists, please make sure that the same Oracle inventory directory is
used and that all Oracle software users have write permissions to this directory.

Setting up network files


The following network addresses are required for each node:
• Public network address
• Private network address
• Virtual IP network address (VIP)
• Single Client Access Name (SCAN) address for the cluster

The interfaces and IP addresses for both public and private networks need to be set up. These
configurations can be done in Red Hat Enterprise Linux 5 System => Administration => Network.

After that, add the host names and IP addresses to /etc/hosts as shown in the example below. If the
public host names and IP addresses are registered in the Domain Name System (DNS), they can be
excluded. Interconnect (private) host names and IP addresses should always be placed in /etc/hosts.
127.0.0.1 localhost.localdomain localhost
9.52.156.51 icc-51.usca.ibm.com icc-51
9.52.156.52 icc-52.usca.ibm.com icc-52
10.10.10.1 icc-51-priv.usca.ibm.com icc-51-priv
10.10.10.2 icc-52-priv.usca.ibm.com icc-52-priv
9.52.156.153 icc-51-vip.usca.ibm.com icc-51-vip
9.52.156.154 icc-52-vip.usca.ibm.com icc-52-vip
9.52.156.155 icc-cluster.usca.ibm.com icc-cluster

SCAN is a new requirement for Oracle Clusterware installation. It is a domain name that resolves to
all the SCAN addresses (three IP addresses are recommended) allocated for the cluster. The SCAN
IP addresses must be on the same subnet as the VIP addresses, and the SCAN must be unique
within the corporate network.
Configuring the SCAN in DNS or in a hosts file is the only supported configuration for the Oracle grid
infrastructure installation. SCAN configuration through the Network Information Service (NIS) is not
supported.
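
As a quick sanity check, SCAN resolution can be verified from any node; a DNS-registered SCAN
should return three IP addresses (cluster name from this example):

nslookup icc-cluster.usca.ibm.com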

Configuring SSH on all cluster nodes


Starting with Oracle Database 11g Release 2, there is no need to configure SSH manually on the
cluster nodes, because the Oracle Universal Installer will set it up for you during the grid
infrastructure installation.

Configuring ASM Disks


Starting with Oracle Database 11g Release 2, the Oracle Clusterware files, that is, the voting disk and
the OCR, can be stored in ASM, and Oracle strongly recommends storing them there. However,
Oracle Clusterware binaries and files cannot be stored in the Oracle ASM Cluster File System (ACFS).
Oracle recommends a minimum of 280 MB for each voting disk and OCR file. The total space
required is cumulative and depends on the level of redundancy you choose during the installation.
In this example, the Oracle Clusterware disks will be stored in Oracle ASM. The Oracle ASM disks
need to be created prior to the Oracle Database 11.2.0.2 installation. For the Oracle Clusterware
disks (voting and OCR disks), the Oracle ASM disk group will be created during the grid installation.
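
For reference, this is a sketch of preparing one multipath device for ASM, using an assumed device
alias from the multipath.conf above (the fdisk session is interactive and only outlined here):

# Create a single primary partition on the multipath device
fdisk /dev/mapper/mpath4        # commands: n, p, 1, <defaults>, w
# Ask the kernel to create the partition mapping, e.g. mpath4p1
kpartx -a /dev/mapper/mpath4
ls -l /dev/mapper/mpath4p1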

Running Cluster Verification Utility (CVU)


The Cluster Verification Utility (CVU) can be used to verify that the systems are ready for the Oracle
Clusterware 11g Release 2 installation. The Oracle Universal Installer will use CVU to perform all
prerequisite checks during the installation interview. Log in as the grid user and run the following
command:
[grid@icc-51 grid]$ ./runcluvfy.sh stage -pre crsinst -n icc-51,icc-52

Performing pre-checks for cluster services setup

Checking node reachability...


Node reachability check passed from node "icc-51"

Checking user equivalence...


User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "9.52.156.0" with node(s) icc-52,icc-51


TCP connectivity check passed for subnet "9.52.156.0"

Node connectivity passed for subnet "10.10.10.0" with node(s) icc-52,icc-51


TCP connectivity check passed for subnet "10.10.10.0"

Interfaces found on subnet "9.52.156.0" that are likely candidates for VIP are:
icc-52 eth0:9.52.156.52
icc-51 eth0:9.52.156.51

Interfaces found on subnet "10.10.10.0" that are likely candidates for a private
interconnect are:
icc-52 eth1:10.10.10.2
icc-51 eth1:10.10.10.1

Node connectivity check passed

Checking ASMLib configuration.


Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "icc-52:/tmp"
Free disk space check passed for "icc-51:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"

Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...


NTP Configuration file check passed

Checking daemon liveness...


Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...


PRVF-5408 : NTP Time Server ".ACTS." is common only to the following nodes "icc-52"
PRVF-5408 : NTP Time Server "132.236.56.250" is common only to the following nodes "icc-51"
PRVF-5408 : NTP Time Server "172.16.65.22" is common only to the following nodes "icc-51"
PRVF-5408 : NTP Time Server "69.164.222.108" is common only to the following nodes "icc-52"
PRVF-5408 : NTP Time Server "64.236.96.53" is common only to the following nodes "icc-52"
PRVF-5408 : NTP Time Server "207.171.7.151" is common only to the following nodes "icc-51"
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...


Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed


Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all
nodes

File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed

Starting check for Huge Pages Existence ...

Check for Huge Pages Existence passed

Starting check for Hardware Clock synchronization at shutdown ...

Check for Hardware Clock synchronization at shutdown passed

Pre-check for cluster services setup was successful.

Performing Oracle Clusterware installation and Automatic Storage Management installation
To install Oracle Clusterware 11g Release 2, Oracle Database 11g Release 2 Grid Infrastructure
(11.2.0.1) for Linux x86-64 needs to be downloaded from the Oracle web site. After that, unzip
linux.x64_11gR2_grid.zip and run the Oracle Universal Installer (OUI) from one node (the local node).
For the most part, OUI handles the installation on the other cluster nodes. A number of steps need to be
done on the other cluster nodes, and these are called out by OUI at various points during the process.
Running the installation from the system console requires an X Windows session; alternatively, you can
run vncserver on the node and use X Windows on the workstation to start the OUI.
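
For example, assuming linux.x64_11gR2_grid.zip has been downloaded to a staging directory (the
directory name here is illustrative), the unpack-and-launch steps look like this:
$ unzip linux.x64_11gR2_grid.zip
$ cd grid
$ ./runInstaller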

1. The first screen asks whether you want to get software updates from Oracle. If you want to receive
software updates, such as patch updates, as they become available, you can enter your "My Oracle
Support" username and password.

2. The next screen will ask you to select one of the installation options. For this example, we select
“Install and Configure Grid Infrastructure for a Cluster”.

3. For this example, we will select a typical installation.

4. The next screen asks for the SCAN, the cluster node names and their virtual IP addresses. If this is
the first installation, enter the OS password for the user oracle and click "Setup". This sets up
the SSH connectivity between the above-mentioned cluster nodes. After that, you can click
"Test" to make sure that SSH works properly between the nodes. Note: If you choose
Advanced Installation in the previous screen, you need to provide more details for the Single Client
Access Name (SCAN), such as the SCAN port and IP addresses. The SCAN should be defined in
the DNS to resolve to three IP addresses (an illustrative DNS entry is shown below). For the Typical
Installation, you only need to provide the SCAN. The Oracle Installer will verify the SCAN address.
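
For illustration only, a SCAN resolving to three addresses could be defined in DNS with round-robin A
records similar to this hypothetical zone-file excerpt (the name and addresses are examples, not taken
from this environment):
icc-cluster-scan    IN A 9.38.158.231
icc-cluster-scan    IN A 9.38.158.232
icc-cluster-scan    IN A 9.38.158.233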

5. When you press "SSH Connectivity", OUI asks you for the OS username and password so that you
can either "Setup" or "Test" the connectivity.

6. We pressed “Test” to check the SSH connectivity.

7. When you click "Next", OUI verifies the SCAN information.

8. The next screen asks for the Oracle base and software directories. In this example, all
Oracle Clusterware files are going to be stored in ASM. Then, enter the password for SYSASM.
The password should conform to specific rules; if it does not, errors are shown at the bottom of
the screen.

9. Since ASM is chosen as the storage type for the Clusterware files, the setup process asks for
the names of the ASM disks and creates the disk group with the selected ASM disks to store the
OCR and voting disks. The number of disks needed for installation depends on the redundancy
level you picked: High redundancy requires five disks, Normal redundancy requires three disks,
and External redundancy requires one disk. If you do not select enough disks, errors are
generated. The recommended size of each disk is 2 GB. In this example, external redundancy
has been chosen. However, we recommend that you place the ASM disks on at least three
different storage devices to minimize single points of failure.

10. Next, the Cluster Verification Utility runs to check whether the cluster nodes have met all the
prerequisites. If not, it stops and shows you the errors. You can fix the errors and ask it to check
again. At the bottom of the screen, you can click on more details, where suggestions on how to
fix the errors are shown.

11. After fixing all the errors and passing the prerequisite tests, the installation summary is
shown. You can save the response file for a future silent installation.

12. After the install process has installed the binary files on all cluster nodes, it asks you to run
root.sh as the root user. It is very important to run root.sh on the local node first and allow it to
complete successfully. Do not run root.sh on the other nodes until root.sh on the local node has
completed; otherwise, errors will occur on the other cluster nodes.

This is the output from the local node, which is "icc-51" in this example:
[root@icc-51 grid]# ./root.sh
Running Oracle 11g root script...

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert

pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'icc-51'
CRS-2676: Start of 'ora.mdnsd' on 'icc-51' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'icc-51'
CRS-2676: Start of 'ora.gpnpd' on 'icc-51' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'icc-51'
CRS-2672: Attempting to start 'ora.gipcd' on 'icc-51'
CRS-2676: Start of 'ora.cssdmonitor' on 'icc-51' succeeded
CRS-2676: Start of 'ora.gipcd' on 'icc-51' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'icc-51'
CRS-2672: Attempting to start 'ora.diskmon' on 'icc-51'
CRS-2676: Start of 'ora.diskmon' on 'icc-51' succeeded
CRS-2676: Start of 'ora.cssd' on 'icc-51' succeeded

ASM created and started successfully.

Disk Group DISK created successfully.

clscfg: -install mode specified


Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk a28e344aeafd4f38bf0f6b87d86d5ba5.
Successfully replaced voting disk group with +DISK.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE a28e344aeafd4f38bf0f6b87d86d5ba5 (/dev/mapper/voting1p1) [DISK]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'icc-51'
CRS-2676: Start of 'ora.asm' on 'icc-51' succeeded
CRS-2672: Attempting to start 'ora.DISK.dg' on 'icc-51'
CRS-2676: Start of 'ora.DISK.dg' on 'icc-51' succeeded
ACFS-9200: Supported
ACFS-9200: Supported
CRS-2672: Attempting to start 'ora.registry.acfs' on 'icc-51'
CRS-2676: Start of 'ora.registry.acfs' on 'icc-51' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

This is the output from the second node, which is "icc-52" in this example. The output is slightly
different from the first node's and is shorter.
[root@icc-52 grid]# ./root.sh
Running Oracle 11g root script...

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS
daemon on node icc-51, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the
cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

13. After root.sh has been executed on all cluster nodes, OUI continues to configure the Oracle Grid
Infrastructure for a cluster.

14. Then cluvfy runs again after the configuration and posts any errors on the screen. In this
example, the error is about inconsistent name resolution for the SCAN, which caused the
verification of the SCAN VIP and Listener setup to fail. According to My Oracle Support Note
887471.1, this error can be ignored because we are not using DNS in our network.

After you press OK and continue, the Oracle grid infrastructure installation is complete.

Please check the configuration log file for more details if there is any failure during the configuration. The
configuration log file is located in the Oracle Inventory location.

Performing post-installation tasks
To confirm Oracle Clusterware is running correctly, use these commands:
$CRS_HOME/bin/srvctl status nodeapps
[grid@icc-51 bin]$ ./srvctl status nodeapps
VIP icc-51-vip is enabled
VIP icc-51-vip is running on node: icc-51
VIP icc-52-vip is enabled
VIP icc-52-vip is running on node: icc-52
Network is enabled
Network is running on node: icc-51
Network is running on node: icc-52
GSD is disabled
GSD is not running on node: icc-51
GSD is not running on node: icc-52
ONS is enabled
ONS daemon is running on node: icc-51
ONS daemon is running on node: icc-52

$CRS_HOME/bin/crs_stat -t
[grid@icc-51 bin]$ ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.BACKUP.dg ora....up.type ONLINE ONLINE icc-51
ora.DATA.dg ora....up.type ONLINE ONLINE icc-51
ora.DISK.dg ora....up.type ONLINE ONLINE icc-51
ora....ER.lsnr ora....er.type ONLINE ONLINE icc-51
ora....N1.lsnr ora....er.type ONLINE ONLINE icc-52
ora.LOG.dg ora....up.type ONLINE ONLINE icc-51
ora.asm ora.asm.type ONLINE ONLINE icc-51
ora.cvu ora.cvu.type ONLINE ONLINE icc-52
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE icc-51
ora....51.lsnr application ONLINE ONLINE icc-51
ora.icc-51.gsd application OFFLINE OFFLINE
ora.icc-51.ons application ONLINE ONLINE icc-51
ora.icc-51.vip ora....t1.type ONLINE ONLINE icc-51
ora....SM2.asm application ONLINE ONLINE icc-52
ora....52.lsnr application ONLINE ONLINE icc-52
ora.icc-52.gsd application OFFLINE OFFLINE
ora.icc-52.ons application ONLINE ONLINE icc-52
ora.icc-52.vip ora....t1.type ONLINE ONLINE icc-52
ora....network ora....rk.type ONLINE ONLINE icc-51
ora.oc4j ora.oc4j.type ONLINE ONLINE icc-52
ora.ons ora.ons.type ONLINE ONLINE icc-51
ora.orcl.db ora....se.type ONLINE ONLINE icc-51
ora....ry.acfs ora....fs.type ONLINE ONLINE icc-51
ora.scan1.vip ora....ip.type ONLINE ONLINE icc-52

The Oracle GSD (Global Service Daemon) is offline because it is not enabled; it is only used if you
have an Oracle 9i Real Application Clusters database on the Oracle Clusterware 11g Release 2 (11.2)
cluster.
Another command, "crsctl check cluster -all", can also be used to check the cluster.
[grid@icc-51 bin]$ ./crsctl check cluster -all
**************************************************************

icc-51:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
icc-52:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Finally, the command, “crsctl check crs”, can also be used for a less detailed system check.
[grid@icc-51 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

After the installation of Oracle Clusterware, Oracle recommends backing up the contents of root.sh for
contingency purposes. Also per Oracle's recommendation, do not back up the voting disks using the dd
command, because this may result in loss of data in the voting disk. Other post-installation tasks
Oracle recommends include tuning semaphore parameters and creating a fast recovery area disk group.
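
For reference, the current voting disk configuration can be listed, and a manual OCR backup taken, with
the commands below (in 11g Release 2 the voting disk data is backed up automatically as part of the
OCR backup, which is why a dd copy is unnecessary):
[grid@icc-51 bin]$ ./crsctl query css votedisk
[root@icc-51 bin]# ./ocrconfig -manualbackup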

Installing Oracle Database 11g Release 2 (11.2.0.2)


Pre-Installation tasks
All of the pre-installation tasks for Oracle Database 11g Release 2 are the same as the pre-installation
tasks for Oracle Clusterware.

Running Cluster Verification Utility


The Cluster Verification Utility (CVU) can be used to verify that the systems are ready to install Oracle
Database 11g Release 2 with Oracle RAC.
The command "cluvfy stage -pre dbcfg -n nodelist -d $ORACLE_HOME" is used to pre-check the
requirements for an Oracle Database with Oracle RAC installation. Log in as the oracle user and run
the cluvfy command.
[oracle@icc-51 ~]$ cluvfy stage -pre dbcfg -n icc-51,icc-52 -d
/u01/app/oracle/product/11.2.0/dbhome_1

Performing pre-checks for database configuration

Checking node reachability...


Node reachability check passed from node "blade1"

Checking user equivalence...


User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "10.10.10.0" with node(s) blade2,blade1


TCP connectivity check passed for subnet "10.10.10.0"

Node connectivity passed for subnet "9.38.158.128" with node(s) blade2,blade1


TCP connectivity check passed for subnet "9.38.158.128"

Interfaces found on subnet "9.52.156.51" that are likely candidates for VIP are:
blade2 eth1:9.38.158.143 eth1:9.38.158.233
blade1 eth1:9.38.158.142 eth1:9.38.158.232 eth1:9.38.158.231

Interfaces found on subnet "10.10.10.0" that are likely candidates for a private
interconnect are:
blade2 eth0:10.10.10.12
blade1 eth0:10.10.10.11

Node connectivity check passed

Total memory check passed


Available memory check passed
Swap space check passed
Free disk space check passed for "icc-52:/u01/app/oracle/product/11.2.0/dbhome_1"
Free disk space check passed for "icc-51:/u01/app/oracle/product/11.2.0/dbhome_1"
Free disk space check passed for "icc-52:/u01/app/11.2.0/grid"
Free disk space check passed for "icc-51:/u01/grid"
Free disk space check passed for "icc-52:/tmp"
Free disk space check passed for "icc-51:/tmp"
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "libaio-0.3.106 (i386)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (i686)"
Package existence check passed for "glibc-2.5-24 (x86_64)"

Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5 (i386)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libaio-devel-0.3.106 (i386)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (i386)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (i386)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11 (i386)"
Package existence check passed for "unixODBC-2.2.11 (x86_64)"
Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed

Checking CRS integrity...

CRS integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)


Check passed.

Checking existence of ONS node application (optional)


Check passed.

Checking existence of GSD node application (optional)


Check ignored.

Checking existence of EONS node application (optional)


Check passed.

Checking existence of NETWORK node application (optional)


Check passed.

Checking time zone consistency...


Time zone consistency check passed.

Pre-check for database configuration was successful.

Preparing Oracle home and its path


Prepare the Oracle home and the path for the database installation. Edit the ~/.bash_profile file of the
oracle user to set up the Oracle Database environment variables, such as ORACLE_SID and
LD_LIBRARY_PATH (a sample profile is shown after the note below).

NOTE: The Oracle home path must be different from Oracle Clusterware home. In other words,
Oracle Database 11g Release 2 with Oracle RAC cannot be installed onto the same home as Oracle
Clusterware software.
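
As an illustration, a minimal ~/.bash_profile for the oracle user might contain entries like the following
(the paths and SID match this paper's example environment on the first node; adjust them for yours):
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=RAC1
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib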

Performing database installation


1. Download the patchset for Oracle Database 11.2.0.2 for Linux x86-64 from the My Oracle
Support web site. The patchset number for 11.2.0.2.0 is 10098816. There are a total of seven zip
files. Unzip them and invoke runInstaller. Note: as of September 2011, patch set 11.2.0.3 is available.
2. The first screen asks for your email address. You have to provide your email address in order to
proceed. If you want to receive security updates from My Oracle Support, you also need to
provide your My Oracle Support password (your email address is the username).

3. This screen provides the opportunity to download software updates for this Oracle
11.2.0.2 RAC installation if any are available. You are asked for your My Oracle Support
username and password to initiate the download.

4. The next screen presents the different installation options. In this example, we will be
creating and configuring a database.

5. The next screen asks for the class of the database server. For this example, Server Class will be
selected.

6. The next screen asks if you want to install and configure a single instance or Oracle RAC
database. In this example, we are going to install the Oracle 11.2.0.2 RAC Database on “icc-51”
and “icc-52”.

7. The next screen asks for the type of installation. We will choose “Advanced Installation” so that
we can set up the location for the automated backup.

8. Choose a language for the Oracle RAC installation.

9. We are going to install “Enterprise Edition”.

10. Next set the Oracle Base and Oracle software locations.

11. We are going to create a general purpose database.

12. Provide the global database name, including the domain name, and the Oracle SID.

13. This screen allows you to edit the memory size and character sets for the database.

14. We will use Oracle Enterprise Manager to manage the Oracle RAC database.

15. The next screen asks for the storage device type for the database installation. The software
location must be different from the software location of the grid infrastructure. If the storage type
is ASM, you need to create ASM disk groups by using the Oracle ASM configuration assistant
(ASMCA).

Note: If you plan to use Oracle ASM to store your Oracle RAC database, please refer to
Appendix B, "Oracle ASM Configuration Assistant", for creating the disk groups with the
Oracle ASM configuration assistant (ASMCA).

16. This screen allows you to enable automated backups and define their location and storage type.

17. We have used ASMCA to create the LOG, DATA and BACKUP ASM disk groups. For the Oracle
Database files, we will use the DATA disk group.

18. This screen gives you the opportunity to set different passwords for the SYS, SYSTEM, SYSMAN
and DBSNMP accounts, or you can use the same password for all accounts.

19. We have created the dba group for the oracle user to install and manage databases.

20. This screen shows the prerequisite checks on all cluster nodes before installation.

21. The next screen shows the installation summary for the database.

22. This screen shows the progress of the Oracle RAC Database installation.

23. This is the last step of the database installation process. Execute root.sh from the software
location that you provided previously, on all cluster nodes, as the root user. Run root.sh on the
local node first; after it completes, move to the next node and run root.sh there.

The output is the same on all cluster nodes. This is the output from running root.sh on "icc-51".
[root@icc-51 dbhome_1]# ./root.sh
Running Oracle 11g root script...

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

24. This is the end of the database installation.

Post-installation tasks
1. Oracle recommends backing up the root.sh script after completing the database installation, so
that the original root.sh script can easily be recovered if the information is needed in the future.
2. The next step is to set up the oracle user's home with the proper environment variables, such as
ORACLE_BASE, ORACLE_SID, ORACLE_HOME and PATH.
3. After upgrading or creating databases, it is recommended that utlrp.sql be executed to compile or
re-compile all PL/SQL modules that might be in an invalid state, including packages, procedures
and types. This script is located in the $ORACLE_HOME/rdbms/admin directory. (A sample
invocation is shown after this list.)
4. Finally, user accounts need to be created for the database and system. Most of the administrative
accounts in the new database are locked, except SYS and SYSTEM. They will need to be
unlocked if the modules for those administrators are going to be implemented.
5. The port numbers of several Web-based applications, including Oracle Enterprise Manager
Database Control, are recorded in $ORACLE_HOME/install/portlist.ini. Make a note of these port
numbers for future reference.
6. There is a major Patch Set Update (PSU) available on the My Oracle Support web site. As of
September 2011, it is called Patch 12419353 – 11.2.0.2.3 Grid Infrastructure (GI) PSU (it also
includes Database PSU 11.2.0.2.3). You may want to download and install this patch if you are
encountering bugs that have been fixed by this PSU. Always check the "Patch Set Update
Availability Document" on My Oracle Support for the most recent GI PSU, as new versions are
released quarterly.
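
For example, a minimal sketch of steps 3 and 4 from SQL*Plus might look like this (DBSNMP is just one
example of an account you might unlock):
[oracle@icc-51 ~]$ sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp.sql
SQL> alter user dbsnmp account unlock;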

Summary
Oracle Database 11g Release 2 offers many new features. Many of them further optimize the
performance, scalability and failover mechanisms of Oracle Real Application Clusters (RAC) 11g, make
Oracle RAC easier to implement, and give you the flexibility to add nodes. Integrated with Oracle
Fusion Middleware, Oracle RAC can fail over connections in the connection pools and immediately take
appropriate recovery action.
The implementation steps are very different from those of Oracle Database 11g Release 1.
One important thing is to make sure that the Oracle Clusterware installation is successful and functional
before proceeding to the database installation. This is because the Oracle Clusterware daemons make
sure that all applications start up during system startup and that any failed applications are restarted
automatically, to maintain the high availability aspect of the Oracle RAC cluster.
Before proceeding to database creation, multipath needs to be configured properly and the ASM disks
have to be created using ASMCA. This will smooth the implementation process. Furthermore, in order to
ensure high availability, the ASM disks need to be placed in at least three different locations.
Last but not least, choosing the hardware, operating systems and storage for Oracle RAC 11g Release 2
is a very significant step. Having the right combination of all options will contribute to the success of the
Oracle RAC 11g Release 2 installation and implementation on the IBM BladeCenter and IBM System
Storage platforms.

References
Oracle documentation
• Oracle Database New Features Guide 11g Release 2 (11.2), E10881-03
• Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux, E10812-03
• Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX, E10813-03
• Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2), E10718-04
• Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2), E10717-03
• Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0) on RHEL5/OEL5 [My Oracle Support ID 605828.1]

IBM documentation
• Oracle Database 11g R2 Enterprise Edition using Oracle RAC on IBM BladeCenter running Red Hat Enterprise Linux 5 and IBM System Storage DS4800, December 2009, Betty Lee

IBM and Oracle Web sites


These Web sites provide useful references to supplement the information contained in this document:
• IBM BladeCenter
http://www-03.ibm.com/systems/bladecenter/intel-based.html

• Compatibility of IBM BladeCenter on hardware, applications and middleware
http://www-03.ibm.com/servers/eserver/serverproven/compat/us/eserver.html

• IBM System Storage V7000
http://www-03.ibm.com/systems/storage/disk/storwize_v7000

• IBM Network Attached Storage
http://www-03.ibm.com/systems/storage/network

• IBM System Storage Interoperability Matrix
http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes

• IBM Redbooks
http://www.redbooks.ibm.com

• IBM Techdocs (White Papers)
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/WhitePapers

• Oracle Database 11g Enterprise Edition
http://www.oracle.com/database/enterprise_edition.html

• Oracle Automatic Storage Management (ASM)
http://www.oracle.com/technology/products/database/asm

• My Oracle Support (formerly Oracle MetaLink)
https://support.oracle.com/CSP/ui/flash.html

• Oracle Real Application Clusters
http://www.oracle.com/technology/products/database/clustering

About the author


Betty Lee is a Senior IT Specialist with the IBM Enablement team and works at the IBM Oracle
International Competency Center. She provides System x and BladeCenter platform support for projects
at the Competency Center and for the IBM enablement activities for Oracle solutions running on IBM
Systems at Oracle Corporation in Redwood Shores, CA.

Appendix A: sample configuration
BladeCenter and V7000
The figure below shows the cabling for a BladeCenter and V7000 configuration. Note that the
BladeCenter fibre channel switch modules have been cabled directly to the V7000 host minihubs. If
existing SAN switches are utilized, the fibre channel switches can be run in interoperability mode, or
Optical Pass thru Modules (OPM) can be selected for use in the BladeCenter implementation.

[Figure: BladeCenter HS20 blades with eth0 connected to the production network, eth1 connected to the
cluster interconnect, and fibre channel connections running through the BladeCenter Fibre Channel
Switch Modules to the V7000.]

Appendix B: Oracle ASM Configuration Assistant
(ASMCA)
Oracle ASM Configuration Assistant (ASMCA) is a new tool to manage ASM instances, create ASM disk
groups, and create volumes and ASM Cluster file systems. To invoke ASMCA, go to the bin directory
under the grid infrastructure home.
# /u01/grid/bin/asmca

Creating Oracle ASM Disk Groups


1. Go to the tab "Disk Groups". In this example, the DISK disk group has already been created for
the Oracle Clusterware files during the Oracle grid infrastructure installation.
2. Click the Create button and select the disks for the DATA disk group. For normal redundancy, a
minimum of three disks must be selected; for high redundancy, at least five disks must be
selected; for external redundancy, only one disk is required. External redundancy is
selected in this example. Click OK to complete the disk group creation. It may take a few
minutes. If Oracle ASM cannot find any disks, click on "Change Disk Discovery Path" to find the
disks.

3. Create the LOG disk group for the Oracle RAC database redo logs.

4. An informational window is shown to confirm the LOG disk group creation.

5. Repeat the same procedure to create the BACKUP disk group. In this example, there is only one
disk selected for the BACKUP disk group.

Appendix C: Oracle Enterprise Manager 11g
This is the Oracle Enterprise Manager 11g Database Control window. Two instances are included in this
cluster database, ORCL.

Appendix D: adding and removing storage disks to
and from ASM instances
Following are the steps to configure additional multipath devices for both nodes, which are “icc-51” and
“icc-52” in this example. Please note that every step listed below needs to be executed on both nodes.
1. Update /etc/scsi_id.config with the additional line options=-g. For instance, the content of the
file should be:
vendor="ATA",options=-p 0x80
options=-g
2. Check the partitions in the /proc/partitions table.
[root@icc-51 mapper]# cat /proc/partitions
major minor #blocks name
8 0 71687000 sda
8 1 104391 sda1
8 2 5116702 sda2
8 3 4096575 sda3
8 4 1 sda4
8 5 4096543 sda5
8 6 4096543 sda6
8 7 54171148 sda7
8 16 2097152 sdb
8 17 2095662 sdb1
8 32 2097152 sdc
8 33 2095662 sdc1
8 48 2097152 sdd
8 49 2095662 sdd1
8 64 10485760 sde
8 80 10485760 sdf
8 81 10485744 sdf1
8 96 10485760 sdg
8 97 10485744 sdg1
8 112 10485760 sdh
8 113 10485744 sdh1
8 128 2097152 sdi
8 144 2097152 sdj
8 160 2097152 sdk
8 176 52428800 sdl
8 192 2097152 sdm
8 208 2097152 sdn
8 224 2097152 sdo
8 240 10485760 sdp
65 0 10485760 sdq
65 16 10485760 sdr
65 32 10485760 sds
65 48 2097152 sdt
65 64 2097152 sdu
65 80 2097152 sdv
65 96 52428800 sdw
3. In order to give persistent naming to the Clusterware devices, we have to obtain the unique SCSI
identifiers of the devices. This can be accomplished by executing "scsi_id -g -u -s
/block/<device>".
[root@icc-51]# scsi_id -g -u -s /block/sdc
3600507680282809ac80000000000004f

Record the unique SCSI identifiers of all the devices that are required for ASM disk groups.
These unique SCSI identifiers are also called universally unique identifiers (UUIDs); UUIDs are used
by Linux's ext2/ext3 filesystem to identify partitions. The UUID value of a shared device should be
the same on all cluster nodes.
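
As a convenience, a small shell loop (a sketch that reuses the scsi_id invocation shown above) can
print the UUIDs of several devices at once:
[root@icc-51 ~]# for d in sdb sdc sdd sde sdf; do echo "$d: $(scsi_id -g -u -s /block/$d)"; done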
4. Modify the multipath configuration file (/etc/multipath.conf) with the correct configuration in the
blacklist, defaults, multipaths and devices sections. If you put in the WWID and an alias for
every multipath device, this should minimize errors during failover between the nodes. Make
sure you copy the latest multipath.conf to all other cluster nodes after you've updated the file,
so that all the cluster nodes are using the same copy of multipath.conf. Moreover, reload
multipathd on all cluster nodes and make sure that /dev/mapper/mpath* shows the same
numbers for all devices on all cluster nodes.

This is an example of /etc/multipath.conf in our environment.


blacklist {
devnode "*"
}

blacklist_exceptions {
devnode "^(sd)[b-z]*"
devnode "^(dm-)[0-9]*"
}

## Use user friendly names, instead of using WWIDs as names.


defaults {
vendor "IBM"
product "2145"
path_grouping_policy group_by_prio
prio_callout "/sbin/mpath_prio_alua /dev/%n"
features "1 queue_if_no_path"
path_checker tur
}
multipaths {
multipath {
wwid 3600507680282809ac80000000000004e
alias voting1
}
multipath {
wwid 3600507680282809ac80000000000004f
alias voting2
}
multipath {
wwid 3600507680282809ac800000000000050
alias voting3
}
multipath {
wwid 3600507680282809ac800000000000051
alias mpath12
}
multipath {
wwid 3600507680282809ac800000000000053
alias mpath14
}
multipath {
wwid 3600507680282809ac800000000000049
alias mpath7
}
multipath {
wwid 3600507680282809ac800000000000052
alias mpath13

}
multipath {
wwid 3600507680282809ac800000000000048
alias mpath6
}
multipath {
wwid 3600507680282809ac800000000000047
alias mpath5
}
multipath {
wwid 3600507680282809ac80000000000004d
alias mpath11
}
multipath {
wwid 3600507680282809ac800000000000046
alias mpath4
}
multipath {
wwid 3600507680282809ac800000000000054
alias mpath23
}
multipath {
wwid 3600507680282809ac800000000000055
alias mpath24
}
multipath {
wwid 3600507680282809ac800000000000056
alias mpath25
}
multipath {
wwid 3600507680282809ac800000000000057
alias mpath26
}
multipath {
wwid 3600507680282809ac800000000000058
alias mpath27
}
multipath {
wwid 3600507680282809ac800000000000059
alias mpath28
}
multipath {
wwid 3600507680282809ac80000000000005a
alias mpath29
}
multipath {
wwid 3600507680282809ac80000000000005b
alias mpath30
}
}
5. After adding new entries for the new devices, you need to reload multipath.
[root@icc-52 ~]# /etc/init.d/multipathd reload
Reloading multipathd: [ OK ]
[root@icc-52 ~]# multipath -v2
[root@icc-52 ~]# multipath -ll
mpath23 (3600507680282809ac800000000000054) dm-5 IBM,2145
[size=5.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 1:0:0:11 sdm 8:192 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:11 sdaf 65:240 [active][ready]

voting3 (3600507680282809ac800000000000050) dm-15 IBM,2145
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:2 sdw 65:96 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:2 sdd 8:48 [active][ready]
voting2 (3600507680282809ac80000000000004f) dm-14 IBM,2145
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 1:0:0:1 sdc 8:32 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:1 sdv 65:80 [active][ready]
voting1 (3600507680282809ac80000000000004e) dm-13 IBM,2145
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 2:0:0:0 sdu 65:64 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:0 sdb 8:16 [active][ready]
mpath29 (3600507680282809ac80000000000005a) dm-11 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 1:0:0:17 sds 65:32 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:17 sdal 66:80 [active][ready]
mpath14 (3600507680282809ac800000000000053) dm-3 IBM,2145
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 1:0:0:9 sdk 8:160 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:9 sdad 65:208 [active][ready]
mpath7 (3600507680282809ac800000000000049) dm-0 IBM,2145
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 2:0:0:6 sdaa 65:160 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:6 sdh 8:112 [active][ready]
mpath28 (3600507680282809ac800000000000059) dm-10 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:16 sdak 66:64 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:16 sdr 65:16 [active][ready]
mpath13 (3600507680282809ac800000000000052) dm-2 IBM,2145
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 2:0:0:8 sdac 65:192 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:8 sdj 8:144 [active][ready]
mpath30 (3600507680282809ac80000000000005b) dm-12 IBM,2145
[size=25G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:18 sdam 66:96 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:18 sdt 65:48 [active][ready]
mpath6 (3600507680282809ac800000000000048) dm-18 IBM,2145
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 1:0:0:5 sdg 8:96 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:5 sdz 65:144 [active][ready]
mpath27 (3600507680282809ac800000000000058) dm-9 IBM,2145
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]

\_ round-robin 0 [prio=50][enabled]
\_ 1:0:0:15 sdq 65:0 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:15 sdaj 66:48 [active][ready]
mpath12 (3600507680282809ac800000000000051) dm-1 IBM,2145
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 1:0:0:7 sdi 8:128 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:7 sdab 65:176 [active][ready]
mpath5 (3600507680282809ac800000000000047) dm-17 IBM,2145
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 2:0:0:4 sdy 65:128 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:4 sdf 8:80 [active][ready]
mpath26 (3600507680282809ac800000000000057) dm-8 IBM,2145
[size=5.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][enabled]
\_ 2:0:0:14 sdai 66:32 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:14 sdp 8:240 [active][ready]
mpath11 (3600507680282809ac80000000000004d) dm-4 IBM,2145
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 2:0:0:10 sdae 65:224 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:10 sdl 8:176 [active][ready]
mpath4 (3600507680282809ac800000000000046) dm-16 IBM,2145
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 1:0:0:3 sde 8:64 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:3 sdx 65:112 [active][ready]
mpath25 (3600507680282809ac800000000000056) dm-7 IBM,2145
[size=5.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 1:0:0:13 sdo 8:224 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 2:0:0:13 sdah 66:16 [active][ready]
mpath24 (3600507680282809ac800000000000055) dm-6 IBM,2145
[size=5.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=50][active]
\_ 2:0:0:12 sdag 66:0 [active][ready]
\_ round-robin 0 [prio=10][enabled]
\_ 1:0:0:12 sdn 8:208 [active][ready]

Since /dev/dm-N devices are not persistent across reboots, they should not be used. The
/dev/mapper devices, however, are persistent across reboots, so they should be used to access and
interact with the multipath devices.
6. Update /etc/rc.local to set the multipath device ownership and permissions. This is an example of
/etc/rc.local in our environment.
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

# OCR and Voting disks 11gR2


chown grid:oinstall /dev/mapper/voting*
chmod 0640 /dev/mapper/voting*

# multipath disks for Oracle 11gR2


chown oracle:oinstall /dev/mapper/mpath*
chmod 0660 /dev/mapper/mpath*
7. Execute the /etc/rc.local script to test the ownership and permission settings of the multipath
devices. In our example, any disks for the grid infrastructure will be owned by the grid user and
oinstall group; disks for the Oracle Database will be owned by the oracle user and oinstall group.
When you set your discovery path during ASM disk group creation, it should be "/dev/mapper".
If Oracle ASM cannot discover any disks, you will need to check the permissions of the disks.
[root@icc-51 mapper]# /etc/rc.local
[root@icc-51 ~]# ls -ltr /dev/mapper
total 0
crw------- 1 root root 10, 63 Aug 31 16:29 control
brw-r----- 1 grid oinstall 253, 15 Aug 31 16:29 voting3
brw-r----- 1 grid oinstall 253, 14 Aug 31 16:29 voting2
brw-r----- 1 grid oinstall 253, 13 Aug 31 16:29 voting1
brw-r----- 1 grid oinstall 253, 19 Sep 5 04:58 voting3p1
brw-rw---- 1 oracle oinstall 253, 12 Sep 6 15:28 mpath30
brw-r----- 1 grid oinstall 253, 20 Sep 7 10:32 voting2p1
brw-rw---- 1 oracle oinstall 253, 9 Sep 7 14:59 mpath27
brw-rw---- 1 oracle oinstall 253, 10 Sep 7 15:00 mpath28
brw-rw---- 1 oracle oinstall 253, 3 Sep 7 15:00 mpath14
brw-rw---- 1 oracle oinstall 253, 2 Sep 7 15:05 mpath13
brw-rw---- 1 oracle oinstall 253, 11 Sep 7 15:13 mpath29
brw-rw---- 1 oracle oinstall 253, 7 Sep 7 15:15 mpath25
brw-rw---- 1 oracle oinstall 253, 17 Sep 7 15:15 mpath5
brw-rw---- 1 oracle oinstall 253, 5 Sep 7 15:15 mpath23
brw-rw---- 1 oracle oinstall 253, 16 Sep 7 15:15 mpath4
brw-rw---- 1 oracle oinstall 253, 4 Sep 7 15:15 mpath11
brw-rw---- 1 oracle oinstall 253, 1 Sep 7 15:15 mpath12
brw-r----- 1 grid oinstall 253, 21 Sep 7 15:15 voting1p1
brw-rw---- 1 oracle oinstall 253, 0 Sep 7 15:15 mpath7
brw-rw---- 1 oracle oinstall 253, 8 Sep 7 15:15 mpath26
brw-rw---- 1 oracle oinstall 253, 18 Sep 7 15:15 mpath6
brw-rw---- 1 oracle oinstall 253, 6 Sep 7 15:15 mpath24

Adding storage disks to ASM instances


There are two ways to add storage disks to ASM instances. One is using ASMCA; the other is using
the command line after logging into the ASM instance. ASMCA is fairly straightforward: you launch
it from the $GRID_HOME/bin directory.
We will discuss the second method in detail in this section.

Logging into ASM instances


In Oracle Database 11g Release 2, you need to log in as the grid user and set the ORACLE_HOME
environment variable to the grid infrastructure home in order to log into the ASM instance or use the
asmcmd tool.
[grid@icc-51 ~]$ export ORACLE_HOME=/u01/app/11.2.0/grid
[grid@icc-51 ~]$ export ORACLE_SID=+ASM1

[grid@icc-51 ~]$ export PATH=$PATH:$ORACLE_HOME/bin
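
With these variables set, you can connect to the ASM instance with SQL*Plus, for example:
[grid@icc-51 ~]$ sqlplus / as sysasm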

Query ASM instances


First of all, you can query v$asm_disk and see the status of each disk in your ASM instance.
SQL> r
1 select name, path, header_status, mount_status, os_mb
2* from v$asm_disk

NAME PATH HEADER_STATU MOUNT_S OS_MB


--------------- ------------------------- ------------ ------- ----------
/dev/mapper/voting3 CANDIDATE CLOSED 2048
/dev/mapper/voting2 CANDIDATE CLOSED 2048
/dev/mapper/voting1 CANDIDATE CLOSED 2048
BACKUP_0001 /dev/mapper/mpath30 MEMBER CACHED 25600
DISK_0000 /dev/mapper/voting1p1 MEMBER CACHED 2046
DISK_0001 /dev/mapper/voting2p1 MEMBER CACHED 2046
DISK_0002 /dev/mapper/voting3p1 MEMBER CACHED 2046

NAME PATH HEADER_STATU MOUNT_S OS_MB


--------------- ------------------------- ------------ ------- ----------
DATA_0002 /dev/mapper/mpath6 MEMBER CACHED 10240
DATA_0001 /dev/mapper/mpath5 MEMBER CACHED 10240
DATA_0000 /dev/mapper/mpath4 MEMBER CACHED 10240
BACKUP_0000 /dev/mapper/mpath11 MEMBER CACHED 51200
LOG_0002 /dev/mapper/mpath14 MEMBER CACHED 2048
LOG_0001 /dev/mapper/mpath13 MEMBER CACHED 2048
LOG_0000 /dev/mapper/mpath12 MEMBER CACHED 2048
DATA_0003 /dev/mapper/mpath7 MEMBER CACHED 10240

15 rows selected.

Adding disks to the appropriate disk groups


Use the alter diskgroup ... add disk command to add the device mapper paths to the proper disk
groups. In our example, we are going to create a new database with the new storage disks. We will
be adding four storage disks of 5 GB each to the DATA disk group, and three storage disks of 1 GB
each to the LOG disk group.
SQL> alter diskgroup data add disk '/dev/mapper/mpath26';

Diskgroup altered.

SQL> alter diskgroup data add disk '/dev/mapper/mpath23';

Diskgroup altered.

SQL> alter diskgroup data add disk '/dev/mapper/mpath24';

Diskgroup altered.

SQL> alter diskgroup data add disk '/dev/mapper/mpath25';

Diskgroup altered.

SQL> alter diskgroup LOG add disk '/dev/mapper/mpath27';

Diskgroup altered.

SQL> alter diskgroup LOG add disk '/dev/mapper/mpath28';

Diskgroup altered.

SQL> alter diskgroup LOG add disk '/dev/mapper/mpath29';

Diskgroup altered.

If the path was previously used by ASM and the command fails, you can use the FORCE option. For
instance, the command would be "alter diskgroup DATA add disk '/dev/mapper/mpath26' force".

After that, query v$asm_disk on all cluster nodes and make sure the disks all have a header status of
MEMBER and a mount status of CACHED.
If you see that the disk groups on other nodes are "DISMOUNTED", you can use "alter diskgroup XXX
mount" to mount the disk group.
SQL> select name, state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
DATA MOUNTED
DISK MOUNTED
BACKUP DISMOUNTED
LOG DISMOUNTED

SQL> alter diskgroup backup mount;

Diskgroup altered.

SQL> alter diskgroup log mount;

Diskgroup altered.

SQL> r
1* select name, state from v$asm_diskgroup

NAME STATE
------------------------------ -----------
DATA MOUNTED
DISK MOUNTED
BACKUP MOUNTED
LOG MOUNTED

After the disk groups are mounted, the disks in the disk groups are in the proper status.
Finally, we need to rebalance the ASM disk groups. You can query v$asm_operation to see whether
the rebalance has completed or not (an example query follows the rebalance commands below).
SQL> alter diskgroup DATA rebalance power 11;
Diskgroup altered.

SQL> alter diskgroup BACKUP rebalance power 11;


Diskgroup altered.

SQL> alter diskgroup LOG rebalance power 11;


Diskgroup altered.
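
For example, a query such as the following shows any rebalance operation still in progress; when it
returns no rows, the rebalance has completed:
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;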

Removing storage disks from ASM instances
There are two ways to drop storage disks from ASM instances. One is using ASMCA; the other is
using the command line after logging into the ASM instance. ASMCA is fairly straightforward: you
launch it from the $GRID_HOME/bin directory.
We will discuss the second method in detail in this section.

Logging into ASM instances


In Oracle Database 11g Release 2, you need to log in as the grid user and set the ORACLE_HOME
environment variable to the grid infrastructure home in order to log into the ASM instance or use the
asmcmd tool.
[grid@icc-51 ~]$ export ORACLE_HOME=/u01/app/11.2.0/grid
[grid@icc-51 ~]$ export ORACLE_SID=+ASM1
[grid@icc-51 ~]$ export PATH=$PATH:$ORACLE_HOME/bin

Query ASM instances


First of all, you can query v$asm_diskgroup to see the status of each disk group in your ASM instance.
You can also query v$asm_client to check which databases the ASM instances are supporting.

SQL> select name,total_mb, free_mb from v$asm_diskgroup;

NAME TOTAL_MB FREE_MB


------------------------------ ---------- ----------
BACKUP 76800 76703
DATA 61440 56613
DISK 6138 5738
LOG 9216 8900

SQL> select instance_name, status from v$asm_client;

INSTANCE_NAME STATUS
---------------------------------------------------------------- ------------
+ASM1 CONNECTED
RAC1 CONNECTED
RAC1 CONNECTED

SQL> select name,path,os_mb from v$asm_disk

NAME PATH OS_MB


------------------------------ ---------------------- ----------
/dev/mapper/voting3 2048
/dev/mapper/voting2 2048
/dev/mapper/voting1 2048
DATA_0009 /dev/mapper/mpath26 5120
DATA_0012 /dev/mapper/mpath25 5120
DATA_0011 /dev/mapper/mpath24 5120
DATA_0010 /dev/mapper/mpath23 5120
BACKUP_0001 /dev/mapper/mpath30 25600
DISK_0000 /dev/mapper/voting1p1 2046
DISK_0001 /dev/mapper/voting2p1 2046
DISK_0002 /dev/mapper/voting3p1 2046

NAME PATH OS_MB


------------------------------ ---------------------- ----------

DATA_0002 /dev/mapper/mpath6 10240
DATA_0001 /dev/mapper/mpath5 10240
DATA_0000 /dev/mapper/mpath4 10240
LOG_0005 /dev/mapper/mpath29 1024
LOG_0004 /dev/mapper/mpath28 1024
LOG_0003 /dev/mapper/mpath27 1024
BACKUP_0000 /dev/mapper/mpath11 51200
LOG_0002 /dev/mapper/mpath14 2048
LOG_0001 /dev/mapper/mpath13 2048
LOG_0000 /dev/mapper/mpath12 2048
DATA_0003 /dev/mapper/mpath7 10240

22 rows selected.

Removing disks from the appropriate disk groups


Use the alter diskgroup <diskgroup_name> drop disk <disk_name> command to drop disks from the proper
disk groups. In our example, we will drop four 5 GB storage disks from the DATA disk group and
three 1 GB storage disks from the LOG disk group.
SQL> alter diskgroup data drop disk DATA_0009;

Diskgroup altered.

SQL> alter diskgroup data drop disk DATA_0010;

Diskgroup altered.

SQL> alter diskgroup data drop disk DATA_0011;

Diskgroup altered.

SQL> alter diskgroup data drop disk DATA_0012;

Diskgroup altered.

SQL> alter diskgroup LOG drop disk LOG_0003;

Diskgroup altered.

SQL> alter diskgroup LOG drop disk LOG_0004;

Diskgroup altered.

SQL> alter diskgroup LOG drop disk LOG_0005;

Diskgroup altered.
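Note that dropping a disk triggers an automatic rebalance at the default power; the explicit rebalance
at power 11 shown below simply speeds it up. You can confirm which disks remain with a query such as:

SQL> select name, path from v$asm_disk where name like 'DATA%' or name like 'LOG%';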

Next we query the ASM instances again and check the total and free space available.
SQL> select name, total_mb, free_mb
2 from v$asm_diskgroup;

NAME TOTAL_MB FREE_MB
------------------------------ ---------- ----------
BACKUP 76800 76703
DATA 40960 36141
DISK 6138 5738
LOG 6144 5834
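The totals confirm the drops: DATA shrank from 61440 MB to 40960 MB (4 x 5120 MB removed) and LOG
from 9216 MB to 6144 MB (3 x 1024 MB removed).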

After that, we will manually rebalance the disk groups DATA and LOG.

SQL> alter diskgroup data rebalance power 11;

Diskgroup altered.

SQL> alter diskgroup log rebalance power 11;

Diskgroup altered.

Appendix E: Adding cluster nodes to Oracle Real
Application Clusters
Before adding cluster nodes to Oracle RAC, some prerequisites need to be fulfilled.
1. Install the Linux operating system on the new nodes. In this case, we will install Red Hat Enterprise Linux
5.7 64-bit.
2. Install the required Red Hat Enterprise Linux packages.
3. Create groups such as dba and oinstall.
4. Create users grid and oracle on the nodes.
5. Update system configuration files including /etc/sysctl.conf and /etc/security/limits.conf.
6. Copy /etc/multipath.conf to the new nodes, reload the multipathd service, and reboot the new
nodes. (Please refer to Configuring multipath devices and udev on page 9 in this paper.)
7. Configure SSH on the new nodes.
8. Install Grid Infrastructure software on the new nodes.
9. Verify the configuration with the Cluster Verification Utility (CVU). Make sure the cvuqdisk package is
installed on the server. It is recommended that the following command completes without errors
before you proceed to running addNode.sh. Execute it on an existing node, once for each node
that you want to add to the cluster.
[grid@icc-51 bin]$ cluvfy stage -pre nodeadd -n icc-60

Performing pre-checks for node addition

Checking node reachability...


Node reachability check passed from node "icc-51"

Checking user equivalence...


User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"


Node connectivity passed for interface "eth0"

Node connectivity check passed

Checking CRS integrity...

CRS integrity check passed

Checking shared resources...

Checking CRS home location...


The location "/u01/app/11.2.0/grid" is not shared but is present/creatable on all
nodes
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"


Node connectivity passed for interface "eth0"

Check: Node connectivity for interface "eth1"


Node connectivity passed for interface "eth1"

Node connectivity check passed

Total memory check passed


Available memory check passed
Swap space check passed
Free disk space check passed for "icc-51:/tmp"
Free disk space check passed for "icc-60:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed


Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...


NTP Configuration file check passed

Checking daemon liveness...


Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...


PRVF-5408 : NTP Time Server "66.7.96.1" is common only to the following nodes
"icc-51"
PRVF-5408 : NTP Time Server ".GPS." is common only to the following nodes "icc-
51"
PRVF-5408 : NTP Time Server "172.16.65.22" is common only to the following nodes
"icc-60"
PRVF-5408 : NTP Time Server "192.12.19.20" is common only to the following nodes
"icc-60"
PRVF-5408 : NTP Time Server "138.39.23.13" is common only to the following nodes
"icc-51"
PRVF-5408 : NTP Time Server "18.26.4.105" is common only to the following nodes
"icc-60"
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...


Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed


Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all
nodes

File "/etc/resolv.conf" is consistent across nodes

Pre-check for node addition was successful.

Finally, you can run addNode.sh under $GRID_HOME/oui/bin on the first node. In this example, our first
node is "icc-51" and our new cluster nodes are "icc-59" and "icc-60".

[grid@icc-51 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={icc-59,icc-60}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={icc-59-vip,icc-60-vip}"

Performing pre-checks for node addition

Checking node reachability...


Node reachability check passed from node "icc-51"

Checking user equivalence...


User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"


Node connectivity passed for interface "eth0"

Node connectivity check passed

Checking CRS integrity...

CRS integrity check passed

Checking shared resources...

Checking CRS home location...


The location "/u01/app/11.2.0/grid" is not shared but is present/creatable on all
nodes
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"


Node connectivity passed for interface "eth0"

Check: Node connectivity for interface "eth1"


Node connectivity passed for interface "eth1"

Node connectivity check passed

Total memory check passed


Available memory check passed
Swap space check passed
Free disk space check passed for "icc-51:/tmp"
Free disk space check passed for "icc-59:/tmp"
Free disk space check passed for "icc-60:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"

Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed


Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...


NTP Configuration file check passed

Checking daemon liveness...


Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...

PRVF-5408 : NTP Time Server "132.163.4.103" is common only to the following nodes
"icc-59"
PRVF-5408 : NTP Time Server "66.7.96.1" is common only to the following nodes
"icc-51"
PRVF-5408 : NTP Time Server "18.26.4.105" is common only to the following nodes
"icc-51,icc-60"
PRVF-5408 : NTP Time Server "192.12.19.20" is common only to the following nodes
"icc-60"
PRVF-5408 : NTP Time Server "69.25.96.13" is common only to the following nodes
"icc-59"
PRVF-5408 : NTP Time Server "172.16.65.22" is common only to the following nodes
"icc-60"
PRVF-5408 : NTP Time Server "207.200.81.113" is common only to the following
nodes "icc-59"
PRVF-5408 : NTP Time Server ".GPS." is common only to the following nodes "icc-
51"
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...


Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed


Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all
nodes

File "/etc/resolv.conf" is consistent across nodes

Checking VIP configuration.


Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for node addition was successful.


Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8000 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual
65536 Passed
Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

Performing tests to see whether nodes icc-52,icc-59,icc-60 are available


............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/app/11.2.0/grid
New Nodes
Space Requirements

New Nodes
icc-59
/: Required 9.03GB : Available 16.30GB
icc-60
/: Required 9.03GB : Available 14.42GB
Installed Products
Product Names
Oracle Grid Infrastructure 11.2.0.2.0
Sun JDK 1.5.0.24.08
Installer SDK Component 11.2.0.2.0
Oracle One-Off Patch Installer 11.2.0.0.2
Oracle Universal Installer 11.2.0.2.0
Oracle USM Deconfiguration 11.2.0.2.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.3
Oracle DBCA Deconfiguration 11.2.0.2.0
Oracle RAC Deconfiguration 11.2.0.2.0
Oracle Quality of Service Management (Server) 11.2.0.2.0
Installation Plugin Files 11.2.0.2.0
Universal Storage Manager Files 11.2.0.2.0
Oracle Text Required Support Files 11.2.0.2.0
Automatic Storage Management Assistant 11.2.0.2.0
Oracle Database 11g Multimedia Files 11.2.0.2.0
Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
Oracle Globalization Support 11.2.0.2.0
Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
Oracle Core Required Support Files 11.2.0.2.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.2.0
Oracle Quality of Service Management (Client) 11.2.0.2.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.2.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.2.0
Oracle JDBC/OCI Instant Client 11.2.0.2.0
Oracle Multimedia Client Option 11.2.0.2.0
LDAP Required Support Files 11.2.0.2.0
Character Set Migration Utility 11.2.0.2.0
Perl Interpreter 5.10.0.0.1
PL/SQL Embedded Gateway 11.2.0.2.0
OLAP SQL Scripts 11.2.0.2.0
Database SQL Scripts 11.2.0.2.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.2.0
SQL*Plus Files for Instant Client 11.2.0.2.0
Oracle Net Required Support Files 11.2.0.2.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.2.0
RDBMS Required Support Files Runtime 11.2.0.2.0
XML Parser for Java 11.2.0.2.0
Oracle Security Developer Tools 11.2.0.2.0
Oracle Wallet Manager 11.2.0.2.0
Enterprise Manager plugin Common Files 11.2.0.2.0
Platform Required Support Files 11.2.0.2.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.2.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.3
Deinstallation Tool 11.2.0.2.0
Oracle Java Client 11.2.0.2.0
Cluster Verification Utility Files 11.2.0.2.0

Oracle Notification Service (eONS) 11.2.0.2.0
Oracle LDAP administration 11.2.0.2.0
Cluster Verification Utility Common Files 11.2.0.2.0
Oracle Clusterware RDBMS Files 11.2.0.2.0
Oracle Locale Builder 11.2.0.2.0
Oracle Globalization Support 11.2.0.2.0
Buildtools Common Files 11.2.0.2.0
Oracle RAC Required Support Files-HAS 11.2.0.2.0
SQL*Plus Required Support Files 11.2.0.2.0
XDK Required Support Files 11.2.0.2.0
Agent Required Support Files 10.2.0.4.3
Parser Generator Required Support Files 11.2.0.2.0
Precompiler Required Support Files 11.2.0.2.0
Installation Common Files 11.2.0.2.0
Required Support Files 11.2.0.2.0
Oracle JDBC/THIN Interfaces 11.2.0.2.0
Oracle Multimedia Locator 11.2.0.2.0
Oracle Multimedia 11.2.0.2.0
HAS Common Files 11.2.0.2.0
Assistant Common Files 11.2.0.2.0
PL/SQL 11.2.0.2.0
HAS Files for DB 11.2.0.2.0
Oracle Recovery Manager 11.2.0.2.0
Oracle Database Utilities 11.2.0.2.0
Oracle Notification Service 11.2.0.2.0
SQL*Plus 11.2.0.2.0
Oracle Netca Client 11.2.0.2.0
Oracle Net 11.2.0.2.0
Oracle JVM 11.2.0.2.0
Oracle Internet Directory Client 11.2.0.2.0
Oracle Net Listener 11.2.0.2.0
Cluster Ready Services Files 11.2.0.2.0
Oracle Database 11g 11.2.0.2.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Tuesday, September 27, 2011 2:54:58 PM PDT)
. 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, September 27, 2011 2:55:02 PM PDT)


.................................................................................
.............. 96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, September 27, 2011 3:14:14 PM PDT)


. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in
each cluster node.
/u01/app/11.2.0/grid/root.sh #On nodes icc-59,icc-60
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.


Please check '/tmp/silentInstall.log' for more details.

Run root.sh on one of the new nodes, “icc-59”:

[root@icc-59 grid]# ./root.sh
Running Oracle 11g root script...

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS
daemon on node icc-51, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the
cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After icc-59 completes, you can run root.sh on the other node, "icc-60":
[root@icc-60 grid]# ./root.sh
Running Oracle 11g root script...

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS
daemon on node icc-51, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the
cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
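As an additional check (a hedged example; CVU also provides a post-node-addition stage), you can
verify the additions from an existing node:
[grid@icc-51 bin]$ cluvfy stage -post nodeadd -n icc-59,icc-60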

Appendix F: Deleting cluster nodes from Oracle Real
Application Clusters
Before deleting cluster nodes from an Oracle RAC environment, you need to check the status of the nodes
by executing the olsnodes command as the root user on the node that is going to be deleted. In this case,
we are going to delete "icc-59" from the cluster.
[root@icc-59 bin]# olsnodes -s -t
icc-51 Active Unpinned
icc-52 Active Unpinned
icc-59 Active Unpinned

After that, as the root user, unpin Cluster Synchronization Services (CSS) for the node that is going to be
deleted, but only if it is pinned; run the command from a node that will remain in the cluster. In this
example, "icc-59" is the node that is going to be deleted and "icc-51" is the node that is going to remain.
[root@icc-51 ~]# crsctl unpin css -n icc-59
CRS-4667: Node icc-59 successfully unpinned.

Then, disable the Oracle Clusterware applications and daemons running on the node that is going to be
deleted by running rootcrs.pl as the root user. This script is located in the $GRID_HOME/crs/install directory.
[root@icc-59 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

ACFS-9200: Supported
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed
resources on 'icc-59'
CRS-2673: Attempting to stop 'ora.ctssd' on 'icc-59'
CRS-2673: Attempting to stop 'ora.evmd' on 'icc-59'
CRS-2673: Attempting to stop 'ora.asm' on 'icc-59'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'icc-59'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'icc-59'
CRS-2677: Stop of 'ora.asm' on 'icc-59' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'icc-59'
CRS-2677: Stop of 'ora.evmd' on 'icc-59' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'icc-59' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'icc-59' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'icc-59' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'icc-59' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'icc-59'
CRS-2677: Stop of 'ora.cssd' on 'icc-59' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'icc-59'
CRS-2673: Attempting to stop 'ora.crf' on 'icc-59'
CRS-2677: Stop of 'ora.crf' on 'icc-59' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'icc-59'
CRS-2677: Stop of 'ora.diskmon' on 'icc-59' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'icc-59' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'icc-59'
CRS-2677: Stop of 'ora.gpnpd' on 'icc-59' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on
'icc-59' has completed
CRS-4133: Oracle High Availability Services has been stopped.
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Next, from a node that is not going to be deleted, stop the VIP resource and then delete the node from the
cluster. In this example, the commands are run on "icc-51".
[root@icc-51 ~]# srvctl stop vip -i icc-59-vip -f
PRKO-2439 : VIP does not exist.
[root@icc-51 ~]# crsctl delete node -n icc-59
CRS-4661: Node icc-59 successfully deleted
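(The PRKO-2439 message indicates that the VIP resource for icc-59 no longer exists, so there was
nothing left to stop; it can be safely ignored here.)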

Then, go to the $GRID_HOME/oui/bin directory and execute runInstaller on the node that is going to be
deleted to update the node list in its local Oracle inventory.
[grid@icc-59 bin]$ ./runInstaller -updateNodeList
ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={icc-59}" CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8000 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

You will also need to detach the Grid home from the local inventory.


[grid@icc-59 bin]$ ./runInstaller -detachHome ORACLE_HOME=/u01/app/11.2.0/grid
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8000 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'DetachHome' was successful.

Finally, run runInstaller with the -updateNodeList option on the remaining cluster nodes. In this example,
the remaining cluster nodes are "icc-51" and "icc-52".
[grid@icc-51 bin]$ ./runInstaller -updateNodeList
ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={icc-51,icc-52}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 8000 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

To verify that "icc-59" has been removed from the cluster, run cluvfy on a remaining node.
[grid@icc-51 bin]$ cluvfy stage -post nodedel -n icc-59

Performing post-checks for node removal


Checking CRS integrity...
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful.
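You can also rerun olsnodes on a remaining node; "icc-59" should no longer be listed:
[root@icc-51 ~]# olsnodes -s -t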

Trademarks and special notices
© Copyright IBM Corporation 1994-2011. All rights reserved.
References in this document to IBM products or services do not imply that IBM intends to make them
available in every country.
IBM, the IBM logo, ibm.com, BladeCenter, DS4000, DS6000, DS8000, System Storage, and System x
are trademarks or registered trademarks of International Business Machines Corporation in the United
States, other countries, or both.
Red Hat, the Red Hat "Shadow Man" logo, and all Red Hat-based trademarks and logos are trademarks
or registered trademarks of Red Hat, Inc., in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
The information provided in this document is distributed “AS IS” without any warranty, either express or
implied.
The information in this document may include technical inaccuracies or typographical errors.
Information concerning non-IBM products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of
such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. IBM has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims
related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the
supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without
notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller
for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a
definitive statement of a commitment to specific levels of performance, function or delivery schedules with
respect to any future products. Such commitments are only made in IBM product announcements. The
information is presented here to communicate IBM's current investment and development activities as a
good faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending
upon considerations such as the amount of multiprogramming in the user's job stream, the I/O
configuration, the storage configuration, and the workload processed. Therefore, no assurance can be
given that an individual user will achieve throughput or performance improvements equivalent to the
ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part
of the materials for this IBM product and use of those Web sites is at your own risk.
