
Oracle Grid Infrastructure 11g:

Manage Clusterware and ASM



Student Guide - Volume II
D59999GC30
Edition 3.0
December 2012
D78227
Oracle University and Mazz Soluciones SRL use only

THESE eKIT MATERIALS ARE FOR YOUR USE IN THIS CLASSROOM ONLY. COPYING eKIT MATERIALS FROM THIS COMPUTER IS STRICTLY PROHIBITED
Copyright © 2012, Oracle and/or its affiliates. All rights reserved.
Disclaimer

This document contains proprietary information and is protected by copyright and
other intellectual property laws. You may copy and print this document solely for your
own use in an Oracle training course. The document may not be modified or altered
in any way. Except where your use constitutes "fair use" under copyright law, you
may not use, share, download, upload, copy, print, display, perform, reproduce,
publish, license, post, transmit, or distribute this document in whole or in part without
the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you
find any problems in the document, please report them in writing to: Oracle University,
500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not
warranted to be error-free.

Restricted Rights Notice

If this documentation is delivered to the United States Government or anyone using
the documentation on behalf of the United States Government, the following notice is
applicable:

U.S. GOVERNMENT RIGHTS
The U.S. Government's rights to use, modify, reproduce, release, perform, display, or
disclose these training materials are restricted by the terms of the applicable Oracle
license agreement and/or the applicable U.S. Government contract.

Trademark Notice

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names
may be trademarks of their respective owners.



Authors
James Womack
James Spiller
Technical Contributors
David Brower
Jean-Francois Verrier
Mark Fuller
Mike Leatherman
Barb Lundhild
S. Matt Taylor
Rick Wessman
Joel Goodman
Harald Van Breederode
Markus Michalewicz
Technical Reviewers
Christopher Andrews
Christian Bauwens
Michael Cebulla
Jonathan Creighton
Al Flournoy
Andy Fortunak
Harald
Michael Hazel
Pete Jones
Jerry Lee
Markus Michalewicz
Peter Sharman
Ranbir Singh
Linda Smalley
Janet Stern
Richard Strohm
Branislav Valny
Doug Williams
Publishers
Veena Narasimhan
Srividya Rameshkumar
Jobi Varghese
Contents




1 Oracle Grid Infrastructure Concepts
Objectives 1-2
Oracle Grid Infrastructure 1-3
What Is a Cluster? 1-4
What Is Clusterware? 1-5
Oracle Clusterware 1-6
Oracle Clusterware Architecture and Services 1-7
Goals for Oracle Clusterware 1-8
Oracle Clusterware Networking 1-9
Grid Naming Service 1-11
Single-Client Access Name 1-12
Grid Plug and Play 1-14
GPnP Domain 1-15
GPnP Components 1-16
GPnP Profile 1-17
Oracle Automatic Storage Management (ASM) 1-18
ASM: Key Features and Benefits 1-19
ASM and Grid Infrastructure 1-20
Quiz 1-21
Summary 1-23

2 Oracle Clusterware Architecture
Objectives 2-2
Oracle Grid Infrastructure for a Cluster 2-3
Oracle Cluster Registry (OCR) 2-4
CSS Voting Disk Function 2-6
Oracle Local Registry and High Availability 2-7
Oracle Clusterware Initialization 2-8
Clusterware Startup Details 2-9
Clusterware Startup: The OHASD orarootagent 2-11
Clusterware Startup Details: The CRSD orarootagent 2-13
Clusterware Startup Details: The CRSD oraagent 2-14
Clusterware Startup Details: The OHASD oraagent 2-15
Controlling Oracle Clusterware 2-16
Verifying the Status of Oracle Clusterware 2-17
Viewing the High Availability Services Stack 2-18
GPnP Architecture: Overview 2-19
How GPnP Works: Cluster Node Startup 2-21
How GPnP Works: Client Database Connections 2-22
Automatic Storage Management 2-23
Quiz 2-24
Summary 2-25

3 Grid Infrastructure Preinstallation Tasks
Objectives 3-2
Preinstallation Planning 3-3
Shared Storage Planning for Grid Infrastructure 3-4
Sizing Shared Storage for Oracle Clusterware 3-5
Storing the OCR in ASM 3-6
Managing Voting Disks in ASM 3-7
Installing ASMLib 3-8
Preparing ASMLib 3-9
Quiz 3-10
Grid Infrastructure Preinstallation Tasks 3-11
Oracle Grid Infrastructure 11g Installation 3-12
Checking System Requirements 3-13
Enabling the Name Service Cache Daemon (nscd) 3-14
Single-Client Access Name for the Cluster 3-15
Checking Network Requirements 3-16
IP Address Requirements with GNS 3-18
IP Address Requirements for Manual Configuration 3-19
Broadcast and Multicast Requirements 3-21
Interconnect NIC Guidelines 3-22
Redundant Interconnect Usage 3-23
Interconnect Link Aggregation: Single Switch 3-24
Interconnect Link Aggregation: Multiswitch 3-26
Additional Interconnect Guidelines 3-27
Software Requirements (Kernel) 3-28
32-Bit Software Requirements: Packages 3-29
64-Bit Software Requirements: Packages 3-30
Oracle Validated Configuration RPM 3-31
Creating Groups and Users 3-33
Creating Groups and Users: Example 3-34
Shell Settings for the Grid Infrastructure User 3-35
Quiz 3-37
Summary 3-38
4 Grid Infrastructure Installation
Objectives 4-2
Installing Grid Infrastructure 4-3
Choosing an Installation Type 4-4
Grid Plug and Play Support 4-5
Cluster Node Information 4-6
Specify Network Interface Usage 4-7
Storage Option Information 4-8
Specify Cluster Configuration: Typical Installation 4-9
Install Locations: Typical Installation 4-11
Failure Isolation Support with IPMI 4-12
Privileged Operating System Groups 4-14
Installation and Inventory Locations 4-15
Prerequisite Checks 4-16
Finishing the Installation 4-17
Verifying the Grid Infrastructure Installation 4-19
Modifying Oracle Clusterware Binaries After Installation 4-20
Quiz 4-22
Summary 4-23
Practice 4: Overview 4-24

5 Adding and Removing Cluster Nodes
Objectives 5-2
Adding Oracle Clusterware Homes 5-3
Prerequisite Steps for Running addNode.sh 5-4
Adding a Node with addNode.sh 5-6
Completing OUI Node Addition 5-7
Removing a Node from the Cluster 5-8
Deleting a Node from the Cluster 5-9
Deleting a Node from a Cluster (GNS in Use) 5-12
Deleting a Node from the Cluster 5-13
Quiz 5-14
Summary 5-15
Practice 5: Overview 5-16

6 Administering Oracle Clusterware
Objectives 6-2
Managing Oracle Clusterware 6-3
Managing Clusterware with Enterprise Manager 6-4
Controlling Oracle Clusterware 6-5
Controlling Oracle High Availability Services 6-6
Verifying the Status of Oracle Clusterware 6-7
Determining the Location of Oracle Clusterware Configuration Files 6-8
Checking the Integrity of Oracle Clusterware Configuration Files 6-9
Backing Up and Recovering the Voting Disk 6-10
Adding, Deleting, or Migrating Voting Disks 6-11
Locating the OCR Automatic Backups 6-12
Changing the Automatic OCR Backup Location 6-13
Adding, Replacing, and Repairing OCR Locations 6-14
Removing an Oracle Cluster Registry Location 6-15
Migrating OCR Locations to ASM 6-16
Migrating OCR from ASM to Other Shared Storage 6-17
Performing Manual OCR Backups 6-18
Recovering the OCR by Using Physical Backups 6-19
Oracle Local Registry 6-20
Oracle Interface Configuration Tool: oifcfg 6-22
Determining the Current Network Settings 6-23
Configuring Redundant Interconnect Usage with oifcfg 6-24
Changing the Public VIP Addresses for Non-GPnP Clusters 6-25
Changing the Interconnect Adapter 6-27
Managing SCAN VIP and SCAN Listener Resources 6-29
Quiz 6-33
Summary 6-35
Practice 6: Overview 6-36

7 Upgrading and Patching Grid Infrastructure
Objectives 7-2
Out-of-Place Oracle Clusterware Upgrade 7-3
Oracle Clusterware Upgrade 7-4
Types of Patches 7-5
Patch Properties 7-7
Configuring the Software Library 7-8
Setting Up Patching 7-9
Starting the Provisioning Daemon 7-10
Obtaining Oracle Clusterware Patches 7-11
Downloading Patches 7-14
Rolling Patches 7-15
Checking Software Versions 7-16
Installing a Rolling Patchset with OUI 7-17
Patchset OUI 7-18
Installing a Rolling Patchset with OUI 7-19
OPatch: Overview 7-20
OPatch: General Usage 7-21
Before Patching with OPatch 7-22
Installing a Rolling Patch with OPatch 7-23
Quiz 7-28
Summary 7-30
Practice 7: Overview 7-31

8 Troubleshooting Oracle Clusterware
Objectives 8-2
Golden Rule in Debugging Oracle Clusterware 8-3
Monitoring Oracle Clusterware 8-5
Cluster Health Monitor (CHM) 8-7
oclumon Utility 8-8
oclumon debug Command 8-9
oclumon dumpnodeview Command 8-10
oclumon dumpnodeview Command 8-11
oclumon manage Command 8-12
Oracle Clusterware Main Log Files 8-13
Oracle Clusterware Alerts 8-14
Diagnostic Record Unique IDs (DRUIDs) 8-15
Diagnostics Collection Script 8-16
Cluster Verify: Overview 8-17
Cluster Verify Components 8-18
Cluster Verify Locations 8-19
Cluster Verify Configuration File 8-20
Cluster Verify Output: Example 8-22
Enabling Resource Debugging 8-23
Dynamic Debugging 8-25
Enabling Tracing for Java-Based Tools 8-27
Preserving Log Files Before Wrapping 8-28
Process Roles for Node Reboots 8-29
Determining Which Process Caused Reboot 8-30
Using ocrdump to View Logical Contents of the OCR 8-31
Checking the Integrity of the OCR 8-32
OCR-Related Tools for Debugging 8-33
Browsing My Oracle Support Knowledge Articles 8-35
Quiz 8-36
Summary 8-37
Practice 8: Overview 8-38
9 Making Applications Highly Available with Oracle Clusterware
Objectives 9-2
Oracle Clusterware High Availability (HA) 9-3
Oracle Clusterware HA Components 9-4
Resource Management Options 9-5
Server Pools 9-6
Server Pool Attributes 9-7
GENERIC and FREE Server Pools 9-9
Assignment of Servers to Server Pools 9-11
Server Attributes and States 9-12
Creating Server Pools with srvctl and crsctl 9-14
Managing Server Pools with srvctl and crsctl 9-15
Adding Server Pools with Enterprise Manager 9-16
Managing Server Pools with Enterprise Manager 9-17
Clusterware Resource Modeling 9-18
Resource Types 9-20
Adding a Resource Type 9-21
Resource Type Parameters 9-23
Resource Type Advanced Settings 9-24
Defining Resource Dependencies 9-25
Creating an Application VIP by Using crsctl 9-27
Creating an Application VIP by Using EM 9-29
Managing Clusterware Resources with EM 9-30
Adding Resources with EM 9-31
Adding Resources by Using crsctl 9-36
Managing Resources with EM 9-37
Managing Resources with crsctl 9-40
HA Events: ONS and FAN 9-42
Managing Oracle Notification Server with srvctl 9-43
Quiz 9-44
Summary 9-46
Practice 9: Overview 9-47

10 ASM: Overview
Objectives 10-2
What Is Oracle ASM? 10-3
ASM and ASM Cluster File System 10-4
ASM Key Features and Benefits 10-6
ASM Instance Designs: Nonclustered ASM and Oracle Databases 10-7
ASM Instance Designs: Clustered ASM for Clustered Databases 10-8
ASM Instance Designs: Clustered ASM for Mixed Databases 10-9
ASM System Privileges 10-10
ASM OS Groups with Role Separation 10-11
Authentication for Accessing ASM Instances 10-12
Password-Based Authentication for ASM 10-13
Managing the ASM Password File 10-14
Using a Single OS Group 10-15
Using Separate OS Groups 10-16
ASM Components: Software 10-17
ASM Components: ASM Instance 10-18
ASM Components: ASM Instance Primary Processes 10-20
ASM Components: Node Listener 10-21
ASM Components: Configuration Files 10-22
ASM Components: Group Services 10-23
ASM Components: ASM Disk Group 10-24
ASM Disk Group: Failure Groups 10-25
ASM Components: ASM Disks 10-26
ASM Components: ASM Files 10-27
ASM Files: Extents and Striping 10-28
ASM Files: Mirroring 10-29
ASM Components: ASM Clients 10-30
ASM Components: ASM Utilities 10-31
ASM Scalability 10-33
Quiz 10-35
Summary 10-36

11 Administering ASM
Objectives 11-2
Managing ASM with ASMCA 11-3
Starting and Stopping ASM Instances by Using ASMCA and ASMCMD 11-4
Starting and Stopping ASM Instances by Using srvctl 11-5
Starting and Stopping ASM Instances by Using SQL*Plus 11-6
Starting and Stopping ASM Instances Containing Cluster Files 11-8
ASM Initialization Parameters 11-9
ASM_DISKGROUPS 11-10
Disk Groups Mounted at Startup 11-11
ASM_DISKSTRING 11-12
ASM_POWER_LIMIT 11-14
INSTANCE_TYPE 11-15
CLUSTER_DATABASE 11-16
MEMORY_TARGET 11-17
Adjusting ASM Instance Parameters in SPFILEs 11-18
Starting and Stopping the Node Listener 11-19
ASM Dynamic Performance Views 11-20
ASM Dynamic Performance Views Diagram 11-21
Quiz 11-23
Summary 11-25
Practice 11 Overview: Administering ASM Instances 11-26

12 Administering ASM Disk Groups
Objectives 12-2
Disk Group: Overview 12-3
Creating a New Disk Group 12-4
Creating a New Disk Group with ASMCMD 12-6
Creating an ASM Disk Group with ASMCA 12-7
Creating an ASM Disk Group: Advanced Options 12-8
Creating a Disk Group with Enterprise Manager 12-9
Disk Group Attributes 12-11
V$ASM_ATTRIBUTE 12-13
Compatibility Attributes 12-14
Features Enabled by Disk Group Compatibility Attributes 12-15
Support for 4 KB Sector Disk Drives 12-16
Supporting 4 KB Sector Disks 12-17
ASM Support for 4 KB Sector Disks 12-18
Using the SECTOR_SIZE Clause 12-19
Viewing ASM Disk Groups 12-21
Viewing ASM Disk Information 12-23
Extending an Existing Disk Group 12-25
Dropping Disks from an Existing Disk Group 12-26
REBALANCE POWER 0 12-27
V$ASM_OPERATION 12-28
Adding and Dropping in the Same Command 12-30
Adding and Dropping Failure Groups 12-31
Undropping Disks in Disk Groups 12-32
Mounting and Dismounting Disk Groups 12-33
Viewing Connected Clients 12-34
Dropping Disk Groups 12-35
Checking the Consistency of Disk Group Metadata 12-36
ASM Fast Mirror Resync 12-37
Preferred Read Failure Groups 12-38
Preferred Read Failure Groups: Best Practice 12-39
Viewing ASM Disk Statistics 12-40
Performance, Scalability, and Manageability Considerations for Disk Groups 12-42
Quiz 12-43
Summary 12-45
Practice 12 Overview: Administering ASM Disk Groups 12-46

13 Administering ASM Files, Directories, and Templates
Objectives 13-2
ASM Clients 13-3
Interaction Between Database Instances and ASM 13-5
Accessing ASM Files by Using RMAN 13-6
Accessing ASM Files by Using XML DB 13-8
Accessing ASM Files by Using DBMS_FILE_TRANSFER 13-9
Accessing ASM Files by Using ASMCMD 13-10
Fully Qualified ASM File Names 13-11
Other ASM File Names 13-13
Valid Contexts for the ASM File Name Forms 13-15
Single File Creation: Examples 13-16
Multiple File Creation: Example 13-17
View ASM Aliases, Files, and Directories 13-18
Viewing ASM Files 13-20
ASM Directories 13-21
Managing ASM Directories 13-22
Managing Alias File Names 13-23
Disk Group Templates 13-24
Viewing Templates 13-26
Managing Disk Group Templates 13-27
Managing Disk Group Templates with ASMCMD 13-28
Using Disk Group Templates 13-29
ASM Intelligent Data Placement 13-30
Guidelines for Intelligent Data Placement 13-31
Assigning Files to Disk Regions 13-32
Assigning Files to Disk Regions with Enterprise Manager 13-33
Monitoring Intelligent Data Placement 13-34
ASM Access Control Lists 13-35
ASM ACL Prerequisites 13-36
Managing ASM ACL with SQL Commands 13-37
Managing ASM ACL with ASMCMD Commands 13-38
Managing ASM ACL with Enterprise Manager 13-39
ASM ACL Guidelines 13-41
Quiz 13-42
Summary 13-44
Practice 13: Overview 13-45
14 Administering ASM Cluster File Systems
Objectives 14-2
ASM Files and Volumes 14-3
ACFS and ADVM Architecture: Overview 14-4
ADVM Processes 14-6
ADVM Restrictions 14-7
ASM Cluster File System 14-8
ADVM Space Allocation 14-9
Striping Inside the Volume 14-10
Volume Striping: Example 14-11
Creating an ACFS Volume 14-13
Creating an ASM Dynamic Volume with Enterprise Manager 14-14
Managing ADVM Dynamic Volumes 14-17
Creating an ASM Cluster File System with Enterprise Manager 14-18
Managing Dynamic Volumes with SQL*Plus 14-19
Registering an ACFS Volume 14-20
Creating an ACFS Volume with ASMCA 14-21
Creating the ACFS File System with ASMCA 14-22
Mounting the ACFS File System with ASMCA 14-23
Managing ACFS with EM 14-24
Extending ASMCMD for Dynamic Volumes 14-25
Linux-UNIX File System APIs 14-26
Linux-UNIX Extensions 14-27
ACFS Platform-Independent Commands 14-28
ACFS Snapshots 14-29
Managing ACFS Snapshots 14-30
Managing ACFS Snapshots with Enterprise Manager 14-32
Creating ACFS Snapshots 14-33
Managing Snapshots 14-34
Viewing Snapshots 14-35
ACFS Replication 14-36
ACFS Replication Requirements 14-38
Managing ACFS Replication 14-40
ACFS Backups 14-43
ACFS Performance 14-44
Using ACFS Volumes After Reboot 14-45
ACFS Views 14-46
Quiz 14-47
Summary 14-48
Practice 14: Overview 14-49

A Practices and Solutions

B DHCP and DNS Configuration for GNS
Objectives B-2
GNS: Overview B-3
DHCP Service B-4
DHCP Configuration: Example B-5
DHCP Configuration Example B-6
DNS Concepts B-7
DNS Forwarding for GNS B-9
DNS Configuration: Example B-11
DNS Configuration: Detail B-13

C Cloning Grid Infrastructure
Objectives C-2
What Is Cloning? C-3
Benefits of Cloning Grid Infrastructure C-4
Creating a Cluster by Cloning Grid Infrastructure C-5
Preparing the Oracle Clusterware Home for Cloning C-6
Cloning to Create a New Oracle Clusterware Environment C-9
clone.pl Script C-13
clone.pl Environment Variables C-14
clone.pl Command Options C-15
Cloning to Create a New Oracle Clusterware Environment C-16
Log Files Generated During Cloning C-18
Cloning to Extend Oracle Clusterware to More Nodes C-19
Quiz C-25
Summary C-27

D RAC Concepts
Objectives D-2
Benefits of Using RAC D-3
Clusters and Scalability D-4
Levels of Scalability D-5
Scaleup and Speedup D-6
Speedup/Scaleup and Workloads D-7
I/O Throughput Balanced: Example D-8
Performance of Typical Components D-9
Necessity of Global Resources D-10
Global Resources Coordination D-11
Global Cache Coordination: Example D-12
Write to Disk Coordination: Example D-13
Dynamic Reconfiguration D-14
Object Affinity and Dynamic Remastering D-15
Global Dynamic Performance Views D-16
Efficient Internode Row-Level Locking D-17
Parallel Execution with RAC D-18
Summary D-19



Appendix A
Practices and Solutions



Oracle Grid Infrastructure 11g: Manage Clusterware and ASM A - 2
Table of Contents

Practices for Lesson 4 ...................................................................................................... 4
Practice 4-1: Performing Preinstallation Tasks for Oracle Grid Infrastructure .............. 5
Practice 4-2: Installing Oracle Grid Infrastructure ...................................................... 16
Practice 4-3: Creating Additional ASM Disk Groups ................................................. 23
Practices for Lesson 5 .................................................................................................... 24
Practice 5-1: Adding a Third Node to Your Cluster ................................................... 25
Practices for Lesson 6 .................................................................................................... 49
Practice 6-1: Verifying, Starting, and Stopping Oracle Clusterware ........................... 50
Practice 6-2: Adding and Removing Oracle Clusterware Configuration Files ............. 58
Practice 6-3: Performing a Backup of the OCR and OLR ........................................... 61
Practices for Lesson 7 .................................................................................................... 63
Practice 7-1: Applying a PSU to the Grid Infrastructure Homes ................................. 64
Practices for Lesson 8 .................................................................................................... 84
Practice 8-1: Working with Log Files ........................................................................ 85
Practice 8-2: Working with OCRDUMP .................................................................... 88
Practice 8-3: Working with CLUVFY........................................................................ 91
Practices for Lesson 9 .................................................................................................... 95
Practice 9-1: Protecting the Apache Application ........................................................ 96
Practice 9-2: Perform RAC Installation .................................................................... 105
Practices for Lesson 11 ................................................................................................ 107
Practice 11-1: Administering ASM Instances ........................................................... 108
Practices for Lesson 12 ................................................................................................ 118
Practice 12-1: Administering ASM Disk Groups ..................................................... 119
Practices for Lesson 13 ................................................................................................ 124
Practice 13-1: Administering ASM Files, Directories, and Templates ...................... 125
Practices for Lesson 14 ................................................................................................ 135
Practice 14-1: Managing ACFS ............................................................................... 136
Optional Practice 14-2: Uninstalling the RAC Database with DEINSTALL ............ 146















Practices for Lesson 4
In the practices for this lesson, you will perform the tasks that are prerequisites to
successfully installing Oracle Grid Infrastructure. You will configure ASMLib to manage
your shared disks and, finally, you will install and verify Oracle Grid Infrastructure 11.2.
Practice 4-1: Performing Preinstallation Tasks for Oracle Grid Infrastructure
In this practice, you perform various tasks that are required before installing Oracle Grid
Infrastructure. These tasks include:
Setting up required groups and users
Creating base directory
Configuring Network Time Protocol (NTPD)
Setting shell limits
Editing profile entries
Configuring ASMLib and shared disks
1) From a graphical terminal session, make sure that the groups asmadmin, asmdba,
and asmoper exist (cat /etc/group). Make sure that the user grid exists with
the primary group of oinstall and the secondary groups of asmadmin, asmdba,
and asmoper. Make sure that the oracle user's primary group is oinstall with
secondary groups of dba, oper, and asmdba. Running the script
/home/oracle/labs/less_04/usrgrp.sh as the root user will complete
all these tasks. Perform this step on all three of your nodes.
[root@host01 ~]# cat /home/oracle/labs/less_04/usrgrp.sh
#!/bin/bash

groupadd -g 503 oper
groupadd -g 505 asmdba
groupadd -g 506 asmoper
groupadd -g 504 asmadmin

grep -q grid /etc/passwd
UserGridExists=$?
if [[ $UserGridExists == 0 ]]; then
  usermod -g oinstall -G asmoper,asmdba,asmadmin grid
else
  useradd -u 502 -g oinstall -G asmoper,asmdba,asmadmin grid
fi
echo 0racle | passwd --stdin grid
usermod -g oinstall -G dba,oper,asmdba oracle
chmod 755 /home/grid

<<< On node 1 >>>

[root@host01 ~]# /home/oracle/labs/less_04/usrgrp.sh
Changing password for user grid.
passwd: all authentication tokens updated successfully.

<<< On node 2 >>>

[root@host01 ~]# ssh host02 /home/oracle/labs/less_04/usrgrp.sh
The authenticity of host 'host02 (192.0.2.102)' can't be established.
RSA key fingerprint is 4a:8c:b8:48:51:04:2e:60:e4:f4:e6:39:13:39:48:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'host02,192.0.2.102' (RSA) to the list of known hosts.
root's password: 0racle << password is not displayed
Changing password for user grid.
passwd: all authentication tokens updated successfully.

<<< On node 3 >>>

[root@host01 ~]# ssh host03 /home/oracle/labs/less_04/usrgrp.sh
The authenticity of host 'host03 (192.0.2.103)' can't be established.
RSA key fingerprint is 4a:8c:b8:48:51:04:2e:60:e4:f4:e6:39:13:39:48:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'host03,192.0.2.103' (RSA) to the list of known hosts.
root's password: 0racle << password is not displayed
Changing password for user grid.
passwd: all authentication tokens updated successfully.
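Before moving on, the accounts created by usrgrp.sh can be verified directly. The following is a hedged sketch, not part of the course materials: it assumes only standard Linux tools (getent, id) and prints MISSING for anything not yet created, so it is safe to run on any node before or after the script.

```shell
# Sketch: check the groups and users that usrgrp.sh is expected to create.
# getent resolves local and directory-service accounts alike.
for g in oinstall asmadmin asmdba asmoper dba oper; do
    if getent group "$g" >/dev/null; then
        echo "group $g: OK"
    else
        echo "group $g: MISSING"
    fi
done
# On a prepared node, expect: grid -> primary group oinstall, secondary
# groups asmadmin, asmdba, asmoper; oracle -> primary group oinstall,
# secondary groups dba, oper, asmdba.
id grid 2>/dev/null || echo "user grid: MISSING"
id oracle 2>/dev/null || echo "user oracle: MISSING"
```

Running this on each node before the Grid Infrastructure installer starts catches group-membership mistakes that would otherwise surface as prerequisite-check failures.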

2) As the root user, create the oracle and grid user base directories. Perform
this step on all three of your nodes.

<<< On Node 1 >>>
[root@host01 ~]# mkdir -p /u01/app/grid
[root@host01 ~]# mkdir -p /u01/app/11.2.0/grid
[root@host01 ~]# chown -R grid:oinstall /u01/app
[root@host01 ~]# chmod -R 775 /u01/app/grid
[root@host01 ~]# chmod -R 775 /u01/app/11.2.0/grid
[root@host01 ~]# mkdir -p /u01/app/oracle
[root@host01 ~]# chown -R oracle:oinstall /u01/app/oracle

<<< On Node 2 >>>
[root@host02 ~]# mkdir -p /u01/app/grid
[root@host02 ~]# mkdir -p /u01/app/11.2.0/grid
[root@host02 ~]# chown -R grid:oinstall /u01/app
[root@host02 ~]# chmod -R 775 /u01/app/grid
[root@host02 ~]# chmod -R 775 /u01/app/11.2.0/grid
[root@host02 ~]# mkdir -p /u01/app/oracle
[root@host02 ~]# chown -R oracle:oinstall /u01/app/oracle

<<< On Node 3 >>>
[root@host03 ~]# mkdir -p /u01/app/grid
[root@host03 ~]# mkdir -p /u01/app/11.2.0/grid
[root@host03 ~]# chown -R grid:oinstall /u01/app
[root@host03 ~]# chmod -R 775 /u01/app/grid
[root@host03 ~]# chmod -R 775 /u01/app/11.2.0/grid
[root@host03 ~]# mkdir -p /u01/app/oracle
[root@host03 ~]# chown -R oracle:oinstall /u01/app/oracle
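The same layout is created on every node, so it can be wrapped in a small helper to avoid copy/paste drift. A minimal sketch; the base directory is parameterized here (an assumption for illustration -- the practice always uses /u01), and the chown/chmod calls are skipped when the practice accounts are absent so the function stays runnable anywhere:

```shell
#!/bin/sh
# Sketch: create the Grid Infrastructure and Oracle base directories
# under a configurable base (the practice uses /u01).
create_ogi_dirs() {
    base="$1"
    mkdir -p "$base/app/grid" "$base/app/11.2.0/grid" "$base/app/oracle"
    # Ownership and permissions only apply when running as root and the
    # grid/oracle accounts exist, as they do in the lab environment.
    if [ "$(id -u)" = "0" ] && id grid >/dev/null 2>&1 \
                            && id oracle >/dev/null 2>&1; then
        chown -R grid:oinstall "$base/app"
        chmod -R 775 "$base/app/grid" "$base/app/11.2.0/grid"
        chown -R oracle:oinstall "$base/app/oracle"
    fi
}
```

On each node you would then run create_ogi_dirs /u01 (locally or via ssh).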

3) View the /etc/sysconfig/ntpd file and confirm that the -x option is specified
to address slewing. If necessary, change the file, and then restart the ntpd
service with the service ntpd restart command. Perform this step on all
three of your nodes.
[root@host01 ~]# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""
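A quick scripted check for the -x slewing option saves opening the file on each node. A sketch; the file path is passed as an argument (an assumption for illustration) so it can be exercised against a copy rather than the live /etc/sysconfig/ntpd:

```shell
#!/bin/sh
# Sketch: succeed if the ntpd OPTIONS line contains the -x (slew) flag.
ntpd_slew_ok() {
    grep -q '^OPTIONS=.*-x' "$1"
}
```

Typical use: ntpd_slew_ok /etc/sysconfig/ntpd || echo "add -x and restart ntpd".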

4) As the root user, start the local naming cache daemon on all three nodes with the
service nscd start command. To make sure nscd starts at reboot, execute the
chkconfig nscd on command. Perform these steps on all three of your
nodes.
[root@host01 ~]# service nscd start
Starting nscd:                       [  OK  ]
[root@host01 ~]# chkconfig nscd on

[root@host01 ~]# ssh host02 service nscd start
root's password: 0racle << password is not displayed
Starting nscd:                       [  OK  ]
[root@host01 ~]# ssh host02 chkconfig nscd on

[root@host01 ~]# ssh host03 service nscd start
root's password: 0racle << password is not displayed
Starting nscd:                       [  OK  ]
[root@host01 ~]# ssh host03 chkconfig nscd on
5) As the root user, run the /home/oracle/labs/less_04/limits.sh script.
This script replaces the .bash_profile files for the oracle and grid users,
replaces /etc/profile, and installs a new /etc/security/limits.conf with
entries for oracle and grid. Display the contents of the
/home/oracle/labs/less_04/bash_profile and
/home/oracle/labs/less_04/profile files with the cat command.
Perform this step on all three of your nodes.
[root@host01 ~]# cat /home/oracle/labs/less_04/bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin
export PATH

umask 022


[root@host01 ~]# cat /home/oracle/labs/less_04/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

pathmunge () {
    if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
        if [ "$2" = "after" ] ; then
            PATH=$PATH:$1
        else
            PATH=$1:$PATH
        fi
    fi
}

# ksh workaround
if [ -z "$EUID" -a -x /usr/bin/id ]; then
    EUID=`id -u`
    UID=`id -ru`
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /sbin
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
fi

# No core files by default
ulimit -S -c 0 > /dev/null 2>&1

if [ -x /usr/bin/id ]; then
    USER="`id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

HOSTNAME=`/bin/hostname`
HISTSIZE=1000

if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ]; then
    INPUTRC=/etc/inputrc
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        . $i
    fi
done

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    umask 022
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

unset i
unset pathmunge
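The pathmunge function in this profile adds a directory to PATH only if it is not already present, prepending by default and appending when passed "after". Its behavior can be exercised in isolation; the function below is restated from the listing so the sketch is self-contained:

```shell
#!/bin/sh
# pathmunge as shown in the /etc/profile listing: prepend (or append
# with "after") a directory to PATH only if it is not already there.
pathmunge () {
    if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
        if [ "$2" = "after" ] ; then
            PATH=$PATH:$1
        else
            PATH=$1:$PATH
        fi
    fi
}
```

For example, with PATH=/usr/bin:/bin, calling pathmunge /sbin yields /sbin:/usr/bin:/bin, and calling it a second time leaves PATH unchanged.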


[root@host01 ~]# cat /home/oracle/labs/less_04/limits.conf

#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to
#        - rtprio - max realtime priority
#<domain>      <type>  <item>         <value>

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4
# End of file
oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc     131072
oracle   hard   nproc     131072
oracle   soft   core      unlimited
oracle   hard   core      unlimited
oracle   soft   memlock   3500000
oracle   hard   memlock   3500000
grid     soft   nofile    131072
grid     hard   nofile    131072
grid     soft   nproc     131072
grid     hard   nproc     131072
grid     soft   core      unlimited
grid     hard   core      unlimited
grid     soft   memlock   3500000
grid     hard   memlock   3500000
# Recommended stack hard limit 32MB for oracle installations
# oracle   hard   stack   32768

[root@host01 ~]# cat /home/oracle/labs/less_04/limits.sh
cp /home/oracle/labs/less_04/profile /etc/profile
cp /home/oracle/labs/less_04/bash_profile /home/oracle/.bash_profile
cp /home/oracle/labs/less_04/bash_profile /home/grid/.bash_profile
cp /home/oracle/labs/less_04/limits.conf /etc/security/limits.conf

<<< On Node 1 >>>
[root@host01 ~]# /home/oracle/labs/less_04/limits.sh

<<< On Node 2 >>>
[root@host01 ~]# ssh host02 /home/oracle/labs/less_04/limits.sh
root@host02's password: 0racle << password is not displayed

<<< On Node 3 >>>
[root@host01 ~]# ssh host03 /home/oracle/labs/less_04/limits.sh
root@host03's password: 0racle << password is not displayed
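After limits.sh has run, the new shell limits only apply at the next login, but the limits.conf entries themselves can be sanity-checked with a one-line awk lookup. A sketch; the helper name and the parameterized file path are illustrative assumptions so it can be tested against a copy of the file:

```shell
#!/bin/sh
# Sketch: print the value for a user/type/item triple in a
# limits.conf-style file, e.g. "oracle soft nofile" -> 131072.
limit_value() {
    # $1=user $2=soft|hard $3=item $4=file
    awk -v u="$1" -v t="$2" -v i="$3" \
        '$1 == u && $2 == t && $3 == i { print $4 }' "$4"
}
```

Typical use: limit_value oracle soft nofile /etc/security/limits.conf should print 131072 after the script has run.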

6) As root, execute the oracleasm configure -i command to configure the
Oracle ASM library driver. The owner should be grid and the group should be
asmadmin. Make sure that the driver loads and scans disks on boot. Perform this
step on all three of your nodes.
<<< On Node 1 >>>
[root@host01 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

<<< On Node 2 >>>
[root@host01 ~]# ssh host02 oracleasm configure -i

output not shown

<<< On Node 3 >>>
[root@host01 ~]# ssh host03 oracleasm configure -i

output not shown


7) (On the first node only) Create the ASM disks needed for the practices. The
/ home/ or acl e/ l abs/ l ess_04/ cr eat edi sk. sh script has been provided to
do this for you. Look at the script, and then execute it as the r oot user. Perform this
step on the first node only.
[root@host01 ~]# cat /home/oracle/labs/less_04/createdisk.sh
oracleasm init
oracleasm createdisk ASMDISK01 /dev/xvdf1
oracleasm createdisk ASMDISK02 /dev/xvdf2
oracleasm createdisk ASMDISK03 /dev/xvdf3
oracleasm createdisk ASMDISK04 /dev/xvdf5
oracleasm createdisk ASMDISK05 /dev/xvdf6
oracleasm createdisk ASMDISK06 /dev/xvdf7
oracleasm createdisk ASMDISK07 /dev/xvdf8
oracleasm createdisk ASMDISK08 /dev/xvdf9
oracleasm createdisk ASMDISK09 /dev/xvdf10
oracleasm createdisk ASMDISK10 /dev/xvdf11
oracleasm createdisk ASMDISK11 /dev/xvdg1
oracleasm createdisk ASMDISK12 /dev/xvdg2
oracleasm createdisk ASMDISK13 /dev/xvdg3
oracleasm createdisk ASMDISK14 /dev/xvdg5

rpm -iv /stage/grid/rpm/cvuqdisk-1.0.9-1.rpm
ssh host02 rpm -iv /stage/grid/rpm/cvuqdisk-1.0.9-1.rpm
ssh host03 rpm -iv /stage/grid/rpm/cvuqdisk-1.0.9-1.rpm
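The fourteen createdisk calls in the script follow one pattern (sequential ASMDISKnn labels over a partition list), so they could equally be generated. A sketch that only prints the commands as a dry run -- the helper name is an illustrative assumption and nothing here touches real devices:

```shell
#!/bin/sh
# Sketch: emit the oracleasm createdisk commands for a list of
# partitions (dry run -- pipe the output to sh as root to execute).
gen_createdisk_cmds() {
    n=0
    for part in "$@"; do
        n=$((n + 1))
        printf 'oracleasm createdisk ASMDISK%02d %s\n' "$n" "$part"
    done
}
```

For example, gen_createdisk_cmds /dev/xvdf1 /dev/xvdf2 prints the first two createdisk lines of the script above.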

[root@host01 ~]# /home/oracle/labs/less_04/createdisk.sh
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done

Preparing packages for installation...
Using default group oinstall to install package cvuqdisk-1.0.9-1

root@host02's password:
Preparing packages for installation...
Using default group oinstall to install package cvuqdisk-1.0.9-1

root@host03's password:
Preparing packages for installation...
Using default group oinstall to install package cvuqdisk-1.0.9-1

[root@host01 ~]#

8) As the root user, scan the disks to make sure that they are available with the
oracleasm scandisks command. Perform an oracleasm listdisks
command to make sure all the disks have been configured. Perform this step on
all three of your nodes.
<<< On Node 1 >>>

[root@host01 ~]# oracleasm exit
Unmounting ASMlib driver filesystem: /dev/oracleasm
Unloading module "oracleasm": oracleasm

[root@host01 ~]# oracleasm init
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

[root@host01 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMDISK01"
Instantiating disk "ASMDISK02"
Instantiating disk "ASMDISK03"
Instantiating disk "ASMDISK04"
Instantiating disk "ASMDISK05"
Instantiating disk "ASMDISK06"
Instantiating disk "ASMDISK07"
Instantiating disk "ASMDISK08"
Instantiating disk "ASMDISK09"
Instantiating disk "ASMDISK10"
Instantiating disk "ASMDISK11"
Instantiating disk "ASMDISK12"
Instantiating disk "ASMDISK13"
Instantiating disk "ASMDISK14"

[root@host01 ~]# oracleasm listdisks
ASMDISK01
ASMDISK02
ASMDISK03
ASMDISK04
ASMDISK05
ASMDISK06
ASMDISK07
ASMDISK08
ASMDISK09
ASMDISK10
ASMDISK11
ASMDISK12
ASMDISK13
ASMDISK14
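Eyeballing fourteen disk names on three nodes is error-prone; the listdisks output can instead be checked against the expected set. A sketch -- the helper name is an illustrative assumption, and the listing is read from stdin so the check can be tested without ASMLib present:

```shell
#!/bin/sh
# Sketch: read an `oracleasm listdisks` listing on stdin and report
# any of ASMDISK01..ASMDISK14 that are missing; exit 0 if complete.
check_asm_disks() {
    listing=$(cat)
    rc=0
    for n in $(seq -w 1 14); do
        echo "$listing" | grep -q "^ASMDISK$n\$" \
            || { echo "missing ASMDISK$n"; rc=1; }
    done
    return $rc
}
```

Typical use on each node: oracleasm listdisks | check_asm_disks.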

<<< On Node 2 >>>
[root@host01 ~]# ssh host02 oracleasm exit
root@host02's password:

[root@host01 ~]# ssh host02 oracleasm init
root@host02's password:
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

[root@host01 ~]# ssh host02 oracleasm scandisks
root@host02's password:
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMDISK01"
Instantiating disk "ASMDISK02"
Instantiating disk "ASMDISK03"
Instantiating disk "ASMDISK04"
Instantiating disk "ASMDISK05"
Instantiating disk "ASMDISK06"
Instantiating disk "ASMDISK07"
Instantiating disk "ASMDISK08"
Instantiating disk "ASMDISK09"
Instantiating disk "ASMDISK10"
Instantiating disk "ASMDISK11"
Instantiating disk "ASMDISK12"
Instantiating disk "ASMDISK13"
Instantiating disk "ASMDISK14"

<<< On Node 3 >>>

[root@host01 ~]# ssh host03 oracleasm exit
root@host03's password:

[root@host01 ~]# ssh host03 oracleasm init
root@host03's password:
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

[root@host01 ~]# ssh host03 oracleasm scandisks
root@host03's password:
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMDISK01"
Instantiating disk "ASMDISK02"
Instantiating disk "ASMDISK03"
Instantiating disk "ASMDISK04"
Instantiating disk "ASMDISK05"
Instantiating disk "ASMDISK06"
Instantiating disk "ASMDISK07"
Instantiating disk "ASMDISK08"
Instantiating disk "ASMDISK09"
Instantiating disk "ASMDISK10"
Instantiating disk "ASMDISK11"
Instantiating disk "ASMDISK12"
Instantiating disk "ASMDISK13"
Instantiating disk "ASMDISK14"
Practice 4-2: Installing Oracle Grid Infrastructure
In this practice, you install Oracle Grid Infrastructure.
1) Use the Oracle Universal Installer (runInstaller) to install Oracle Grid
Infrastructure.
Your assigned cluster nodes are host01, host02, and host03.
Your cluster name is cluster01.
Your SCAN is cluster01-scan.
Your Oracle Grid Infrastructure software location is /stage/grid.
a) From your classroom PC desktop, execute ssh -X grid@host01 to open a
terminal session on host01 as the grid user.
[vncuser@classroom_pc ~]# ssh -X grid@host01
grid@host01's password:
/usr/bin/xauth: creating new authority file /home/grid/.Xauthority
[grid@host01 ~]$
b) Before starting the installation, make sure that the DNS server can resolve your
SCAN to three IP addresses.
[grid@host01 ~]$ nslookup cluster01-scan
Server:    192.0.2.1
Address:   192.0.2.1#53

Name: cluster01-scan.example.com
Address: 192.0.2.112
Name: cluster01-scan.example.com
Address: 192.0.2.113
Name: cluster01-scan.example.com
Address: 192.0.2.111
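The installer expects the SCAN to resolve to exactly three addresses, so counting them programmatically catches DNS mistakes before the OUI does. A sketch that parses nslookup-style output -- the helper name is an illustrative assumption, and the text is read from stdin so the logic can be tested without a DNS server:

```shell
#!/bin/sh
# Sketch: count resolved addresses in `nslookup <scan>` output.
# The DNS server's own "Address: x.x.x.x#53" line is excluded
# by filtering out lines containing '#'.
scan_ip_count() {
    grep '^Address:' | grep -vc '#'
}
```

Typical use: test "$(nslookup cluster01-scan | scan_ip_count)" = 3 || echo "SCAN does not resolve to 3 IPs".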
c) Change directory to the staged software location and start the OUI by executing
the runInstaller command from the /stage/grid directory.
[grid@host01 ~]$ id
uid=502(grid) gid=501(oinstall)
groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)

[grid@host01 ~]$ cd /stage/grid

[grid@host01 ~]$ ./runInstaller

d) On the Software Updates page, select Skip Software Updates and click Next.
e) On the Select Installation Option page, select the Install and Configure Oracle
Grid Infrastructure for a Cluster option and click Next.
f) On the Select Installation Type page, select Advanced Installation and click Next.
g) On the Product Languages page, select all languages and click Next.
h) The Grid Plug and Play Information page appears next. Deselect the Configure
GNS check box. When you do this, the GNS Sub Domain and GNS VIP Address
fields will be disabled. You must input:
Cluster Name: cluster01
SCAN Name cluster01-scan
SCAN Port 1521 (default value)
Input the proper data carefully. DO NOT GUESS here. If you are unsure of a
value, PLEASE ASK YOUR INSTRUCTOR.
Hint: If you enter the cluster name (for example, cluster01), the SCAN name will
auto-fill correctly. Let the SCAN Port default to 1521. Verify all data entered
on this page, and then click Next.
i) On the Cluster Node Information page, you add your second node only. DO
NOT for any reason install to all three nodes. Your first node will appear
in the page by default. Add your second node, host02. Click the Add button and
enter the fully qualified name of your second node (host02.example.com)
and the fully qualified host VIP address (host02-vip.example.com) into
the box and click OK. Your second node should appear in the window under your
first node. Click the SSH Connectivity button. Enter the grid password, which is
0racle. Click the Setup button. A dialog box stating that you have successfully
established passwordless SSH connectivity appears. Click OK to close the dialog
box. Click Next to continue.
j) On the Specify Network Usage page, you must configure the correct interface
types for the listed network interfaces. Select public for eth0. To configure HAIP
for the cluster interconnects, select private for both eth1 and eth2. If you are
unsure, check with your instructor for proper network interface usage. When you
have correctly assigned the interface types, click Next to continue.
k) On the Storage Option Information page, select Oracle Automatic Storage
Management (ASM) and click Next.
l) On the Create ASM Disk Group page, make sure that Disk Group Name is DATA
and Redundancy is Normal. In the Add Disks region, select ORCL:ASMDISK01,
ORCL:ASMDISK02, ORCL:ASMDISK03, and ORCL:ASMDISK04. Click
Next.
m) On the ASM Passwords page, click the Use Same Password for these accounts
button. In the Specify Password field, enter oracle_4U and confirm it in the
Confirm Password field. Click Next to continue.
n) Select the Do not use Intelligent Platform Management Interface (IPMI) option
on the Failure Isolation page and click Next to continue.
o) On the Privileged Operating System Groups page, select asmdba for the ASM
Database Administrator (OSDBA) group, asmoper for the ASM Instance
Administration Operator (OSOPER) group, and asmadmin for the ASM
Instance Administrator (OSASM) group. Click Next to continue.
p) On the Specify Installation Location page, make sure that Oracle Base is
/u01/app/grid and Software Location is /u01/app/11.2.0/grid. Click
Next.
q) On the Create Inventory page, Inventory Directory should be
/u01/app/oraInventory and the oraInventory Group Name should be
oinstall. Click Next.
r) On the Perform System Prerequisites page, the Installer checks whether all the
systems involved in the installation meet the minimum system requirements for
that platform. If the check is successful, click Next. If any deficiencies are found,
the installer will generate a fixup script to be run as root on host01 and host02.
From a terminal window as the root user, execute the
/tmp/CVU_11.2.0.3.0_grid/runfixup.sh script on host01, and then
execute the script on host02 as root. When finished, return to the installer and
click the Fix & Check Again button. Most likely, you will receive a Device
Checks for ASM warning. This can be safely ignored. Select the Ignore All
check box, click Yes on the confirmation box, and then click Next to continue.
[root@host01 ~]# /tmp/CVU_11.2.0.3.0_grid/runfixup.sh
[root@host01 ~]# ssh host02 /tmp/CVU_11.2.0.3.0_grid/runfixup.sh
Password: 0racle << password not displayed
[root@host01 ~]#
s) Click Install on the Summary screen. From this screen, you can monitor the
progress of the installation.
t) When the remote operations have finished, the Execute Configuration Scripts
window appears. You are instructed to run the orainstRoot.sh and root.sh
scripts as the root user on both nodes. As the root user, execute the
orainstRoot.sh and root.sh scripts on host01 (first) and host02 (second).
When running root.sh, accept /usr/local/bin as the local bin directory
by pressing Enter when prompted. Note: You must wait until the root.sh
script finishes running on the first node before executing it on the second node.
[grid@host01 ~]$ su
Password: 0racle << password not displayed

(On the first node)

[root@host01 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@host01 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'host01'
CRS-2676: Start of 'ora.mdnsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host01'
CRS-2676: Start of 'ora.gpnpd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2672: Attempting to start 'ora.gipcd' on 'host01'
CRS-2676: Start of 'ora.gipcd' on 'host01' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded

ASM created and started successfully.

Disk Group DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk a76bb6f3c5a64f75bf38af49001577fa.
Successful addition of voting disk 0ff19e4eaaf14f40bf72b5ed81165ff5.
Successful addition of voting disk 40747dc0eca34f2fbf3bc84645f455c3.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                 File Name         Disk group
--  -----    -----------------                 ---------         ----------
 1. ONLINE   a76bb6f3c5a64f75bf38af49001577fa  (ORCL:ASMDISK01)  [DATA]
 2. ONLINE   0ff19e4eaaf14f40bf72b5ed81165ff5  (ORCL:ASMDISK02)  [DATA]
 3. ONLINE   40747dc0eca34f2fbf3bc84645f455c3  (ORCL:ASMDISK03)  [DATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'host01'
CRS-2676: Start of 'ora.DATA.dg' on 'host01' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
#

(On the second node AFTER the root.sh script finishes on the first node)
[root@host01 ~]# ssh host02 /u01/app/oraInventory/orainstRoot.sh
root's password: 0racle << password not displayed
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.


[root@host01 ~]# ssh host02 /u01/app/11.2.0/grid/root.sh
root's password: 0racle  << password not displayed

Running Oracle 11g root.sh script...

Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory:
[/usr/local/bin]:
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but
found an active CSS daemon on node host01, number 1, and is
terminating
An active cluster was found during exclusive startup,
restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@host01 ~]# exit
[grid@host01 ~]$

u) After the scripts are executed on both nodes, click the OK button to close the
   dialog box. The configuration assistants will continue to execute from the Setup
   page.
v) When the configuration assistants have finished, click the Close button on the
   Finish page to exit the Installer.
2) When the installation finishes, verify it by checking that the software stack is
   running as expected. Execute the crsctl stat res -t command:
[grid@host01 ~]$ /u01/app/11.2.0/grid/bin/crsctl stat res -t

------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
------------------------------------------------------------------
Local Resources
------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.asm
               ONLINE  ONLINE       host01       Started
               ONLINE  ONLINE       host02       Started
ora.gsd
               OFFLINE OFFLINE      host01
               OFFLINE OFFLINE      host02
ora.net1.network
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.ons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host01
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host01
ora.cvu
      1        ONLINE  ONLINE       host01
ora.host01.vip
      1        ONLINE  ONLINE       host01
ora.host02.vip
      1        ONLINE  ONLINE       host02
ora.oc4j
      1        ONLINE  ONLINE       host01
ora.scan1.vip
      1        ONLINE  ONLINE       host02
ora.scan2.vip
      1        ONLINE  ONLINE       host01
ora.scan3.vip
      1        ONLINE  ONLINE       host01
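Because this output is tabular, it lends itself to quick scripted health checks. The following is a minimal sketch, not part of the course materials, that flags any resource reporting OFFLINE; it is shown here against a small captured sample rather than a live cluster, where you would pipe crsctl stat res -t output in instead.

```shell
# Sketch: list resources that report OFFLINE in saved `crsctl stat res -t`
# output. The sample mimics a fragment of the table above; on a live
# cluster, pipe the real command output into the awk filter instead.
sample='ora.DATA.dg
               ONLINE  ONLINE       host01
ora.gsd
               OFFLINE OFFLINE      host01
               OFFLINE OFFLINE      host02
ora.ons
               ONLINE  ONLINE       host01'

printf '%s\n' "$sample" | awk '
  /^ora\./         { res = $1 }            # remember the current resource name
  /OFFLINE/ && res { print res; res = "" } # report it once if any line is OFFLINE
'
```

Note that ora.gsd is expected to be OFFLINE in 11g Release 2, so a filter like this is a starting point for inspection, not a pass/fail test by itself.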


Practice 4-3: Creating Additional ASM Disk Groups
In this practice, you create additional ASM disk groups to support the activities in the rest
of the course. You create a disk group to hold the Fast Recovery Area (FRA) and another
disk group to hold ACFS file systems.
1) From the same terminal window you used to install Grid Infrastructure, set the grid
   user environment with the oraenv tool to the +ASM1 instance.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid]? +ASM1
The Oracle base has been set to /u01/app/grid
2) Start the ASM Configuration Assistant (ASMCA).
[grid@host01 ~]$ asmca
3) Create a disk group named FRA with four disks and external redundancy; choose
   disks ASMDISK05 through ASMDISK08.

   Step  Screen/Page Description      Choices or Values
   a.    Configure ASM: Disk Groups   Click Create.
   b.    Create Disk Group            Enter:
                                      Disk Group Name: FRA
                                      In the Redundancy section, select
                                      External (None).
                                      In the Select Member Disks section, select:
                                      ASMDISK05, ASMDISK06, ASMDISK07,
                                      ASMDISK08
                                      Click OK.
   c.    Disk Group: Creation         Click OK.
                                      Click Exit to dismiss ASMCA when finished.
                                      Click Yes in the ASM Configuration
                                      Assistant dialog box to verify.


Practices for Lesson 5
In this practice, you will add a third node to your cluster.

Practice 5-1: Adding a Third Node to Your Cluster
The goal of this practice is to extend your cluster to a third node.

Before you start this practice, make sure that you went through the following steps
on your third node: Practice 4-1, steps 1, 2, 3, 4, 5, 6, and 8.

Note: Unless specified otherwise, you are connected on your first node as the grid
user using a terminal session.
1) Set up the ssh user equivalence for the grid user between your first node and your
   third node.
[grid@host01 ~]$ /home/oracle/labs/less_05/ssh_config.sh
Setting up SSH user equivalency.
grid@host02's password:
grid@host03's password:
Checking SSH user equivalency.
host01
host02
host03

[grid@host01 ~]$
2) Make sure that you can connect from your first node to the third one without being
   prompted for passwords.
[grid@host01 ~]$ ssh host03 date
Fri May 4 10:38:10 EDT 2012
[grid@host01 ~]$
3) Make sure that you set up your environment variables correctly for the grid user to
   point to your grid installation.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid]? +ASM1
The Oracle base has been set to /u01/app/grid
$
4) Check your pre-grid installation for your third node using the Cluster Verification
   Utility.
[grid@host01 ~]$ cluvfy stage -pre crsinst -n host03

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "host01"


Checking user equivalence...

User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "192.0.2.0" with node(s) host03
TCP connectivity check passed for subnet "192.0.2.0"

Node connectivity passed for subnet "192.168.1.0" with node(s) host03
TCP connectivity check passed for subnet "192.168.1.0"


Interfaces found on subnet "192.0.2.0" that are likely candidates for VIP are:
host03 eth0:192.0.2.103

WARNING:
Interface subnet "192.0.2.0" does not have a gateway defined

Interfaces found on subnet "192.168.1.0" that are likely candidates for a private
interconnect are:
host03 eth1:192.168.1.103

Interfaces found on subnet "192.168.1.0" that are likely candidates for a private
interconnect are:
host03 eth2:192.168.1.203

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.0.2.0" for multicast communication with multicast group
"230.0.1.0"...
Check of subnet "192.0.2.0" for multicast communication with multicast group
"230.0.1.0" passed.

Checking subnet "192.168.1.0" for multicast communication with multicast group
"230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group
"230.0.1.0" passed.

Check of multicast communication passed.

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed

Free disk space check passed for "host03:/u01/app/11.2.0/grid"
Free disk space check passed for "host03:/tmp"
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
        host03
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed
Current group ID check passed


Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol (NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol (NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed

Pre-check for cluster services setup was unsuccessful on all the nodes.
$

5) Generate the fixup scripts by running cluvfy with the -fixup option.
[grid@host01 ~]$ cluvfy stage -pre crsinst -n host03 -fixup
Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "host01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "192.0.2.0" with node(s) host03
TCP connectivity check passed for subnet "192.0.2.0"

Node connectivity passed for subnet "192.168.1.0" with node(s) host03
TCP connectivity check passed for subnet "192.168.1.0"


Interfaces found on subnet "192.0.2.0" that are likely candidates for VIP are:
host03 eth0:192.0.2.103

WARNING:
Interface subnet "192.0.2.0" does not have a gateway defined

Interfaces found on subnet "192.168.1.0" that are likely candidates for a private
interconnect are:
host03 eth1:192.168.1.103

Interfaces found on subnet "192.168.1.0" that are likely candidates for a private
interconnect are:
host03 eth2:192.168.1.203

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.0.2.0" for multicast communication with multicast group
"230.0.1.0"...
Check of subnet "192.0.2.0" for multicast communication with multicast group
"230.0.1.0" passed.

Checking subnet "192.168.1.0" for multicast communication with multicast group
"230.0.1.0"...

Check of subnet "192.168.1.0" for multicast communication with multicast group
"230.0.1.0" passed.

Check of multicast communication passed.

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "host03:/u01/app/11.2.0/grid"
Free disk space check passed for "host03:/tmp"
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
        host03
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"

Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol (NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol (NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"

The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed
Fixup information has been generated for following node(s):
host03
Please run the following script on each node as "root" user to
execute the fixups:
'/tmp/CVU_11.2.0.3.0_grid/runfixup.sh'

Pre-check for cluster services setup was unsuccessful on all
the nodes.
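In runs this long it is easy to miss which checks actually failed. A small grep sketch (illustrative only, shown against a captured sample here) pulls just the failure lines out of a saved cluvfy log; on a real system you would redirect the cluvfy output to a file and grep that file instead.

```shell
# Sketch: extract only the failed checks from saved cluvfy output.
# The sample mimics three lines of the output above.
log='Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Time zone consistency check passed'

printf '%s\n' "$log" | grep -E 'failed|unsuccessful'
```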
6) Run the fixup script as directed.
[root@host01 ~]# ssh root@host03 /tmp/CVU_11.2.0.3.0_grid/runfixup.sh
root@host03's password:
Response file being used is :/tmp/CVU_11.2.0.3.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_grid/orarun.log

[root@host01 ~]# exit

[grid@host01 ~]$
7) After setting your environment, use the Cluster Verification Utility to make sure that
   you can add your third node to the cluster.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid]? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ cluvfy stage -pre nodeadd -n host03

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "host01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful


Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.0.2.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.0.2.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.0.2.0" for multicast communication with multicast group
"230.0.1.0"...
Check of subnet "192.0.2.0" for multicast communication with multicast group
"230.0.1.0" passed.

Check of multicast communication passed.

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.0.2.0"


Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.1.0"


Check: Node connectivity for interface "eth2"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.0.2.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.


Node connectivity check passed

Checking multicast communication...

Checking subnet "192.0.2.0" for multicast communication with multicast group
"230.0.1.0"...
Check of subnet "192.0.2.0" for multicast communication with multicast group
"230.0.1.0" passed.

Checking subnet "192.168.1.0" for multicast communication with multicast group
"230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group
"230.0.1.0" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "host03:/u01/app/11.2.0/grid"
Free disk space check passed for "host01:/u01/app/11.2.0/grid"
Free disk space check passed for "host03:/tmp"
Free disk space check passed for "host01:/tmp"
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"

Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol (NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol (NTP) passed



User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes


Pre-check for node addition was successful.
$
8) Add your third node to the cluster from your first node:
[grid@host01 ~]$ cd $ORACLE_HOME/oui/bin

[grid@host01 bin]$ ./addNode.sh -silent
"CLUSTER_NEW_NODES={host03}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={host03-vip}"

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "host01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.0.2.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.0.2.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.0.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.0.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.0.2.0"


Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.1.0"


Check: Node connectivity for interface "eth2"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.0.2.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.0.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.0.2.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "host03:/u01/app/11.2.0/grid"
Free disk space check passed for "host01:/u01/app/11.2.0/grid"
Free disk space check passed for "host03:/tmp"
Free disk space check passed for "host01:/tmp"
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol (NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol (NTP) passed


User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for node addition was successful.
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4081 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes host02, host03 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      host03
         /u01: Required 4.19GB : Available 13.52GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for Instant Client 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Oracle Net Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.3.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Cluster Verification Utility Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle LDAP administration 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Thursday, April 26, 2012 7:41:26 AM UTC)
.
1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Thursday, April 26, 2012 7:41:30 AM UTC)
............................................................... 96% Done.
Home copied to new nodes

Saving inventory on nodes (Thursday, April 26, 2012 7:46:19 AM UTC)
.
100% Done.
Save inventory complete
WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'host03'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes host03
/u01/app/11.2.0/grid/root.sh #On nodes host03
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
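The installer output above tells you which root scripts to run and where. If you want to double-check against the log rather than scrolling back through the transcript, a simple grep can pull the script paths out of a saved copy. This is only an illustrative sketch, not part of the official procedure; the here-doc below stands in for the real contents of /tmp/silentInstall.log.

```shell
# Illustrative helper: extract the root-script paths from a saved
# addNode.sh transcript. The here-doc stands in for the real log;
# in practice you would run: grep -o '^/[^ ]*\.sh' /tmp/silentInstall.log
grep -o '^/[^ ]*\.sh' <<'EOF'
/u01/app/oraInventory/orainstRoot.sh #On nodes host03
/u01/app/11.2.0/grid/root.sh #On nodes host03
EOF
```

With the sample input above, this prints the two script paths, one per line, in the order they must be run.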
9) Connect as the root user to your third node in a terminal session and execute the
following scripts: /u01/app/oraInventory/orainstRoot.sh and
/u01/app/11.2.0/grid/root.sh.
[grid@host01 ~]$ ssh root@host03
root@host03's password:
Last login: Tue Sep 29 09:59:03 2009 from host01.example.com
[root@host03 grid]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@host03 ~]# cd /u01/app/11.2.0/grid

[root@host03 grid]# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node host01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@host03 grid]# exit

[grid@host01 ~]$
10) From your first node, check that your cluster is integrated and that the cluster is not
divided into separate parts.
[grid@host01 ~]$ cluvfy stage -post nodeadd -n host03

Performing post-checks for node addition

Checking node reachability...
Node reachability check passed from node "host01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth3"
Node connectivity passed for interface "eth3"

Node connectivity check passed


Checking cluster integrity...


Cluster integrity check passed


Checking CRS integrity...

CRS integrity check passed

Checking shared resources...
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "10.216.52.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.216.52.0"

Node connectivity passed for subnet "10.216.96.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.216.96.0"

Node connectivity passed for subnet "10.196.28.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.196.28.0"

Node connectivity passed for subnet "10.196.180.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.196.180.0"


Interfaces found on subnet "10.216.52.0" that are likely candidates for VIP are:
host03 eth0:10.216.54.235
host02 eth0:10.216.54.234
host01 eth0:10.216.54.233

Interfaces found on subnet "10.216.96.0" that are likely candidates for a private interconnect are:
host03 eth1:10.216.100.226
host02 eth1:10.216.96.144
host01 eth1:10.216.101.101

Interfaces found on subnet "10.196.28.0" that are likely candidates for a private interconnect are:
host03 eth2:10.196.31.17
host02 eth2:10.196.31.16
host01 eth2:10.196.31.15

Interfaces found on subnet "10.196.180.0" that are likely candidates for a private interconnect are:
host03 eth3:10.196.180.17 eth3:10.196.182.229 eth3:10.196.180.224
host02 eth3:10.196.180.16 eth3:10.196.181.231 eth3:10.196.181.244
host01 eth3:10.196.180.15 eth3:10.196.181.239 eth3:10.196.183.15 eth3:10.196.180.232

Node connectivity check passed

Checking node application existence...

Checking existence of VIP node application (required)
Check passed.

Checking existence of ONS node application (optional)
Check passed.

Checking existence of GSD node application (optional)
Check ignored.

Checking existence of EONS node application (optional)
Check passed.

Checking existence of NETWORK node application (optional)
Check passed.


Checking Single Client Access Name (SCAN)...

Checking name resolution setup for "cl7215-scan.cl7215.example.com"...

Verification of SCAN VIP and Listener setup passed

User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed


Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP


Starting Clock synchronization checks using Network Time Protocol (NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol (NTP) passed


Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.
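When a cluvfy run is this long, it is easy to miss a single failing check in the scroll-back. One option is to capture the output to a file and scan it for non-passing lines. The sketch below assumes you saved the transcript yourself (the here-doc stands in for such a file; nothing here is an official cluvfy option):

```shell
# Sketch: scan a captured cluvfy transcript for failures. The here-doc
# stands in for a file you saved, e.g. cluvfy stage -post nodeadd ... > postadd.log
# (the file name is illustrative only).
grep -Ei 'failed|error' <<'EOF' || echo "no failed checks"
Cluster integrity check passed
Post-check for node addition was successful.
EOF
```

With an all-passing transcript like the sample, grep finds nothing and the fallback message "no failed checks" is printed instead.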
11) Make sure that local and cluster resources are placed properly and that the FRA ASM
disk group is mounted on all three nodes.
[grid@host01 ~]$ crsctl stat res -t
------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
------------------------------------------------------------------------------
Local Resources
------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.asm
               ONLINE  ONLINE       host01                   Started
               ONLINE  ONLINE       host02                   Started
               ONLINE  ONLINE       host03                   Started
ora.gsd
               OFFLINE OFFLINE      host01
               OFFLINE OFFLINE      host02
               OFFLINE OFFLINE      host03
ora.net1.network
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.ons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
------------------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host03
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host01
ora.cvu
      1        ONLINE  ONLINE       host01
ora.host01.vip
      1        ONLINE  ONLINE       host01
ora.host02.vip
      1        ONLINE  ONLINE       host02
ora.host03.vip
      1        ONLINE  ONLINE       host03
ora.oc4j
      1        ONLINE  ONLINE       host01
ora.scan1.vip
      1        ONLINE  ONLINE       host02
ora.scan2.vip
      1        ONLINE  ONLINE       host03
ora.scan3.vip
      1        ONLINE  ONLINE       host01
[grid@host01 ~]$
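The resource table above is wide, and on a larger cluster it is easier to grep a saved copy of the `crsctl stat res -t` output for anything OFFLINE than to read it line by line. Note that in 11.2 the ora.gsd resource is OFFLINE by default, so hits on it are expected. A sketch using an inlined sample (requires GNU grep for -B):

```shell
# Sketch: show OFFLINE state lines (plus the resource name on the line
# before) from saved "crsctl stat res -t" output. The here-doc is a
# trimmed sample; ora.gsd being OFFLINE is normal in 11.2.
grep -B1 'OFFLINE' <<'EOF'
ora.FRA.dg
      ONLINE  ONLINE       host03
ora.gsd
      OFFLINE OFFLINE      host01
EOF
```

With the sample input, only the ora.gsd resource and its OFFLINE state line are printed.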

Practices for Lesson 6
In these practices, you will verify, stop, and start Oracle Clusterware. You will add and
remove Oracle Clusterware configuration files and back up the Oracle Cluster Registry
and the Oracle Local Registry.
Practice 6-1: Verifying, Starting, and Stopping Oracle
Clusterware
In this practice, you check the status of Oracle Clusterware using both the operating
system commands and the crsctl utility. You will also start and stop Oracle
Clusterware.
1) Connect to the first node of your cluster as the grid user. You can use the oraenv
script to define ORACLE_SID, ORACLE_HOME, PATH, ORACLE_BASE, and
LD_LIBRARY_PATH for your environment.
[grid@host01 ~]$ id
uid=502(grid) gid=501(oinstall)
groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

2) Using the operating system commands, verify that the Oracle Clusterware daemon
processes are running on the current node. (Hint: Most of the Oracle Clusterware
daemon processes have names that end with d.bin.)
[grid@host01 ~]$ pgrep -l d.bin
20129 ohasd.bin
20272 mdnsd.bin
20284 gpnpd.bin
20297 gipcd.bin
20313 osysmond.bin
20366 ocssd.bin
20497 octssd.bin
20521 evmd.bin
20722 crsd.bin
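Beyond eyeballing the pgrep listing, you can check a captured listing for specific daemons you expect to see. The sketch below is illustrative only: the `sample` variable inlines test data standing in for live `pgrep -l d.bin` output, and the daemon list is a subset chosen for the example.

```shell
# Sketch: verify that core Clusterware daemons appear in a captured
# "pgrep -l d.bin" listing; any missing name is reported. The sample
# data is inlined for illustration, not taken from a live node.
sample='20129 ohasd.bin
20366 ocssd.bin
20521 evmd.bin
20722 crsd.bin'
for d in ohasd ocssd evmd crsd; do
  printf '%s\n' "$sample" | grep -q "${d}\.bin" || echo "missing: ${d}.bin"
done
echo "daemon check done"
```

With the sample data, all four names are found, so only "daemon check done" is printed.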
3) Using the crsctl utility, verify that Oracle Clusterware is running on the current
node.
[grid@host01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
4) Verify the status of all cluster resources that are being managed by Oracle
Clusterware for all nodes.
[grid@host01 ~]$ crsctl stat res -t
------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
------------------------------------------------------------------
Local Resources
------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.asm
               ONLINE  ONLINE       host01       Started
               ONLINE  ONLINE       host02       Started
               ONLINE  ONLINE       host03       Started
ora.gsd
               OFFLINE OFFLINE      host01
               OFFLINE OFFLINE      host02
               OFFLINE OFFLINE      host03
ora.net1.network
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.ons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host03
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host01
ora.cvu
      1        ONLINE  ONLINE       host01
ora.host01.vip
      1        ONLINE  ONLINE       host01
ora.host02.vip
      1        ONLINE  ONLINE       host02
ora.host03.vip
      1        ONLINE  ONLINE       host03
ora.oc4j
      1        ONLINE  ONLINE       host01
ora.scan1.vip
      1        ONLINE  ONLINE       host02
ora.scan2.vip
      1        ONLINE  ONLINE       host03
ora.scan3.vip
      1        ONLINE  ONLINE       host01
5) Attempt to stop Oracle Clusterware on the current node while logged in as the grid
user. What happens and why?
[grid@host01 ~]$ crsctl stop crs
CRS-4563: Insufficient user privileges.
CRS-4000: Command Stop failed, or completed with errors.
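The CRS-4563 error in step 5 is a privilege check: `crsctl stop crs` must run with effective uid 0 (root). A small sketch of the same check (the function name is ours, and it is parameterized so it can be demonstrated without actually being root; uid 502 is the grid user's uid from the `id` output in step 1):

```shell
# Minimal sketch of the privilege check behind CRS-4563: stopping
# Clusterware requires root (uid 0).
need_root() {
  if [ "$1" -eq 0 ]; then
    echo "ok to run crsctl stop crs"
  else
    echo "insufficient privileges (CRS-4563)"
  fi
}
need_root 502   # the grid user's uid from step 1
need_root 0     # root
```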
6) Switch to the root account and stop Oracle Clusterware only on the current node.
Exit the switch user command when the stop succeeds.
[grid@host01 ~]$ su -
Password: 0racle  << Password is not displayed

[root@host01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.oc4j' on 'host01'
CRS-2673: Attempting to stop 'ora.cvu' on 'host01'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.host01.vip' on 'host01'
CRS-2677: Stop of 'ora.scan3.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'host02'
CRS-2677: Stop of 'ora.host01.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.host01.vip' on 'host03'
CRS-2676: Start of 'ora.host01.vip' on 'host03' succeeded
CRS-2676: Start of 'ora.scan3.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'host02'
CRS-2677: Stop of 'ora.FRA.dg' on 'host01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'host02' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'host02'
CRS-2677: Stop of 'ora.cvu' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'host02'
CRS-2676: Start of 'ora.cvu' on 'host02' succeeded
CRS-2676: Start of 'ora.oc4j' on 'host02' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host01'
CRS-2677: Stop of 'ora.ons' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host01'
CRS-2677: Stop of 'ora.net1.network' on 'host01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources
on 'host01' has completed
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2673: Attempting to stop 'ora.crf' on 'host01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.crf' on 'host01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'host01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
CRS-2677: Stop of 'ora.gipcd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-
managed resources on 'host01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@host01 ~]# exit
[grid@host01 ~]$
7) Attempt to check the status of Oracle Clusterware now that it has been successfully
stopped.
[grid@host01 ~]$ crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services

[grid@host01 ~]$ crsctl check cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.
8) Connect to the second node of your cluster and verify that Oracle Clusterware is still
running on that node. You may need to set your environment for the second node by
using the oraenv utility.
[grid@host01 ~]$ ssh host02
Last login: Thu Aug 27 17:28:29 2009 from host01.example.com

[grid@host02 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base has been set to /u01/app/grid

[grid@host02 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
9) Verify that all cluster resources are running on the second node, stopped on the first
node, and that the VIP resources from the first node have migrated or failed over to
the second node. The ora.oc4j and the ora.gsd resources are expected to be
offline. Exit the connection to the second node when done.
[grid@host02 ~]$ crsctl stat res -t
------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
------------------------------------------------------------------
Local Resources
------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.FRA.dg
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.LISTENER.lsnr
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.asm
               ONLINE  ONLINE       host02       Started
               ONLINE  ONLINE       host03       Started
ora.gsd
               OFFLINE OFFLINE      host02
               OFFLINE OFFLINE      host03
ora.net1.network
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.ons
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host03
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host02
ora.cvu
      1        ONLINE  ONLINE       host02
ora.host01.vip
      1        ONLINE  INTERMEDIATE host03       FAILED OVER
ora.host02.vip
      1        ONLINE  ONLINE       host02
ora.host03.vip
      1        ONLINE  ONLINE       host03
ora.oc4j
      1        ONLINE  ONLINE       host02
ora.scan1.vip
      1        ONLINE  ONLINE       host02
ora.scan2.vip
      1        ONLINE  ONLINE       host03
ora.scan3.vip
      1        ONLINE  ONLINE       host02

[grid@host02 ~]$ exit
Connection to host02 closed.
[grid@host01 ~]$
10) Restart Oracle Clusterware on the first node as the root user. Return to the grid
account and verify the results.
Note: You may need to check the status of all the resources several times until they
have all been restarted. You can tell that they are all complete when the
ora.orcl.db resource has a State Details of Open. It may take several minutes to
completely restart all resources.
[grid@host01 ~]$ su -
Password: 0racle  << Password is not displayed

[root@host01 ~]# /u01/app/11.2.0/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@host01 ~]# exit

[grid@host01 ~]$ crsctl stat res -t
------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
------------------------------------------------------------------
Local Resources
------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.asm
               ONLINE  ONLINE       host01       Started
               ONLINE  ONLINE       host02       Started
               ONLINE  ONLINE       host03       Started
ora.gsd
               OFFLINE OFFLINE      host01
               OFFLINE OFFLINE      host02
               OFFLINE OFFLINE      host03
ora.net1.network
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
ora.ons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host01
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host03
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host02
ora.cvu
      1        ONLINE  ONLINE       host02
ora.host01.vip
      1        ONLINE  ONLINE       host01
ora.host02.vip
      1        ONLINE  ONLINE       host02
ora.host03.vip
      1        ONLINE  ONLINE       host03
ora.oc4j
      1        ONLINE  ONLINE       host02
ora.scan1.vip
      1        ONLINE  ONLINE       host01
ora.scan2.vip
      1        ONLINE  ONLINE       host03
ora.scan3.vip
      1        ONLINE  ONLINE       host02

[grid@host01 ~]$
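The note in step 10 says to re-run `crsctl stat res -t` until every resource is back, using the ora.orcl.db resource reporting Open as the signal; that wait lends itself to a small polling loop. A sketch of the idea (the helper name is ours, and it is shown against captured status text rather than querying a live cluster):

```shell
# Succeeds when the status text shows the ora.orcl.db resource with a
# State Details of "Open" (the note's "everything restarted" signal).
db_open() {
  echo "$1" | grep -A1 'ora\.orcl\.db' | grep -q 'Open'
}
# In a live cluster this would be refreshed with: status=$(crsctl stat res -t)
status="ora.orcl.db
      1        ONLINE  ONLINE       host01       Open"
until db_open "$status"; do
  sleep 10          # re-query crsctl here before checking again
done
echo "all resources restarted"
```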




Practice 6-2: Adding and Removing Oracle Clusterware Configuration Files
In this practice, you determine the current location of your voting disks and Oracle
Cluster Registry (OCR) files. You will then add another OCR location and remove it.
1) Use the crsctl utility to determine the location of the voting disks that are currently
used by your Oracle Clusterware installation.
[grid@host01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id          File Name         Disk group
--  -----    -----------------          ---------         ---------
 1. ONLINE   a76bb6f3c5a64f7...1577fa   (ORCL:ASMDISK01)  [DATA]
 2. ONLINE   0ff19e4eaaf14f4...165ff5   (ORCL:ASMDISK02)  [DATA]
 3. ONLINE   40747dc0eca34f2...f455c3   (ORCL:ASMDISK03)  [DATA]
Located 3 voting disk(s).
2) Use the ocrcheck utility to determine the location of the Oracle Cluster
Registry (OCR) files.
[grid@host01 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  : 3
         Total space (kbytes)     : 262120
         Used space (kbytes)      : 2736
         Available space (kbytes) : 259384
         ID                       : 1510392211
         Device/File Name         : +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user
3) Verify that the FRA ASM disk group is currently online for all nodes using the
crsctl utility.
[grid@host01 ~]$ crsctl stat res ora.FRA.dg -t
------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
------------------------------------------------------------------
Local Resources
------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
4) If the FRA ASM disk group is not online, use the asmcmd utility to mount the FRA
disk group as the grid user.
Note: This step may not be necessary if it is already in an online state on each node.
Verify the results. You may have to run the commands on each node.
[grid@host01 ~]$ asmcmd mount FRA

[grid@host01 ~]$ crsctl stat res ora.FRA.dg -t
------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
------------------------------------------------------------------
Local Resources
------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
5) Switch to the root account and add a second OCR location that is to be stored in the
FRA ASM disk group. Use the ocrcheck command to verify the results.
[grid@host01 ~]$ su -
Password: 0racle  << Password is not displayed

[root@host01 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -add +FRA

[root@host01 ~]# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  : 3
         Total space (kbytes)     : 262120
         Used space (kbytes)      : 2736
         Available space (kbytes) : 259384
         ID                       : 1510392211
         Device/File Name         : +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded


         Logical corruption check succeeded

6) Examine the contents of the ocr.loc configuration file to see the changes made to
the file referencing the new OCR location.
[root@host01 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device +FRA
ocrconfig_loc=+DATA
ocrmirrorconfig_loc=+FRA
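The ocr.loc keys shown in step 6 are plain key=value pairs, so the configured OCR locations can also be read back with a one-line filter, for instance to confirm both locations after the `-add`. A sketch (the helper name is ours; a copy of the step 6 contents is written to a temp file for the demonstration):

```shell
# Extract the configured OCR locations from an ocr.loc-style file.
ocr_locations() {
  grep -E '^(ocrconfig_loc|ocrmirrorconfig_loc)=' "$1" | cut -d= -f2
}
# Reproduce the step 6 contents in a temp file for the demo; on a real
# node you would point this at /etc/oracle/ocr.loc instead.
cat > /tmp/ocr.loc.demo <<'EOF'
#Device/file getting replaced by device +FRA
ocrconfig_loc=+DATA
ocrmirrorconfig_loc=+FRA
EOF
ocr_locations /tmp/ocr.loc.demo
```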

7) Open a connection to your second node as the root user, and remove the second
OCR file that was added from the first node. Exit the remote connection and verify
the results when completed.
[root@host01 ~]# ssh host02
root@host02's password: 0racle  << Password is not displayed
Last login: Thu Apr 26 15:45:53 2012 from host01.example.com

[root@host02 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -delete +FRA

[root@host02 ~]# exit
Connection to host02 closed.

[root@host01 ~]# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  : 3
         Total space (kbytes)     : 262120
         Used space (kbytes)      : 2736
         Available space (kbytes) : 259384
         ID                       : 1510392211
         Device/File Name         : +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

Practice 6-3: Performing a Backup of the OCR and OLR
In this practice, you determine the location of the Oracle Local Registry (OLR) and
perform backups of the OCR and OLR files.
1) Use the ocrconfig utility to list the automatic backups of the Oracle Cluster
Registry (OCR) and the node or nodes on which they have been performed.
Note: You will see backups listed only if it has been more than four hours since Grid
Infrastructure was installed.
[root@host01 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup

host02     2012/04/27 12:50:44
/u01/app/11.2.0/grid/cdata/cluster01/backup00.ocr

host01     2012/04/27 08:28:29
/u01/app/11.2.0/grid/cdata/cluster01/backup01.ocr

host01     2012/04/27 04:28:28
/u01/app/11.2.0/grid/cdata/cluster01/backup02.ocr

host01     2012/04/26 00:49:02
/u01/app/11.2.0/grid/cdata/cluster01/day.ocr

host01     2012/04/24 16:48:56
/u01/app/11.2.0/grid/cdata/cluster01/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
2) Perform a manual backup of the OCR.
[root@host01 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup

host02     2012/04/27 13:55:27
/u01/app/11.2.0/grid/cdata/cluster01/backup_20120427_135527.ocr
3) Display only the manual backups that have been performed and identify the node on
which the backup was stored. Do logical backups appear in the display?
[root@host01 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup manual

host02     2012/04/27 13:55:27
/u01/app/11.2.0/grid/cdata/cluster01/backup_20120427_135527.ocr
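The -showbackup listing pairs a node/timestamp line with a path line, so the backup file paths can be pulled out of it with a filter, for example to feed a copy job that archives OCR backups off the cluster. A sketch (the helper name is ours; it is run against captured output from step 3 rather than a live ocrconfig call):

```shell
# Extract the .ocr backup file paths from `ocrconfig -showbackup` output.
backup_paths() {
  echo "$1" | grep -o '/u01[^ ]*\.ocr'
}
sample="host02     2012/04/27 13:55:27
/u01/app/11.2.0/grid/cdata/cluster01/backup_20120427_135527.ocr"
backup_paths "$sample"
```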
4) Determine the location of the Oracle Local Registry (OLR) using the ocrcheck
utility.
[root@host01 ~]# /u01/app/11.2.0/grid/bin/ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  : 3
         Total space (kbytes)     : 262120
         Used space (kbytes)      : 2672
         Available space (kbytes) : 259448
         ID                       : 1610122891

         Device/File Name         : /u01/app/11.2.0/grid/cdata/host01.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded

Practices for Lesson 7
In this practice, you will install a Patch Set Update for your Oracle Grid Infrastructure
11.2.0.3 installation.



Practice 7-1: Applying a PSU to the Grid Infrastructure Homes
The goal of this practice is to apply a Grid Infrastructure Patch Set Update to your
cluster.

In this practice, you will apply OPatch patch p6880880_112000_LINUX.zip and
Grid Infrastructure Patch Set Update 11.2.0.3.2 to the nodes in your cluster. These
patches are located in the /share directory, which is NFS mounted on all three of your
cluster nodes.

1) The latest version of OPatch must be installed in each <Grid_home> to be patched.
The OPatch patch p6880880_112000_LINUX.zip is located in /share. As the
grid user, unzip the patch to /u01/app/11.2.0/grid on hosts host01,
host02, and host03.
[grid@host01 ~]$ cd /share
[grid@host01 share]$ ls -la
-rw-r--r-- 1 root root 32510817 Apr 26 12:18
p6880880_112000_LINUX.zip
...

[grid@host01 share]$ unzip -o p6880880_112000_LINUX.zip -d /u01/app/11.2.0/grid
Archive:  p6880880_112000_LINUX.zip
   creating: /u01/app/11.2.0/grid/OPatch/oplan/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/README.html
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/README.txt
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/oplan.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/oracle.oplan.classpath.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/automation.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/OsysModel.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/EMrepoDrivers.jar
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/apache-commons/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/apache-commons/commons-cli-1.0.jar
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/activation.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jaxb-api.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jaxb-impl.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jsr173_1.0_api.jar

  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/osysmodel-utils.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/CRSProductDriver.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/oplan
replace /u01/app/11.2.0/grid/OPatch/docs/FAQ? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/11.2.0/grid/OPatch/docs/FAQ
  inflating: /u01/app/11.2.0/grid/OPatch/docs/Users_Guide.txt
  inflating: /u01/app/11.2.0/grid/OPatch/docs/Prereq_Users_Guide.txt
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/opatch.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/opatchsdk.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.unix.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.windows.jar
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/opatch_prereq.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/rulemap.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/runtime_prereq.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/oui/knowledgesrc.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/prerequisite.properties
  inflating: /u01/app/11.2.0/grid/OPatch/opatch
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.bat
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.ini
  inflating: /u01/app/11.2.0/grid/OPatch/opatchdiag
  inflating: /u01/app/11.2.0/grid/OPatch/opatchdiag.bat
  inflating: /u01/app/11.2.0/grid/OPatch/emdpatch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/README.txt
   creating: /u01/app/11.2.0/grid/OPatch/ocm/bin/
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
   creating: /u01/app/11.2.0/grid/OPatch/ocm/doc/
   creating: /u01/app/11.2.0/grid/OPatch/ocm/lib/
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmclnt-14.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmclnt.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmcommon.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/http_client.jar

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM A - 66
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jcert.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jnet.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jsse.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/log4j-core.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/osdt_core3.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/osdt_jce.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/regexp.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/xmlparserv2.jar
 extracting: /u01/app/11.2.0/grid/OPatch/ocm/ocm.zip
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/ocm_platforms.txt
   creating: /u01/app/11.2.0/grid/OPatch/crs/
   creating: /u01/app/11.2.0/grid/OPatch/crs/log/
 extracting: /u01/app/11.2.0/grid/OPatch/crs/log/dummy
  inflating: /u01/app/11.2.0/grid/OPatch/crs/auto_patch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crsconfig_lib.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crsdelete.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crspatch.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/installPatch.excl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/oracss.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/patch112.pl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/s_crsconfig_defs
  inflating: /u01/app/11.2.0/grid/OPatch/crs/s_crsconfig_lib.pm

[grid@host01 share]$ ssh host02
Last login: Thu Apr 26 12:28:13 2012 from host01.example.com

[grid@host02 ~]$ cd /share

[grid@host02 share]$ unzip -o p6880880_112000_LINUX.zip -d /u01/app/11.2.0/grid
Archive:  p6880880_112000_LINUX.zip
   creating: /u01/app/11.2.0/grid/OPatch/oplan/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/README.html
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/README.txt
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/oplan.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/oracle.oplan.classpath.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/automation.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/OsysModel.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/EMrepoDrivers.jar
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/apache-commons/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/apache-commons/commons-cli-1.0.jar
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/activation.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jaxb-api.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jaxb-impl.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jsr173_1.0_api.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/osysmodel-utils.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/CRSProductDriver.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/oplan
replace /u01/app/11.2.0/grid/OPatch/docs/FAQ? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/11.2.0/grid/OPatch/docs/FAQ
  inflating: /u01/app/11.2.0/grid/OPatch/docs/Users_Guide.txt
  inflating: /u01/app/11.2.0/grid/OPatch/docs/Prereq_Users_Guide.txt
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/opatch.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/opatchsdk.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.unix.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.windows.jar
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/opatch_prereq.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/rulemap.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/runtime_prereq.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/oui/knowledgesrc.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/prerequisite.properties
  inflating: /u01/app/11.2.0/grid/OPatch/opatch
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.bat
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.ini
  inflating: /u01/app/11.2.0/grid/OPatch/opatchdiag
  inflating: /u01/app/11.2.0/grid/OPatch/opatchdiag.bat
  inflating: /u01/app/11.2.0/grid/OPatch/emdpatch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/README.txt
   creating: /u01/app/11.2.0/grid/OPatch/ocm/bin/
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
   creating: /u01/app/11.2.0/grid/OPatch/ocm/doc/
   creating: /u01/app/11.2.0/grid/OPatch/ocm/lib/
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmclnt-14.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmclnt.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmcommon.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/http_client.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jcert.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jnet.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jsse.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/log4j-core.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/osdt_core3.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/osdt_jce.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/regexp.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/xmlparserv2.jar
 extracting: /u01/app/11.2.0/grid/OPatch/ocm/ocm.zip
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/ocm_platforms.txt
   creating: /u01/app/11.2.0/grid/OPatch/crs/
   creating: /u01/app/11.2.0/grid/OPatch/crs/log/
 extracting: /u01/app/11.2.0/grid/OPatch/crs/log/dummy
  inflating: /u01/app/11.2.0/grid/OPatch/crs/auto_patch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crsconfig_lib.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crsdelete.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crspatch.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/installPatch.excl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/oracss.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/patch112.pl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/s_crsconfig_defs
  inflating: /u01/app/11.2.0/grid/OPatch/crs/s_crsconfig_lib.pm

[grid@host02 share]$ ssh host03
Last login: Thu Apr 26 12:28:13 2012 from host01.example.com

[grid@host03 ~]$ cd /share

[grid@host03 share]$ unzip -o p6880880_112000_LINUX.zip -d /u01/app/11.2.0/grid
Archive:  p6880880_112000_LINUX.zip
   creating: /u01/app/11.2.0/grid/OPatch/oplan/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/README.html
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/README.txt
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/oplan.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/oracle.oplan.classpath.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/automation.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/OsysModel.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/EMrepoDrivers.jar
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/apache-commons/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/apache-commons/commons-cli-1.0.jar
   creating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/activation.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jaxb-api.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jaxb-impl.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/jaxb/jsr173_1.0_api.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/osysmodel-utils.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/jlib/CRSProductDriver.jar
  inflating: /u01/app/11.2.0/grid/OPatch/oplan/oplan
replace /u01/app/11.2.0/grid/OPatch/docs/FAQ? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: /u01/app/11.2.0/grid/OPatch/docs/FAQ
  inflating: /u01/app/11.2.0/grid/OPatch/docs/Users_Guide.txt
  inflating: /u01/app/11.2.0/grid/OPatch/docs/Prereq_Users_Guide.txt
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/opatch.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/opatchsdk.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.unix.jar
  inflating: /u01/app/11.2.0/grid/OPatch/jlib/oracle.opatch.classpath.windows.jar
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/opatch_prereq.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/rulemap.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/opatch/runtime_prereq.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/oui/knowledgesrc.xml
  inflating: /u01/app/11.2.0/grid/OPatch/opatchprereqs/prerequisite.properties
  inflating: /u01/app/11.2.0/grid/OPatch/opatch
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.bat
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/opatch.ini
  inflating: /u01/app/11.2.0/grid/OPatch/opatchdiag
  inflating: /u01/app/11.2.0/grid/OPatch/opatchdiag.bat
  inflating: /u01/app/11.2.0/grid/OPatch/emdpatch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/README.txt
   creating: /u01/app/11.2.0/grid/OPatch/ocm/bin/
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
   creating: /u01/app/11.2.0/grid/OPatch/ocm/doc/
   creating: /u01/app/11.2.0/grid/OPatch/ocm/lib/
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmclnt-14.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmclnt.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/emocmcommon.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/http_client.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jcert.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jnet.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/jsse.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/log4j-core.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/osdt_core3.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/osdt_jce.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/regexp.jar
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/lib/xmlparserv2.jar
 extracting: /u01/app/11.2.0/grid/OPatch/ocm/ocm.zip
  inflating: /u01/app/11.2.0/grid/OPatch/ocm/ocm_platforms.txt
   creating: /u01/app/11.2.0/grid/OPatch/crs/
   creating: /u01/app/11.2.0/grid/OPatch/crs/log/
 extracting: /u01/app/11.2.0/grid/OPatch/crs/log/dummy
  inflating: /u01/app/11.2.0/grid/OPatch/crs/auto_patch.pl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crsconfig_lib.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crsdelete.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/crspatch.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/installPatch.excl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/oracss.pm
  inflating: /u01/app/11.2.0/grid/OPatch/crs/patch112.pl
  inflating: /u01/app/11.2.0/grid/OPatch/crs/s_crsconfig_defs
  inflating: /u01/app/11.2.0/grid/OPatch/crs/s_crsconfig_lib.pm

[grid@host03 share]$ exit
Connection to host03 closed.

[grid@host02 share]$ exit
Connection to host02 closed.

[grid@host01 share]$
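The OPatch update above was applied node by node. On a cluster with many nodes, the same per-node unzip can be driven from one host. The following is only a sketch: it assumes the node names, the /share staging area, and the grid home path used in this lab, plus passwordless ssh as the grid user.

```shell
# Build the per-node commands first so they can be reviewed before running.
# unzip -o overwrites the existing OPatch files without prompting.
GRID_HOME=/u01/app/11.2.0/grid
ZIP=/share/p6880880_112000_LINUX.zip
CMDS=$(for node in host01 host02 host03; do
  echo "ssh grid@$node 'unzip -o $ZIP -d $GRID_HOME'"
done)
echo "$CMDS"    # review, then execute with: eval "$CMDS"
```

Printing the command list before executing it keeps the sketch safe to paste on a system where these assumptions do not hold.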

2) The opatch utility prompts for an OCM (Oracle Configuration Manager) response file when it is run. Because this file has not yet been created, execute the emocmrsp command as the grid user on all nodes. Do not provide an email address. The emocmrsp command creates the response file in the current directory, so navigate to /u01/app/11.2.0/grid/OPatch/ocm before executing it.
[grid@host01 ~]$ id
uid=502(grid) gid=54321(oinstall) groups=504(asmadmin),505(asmdba),506(asmoper),54321(oinstall)

[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ cd /u01/app/11.2.0/grid/OPatch/ocm

[grid@host01 ocm]$ /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp

OCM Installation Response Generator 10.3.4.0.0 - Production
Copyright (c) 2005, 2010, Oracle and/or its affiliates.  All rights reserved.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: <<< No email address >>>

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (ocm.rsp) was successfully created.

*** On host02 ***

[grid@host01 ocm]$ ssh host02

[grid@host02 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base has been set to /u01/app/grid

[grid@host02 ~]$ cd /u01/app/11.2.0/grid/OPatch/ocm

[grid@host02 ocm]$ /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
OCM Installation Response Generator 10.3.4.0.0 - Production
Copyright (c) 2005, 2010, Oracle and/or its affiliates.  All rights reserved.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: <<< No email address >>>

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (ocm.rsp) was successfully created.

*** On host03 ***

[grid@host02 ocm]$ ssh host03

[grid@host03 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM3
The Oracle base has been set to /u01/app/grid

[grid@host03 ~]$ cd /u01/app/11.2.0/grid/OPatch/ocm

[grid@host03 ocm]$ /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp
OCM Installation Response Generator 10.3.4.0.0 - Production
Copyright (c) 2005, 2010, Oracle and/or its affiliates.  All rights reserved.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: <<< No email address >>>

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (ocm.rsp) was successfully created.
[grid@host03 ocm]$ exit
[grid@host02 ocm]$ exit
[grid@host01 ocm]$
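The response file must exist on every node before opatch auto is run. The interactive answers given above (an empty email address, then Y) can also be supplied to emocmrsp non-interactively. The loop below is a sketch under the same assumptions as this lab: passwordless ssh as the grid user and an identical grid home path on all nodes.

```shell
# Build (and review) one emocmrsp invocation per node. printf supplies the
# two interactive answers: a blank line for the email prompt and Y to remain
# uninformed; ocm.rsp is created in the OPatch/ocm directory on each node.
GRID_HOME=/u01/app/11.2.0/grid
CMDS=$(for node in host01 host02 host03; do
  echo "ssh grid@$node \"cd $GRID_HOME/OPatch/ocm && printf '\\nY\\n' | ./bin/emocmrsp\""
done)
echo "$CMDS"    # review, then execute with: eval "$CMDS"
```

As with any batch change to a cluster, reviewing the generated commands first is cheap insurance against a wrong path or node name.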
3) Before beginning patch application, check the consistency of the inventory information for each Grid Infrastructure home to be patched. Run the following command as the grid user on each node to check the consistency.
[grid@host01 ~]$ /u01/app/11.2.0/grid/OPatch/opatch lsinventory -detail -oh /u01/app/11.2.0/grid
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.0
OUI version       : 11.2.0.3.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2012-04-26_12-56-26PM_1.log

Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2012-04-26_12-56-26PM.txt

------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Grid Infrastructure                           11.2.0.3.0
There are 1 products installed in this Oracle Home.


Installed Products (88):

Agent Required Support Files                         10.2.0.4.3
Assistant Common Files                               11.2.0.3.0
Automatic Storage Management Assistant               11.2.0.3.0
Bali Share                                           1.1.18.0.0
Buildtools Common Files                              11.2.0.3.0
Character Set Migration Utility                      11.2.0.3.0
Cluster Ready Services Files                         11.2.0.3.0
Cluster Verification Utility Common Files            11.2.0.3.0
Cluster Verification Utility Files                   11.2.0.3.0
Database SQL Scripts                                 11.2.0.3.0
Deinstallation Tool                                  11.2.0.3.0
Enterprise Manager Common Core Files                 10.2.0.4.4
Enterprise Manager Common Files                      10.2.0.4.3
Enterprise Manager plugin Common Files               11.2.0.3.0
Expat libraries                                      2.0.1.0.1
HAS Common Files                                     11.2.0.3.0
HAS Files for DB                                     11.2.0.3.0
Installation Common Files                            11.2.0.3.0
Installation Plugin Files                            11.2.0.3.0
Installer SDK Component                              11.2.0.3.0
LDAP Required Support Files                          11.2.0.3.0
OLAP SQL Scripts                                     11.2.0.3.0
Oracle Clusterware RDBMS Files                       11.2.0.3.0
Oracle Configuration Manager Deconfiguration         10.3.1.0.0
...
There are 88 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.


Rac system comprising of multiple nodes
  Local node = host01
  Remote node = host02
  Remote node = host03

------------------------------------------------------------------
OPatch succeeded.

[grid@host01 ~]$ ssh host02
Last login: Thu Apr 26 12:50:18 2012 from host01.example.com
[grid@host02 ~]$ /u01/app/11.2.0/grid/OPatch/opatch lsinventory -detail -oh /u01/app/11.2.0/grid
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.0
OUI version       : 11.2.0.3.0
...
OPatch succeeded.

[grid@host02 ~]$ ssh host03
Last login: Thu Apr 26 12:50:39 2012 from host02.example.com
[grid@host03 ~]$ /u01/app/11.2.0/grid/OPatch/opatch lsinventory -detail -oh /u01/app/11.2.0/grid
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.0
OUI version       : 11.2.0.3.0
...
OPatch succeeded.

[grid@host03 ~]$ exit
Connection to host03 closed.

[grid@host02 ~]$ exit
Connection to host02 closed.

[grid@host01 ~]$
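Reading each node's lsinventory report by eye does not scale well. A small helper can scan a captured report instead; this is a sketch, and the marker strings it looks for ("OPatch succeeded" and an optional patch number) are taken from the output shown in this practice.

```shell
# inv_ok: read 'opatch lsinventory' output on stdin and return success when
# the report completed; an optional first argument names a patch number that
# must also appear in the report.
inv_ok() {
  out=$(cat)
  echo "$out" | grep -q 'OPatch succeeded' || return 1
  if [ -n "$1" ]; then
    echo "$out" | grep -q "$1" || return 1
  fi
}

# Exercise the helper on a captured fragment:
sample='OPatch version    : 11.2.0.3.0
OPatch succeeded.'
echo "$sample" | inv_ok && echo "inventory report OK"
```

In practice the input would come from running opatch lsinventory over ssh on each node and piping the result into the helper.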

4) The opatch utility automates patch application for the Oracle Grid Infrastructure and Oracle RAC database homes. It operates by querying existing configurations and automating the steps required to patch each Oracle RAC database home of the same version, as well as the GI home.
As the root user, add the directory containing opatch to your path and execute opatch on each node.
The patch is applied in a rolling fashion. Do not run opatch in parallel on your cluster nodes.
[root@host01 ~]# export PATH=/u01/app/11.2.0/grid/OPatch:$PATH

[root@host01 ~]# opatch auto /share -oh /u01/app/11.2.0/grid -ocmrf /u01/app/11.2.0/grid/OPatch/ocm/ocm.rsp

Executing /usr/bin/perl /u01/app/11.2.0/grid/OPatch/crs/patch112.pl -patchdir / -patchn share -oh /u01/app/11.2.0/grid -ocmrf /u01/app/11.2.0/grid/OPatch/ocm.rsp -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params
opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2012-04-26_14-18-59.log
Detected Oracle Clusterware install
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Successfully unlock /u01/app/11.2.0/grid
patch //share/13696251 apply successful for home /u01/app/11.2.0/grid
patch //share/13696216 apply successful for home /u01/app/11.2.0/grid
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4123: Oracle High Availability Services has been started.

#### Proceed to second node ####
[root@host01 ~]# ssh host02
root@host02's password:

[root@host02 ~]# PATH=/u01/app/11.2.0/grid/OPatch:$PATH

[root@host02 ~]# opatch auto /share -oh /u01/app/11.2.0/grid -ocmrf /u01/app/11.2.0/grid/OPatch/ocm/ocm.rsp

Executing /usr/bin/perl /u01/app/11.2.0/grid/OPatch/crs/patch112.pl -patchdir / -patchn share -oh /u01/app/11.2.0/grid -ocmrf /u01/app/11.2.0/grid/OPatch/ocm.rsp -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params
opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2012-04-26_15-46-38.log
Detected Oracle Clusterware install
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Can't change permissions of //share: Read-only file system
Successfully unlock /u01/app/11.2.0/grid
patch //share/13696251 apply successful for home /u01/app/11.2.0/grid
patch //share/13696216 apply successful for home /u01/app/11.2.0/grid
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4123: Oracle High Availability Services has been started.

#### Proceed to third node ####
[root@host02 ~]# ssh host03
root@host03's password:

[root@host03 ~]# export PATH=/u01/app/11.2.0/grid/OPatch:$PATH

[root@host03 ~]# opatch auto /share -oh /u01/app/11.2.0/grid -ocmrf /u01/app/11.2.0/grid/OPatch/ocm/ocm.rsp

Executing /usr/bin/perl /u01/app/11.2.0/grid/OPatch/crs/patch112.pl -patchdir / -patchn share -oh /u01/app/11.2.0/grid -ocmrf
/u01/app/11.2.0/grid/OPatch/ocm.rsp -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params
opatch auto log file location is /u01/app/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2012-04-26_16-11-06.log
Detected Oracle Clusterware install
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Successfully unlock /u01/app/11.2.0/grid
patch //share/13696251 apply successful for home /u01/app/11.2.0/grid
patch //share/13696216 apply successful for home /u01/app/11.2.0/grid
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4123: Oracle High Availability Services has been started.
5) Make sure the patch has been successfully applied on all three nodes using the
opat ch l si nv command.
[ gr i d@host 01 ~] $ /u01/app/11.2.0/grid/OPatch/opatch lsinv
Or acl e I nt er i mPat ch I nst al l er ver si on 11. 2. 0. 3. 0
Copyr i ght ( c) 2012, Or acl e Cor por at i on. Al l r i ght s r eser ved.


Or acl e Home : / u01/ app/ 11. 2. 0/ gr i d
Cent r al I nvent or y : / u01/ app/ or aI nvent or y
f r om : / u01/ app/ 11. 2. 0/ gr i d/ or aI nst . l oc
OPat ch ver si on : 11. 2. 0. 3. 0
OUI ver si on : 11. 2. 0. 3. 0
Log f i l e l ocat i on :
/ u01/ app/ 11. 2. 0/ gr i d/ cf gt ool l ogs/ opat ch/ opat ch2012- 04- 27_04-
26- 15AM_1. l og

Lsi nvent or y Out put f i l e l ocat i on :
/ u01/ app/ 11. 2. 0/ gr i d/ cf gt ool l ogs/ opat ch/ l si nv/ l si nvent or y2012-
04- 27_04- 26- 15AM. t xt

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
I nst al l ed Top- l evel Pr oduct s ( 1) :

Oracle Grid Infrastructure                                           11.2.0.3.0
There are 1 products installed in this Oracle Home.


Interim patches (2) :

Patch  13696216      : applied on Thu Apr 26 14:42:39 UTC 2012
Unique Patch ID:  14600705
Patch description:  "Database Patch Set Update : 11.2.0.3.2 (13696216)"
   Created on 3 Apr 2012, 22:02:51 hrs PST8PDT
Sub-patch  13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
   Bugs fixed:
     13070939, 13035804, 10350832, 13632717, 13041324, 12919564, 13420224
     13742437, 12861463, 12834027, 13742438, 13332439, 13036331, 13499128
     12998795, 12829021, 13492735, 9873405, 13742436, 13503598, 12960925
     12718090, 13742433, 12662040, 9703627, 12905058, 12938841, 13742434
     12849688, 12950644, 13362079, 13742435, 12620823, 12917230, 12845115
     12656535, 12764337, 13354082, 12588744, 11877623, 12612118, 12847466
     13742464, 13528551, 12894807, 13343438, 12582664, 12780983, 12748240
     12797765, 12780098, 13696216, 12923168, 13466801, 13772618, 11063191, 13554409

Patch  13696251      : applied on Thu Apr 26 14:33:58 UTC 2012
Unique Patch ID:  14600705
Patch description:  "Grid Infrastructure Patch Set Update : 11.2.0.3.2 (13696251)"
   Created on 5 Apr 2012, 07:21:52 hrs PST8PDT
   Bugs fixed:
     13696251, 13348650, 12659561, 13039908, 13036424, 12794268, 13011520
     13569812, 12758736, 13077654, 13001901, 13430715, 12538907, 13066371
     12594616, 12897651, 12897902, 12896850, 12726222, 12829429, 12728585
     13079948, 12876314, 13090686, 12925041, 12995950, 13251796, 12398492
     12848480, 13652088, 12990582, 12975811, 12917897, 13082238, 12947871
     13037709, 13371153, 12878750, 11772838, 13058611, 13001955, 11836951
     12965049, 13440962, 12765467, 13425727, 12885323, 12784559, 13332363
     13074261, 12971251, 12857064, 13396284, 12899169, 13111013, 12867511
     12639013, 13085732, 12829917, 12934171, 12849377, 12349553, 12914824
     12730342, 13334158, 12950823, 13355963, 13531373, 13002015, 13024624
     12791719, 13886023, 13019958, 13255295, 12810890, 12782756, 13502441
     12873909, 13243172, 12820045, 12842804, 13045518, 12765868, 12772345
     12823838, 13345868, 12823042, 12932852, 12825835, 12695029, 13146560
     13038806, 13263435, 13025879, 13410987, 13396356, 12827493, 13637590
     13247273, 13258062, 12834777, 13068077


Rac system comprising of multiple nodes
  Local node = host01
  Remote node = host02
  Remote node = host03

------------------------------------------------------------------

OPatch succeeded.
[grid@host01 ~]$ ssh host02
Last login: Thu Apr 26 15:43:10 2012 from host01.example.com

[grid@host02 ~]$ /u01/app/11.2.0/grid/OPatch/opatch lsinv
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation. All rights reserved.


Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.0
OUI version       : 11.2.0.3.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2012-04-27_04-27-52AM_1.log

Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2012-04-27_04-27-52AM.txt

-----------------------------------------------------------------
Installed Top-level Products (1):

Oracle Grid Infrastructure                                           11.2.0.3.0
There are 1 products installed in this Oracle Home.


Interim patches (2) :

Patch  13696216      : applied on Thu Apr 26 15:58:40 UTC 2012
Unique Patch ID:  14600705
Patch description:  "Database Patch Set Update : 11.2.0.3.2 (13696216)"
   Created on 3 Apr 2012, 22:02:51 hrs PST8PDT
Sub-patch  13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
   Bugs fixed:
     13070939, 13035804, 10350832, 13632717, 13041324, 12919564, 13420224
     13742437, 12861463, 12834027, 13742438, 13332439, 13036331, 13499128
     12998795, 12829021, 13492735, 9873405, 13742436, 13503598, 12960925
     12718090, 13742433, 12662040, 9703627, 12905058, 12938841, 13742434
     12849688, 12950644, 13362079, 13742435, 12620823, 12917230, 12845115
     12656535, 12764337, 13354082, 12588744, 11877623, 12612118, 12847466
     13742464, 13528551, 12894807, 13343438, 12582664, 12780983, 12748240
     12797765, 12780098, 13696216, 12923168, 13466801, 13772618, 11063191, 13554409

Patch  13696251      : applied on Thu Apr 26 15:55:53 UTC 2012
Unique Patch ID:  14600705
Patch description:  "Grid Infrastructure Patch Set Update : 11.2.0.3.2 (13696251)"
   Created on 5 Apr 2012, 07:21:52 hrs PST8PDT
   Bugs fixed:
     13696251, 13348650, 12659561, 13039908, 13036424, 12794268, 13011520
     13569812, 12758736, 13077654, 13001901, 13430715, 12538907, 13066371
     12594616, 12897651, 12897902, 12896850, 12726222, 12829429, 12728585
     13079948, 12876314, 13090686, 12925041, 12995950, 13251796, 12398492
     12848480, 13652088, 12990582, 12975811, 12917897, 13082238, 12947871
     13037709, 13371153, 12878750, 11772838, 13058611, 13001955, 11836951
     12965049, 13440962, 12765467, 13425727, 12885323, 12784559, 13332363
     13074261, 12971251, 12857064, 13396284, 12899169, 13111013, 12867511
     12639013, 13085732, 12829917, 12934171, 12849377, 12349553, 12914824
     12730342, 13334158, 12950823, 13355963, 13531373, 13002015, 13024624
     12791719, 13886023, 13019958, 13255295, 12810890, 12782756, 13502441
     12873909, 13243172, 12820045, 12842804, 13045518, 12765868, 12772345
     12823838, 13345868, 12823042, 12932852, 12825835, 12695029, 13146560
     13038806, 13263435, 13025879, 13410987, 13396356, 12827493, 13637590
     13247273, 13258062, 12834777, 13068077


Rac system comprising of multiple nodes
  Local node = host02
  Remote node = host01
  Remote node = host03

------------------------------------------------------------------

OPatch succeeded.

[grid@host02 ~]$ ssh host03
Last login: Thu Apr 26 16:32:01 2012 from host01.example.com

[grid@host03 ~]$ /u01/app/11.2.0/grid/OPatch/opatch lsinv
Oracle Interim Patch Installer version 11.2.0.3.0
Copyright (c) 2012, Oracle Corporation. All rights reserved.


Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.0
OUI version       : 11.2.0.3.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2012-04-27_04-28-07AM_1.log

Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2012-04-27_04-28-07AM.txt

------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Grid Infrastructure                                           11.2.0.3.0
There are 1 products installed in this Oracle Home.


Interim patches (2) :

Patch  13696216      : applied on Thu Apr 26 16:24:27 UTC 2012
Unique Patch ID:  14600705
Patch description:  "Database Patch Set Update : 11.2.0.3.2 (13696216)"
   Created on 3 Apr 2012, 22:02:51 hrs PST8PDT
Sub-patch  13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
   Bugs fixed:
     13070939, 13035804, 10350832, 13632717, 13041324, 12919564, 13420224
     13742437, 12861463, 12834027, 13742438, 13332439, 13036331, 13499128
     12998795, 12829021, 13492735, 9873405, 13742436, 13503598, 12960925
     12718090, 13742433, 12662040, 9703627, 12905058, 12938841, 13742434
     12849688, 12950644, 13362079, 13742435, 12620823, 12917230, 12845115
     12656535, 12764337, 13354082, 12588744, 11877623, 12612118, 12847466
     13742464, 13528551, 12894807, 13343438, 12582664, 12780983, 12748240
     12797765, 12780098, 13696216, 12923168, 13466801, 13772618, 11063191, 13554409

Patch  13696251      : applied on Thu Apr 26 16:20:47 UTC 2012
Unique Patch ID:  14600705
Patch description:  "Grid Infrastructure Patch Set Update : 11.2.0.3.2 (13696251)"
   Created on 5 Apr 2012, 07:21:52 hrs PST8PDT
   Bugs fixed:
     13696251, 13348650, 12659561, 13039908, 13036424, 12794268, 13011520
     13569812, 12758736, 13077654, 13001901, 13430715, 12538907, 13066371
     12594616, 12897651, 12897902, 12896850, 12726222, 12829429, 12728585
     13079948, 12876314, 13090686, 12925041, 12995950, 13251796, 12398492
     12848480, 13652088, 12990582, 12975811, 12917897, 13082238, 12947871
     13037709, 13371153, 12878750, 11772838, 13058611, 13001955, 11836951
     12965049, 13440962, 12765467, 13425727, 12885323, 12784559, 13332363
     13074261, 12971251, 12857064, 13396284, 12899169, 13111013, 12867511
     12639013, 13085732, 12829917, 12934171, 12849377, 12349553, 12914824
     12730342, 13334158, 12950823, 13355963, 13531373, 13002015, 13024624
     12791719, 13886023, 13019958, 13255295, 12810890, 12782756, 13502441
     12873909, 13243172, 12820045, 12842804, 13045518, 12765868, 12772345
     12823838, 13345868, 12823042, 12932852, 12825835, 12695029, 13146560
     13038806, 13263435, 13025879, 13410987, 13396356, 12827493, 13637590
     13247273, 13258062, 12834777, 13068077


Rac system comprising of multiple nodes
  Local node = host03
  Remote node = host01
  Remote node = host02

------------------------------------------------------------------

OPatch succeeded.
[grid@host03 ~]$
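Once every node reports "OPatch succeeded", the per-node verification above lends itself to scripting. The sketch below is only an illustration: it writes a two-line sample of `opatch lsinv` output to a temporary file (the file name and content are assumptions, not part of the practice) and greps it for both patch numbers, the same check one would run against output captured from each node over ssh.

```shell
# Hypothetical check: confirm both PSU patch IDs appear in a saved
# copy of 'opatch lsinv' output (sample file created here for the demo)
cat > /tmp/lsinv_host01.txt <<'EOF'
Patch  13696216      : applied on Thu Apr 26 14:42:39 UTC 2012
Patch  13696251      : applied on Thu Apr 26 14:33:58 UTC 2012
EOF

for patch in 13696216 13696251; do
  if grep -q "Patch  ${patch} " /tmp/lsinv_host01.txt; then
    echo "patch ${patch} present"
  else
    echo "patch ${patch} MISSING"
  fi
done
```

In a real cluster, the loop body would instead run `ssh <node> $GRID_HOME/OPatch/opatch lsinv` and inspect that output.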


Practices for Lesson 8
In this practice, you will work with Oracle Clusterware log files and learn to use the
ocrdump and cluvfy utilities.
Practice 8-1: Working with Log Files
In this practice, you will examine the Oracle Clusterware alert log and then package
various log files into an archive format suitable to send to My Oracle Support.
1) While connected as the grid user to your first node, locate and view the contents of
the Oracle Clusterware alert log.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ cd /u01/app/11.2.0/grid/log/host01

[grid@host01 host01]$ view alerthost01.log

2012-04-24 12:28:41.948
[client(12035)]CRS-2101:The OLR was formatted using version 3.
2012-04-24 12:29:39.168
[ohasd(12650)]CRS-2112:The OLR service started on node host01.
2012-04-24 12:29:39.198
[ohasd(12650)]CRS-1301:Oracle High Availability Service started on node host01.
[client(12900)]CRS-10001:24-Apr-12 12:29 ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.32-300.7.1.el5uek'
[client(12902)]CRS-10001:24-Apr-12 12:29 ACFS-9201: Not Supported
[client(12992)]CRS-10001:24-Apr-12 12:29 ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.32-300.7.1.el5uek'
2012-04-24 12:29:51.615
[gpnpd(13108)]CRS-2328:GPNPD started on node host01.
2012-04-24 12:29:54.195
[cssd(13180)]CRS-1713:CSSD daemon is started in exclusive mode
2012-04-24 12:29:56.101
[ohasd(12650)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2012-04-24 12:29:58.975
[cssd(13180)]CRS-1709:Lease acquisition failed for node host01 because no voting file has been configured; Details at (:CSSNM00031:) in /u01/app/11.2.0/grid/log/host01/cssd/ocssd.log
2012-04-24 12:30:07.979
[cssd(13180)]CRS-1601:CSSD Reconfiguration complete. Active nodes are host01 .
2012-04-24 12:30:09.649
[ctssd(13243)]CRS-2403:The Cluster Time Synchronization Service on host host01 is in observer mode.
2012-04-24 12:30:09.911
[ctssd(13243)]CRS-2401:The Cluster Time Synchronization Service started on host host01.
2012-04-24 12:30:09.911
[ctssd(13243)]CRS-2407:The new Cluster Time Synchronization Service reference node is host host01.
[client(13308)]CRS-10001:24-Apr-12 12:30 ACFS-9204: false

:q!
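Instead of paging through the alert log in `view`, the same triage can be done non-interactively with `grep`. This is only a sketch: the three log lines below are a fabricated sample written to a temporary file, not the real alerthost01.log.

```shell
# Sample alert-log lines (assumed content, written out just for the demo)
cat > /tmp/alert_sample.log <<'EOF'
[ohasd(12650)]CRS-1301:Oracle High Availability Service started on node host01.
[cssd(13180)]CRS-1709:Lease acquisition failed for node host01
[client(13308)]CRS-10001:24-Apr-12 12:30 ACFS-9204: false
EOF

# Count entries carrying a CRS- message code, then show ACFS-related ones
grep -c "CRS-" /tmp/alert_sample.log   # prints 3: every sample line has a code
grep "ACFS-" /tmp/alert_sample.log
```

Against the real log, the same pattern (`grep "CRS-1709" alerthost01.log`, for example) pulls out a specific event without opening an editor.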
2) Navigate to the Oracle Cluster Synchronization Services daemon log directory and
determine whether any log archives exist.
[grid@host01 host01]$ cd ./cssd

[grid@host01 cssd]$ pwd
/u01/app/11.2.0/grid/log/host01/cssd

[grid@host01 cssd]$ ls -alt ocssd*
-rw-r--r-- 1 grid oinstall 19202443 Apr 28 03:30 ocssd.log
3) Switch to the root user and set up the environment variables for the Grid
Infrastructure. Change to the /home/oracle/labs directory and run the
diagcollection.pl script to gather all log files that can be sent to My Oracle
Support for problem analysis.
[grid@host01 cssd]$ su -
Password: 0racle  << Password is not displayed

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# cd /home/oracle/labs

[root@host01 labs]# diagcollection.pl --collect --crshome /u01/app/11.2.0/grid
Production Copyright 2004, 2008, Oracle. All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
The following CRS diagnostic archives will be created in the local directory.
crsData_host01_20090901_1413.tar.gz -> logs, traces and cores from CRS home. Note: core files will be packaged only with the --core option.
ocrData_host01_20090901_1413.tar.gz -> ocrdump, ocrcheck etc
coreData_host01_20090901_1413.tar.gz -> contents of CRS core files in text format

osData_host01_20090901_1413.tar.gz -> logs from Operating System
Collecting crs data
/bin/tar: log/host01/agent/crsd/orarootagent_root/orarootagent_root.log: file changed as we read it
Collecting OCR data
Collecting information from core files
No corefiles found
Collecting OS logs
4) List the resulting log file archives that were generated with the
diagcollection.pl script.
[root@host01 labs]# ls -la *tar.gz
-rw-r--r-- 1 root root 16507694 Apr 28 03:36 crsData_host01_20120428_0335.tar.gz
-rw-r--r-- 1 root root    22426 Apr 28 03:36 ocrData_host01_20120428_0335.tar.gz
-rw-r--r-- 1 root root   160207 Apr 28 03:36 osData_host01_20120428_0335.tar.gz
5) Exit the switch user command to return to the grid account.
[root@host01 labs]# exit
logout

[grid@host01 cssd]$
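The archive names produced by diagcollection.pl encode the data type, the node that produced them, and the collection timestamp. The short parameter-expansion sketch below splits one such name into its fields; the file name is taken from the listing in step 4, while the variable names are my own illustration.

```shell
# Split a diagcollection archive name into its component fields
name="crsData_host01_20120428_0335.tar.gz"
base=${name%.tar.gz}     # drop the .tar.gz suffix
kind=${base%%_*}         # leading token: crsData, ocrData, or osData
rest=${base#*_}
node=${rest%%_*}         # node that produced the archive
stamp=${rest#*_}         # collection date and time (YYYYMMDD_HHMM)
echo "$kind $node $stamp"   # prints: crsData host01 20120428_0335
```

This kind of parsing is handy when archiving collections from several nodes into per-node directories before uploading them to My Oracle Support.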

Practice 8-2: Working with OCRDUMP
In this practice, you will work with the OCRDUMP utility and dump the binary file into
both text and XML representations.
1) While connected to the grid account, dump the contents of the OCR to the standard
output and count the number of lines of output.
[grid@host01 ~]$ ocrdump -stdout | wc -l
575
2) Switch to the root user, dump the contents of the OCR to the standard output and
count the number of lines of output. Compare your results with the previous step.
How do the results differ?
[grid@host01 ~]$ su -
Password: 0racle  << Password is not displayed

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# ocrdump -stdout | wc -l
2769
3) Dump the first 25 lines of the OCR to standard output using XML format.
[root@host01 ~]# ocrdump -stdout -xml | head -25
<OCRDUMP>

<TIMESTAMP>04/27/2012 14:10:49</TIMESTAMP>
<COMMAND>/u01/app/11.2.0/grid/bin/ocrdump.bin -stdout -xml </COMMAND>

<KEY>
<NAME>SYSTEM</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>

<KEY>
<NAME>SYSTEM.version</NAME>
<VALUE_TYPE>UB4 (10)</VALUE_TYPE>
<VALUE><![CDATA[5]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>
4) Create an XML file dump of the OCR in the /home/oracle/labs directory.
Name the dump file as ocr_current_dump.xml.
[root@host01 ~]# ocrdump -xml /home/oracle/labs/ocr_current_dump.xml
5) Find the node and the directory that contains the automatic backup of the OCR from
24 hours ago.
[root@host01 ~]# ocrconfig -showbackup

host02 2012/04/27 12:50:44 /u01/app/11.2.0/grid/cdata/cluster01/backup00.ocr

host01 2012/04/27 08:28:29 /u01/app/11.2.0/grid/cdata/cluster01/backup01.ocr

host01 2012/04/27 04:28:28 /u01/app/11.2.0/grid/cdata/cluster01/backup02.ocr

host01 2012/04/26 00:49:02 /u01/app/11.2.0/grid/cdata/cluster01/day.ocr

host01 2012/04/24 16:48:56 /u01/app/11.2.0/grid/cdata/cluster01/week.ocr

host02 2012/04/27 13:55:27 /u01/app/11.2.0/grid/cdata/cluster01/backup_20120427_135527.ocr
6) Copy the 24-hour-old automatic backup of the OCR into the
/home/oracle/labs directory. This is not a dump, but rather an actual backup of
the OCR. (If the daily backup of the OCR is not there, use the oldest backup on the
list.)
Note: It may be necessary to use scp if the file is located on a different node. Be sure
to use your cluster name in place of cluster01 in the path.
[root@host01 ~]# cp /u01/app/11.2.0/grid/cdata/cluster01/day.ocr /home/oracle/labs/day.ocr

>>>> Or <<<<<

[root@host01 ~]# scp host02:/u01/app/11.2.0/grid/cdata/cluster01/day.ocr /home/oracle/labs/day.ocr
root@host02's password: 0racle  << Password is not displayed
day.ocr                              100% 7436KB   7.3MB/s   00:00
7) Dump the contents of the day.ocr backup OCR file in the XML format saving the
file in the /home/oracle/labs directory. Name the file as day_ocr.xml.
[root@host01 ~]# ocrdump -xml -backupfile /home/oracle/labs/day.ocr /home/oracle/labs/day_ocr.xml
8) Compare the differences between the day_ocr.xml file and the
ocr_current_dump.xml file to determine all changes made to the OCR in the
last 24 hours. Exit the switch user command when done.
[root@host01 ~]# diff /home/oracle/labs/day_ocr.xml /home/oracle/labs/ocr_current_dump.xml

3,5c3,4
> <TIMESTAMP>04/27/2012 14:11:45</TIMESTAMP>
> <COMMAND>/u01/app/11.2.0/grid/bin/ocrdump.bin -xml /home/oracle/labs/ocr_current_dump.xml</COMMAND>
---
< <VALUE><![CDATA[(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.101)(PORT=47068))]]></VALUE>
---
> <VALUE><![CDATA[(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.101)(PORT=54270))]]></VALUE>
---
> </KEY>
>
> <KEY>
> <NAME>SYSTEM.crs.e2eport.host03</NAME>
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.103)(PORT=29045))]]></VALUE>
712c747
< <VALUE><![CDATA[100000]]></VALUE>
---
> <VALUE><![CDATA[500000]]></VALUE>
724c759
< <VALUE><![CDATA[41]]></VALUE>
---
> <VALUE><![CDATA[47]]></VALUE>
748c783
< <VALUE><![CDATA[10]]></VALUE>
---
> <VALUE><![CDATA[33]]></VALUE>
878c913
< <VALUE><![CDATA[230733010]]></VALUE>

[root@host01 ~]# exit
[grid@host01 ~]$
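The diff in step 8 flags each changed OCR key as paired `<` (old value) and `>` (new value) lines. The self-contained sketch below reproduces that mechanic with two throwaway XML fragments; the temporary file names and the changed value are assumptions made for the demo, not real OCR contents.

```shell
# Build a fake "yesterday" dump, derive a modified "current" dump, diff them
cat > /tmp/day_ocr.xml <<'EOF'
<NAME>SYSTEM.version</NAME><VALUE><![CDATA[5]]></VALUE>
<NAME>SYSTEM.css.misscount</NAME><VALUE><![CDATA[30]]></VALUE>
EOF
sed 's/CDATA\[30\]/CDATA[60]/' /tmp/day_ocr.xml > /tmp/ocr_current_dump.xml

# diff exits non-zero when the files differ, so tolerate that in scripts
diff /tmp/day_ocr.xml /tmp/ocr_current_dump.xml || true
```

The second line of the diff output appears once with `<` (value 30) and once with `>` (value 60), exactly the pattern seen in the real comparison above.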

Practice 8-3: Working with CLUVFY
In this practice, you will work with CLUVFY to verify the state of various cluster
components.
1) Determine the location of the cluvfy utility and its configuration file.
[grid@host01 ~]$ which cluvfy
/u01/app/11.2.0/grid/bin/cluvfy

[grid@host01 ~]$ cd $ORACLE_HOME/cv/admin

[grid@host01 admin]$ pwd
/u01/app/11.2.0/grid/cv/admin

[grid@host01 admin]$ cat cvu_config
# Configuration file for Cluster Verification Utility(CVU)
# Version: 011405
#
# NOTE:
# 1._ Any line without a '=' will be ignored
# 2._ Since the fallback option will look into the environment variables,
#     please have a component prefix(CV_) for each property to define a
#     namespace.
#


#Nodes for the cluster. If CRS home is not installed, this list will be
#picked up when -n all is mentioned in the commandline argument.
#CV_NODE_ALL=

#if enabled, cvuqdisk rpm is required on all nodes
CV_RAW_CHECK_ENABLED=TRUE

# Fallback to this distribution id
CV_ASSUME_DISTID=OEL4

# Whether X-Windows check should be performed for user equivalence with SSH
#CV_XCHK_FOR_SSH_ENABLED=TRUE

# To override SSH location
#ORACLE_SRVM_REMOTESHELL=/usr/bin/ssh

# To override SCP location
#ORACLE_SRVM_REMOTECOPY=/usr/bin/scp

# To override version used by command line parser
#CV_ASSUME_CL_VERSION=10.2

# Location of the browser to be used to display HTML report
#CV_DEFAULT_BROWSER_LOCATION=/usr/bin/mozilla
2) Display the stage options and stage names that can be used with the cluvfy utility.
[grid@host01 admin]$ cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid Stages are:
      -pre cfs      : pre-check for CFS setup
      -pre crsinst  : pre-check for CRS installation
      -pre acfscfg  : pre-check for ACFS Configuration.
      -pre dbinst   : pre-check for database installation
      -pre dbcfg    : pre-check for database configuration
      -pre hacfg    : pre-check for HA configuration
      -pre nodeadd  : pre-check for node addition.
      -post hwos    : post-check for hardware and operating system
      -post cfs     : post-check for CFS setup
      -post crsinst : post-check for CRS installation
      -post acfscfg : post-check for ACFS Configuration.
      -post hacfg   : post-check for HA configuration
      -post nodeadd : post-check for node addition.
      -post nodedel : post-check for node deletion.
3) Perform a postcheck for the ACFS configuration on all nodes.
[grid@host01 admin]$ cluvfy stage -post acfscfg -n all

Performing post-checks for ACFS Configuration

Checking node reachability...
Node reachability check passed from node "host01"


Checking user equivalence...
User equivalence check passed for user "grid"

Task ACFS Integrity check started...

Starting check to see if ASM is running on all cluster nodes...

ASM Running check passed. ASM is running on all specified nodes

Starting Disk Groups check to see if at least one Disk Group configured...
Disk Group Check passed. At least one Disk Group configured

Task ACFS Integrity check passed

UDev attributes check for ACFS started...
UDev attributes check passed for ACFS

Post-check for ACFS Configuration was successful.
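A batch of such checks can be driven from a small wrapper that records each stage's exit code. The sketch below stubs cluvfy with a shell function so it runs anywhere; on a real cluster you would remove the stub and call the actual cluvfy binary with your own stage names.

```shell
#!/bin/bash
# Run a list of verification stages and summarize pass/fail by exit code.
# 'cluvfy' is stubbed here so the sketch is self-contained; the stub's
# pass/fail choices are made up for illustration.
cluvfy() {
  # Stub: pretend the ACFS post-check passes and the CFS post-check fails.
  case "$3" in
    acfscfg) return 0 ;;
    *)       return 1 ;;
  esac
}

failures=0
for stage in acfscfg cfs; do
  if cluvfy stage -post "$stage" -n all; then
    echo "stage $stage: passed"
  else
    echo "stage $stage: FAILED"
    failures=$((failures + 1))
  fi
done
echo "$failures stage(s) failed"
```

Driving the checks this way makes the overall result scriptable: a nonzero failure count can gate the next maintenance step.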
4) Display a list of the component names that can be checked with the cluvfy utility.
[grid@host01 admin]$ cluvfy comp -list

USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid Components are:
      nodereach   : checks reachability between nodes
      nodecon     : checks node connectivity
      cfs         : checks CFS integrity
      ssa         : checks shared storage accessibility
      space       : checks space availability
      sys         : checks minimum system requirements
      clu         : checks cluster integrity
      clumgr      : checks cluster manager integrity
      ocr         : checks OCR integrity
      olr         : checks OLR integrity
      ha          : checks HA integrity
      freespace   : checks free space in CRS Home
      crs         : checks CRS integrity
      nodeapp     : checks node applications existence
      admprv      : checks administrative privileges
      peer        : compares properties with peers
      software    : checks software distribution
      acfs        : checks ACFS integrity
      asm         : checks ASM integrity
      gpnp        : checks GPnP integrity
      gns         : checks GNS integrity
      scan        : checks SCAN configuration
      ohasd       : checks OHASD integrity
      clocksync   : checks Clock Synchronization
      vdisk       : checks Voting Disk configuration and UDEV settings
      healthcheck : checks mandatory requirements and/or best practice recommendations
      dhcp        : checks DHCP configuration
      dns         : checks DNS configuration

5) Display the syntax usage help for the space component check of the cluvfy utility.
[grid@host01 admin]$ cluvfy comp space -help

USAGE:
cluvfy comp space [-n <node_list>] -l <storage_location> -z
<disk_space>{B|K|M|G} [-verbose]

<node_list> is the comma-separated list of non-domain
qualified nodenames, on which the test should be conducted. If
"all" is specified, then all the nodes in the cluster will be
used for verification.
<storage_location> is the storage path.
<disk_space> is the required disk space, in units of
bytes(B), kilobytes(K), megabytes(M) or gigabytes(G).

DESCRIPTION:
Checks for free disk space at the location provided by '-l'
option on all the nodes in the nodelist. If no '-n' option is
given, local node is used for this check.
6) Verify that on each node of the cluster the /tmp directory has at least 200 MB of free
space in it using the cluvfy utility. Use verbose output.
[grid@host01 admin]$ cluvfy comp space -n host01,host02,host03
-l /tmp -z 200M -verbose

Verifying space availability

Checking space availability...

Check: Space available on "/tmp"

Node Name   Available               Required            Status
---------   ----------------------  ------------------  ------
host03      4.0942GB (4293124.0KB)  200MB (204800.0KB)  passed
host02      3.8727GB (4060868.0KB)  200MB (204800.0KB)  passed
host01      3.5293GB (3700784.0KB)  200MB (204800.0KB)  passed
Result: Space availability check passed for "/tmp"

Verification of space availability was successful.
[grid@host01 ~]$
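Locally, the same kind of threshold test that cluvfy comp space performs can be sketched with df. The check_space function below is a hypothetical helper, not part of cluvfy; cluvfy itself additionally fans the check out to every node in the node list.

```shell
#!/bin/bash
# Minimal sketch of a local free-space check, similar in spirit to
# 'cluvfy comp space -l <dir> -z <MB>M'. check_space is a made-up
# helper name used only for this illustration.
check_space() {
  local dir=$1 required_mb=$2
  # df -Pk prints POSIX-format output with available space in 1K blocks
  local avail_kb
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -ge $(( required_mb * 1024 )) ]; then
    echo "passed: $dir has ${avail_kb}KB available (required ${required_mb}MB)"
    return 0
  else
    echo "failed: $dir has only ${avail_kb}KB available (required ${required_mb}MB)"
    return 1
  fi
}

check_space /tmp 1
```

The return code mirrors the convention cluvfy reports: zero for a passed check, nonzero otherwise.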

Practices for Lesson 9
In this practice, you will use Oracle Clusterware to protect the Apache application.
Practice 9-1: Protecting the Apache Application
In this practice, you use Oracle Clusterware to protect the Apache application. To do this,
you create an application VIP for Apache (HTTPD), an action script, and a resource.
1) As the root user, verify that the Apache RPMs, httpd, httpd-devel, and
httpd-manual, are installed on your first two nodes.
[grid@host01 ~]$ su -
Password: 0racle  << password not displayed

[root@host01 ~]#

[root@host01 ~]# rpm -qa | grep httpd
httpd-2.2.3-31.0.1.el5
httpd-devel-2.2.3-31.0.1.el5
httpd-manual-2.2.3-31.0.1.el5

Repeat on second node
[root@host01 ~]# ssh host02 rpm -qa | grep httpd
root@host02's password: 0racle  << password not displayed
httpd-2.2.3-31.0.1.el5
httpd-devel-2.2.3-31.0.1.el5
httpd-manual-2.2.3-31.0.1.el5

[root@host01 ~]#

2) As the root user, start the Apache application on your first node with the
apachectl start command.
[root@host01 ~]# apachectl start
From a VNC session on one of your three nodes, access the Apache test page on your
first node. For example, if your first node was named host01, the HTTP address
would look something like this:
http://host01.example.com

After you have determined that Apache is working properly, repeat this step on your
second host. After you have determined that Apache is working correctly on both
nodes, you can stop Apache with the apachect l st op command.

3) Create an action script to control the application. This script must be accessible by all
nodes on which the application resource can be located.
a) As the root user, create a script on the first node called apache.scr in
/usr/local/bin that will start, stop, check status, and clean up if the
application does not exit cleanly. Make sure that the host specified in the
WEBPAGECHECK variable is your first node. Use the
/home/oracle/labs/less_09/apache.scr.tpl file as a template for
creating the script. Make the script executable and test the script.
[root@host01 ~]# cp /home/oracle/labs/less_09/apache.tpl
/usr/local/bin/apache.scr

[root@host01 ~]# vi /usr/local/bin/apache.scr

#!/bin/bash

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host01.example.com:80/icons/apache_pb.gif
case $1 in
'start')
/usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
RET=$?
;;
'stop')
/usr/sbin/apachectl -k stop
RET=$?
;;
'clean')
/usr/sbin/apachectl -k stop
RET=$?
;;
'check')
/usr/bin/wget -q --delete-after $WEBPAGECHECK
RET=$?
;;
*)
RET=0
;;
esac
# 0: success; 1: error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

Save the file

[root@host01 ~]# chmod 755 /usr/local/bin/apache.scr

[root@host01 ~]# apache.scr start
Verify web page
[root@host01 ~]# apache.scr stop
Refresh http://host01.example.com, Web page should no longer display
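The start/stop/check/clean contract that the script agent expects can be exercised with any long-running process. The sketch below substitutes a background sleep for Apache so it can run without a web server; demo_action and the pidfile path are made-up names for this illustration only.

```shell
#!/bin/bash
# Generic action-script skeleton following the same contract as apache.scr:
# return 0 on success, 1 on failure for start/stop/check/clean.
# A background 'sleep' stands in for the real daemon; PIDFILE is a
# made-up path used only for this sketch.
PIDFILE=/tmp/demo_daemon.$$.pid

demo_action() {
  case $1 in
  'start')
      sleep 300 &                     # stand-in for the real daemon
      echo $! > "$PIDFILE"
      ;;
  'stop'|'clean')
      [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" 2>/dev/null
      rm -f "$PIDFILE"
      ;;
  'check')
      # kill -0 probes whether the process is alive without signalling it
      [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null || return 1
      ;;
  esac
  return 0
}

demo_action start
demo_action check && echo "check: running"
demo_action stop
demo_action check || echo "check: not running (exit 1)"
```

The check branch is the one Clusterware polls at CHECK_INTERVAL; its exit code alone decides whether the agent considers the resource online.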
b) As root, create a script on the second node called apache.scr in
/usr/local/bin that will start, stop, check status, and clean up if the
application does not exit cleanly. Make sure that the host specified in the
WEBPAGECHECK variable is your second node. Use the
/home/oracle/labs/less_09/apache.scr.tpl file as a template for
creating the script. Make the script executable and test the script.
[root@host01 ~]# ssh host02
root@host02's password: 0racle  << password not displayed
[root@host02 ~]# cp /home/oracle/labs/less_09/apache.tpl
/usr/local/bin/apache.scr
[root@host02 ~]# vi /usr/local/bin/apache.scr

#!/bin/bash

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host02.example.com:80/icons/apache_pb.gif
case $1 in
'start')
/usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
RET=$?
;;
'stop')
/usr/sbin/apachectl -k stop
RET=$?
;;
'clean')
/usr/sbin/apachectl -k stop
RET=$?
;;
'check')
/usr/bin/wget -q --delete-after $WEBPAGECHECK
RET=$?
;;
*)
RET=0
;;
esac
# 0: success; 1: error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

Save the file

[root@host02 ~]# chmod 755 /usr/local/bin/apache.scr

[root@host02 ~]# apache.scr start
Verify web page

[root@host02 ~]# apache.scr stop
Refresh http://host02.example.com, Web page should no longer display

4) Next, you must validate the return code of a check failure using the new script. The
Apache server should NOT be running on either node. Run apache.scr check
and immediately test the return code by issuing an echo $? command. This must be
run immediately after the apache.scr check command because the shell
variable $? holds the exit code of the previous command run from the shell. An
unsuccessful check should return an exit code of 1. You should do this on both nodes.
[root@host01 ~]# apache.scr check

[root@host01 ~]# echo $?
1

Repeat on second node

[root@host02 ~]# apache.scr check

[root@host02 ~]# echo $?
1
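The "test $? immediately" rule generalizes to any command. The sketch below uses false and true as stand-ins for a failing apache.scr check and an intervening command, to show how the exit code gets clobbered if anything runs in between.

```shell
#!/bin/bash
# $? holds the exit status of the most recent foreground command only,
# so it must be tested (or captured) immediately. 'false' stands in for
# a failing 'apache.scr check'; 'true' for any intervening command.
false            # stand-in for a failing check (exit code 1)
rc=$?            # capture immediately
echo "captured: $rc"

false
true             # an intervening command runs...
echo "clobbered: $?"   # ...so $? now reflects 'true' (0), not 'false'
```

Capturing into a variable (rc=$?) on the very next line is the safest habit when the code must be inspected more than once.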

5) As the grid user, create a server pool for the resource called myApache_sp. This
pool contains your first two hosts and is a child of the Generic pool.
[grid@host01 ~]$ id
uid=502(grid) gid=54321(oinstall)
groups=504(asmadmin),505(asmdba),506(asmoper),54321(oinstall)

[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ /u01/app/11.2.0/grid/bin/crsctl add
serverpool myApache_sp -attr "PARENT_POOLS=Generic,
SERVER_NAMES=host01 host02"

6) Check the status of the new pool on your cluster.
[grid@host01 ~]$ /u01/app/11.2.0/grid/bin/crsctl status server
-f
NAME=host01
STATE=ONLINE
ACTIVE_POOLS=myApache_sp Generic
STATE_DETAILS=

NAME=host02
STATE=ONLINE
ACTIVE_POOLS=myApache_sp Generic
STATE_DETAILS=...

7) Add the Apache resource, which can be called myApache, to the myApache_sp
subpool that has Generic as a parent. This must be performed as root because the
resource listens on the default privileged port 80. Set CHECK_INTERVAL to 30,
RESTART_ATTEMPTS to 2, and PLACEMENT to restricted.
[root@host01 ~]# su
Password: 0racle  << Password not displayed

[root@host01 ~]# id
uid=0(root) gid=0(root)
groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)

[root@host01 ~]# /u01/app/11.2.0/grid/bin/crsctl add resource
myApache -type cluster_resource -attr
"ACTION_SCRIPT=/usr/local/bin/apache.scr,
PLACEMENT='restricted', SERVER_POOLS=myApache_sp,
CHECK_INTERVAL='30', RESTART_ATTEMPTS='2'"

[root@host01 ~]#

8) View the static attributes of the myApache resource with the crsctl status
resource myApache -f command.
[root@host01 ~]# /u01/app/11.2.0/grid/bin/crsctl status
resource myApache -f
NAME=myApache
TYPE=cluster_resource
STATE=OFFLINE
TARGET=OFFLINE
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=/usr/local/bin/apache.scr
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
AUTO_START=restore
CARDINALITY=1
CARDINALITY_ID=0
CHECK_INTERVAL=30
CREATION_SEED=30
DEFAULT_TEMPLATE=
DEGREE=1
DESCRIPTION=
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
ID=myApache
LOAD=1
LOGGING_LEVEL=1
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
SERVER_POOLS=myApache_sp
START_DEPENDENCIES=
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h
9) Use the crsctl start resource myApache command to start the new
resource. Use the crsctl status resource myApache command to confirm
that the resource is online on the first node. If you like, open a browser and point it to
your first node as shown in step 2.
[root@host01 ~]# /u01/app/11.2.0/grid/bin/crsctl start
resource myApache
CRS-2672: Attempting to start 'myApache' on 'host01'
CRS-2676: Start of 'myApache' on 'host01' succeeded

[root@host01 ~]# /u01/app/11.2.0/grid/bin/crsctl status
resource myApache
NAME=myApache
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on host01

10) Confirm that Apache is NOT running on your second node. The easiest way to do this
is to check for the running /usr/sbin/httpd -k start -f
/etc/httpd/conf/httpd.conf processes with the ps command.
[root@host01 ~]# ssh host02 ps -ef|grep -i "httpd -k"
root@host02's password: 0racle  << password is not displayed
[root@host01 ~]#

11) Next, simulate a node failure on your first node by issuing the reboot command as
root. Before issuing the reboot on the first node, open a VNC session on the second
node and, as the root user, execute the
/home/oracle/labs/less_09/monitor.sh script so that you can monitor
the failover.
ON THE FIRST NODE AS THE root USER

[root@host01 ~]# reboot   To initiate a reboot, simulating a node failure

ON THE SECOND NODE AS THE root USER

[root@host02 ~]# cat /home/oracle/labs/less_09/monitor.sh
while true
do
ps -ef | grep -i "httpd -k"
sleep 1
done
[root@host02 ~]# /home/oracle/labs/less_09/monitor.sh
Execute this on the second node

root 21940 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
root 21948 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
root 21951 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
...
apache 22123 22117 0 11:01 ? 00:00:00
/usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf

apache 22124 22117 0 11:01 ? 00:00:00
/usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf

apache 22125 22117 0 11:01 ? 00:00:00
/usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
...

Issue a Ctrl-C to stop the monitoring

[root@host02 ~]#
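monitor.sh is a poll-and-print loop stopped with Ctrl-C. A bounded variant that waits for a condition and gives up after a timeout can be sketched as follows; wait_for is a hypothetical helper, and a file-existence probe stands in for the ps | grep probe.

```shell
#!/bin/bash
# Bounded polling loop: run a probe command every second until it succeeds
# or the timeout (in seconds) expires. wait_for is a made-up helper name;
# monitor.sh itself loops forever and is stopped interactively.
wait_for() {
  local timeout=$1; shift
  local elapsed=0
  while ! "$@" >/dev/null 2>&1; do
    sleep 1
    elapsed=$((elapsed + 1))
    [ "$elapsed" -ge "$timeout" ] && return 1   # gave up
  done
  return 0
}

# Stand-in probe: a marker file appearing simulates the resource
# coming online on the surviving node.
marker=/tmp/failover_done.$$
( sleep 2; touch "$marker" ) &
if wait_for 10 test -f "$marker"; then
  echo "resource is up"
fi
rm -f "$marker"
```

Returning a distinct exit code on timeout lets a calling script distinguish "failover completed" from "failover never happened".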

12) Verify the failover from the first node to the second with the crsctl stat
resource myApache -t command.
[root@host02 ~]# /u01/app/11.2.0/grid/bin/crsctl stat resource
myApache -t

------------------------------------------------------------------
NAME      TARGET  STATE   SERVER   STATE_DETAILS
------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------
myApache
      1   ONLINE  ONLINE  host02

Access http://host02.example.com, Apache test web page should display

Access http://host01.example.com, Web page should no longer display


13) Use the crsctl relocate resource command to move the myApache
resource back to host01.
[root@host01 ~]# /u01/app/11.2.0/grid/bin/crsctl relocate
resource myApache
CRS-2673: Attempting to stop 'myApache' on 'host02'
CRS-2677: Stop of 'myApache' on 'host02' succeeded
CRS-2672: Attempting to start 'myApache' on 'host01'
CRS-2676: Start of 'myApache' on 'host01' succeeded

Access http://host01.example.com, Apache test web page should display

Access http://host02.example.com, Web page should no longer display


Practice 9-2: Perform RAC Installation
In this practice, you will use the OUI to install the RAC database software and create
a three-node RAC database to make Enterprise Manager DB Console available for the
ASM labs.
1) From your classroom PC desktop, execute ssh -X oracle@host01 to open a
terminal session on host01 as the oracle user.
[vncuser@classroom_pc ~]# ssh -X oracle@host01
oracle@host01's password:
/usr/bin/xauth: creating new authority file
/home/oracle/.Xauthority
[oracle@host01 ~]$
2) As the oracle user, navigate to /stage/database and execute
runInstaller.
[oracle@host01 ~]$ cd /stage/database

[oracle@host01 database]$ ./runInstaller

Step / Screen or Page / Choices or Values:
a. Configure Security Updates: Deselect "I wish to receive security updates" and click Next.
b. Dialog box "You have not provided an email address": Click Yes.
c. Download Software Updates: Click "Skip software updates" and click Next.
d. Select Installation Option: Select "Create and configure a database" and click Next.
e. System Class: Select Server Class and click Next.
f. Grid Installation Options: Select "Oracle Real Application Clusters database installation". Make sure that all three nodes are selected, then click SSH Connectivity. Enter the oracle user password, 0racle, and click Setup. When SSH for oracle has been configured on the nodes, click Next.
g. Select Install Type: Select "Advanced install" and click Next.
h. Select Product Languages: Click Next.
i. Select Database Edition: Select Enterprise Edition and click Next.
j. Specify Installation Location: Oracle Base should be /u01/app/oracle and Software Location should be /u01/app/oracle/product/11.2.0/dbhome_1.
Click Next.
k. Select Configuration Type: Select General Purpose/Transaction Processing and click Next.
l. Specify Database Identifiers: The Global database name should be orcl and the Global Service Identifier should be orcl. Click Next.
m. Specify Configuration Options: On the Memory tab, set Target Database Memory to 600 MB. Click the Sample Schemas folder tab, click the "Create database with sample schemas" check box and click Next.
n. Specify Management Options: Select "Use Oracle Enterprise Manager Database Control..." and click Next.
o. Specify Database Storage Options: Select Oracle Automatic Storage Management and enter the ASMSNMP password, oracle_4U. Click Next.
p. Specify Recovery Options: Select "Do not enable automated backups" and click Next.
q. Select ASM Disk Group: Select DATA and click Next.
r. Specify Schema Passwords: For convenience's sake only, select "Use the same password for all accounts" (oracle_4U), confirm the password and click Next.
s. Privileged Operating System Groups: Select dba as the OSDBA group and oper as the OSOPER group and click Next.
t. Summary: Click Install.
u. Database Configuration Assistant dialog box: Take note of the Database Control URL. Click OK to dismiss the dialog box.
v. Execute Configuration scripts: Run the orainstRoot.sh script on host03 and the root.sh script on all three nodes as directed. Click OK.
w. Finish: Click Close to dismiss the OUI.

Practices for Lesson 11
In these practices, you will adjust ASM initialization parameters, stop and start instances,
and monitor the status of instances.
Practice 11-1: Administering ASM Instances
In this practice, you adjust initialization parameters in the SPFILE, and stop and start the
ASM instances on local and remote nodes.
1) Disk groups are reconfigured occasionally to move older data to slower disks. Even
though these operations occur at scheduled maintenance times in off-peak hours, the
rebalance operations do not complete before regular operations resume. There is some
performance impact on the regular operations. The setting for the
ASM_POWER_LIMIT initialization parameter determines the speed of the rebalance
operation. Determine the current setting and increase the speed by 2.
a) Open a terminal window on the first node, become the grid user, and set the
environment to use the +ASM1 instance. Connect to the +ASM1 instance as SYS
with the SYSASM privilege. What is the setting for ASM_POWER_LIMIT?
[oracle@host01 ~]$ su - grid
Password: 0racle  << password is not displayed
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Tue May 1 06:41:53
2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 -
With the Real Application Clusters and Automatic Storage
Management options

SQL> show parameter ASM_POWER_LIMIT

NAME              TYPE      VALUE
----------------  --------  ------
asm_power_limit   integer   1
SQL>
b) This installation uses an SPFILE. Use the ALTER SYSTEM command to change
ASM_POWER_LIMIT for all nodes.
SQL> show parameter SPFILE

NAME     TYPE     VALUE
-------  -------  ------------------------------
spfile   string   +DATA/cluster01/asmparameterfile/registry.253.781446641
SQL> ALTER SYSTEM set ASM_POWER_LIMIT=3 SCOPE=BOTH SID='*';

System altered.

SQL> show parameter ASM_POWER_LIMIT


NAME              TYPE      VALUE
----------------  --------  ------
asm_power_limit   integer   3
SQL>

2) You have decided that due to other maintenance operations you want one instance,
+ASM1, to handle the bulk of the rebalance operation, so you will set
ASM_POWER_LIMIT to 1 on instance +ASM2 and 5 on instance +ASM1.
SQL> ALTER SYSTEM set ASM_POWER_LIMIT=1 SCOPE=BOTH
SID='+ASM2';

System altered.

SQL> ALTER SYSTEM set ASM_POWER_LIMIT=5 SCOPE=BOTH
SID='+ASM1';

System altered.

SQL> show parameter ASM_POWER_LIMIT

NAME              TYPE      VALUE
----------------  --------  ------
asm_power_limit   integer   5
SQL>
SQL> column NAME format A16
SQL> column VALUE format 999999
SQL> select inst_id, name, value from GV$PARAMETER
  2  where name like 'asm_power_limit';

   INST_ID NAME              VALUE
---------- ---------------- ------
         1 asm_power_limit       5

         2 asm_power_limit       1

         3 asm_power_limit       3
SQL>

3) Exit the SQL*Plus application.
SQL> exit

[grid@host01 ~]$
4) The ASM instance and all associated applications, database, and listener on one node
must be stopped for a maintenance operation to the physical cabling. Stop all the
applications, ASM, and listener associated with +ASM1 using srvctl.
a) In a new terminal window, as the oracle user, stop Enterprise Manager on your
first node.
[grid@host01 ~]$ su oracle
Password: 0racle  << password is not displayed

[oracle@host01 ~]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base has been set to /u01/app/oracle

[oracle@host01 ~]$ export ORACLE_UNQNAME=orcl

[oracle@host01 ~]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release
11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights
reserved.
https://host01.example.com:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control...
... Stopped.
b) Stop the orcl database.
[oracle@host01 ~]$ srvctl stop instance -d orcl -n host01
c) Verify that the database is stopped on host01. The pgrep command shows that
no orcl background processes are running.
[oracle@host01 ~]$ pgrep -lf orcl
d) In a terminal window, become the grid OS user, set the grid environment, and
then stop the ASM instance +ASM1 using the srvctl stop asm -n command.
[grid@host01 ~]$ srvctl stop asm -n host01
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would
require stopping or relocating 'ora.DATA.dg', but the force
option was not specified
e) Attempt to stop the ASM instance on host01 using the force option, -f.
[grid@host01 ~]$ srvctl stop asm -n host01 -f

f) As the root user, stop the ASM instance with the crsctl stop cluster -n
host01 command. This command will stop all the Cluster services on the node.
[grid@host01 ~]$ su -
Password: 0racle  << password is not displayed

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# crsctl stop cluster -n host01
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on
'host01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.host01.vip' on 'host01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'host01'
CRS-2677: Stop of 'ora.scan1.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'host02'
CRS-2677: Stop of 'ora.host01.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.host01.vip' on 'host03'
CRS-2676: Start of 'ora.host01.vip' on 'host03' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on
'host02'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'host02'
succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host01'
CRS-2677: Stop of 'ora.ons' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host01'
CRS-2677: Stop of 'ora.net1.network' on 'host01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources
on 'host01' has completed
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip'
on 'host01'
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded

[root@host01 ~]#
g) Confirm that the listener has been stopped.
[root@host01 ~]# lsnrctl status listener

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 01-MAY-
2012 07:31:08

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to
(DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Linux Error: 2: No such file or directory
5) Restart all the cluster resources on node host01.
[root@host01 ~]# crsctl start cluster -n host01
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip'
on 'host01'
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host01'
CRS-2676: Start of 'ora.evmd' on 'host01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded
6) Verify that the resources, database, and Enterprise Manager are restarted on host01.
The crsctl status resource -n host01 command shows that ASM is online.
[root@host01 ~]# crsctl status resource -n host01
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
CARDINALITY_ID=1
TARGET=ONLINE
STATE=ONLINE

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.gsd
TYPE=ora.gsd.type
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.host01.vip
TYPE=ora.cluster_vip_net1.type
CARDINALITY_ID=1
TARGET=ONLINE
STATE=ONLINE

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
CARDINALITY_ID=1
TARGET=ONLINE
STATE=ONLINE

[root@host01 ~]#
7) In a terminal window, on the first node as the oracle user, start the orcl instance
and Enterprise Manager on host01.
[oracle@host01 ~]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base has been set to /u01/app/oracle

[oracle@host01 ~]$ srvctl start instance -d orcl -n host01

[oracle@host01 ~]$ export ORACLE_UNQNAME=orcl

[oracle@host01 ~]$ emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release
11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation.  All rights
reserved.
https://host01.example.com:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control
.............................started.
------------------------------------------------------------------
Logs are generated in directory
/u01/app/oracle/product/11.2.0/dbhome_1/host01_orcl/sysman/log
8) Determine the Enterprise Manager DB Control configuration on the cluster. Notice
that dbconsole is running on your first node, an agent is running on your second node,
and no EM components have been started on the third node.
[oracle@host01 ~]$ emca -displayConfig dbcontrol -cluster

STARTED EMCA at May 1, 2012 7:57:29 AM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: orcl
Service name: orcl
Do you wish to continue? [yes(Y)/no(N)]: y
May 1, 2012 7:59:30 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at
/u01/app/oracle/cfgtoollogs/emca/orcl/emca_2012_05_01_07_57_25
.log.
May 1, 2012 8:00:21 AM oracle.sysman.emcp.EMDBPostConfig
showClusterDBCAgentMessage
INFO:
**************** Current Configuration ****************
INSTANCE   NODE     DBCONTROL_UPLOAD_HOST
--------   ------   ---------------------
orcl       host01   host01.example.com
orcl       host02   host01.example.com
orcl       host03   host01.example.com


Enterprise Manager configuration completed successfully
FINISHED EMCA at May 1, 2012 8:00:21 AM
9) In a terminal window, become the root user. Stop all the cluster resources on
host02.
[oracle@host01 ~]$ su - root
Password: 0racle  << password is not displayed

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# crsctl stop cluster -n host02
CRS-2673: Attempting to stop 'ora.crsd' on 'host02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host02'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on
'host02'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host02'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host02'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'host02'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'host02'
succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'host02'
CRS-2677: Stop of 'ora.scan1.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'host01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.host02.vip' on 'host02'
CRS-2677: Stop of 'ora.host02.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.host02.vip' on 'host03'
CRS-2676: Start of 'ora.host02.vip' on 'host03' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host02'
CRS-2677: Stop of 'ora.FRA.dg' on 'host02' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on
'host01'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'host01'
succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host02'
CRS-2677: Stop of 'ora.ons' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host02'
CRS-2677: Stop of 'ora.net1.network' on 'host02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources
on 'host02' has completed
CRS-2677: Stop of 'ora.crsd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host02'
CRS-2673: Attempting to stop 'ora.evmd' on 'host02'
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.evmd' on 'host02' succeeded
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip'
on 'host02'
CRS-2677: Stop of 'ora.ctssd' on 'host02' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host02'
succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host02'
CRS-2677: Stop of 'ora.cssd' on 'host02' succeeded
[root@host01 ~]#
10) What is the status of the orcl database on your cluster?
[root@host01 ~]# srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is not running on node host02
Instance orcl3 is running on node host03
11) As the root user on your first node, start the cluster on host02.
[root@host01 ~]# crsctl start cluster -n host02
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip'
on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host02'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host02'
CRS-2676: Start of 'ora.crsd' on 'host02' succeeded


12) Did the orcl instance on host02 start? Use the srvctl status database -d orcl
command as any of the users (oracle, grid, root) as long as the oracle
environment is set for that user.
Note: The database may take a couple of minutes to restart. If the orcl2 instance is
not running, try the status command again until instance orcl2 is running.
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
[root@host01 ~]# srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02
Instance orcl3 is running on node host03
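The retry suggested in the note above can also be scripted. A minimal sketch (the wait_for helper is hypothetical, not an Oracle tool; the srvctl line shows the intended real use and is commented out because it needs the cluster environment):

```shell
# Poll a status command until its output contains a pattern, or give up.
wait_for() {                      # usage: wait_for <pattern> <tries> <cmd...>
  local pat=$1 tries=$2 i
  shift 2
  for ((i = 0; i < tries; i++)); do
    "$@" | grep -q "$pat" && return 0
    sleep 5                       # wait between status checks
  done
  return 1
}

# Intended use on the cluster (not run here):
#   wait_for 'Instance orcl2 is running' 24 srvctl status database -d orcl
```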



Practices for Lesson 12
In these practices, you will add, configure, and remove disk groups, manage rebalance
operations, and monitor disk and disk group I/O statistics.

Practice 12-1: Administering ASM Disk Groups
In this practice, you will change the configuration of a disk group and control the
resulting rebalance operations. You will determine the clients connected to the existing
disk groups and perform disk group checks.
Because the asmadmin group has only one member, grid, open a terminal window and
become the grid OS user for this practice.
You will use several tools, such as EM, ASMCMD, and ASMCA, to perform the same
operations.
1) The FRA disk group has more disks allocated than it needs, so one disk can be
dropped. Remove ASMDISK08 from the FRA disk group. Use ASMCMD.
a) As the grid OS user on your first node, confirm that the FRA disk group is
mounted. If it is not mounted, mount it on your first and second nodes.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      9788     4271             2447             912              0  Y  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      9788     9377                0            9377              0  N  FRA/
ASMCMD> exit

[grid@host01 ~]$ crsctl status resource ora.FRA.dg -t
------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
------------------------------------------------------------------
Local Resources
------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03

[grid@host01 ~]$
b) Use the chdg command with inline XML. Note that the command is typed
without a return, all on one line.
chdg <chdg name="FRA" power="5"> <drop>
<dsk name="ASMDISK08"/> </drop> </chdg>
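The same inline document can also be kept in a file and passed to asmcmd chdg, which is the form used in step 6 with reset_DATA.xml and reset_FRA.xml. A sketch (the file name /tmp/drop_asmdisk08.xml is made up for illustration; the asmcmd call itself needs the ASM environment, so it is shown commented out):

```shell
# Write the chdg document to a file instead of typing it inline.
cat > /tmp/drop_asmdisk08.xml <<'EOF'
<chdg name="FRA" power="5">
  <drop>
    <dsk name="ASMDISK08"/>
  </drop>
</chdg>
EOF

# On the cluster, as the grid user (not run here):
#   asmcmd chdg /tmp/drop_asmdisk08.xml

grep -c 'ASMDISK08' /tmp/drop_asmdisk08.xml   # prints 1
```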
[grid@host01 ~]$ asmcmd
ASMCMD> chdg <chdg name="FRA" power="5"> <drop>
<dsk name="ASMDISK08"/> </drop> </chdg>
Diskgroup altered.
ASMCMD>

2) In preparation for adding another disk to the DATA disk group, perform a disk check
to verify the disk group metadata. Use the check disk group command, chkdg.
ASMCMD> chkdg DATA

3) Add another disk (ASMDISK09) to the DATA disk group and remove a disk
(ASMDISK04), but the rebalance operation must wait until a quiet time and then
proceed as quickly as possible. Use Enterprise Manager Database Control. On your
classroom PC desktop, open a browser and enter this address:
https://host01.example.com:1158/em. Log in using sys and oracle_4U as sysdba.
Note: The first time you access Database Control, you will be presented with a
Secure Connection Failed dialog box. Click the "Or you can add an exception"
link. Next, the Secure Connection Failed dialog box appears. Click Add
Exception. On the Add Security Exception dialog box, click Get Certificate.
Step Screen/Page Description Choices or Values
a. Automatic Storage Management:
Instance name Home
Click Disk Groups tab.
b. Automatic Storage Management
Login
Enter:
Username: SYS
Password: oracle_4U
Click Login.
c. Automatic Storage Management:
Instance name Disk Groups
Click the DATA link.
d. Disk Group: DATA Click ADD.
e. Add Disks Change Rebalance Power to 0
(this prevents any rebalance).
Select Disks: ORCL:ASMDISK09
Click Show SQL.
f. Show SQL The SQL statement that will be executed is
shown:
ALTER DISKGROUP DATA ADD DISK
'ORCL:ASMDISK09' SIZE 2447 M
REBALANCE POWER 0
Click Return.
g. Add Disks Click OK.
h. The Add Disks in Progress message is displayed.

i. Disk Group: DATA Select ASMDISK04.
Click Remove.
j. Confirmation Click Show Advanced Options.
Set Rebalance Power to 0.
Click Yes.

4) Perform the pending rebalance operation on disk group DATA.
Step Screen/Page Description Choices or Values
a. Disk Group: DATA Click the Automatic Storage Management:
+ASM1_host01.example.com locator link at
the top of page.
b. Automatic Storage Management:
+ASM1_host01.example.com
Select the DATA disk group check box.
Click Rebalance.
c. Select ASM Instances Select all ASM instances.
Click OK.
d. Confirmation Click Show Advanced Options.
Set Rebalance Power to 11.
Click Yes.
e. Automatic Storage Management:
+ASM1_host01.example.com
Update Message:
Disk Group DATA rebalance request has
been submitted.
Click the DATA disk group.
f. Disk Group: DATA Observe the Used(%) column for the various
disks in the DATA disk group.
Notice the Status column (ASMDISK04 is
marked as DROPPING).
Click the browser's Refresh button.
g. Disk Group: DATA Notice the change in the Used(%) column.
Refresh again. After a few minutes,
ASMDISK04 will no longer appear in the list of
Member disks.
h. Exit EM.
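Outside EM, the same rebalance progress can be watched from SQL*Plus in the ASM instance with the V$ASM_OPERATION view (a reference query, not part of the practice steps):

```sql
-- One row per in-flight disk group operation; EST_MINUTES falls toward
-- zero, and the row disappears when the rebalance completes.
SELECT group_number, operation, state, power, sofar, est_work, est_minutes
  FROM v$asm_operation;
```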

5) Examine the disk activity involving the DATA disk group.
a) Examine the statistics for all the disk groups and then check the individual disks
in the DATA disk group using EM.
Step Screen/Page Description Choices or Values
a. Cluster Database:
orcl.example.com: Home
In the Instances section, click the
+ASM1_host01.example.com link.
b. Automatic Storage Management:
+ASM1_host01.example.com:
Home
Click the Performance tab.
c. Automatic Storage Management:
+ASM1_host01.example.com:
Performance
Notice the graphs of the various metrics:
Response Time, Throughput, Operations per
Second, and Operation Size. Almost all the
data operations involved the DATA disk
group.
In the Additional Monitoring Links section,
click Disk Group I/O Cumulative Statistics.
d. Automatic Storage Management
Login
Enter:
Username: SYS
Password: oracle_4U
Click Login.
e. Disk Group I/O Cumulative
Statistics
Notice that almost all of the I/O calls are
against the DATA disk group.
Expand the DATA disk group to show the
individual disk statistics.
f. Disk Group I/O Cumulative
Statistics
Examine the number of reads and writes to
each disk in the DATA disk group. The I/O
to the disks will not be balanced because
ASMDISK09 was just added.
g. Exit EM.

b) Examine the disk I/O statistics using the lsdsk --statistics command.
[grid@host01 ~]$ asmcmd
ASMCMD> lsdsk --statistics
Reads  Write  Read_Errs  Write_Errs  Read_time  Write_Time  Bytes_Read  Bytes_Written  Voting_File  Path
19651  27173          0           0   2088.659    3285.676   531651072      391219200  Y            ORCL:ASMDISK01
 7801  31713          0           0    956.054    2064.172   203924480      412720640  Y            ORCL:ASMDISK02
62372  28850          0           0   2532.262    1853.304  1074500096      337212416  Y            ORCL:ASMDISK03
  777   1671          0           0     29.17       82.535    13254656       46297088  N            ORCL:ASMDISK05
 1611   2605          0           0    170.514     265.292    49721344       84987904  N            ORCL:ASMDISK06
  461   1526          0           0     20.383     106.967    11063296       47538176  N            ORCL:ASMDISK07
  695  43989          0           0     11.879    6391.772     7233536     1427686400  N            ORCL:ASMDISK09
ASMCMD>

c) Examine the disk statistics bytes and time for the DATA disk group with the
iostat -t -G DATA command.
ASMCMD> iostat -t -G DATA
Group_Name  Dsk_Name   Reads       Writes      Read_Time  Write_Time
DATA        ASMDISK01  531782144   391407104   2088.673   3285.765
DATA        ASMDISK02  203924480   413052416    956.054   2064.229
DATA        ASMDISK03  1076171264  337621504   2533.055   1853.423
DATA        ASMDISK09  7905280     1428071424    11.942   6391.79
ASMCMD> exit
6) Run the following ASMCMD commands to return the DATA and FRA disk groups
to the configuration at the beginning of the practice.
[grid@host01 ~]$ asmcmd chdg
/home/oracle/labs/less_12/reset_DATA.xml
Diskgroup altered

[grid@host01 ~]$ asmcmd chdg
/home/oracle/labs/less_12/reset_FRA.xml
Diskgroup altered




Practices for Lesson 13
In this practice, you will administer ASM files, directories, and templates.

Practice 13-1: Administering ASM Files, Directories, and
Templates
In this practice, you use several tools to navigate the ASM file hierarchy, manage aliases,
manage templates, and move files to different disk regions.
1) ASM is designed to hold database files in a hierarchical structure. After setting up the
grid environment, navigate the orcl database files with ASMCMD.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$ asmcmd

ASMCMD> ls
DATA/
FRA/
ASMCMD> ls DATA
ORCL/
cluster01/
ASMCMD> ls DATA/ORCL
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
spfileorcl.ora
ASMCMD> ls -l DATA/ORCL/*
Type           Redund  Striped  Time             Sys  Name

+DATA/ORCL/CONTROLFILE/:
CONTROLFILE    HIGH    FINE     MAY 01 08:00:00  Y    Current.260.782110205

+DATA/ORCL/DATAFILE/:
DATAFILE       MIRROR  COARSE   MAY 01 22:00:00  Y    EXAMPLE.264.782110251
DATAFILE       MIRROR  COARSE   MAY 02 04:00:00  Y    SYSAUX.257.782110079
DATAFILE       MIRROR  COARSE   MAY 01 08:00:00  Y    SYSTEM.256.782110079
DATAFILE       MIRROR  COARSE   MAY 01 08:00:00  Y    UNDOTBS1.258.782110081
DATAFILE       MIRROR  COARSE   MAY 01 08:00:00  Y    UNDOTBS2.265.782110579
DATAFILE       MIRROR  COARSE   MAY 01 08:00:00  Y    UNDOTBS3.266.782110585
DATAFILE       MIRROR  COARSE   MAY 01 08:00:00  Y    USERS.259.782110081

+DATA/ORCL/ONLINELOG/:
ONLINELOG      MIRROR  COARSE   MAY 01 08:00:00  Y    group_1.261.782110211

ONLINELOG      MIRROR  COARSE   MAY 01 08:00:00  Y    group_2.262.782110215
ONLINELOG      MIRROR  COARSE   MAY 01 08:00:00  Y    group_3.269.782110695
ONLINELOG      MIRROR  COARSE   MAY 01 08:00:00  Y    group_4.270.782110697
ONLINELOG      MIRROR  COARSE   MAY 01 08:00:00  Y    group_5.267.782110691
ONLINELOG      MIRROR  COARSE   MAY 01 08:00:00  Y    group_6.268.782110693

+DATA/ORCL/PARAMETERFILE/:
PARAMETERFILE  MIRROR  COARSE   MAY 01 22:00:00  Y    spfile.271.782110701

+DATA/ORCL/TEMPFILE/:
TEMPFILE       MIRROR  COARSE   MAY 01 22:00:00  Y    TEMP.263.782110233
                                                 N    spfileorcl.ora => +DATA/ORCL/PARAMETERFILE/spfile.271.782110701

ASMCMD> exit

[grid@host01 ~]$
2) The default structure may not be the most useful for some sites. Create a set of aliases
for directories and files to match a file system. Use EM.
Step  Screen/Page Description  Choices or Values
a. Cluster Database: orcl.example.com
   In the Instances section, click the +ASM1_node_name.example.com link.
b. Automatic Storage Management: +ASM1_node_name.example.com Home
   Click the Disk Groups tab.
c. Automatic Storage Management Login
   Enter:
   Username: SYS
   Password: oracle_4U
   Click Login.
d. Automatic Storage Management: +ASM1_host01.example.com Disk Groups
   Click the DATA disk group link.
e. Disk Group: DATA General
   Click the Files tab.
f. Disk Group: DATA Files
   Select ORCL.
   Click Create Directory.
g. Create Directory
   Enter:
   New Directory: oradata
   Click Show SQL.

h. Show SQL
   The SQL that will be executed is shown:
   ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/ORCL/oradata'
   Click Return.
i. Create Directory
   Click OK.
j. Disk Group: DATA Files
   Expand the ORCL folder.
   Expand the DATAFILE folder.
   Select EXAMPLE.nnn.NNNNNN.
   Click Create Alias.
k. Create Alias
   Enter:
   User Alias: +DATA/ORCL/oradata/example_01.dbf
   Click Show SQL.
l. Show SQL
   The SQL that will be executed is shown:
   ALTER DISKGROUP DATA ADD ALIAS '+DATA/ORCL/oradata/example_01.dbf'
   FOR '+DATA/ORCL/DATAFILE/EXAMPLE.264.698859675'
   Click Return.
m. Create Alias
   Click OK.
n. Disk Group: DATA Files
   Click the EXAMPLE.nnn.NNNNNN link.
o. EXAMPLE.nnn.NNNNNN: Properties
   Notice the properties that are displayed in the General section.
   Click OK.
p. Disk Group: DATA Files
   Click the example_01.dbf link.
q. example_01.dbf: Properties
   Note that the properties include the System Name.
   Click OK.
r. Exit EM.

3) Using ASMCMD, navigate to view example_01.dbf and display its properties.
Using the system name, find the alias. Use the ls -a command.
[grid@host01 ~]$ asmcmd

ASMCMD> ls +DATA/ORCL/oradata/*
example_01.dbf

ASMCMD> ls -l +DATA/ORCL/oradata/*
Type  Redund  Striped  Time  Sys  Name
                             N    example_01.dbf => +DATA/ORCL/DATAFILE/EXAMPLE.264.782110251

ASMCMD> ls --absolutepath +DATA/ORCL/DATAFILE/example*
+DATA/ORCL/oradata/example_01.dbf => EXAMPLE.264.782110251
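ASMCMD can also manage aliases directly with its mkalias and rmalias commands (shown for reference against the names above; the alias name example_02.dbf is made up for illustration, and the commands are not run as part of this practice):

```shell
# Create an alias for the system-named file, then remove it again.
# asmcmd mkalias +DATA/ORCL/DATAFILE/EXAMPLE.264.782110251 \
#                +DATA/ORCL/oradata/example_02.dbf
# asmcmd rmalias +DATA/ORCL/oradata/example_02.dbf
```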


Oracle Grid Infrastructure 11g: Manage Clusterware and ASM A - 128
4) Create a new tablespace. Name the file using a full name. Use EM.
Step Screen/Page Description Choices or Values
x. Cluster Database:
orcl.example.com Home
Click the Server tab.
y. Cluster Database:
orcl.example.com Server
In the Storage section, click the Tablespaces
link.
z. Tablespaces Click Create.
aa. Create Tablespace: General Enter:
Name: XYZ
In the Datafiles section, click Add.
bb. Add Datafile Enter:
Alias directory: +DATA/ORCL/oradata
Alias name: XYZ_01.dbf
Click Continue.
cc. Create Tablespace: General Click OK.
dd. Tablespaces

5) Create another data file for the XYZ tablespace. Allow the file to receive a default
name. Did both the files get system-assigned names?
Step Screen/Page Description Choices or Values
a. Tablespaces Select XYZ tablespace.
Click Edit.
b. Edit Tablespace: XYZ In the Datafiles section, click Add.
c. Add Datafile Click Continue.
d. Edit Tablespace: XYZ Click Show SQL.
e. Show SQL Note: The SQL provides only the disk group
name.
Click Return.
f. Edit Tablespace: XYZ Click Apply.
g. Edit Tablespace: XYZ In the Datafiles section, note the names of
the two files. One name was specified in the
previous practice step, xyz_01.dbf, and the
other is a system-assigned name.
Click the Database tab.
h. Cluster Database:
orcl.example.com
In the Instances section, click the
+ASM1_host01.example.com link.
i. Automatic Storage Management:
+ASM1_node_name.example.com
Home
Click the Disk Groups tab.
j. Automatic Storage Management
Login
Enter:
Username: SYS
Password: oracle_4U
Click Login.
k. Automatic Storage Management:
+ASM1_host01.example.com
Click the DATA disk group link.
Step Screen/Page Description Choices or Values
DiskGroups
l. Disk Group: DATA General Click the Files tab.
m. Disk Group: DATA Files Expand the ORCL folder.
Expand the DATAFILE folder.
Note that there are two system-named files
associated with the XYZ tablespace.
Expand the oradata folder.
Click the xyz_01.dbf link.
n. XYZ_01.dbf: Properties Observe that the xyz_01.dbf file is an
alias to a file with a system name.
Click OK.
o. Disk Group: DATA Files

6) Move the files for the XYZ tablespace to the hot region of the DATA disk group.
Step Screen/Page Description Choices or Values
a. Disk Group: DATA: Files Click the General tab.
b. Disk Group: DATA: General In the Advanced Attributes section, click
Edit.
c. Edit Advanced Attributes for Disk
Group: DATA
Change Database Compatibility to
11.2.0.0.0
Show SQL.
d. Show SQL Notice the SET ATTRIBUTE clause.
Click Return.
e. Edit Advanced Attributes for Disk
Group: DATA
Click OK.
f. Disk Group: DATA General Click the Files tab.
g. Disk Group: DATA: Files Expand the ORCL folder.
Expand the oradata folder.
Select the xyz_01.dbf file.
Click Edit File.
h. Edit File: XYZ_01.dbf In the Regions section, select the Primary
Hot and Mirror Hot options.
Click Show SQL.
i. Show SQL Note that the SQL statement uses the alias
name and attributes clause.
Click Return.
j. Edit File: XYZ_01.dbf Click Apply.
k. Disk Group: DATA: Files Expand the DATAFILE folder.
Select the XYZ file that is not in the HOT
region.
Click Edit File.
l. Edit File:XYZ.nnn.NNNNN In the Regions section, select Primary Hot
and Mirror Hot options.
Click Show SQL.
m. Show SQL Note that the SQL statement uses the system
Step Screen/Page Description Choices or Values
name and attributes clause.
Click Return.
n. Edit File: XYZ.nnn.NNNNNN Click Apply.
o. Disk Group: DATA: Files

7) Create a template that changes the default placement of files to the hot region.
Step Screen/Page Description Choices or Values
a. Disk Group: DATA: Files Click the Templates tab.
b. Disk Group: DATA: Templates Click Create.
c. Create Template Enter:
Template Name: HOT_FILES
In the Regions section, select the Primary
Hot and Mirror Hot options.
Click Show SQL.
d. Show SQL Note the SQL statement attributes clause.
Click Return.
e. Create Template Click OK.
f. Disk Group: DATA: Templates Note the attributes of the HOT_FILES
template compared with the DATAFILE
template.

8) Add another data file to the XYZ tablespace using the template. Was the file placed in
the HOT region?
Step Screen/Page Description Choices or Values
a. Disk Group: DATA: Templates Click the Database tab.
b. Cluster Database:
orcl.example.com Home
Click the Server tab.
c. Cluster Database:
orcl.example.com Server
In the Storage section, click the Tablespaces
link.
d. Tablespaces Select the XYZ tablespace.
Click Edit.
e. Edit Tablespace: XYZ In the Datafiles section, click Add.
f. Add Datafile Change Template to HOT_FILES.
Click Continue.
g. Edit Tablespace: XYZ Click Show SQL.
h. Show SQL Note the data file specification
"+DATA(HOT_FILES)."
Click Return.
i. Edit Tablespace: XYZ Click Apply.
Click the Database tab.
j. Cluster Database:
orcl.example.com
In the Instances section, click the
+ASM1_node_name.example.com link.
k. Automatic Storage Management:
+ASM1_node_name.example.com
Click the Disk Groups tab.
Step Screen/Page Description Choices or Values
Home
l. Automatic Storage Management
Login
Enter:
Username: SYS
Password: oracle_4U
Click Login.
m. Automatic Storage Management:
+ASM1_host01.example.com
DiskGroups
Click the DATA disk group link.
n. Disk Group: DATA General Click the Files tab.
o. Disk Group: DATA Files Expand the ORCL folder.
Expand the DATAFILE folder.
Notice that there are three system-named
files associated with the XYZ tablespace.
All have the HOT and HOT MIRROR
attributes set.

9) Create a table in the XYZ tablespace.
a) In a terminal window, as the oracle OS user, use the following command to
connect to the ORCL database (the password for SYS is oracle_4U):
sqlplus sys@orcl AS SYSDBA

[oracle@host01 ~]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base has been set to /u01/app/oracle

[oracle@host01 ~]$ sqlplus sys@orcl as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed May 2 09:27:40
2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Enter password: oracle_4U << password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 -
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>
b) Create a large table in the XYZ tablespace called CUST_COPY by executing the
cr_cust_copy.sql script. This script makes a copy of the SH.CUSTOMERS
table into the XYZ tablespace.
SQL> @/home/oracle/labs/less_13/cr_cust_copy.sql
SQL>
SQL> CREATE TABLE Cust_copy TABLESPACE XYZ AS
  2  SELECT * FROM SH.CUSTOMERS;

Table created.

SQL>
10) Query the new table. Select all the rows to force some read activity with the
command: SELECT * FROM CUST_COPY. Use the SET PAGESIZE 300 command
to speed up the display processing.
SQL> SET PAGESIZE 300
SQL> SELECT * FROM CUST_COPY;
/* rows removed */
100055 Andrew Clark
F
1978 Married 77 Cumberland Avenue
74673 Duncan 51402
SC
52722 52790
260-755-4130 J: 190,000 - 249,999
11000
Clark@company.com Customer total 52772
01-JAN-98 A


55500 rows selected.

SQL>


11) View the I/O statistics by region. Use EM to view the statistics, and then repeat using
ASMCMD.
a) View the I/O statistics by region with Enterprise Manager.
Step Screen/Page Description Choices or Values
a. Disk Group: DATA Files Click the Performance tab.
b. Disk Group: DATA: Performance In the Disk Group I/O Cumulative Statistics
section, observe the values for Hot Reads
and Hot Writes.

b) In a terminal window, on your first node as the grid user, set the oracle
environment for the +ASM1 instance. View the I/O statistics by region using
ASMCMD.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ asmcmd

ASMCMD> iostat --io --region -G DATA
Group_Name  Dsk_Name   Reads   Writes  Cold_Reads  Cold_Writes  Hot_Reads  Hot_Writes
DATA        ASMDISK01  37273   96572   20565       56990        2          0
DATA        ASMDISK02  11367   121355  2637        85453        2          0
DATA        ASMDISK03  252111  106941  245150      72748        0          0
DATA        ASMDISK04  21430   63937   21283       18831        1          0
ASMCMD> exit
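The Hot_Reads and Hot_Writes columns above confirm that only a trickle of I/O has touched the hot region so far. Totaling the hot-read column across disks can be sketched with awk; the rows below are reflowed from the output above, and the column positions are an assumption of that layout:

```shell
# Each row: group, disk, reads, writes, cold_reads, cold_writes, hot_reads, hot_writes
total_hot_reads=$(awk '{sum += $7} END {print sum}' <<'EOF'
DATA ASMDISK01 37273 96572 20565 56990 2 0
DATA ASMDISK02 11367 121355 2637 85453 2 0
DATA ASMDISK03 252111 106941 245150 72748 0 0
DATA ASMDISK04 21430 63937 21283 18831 1 0
EOF
)
echo "total hot reads across DATA disks: $total_hot_reads"
```

On a live system the same pipeline would read from `asmcmd iostat --io --region -G DATA` instead of the here-document.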


12) Drop the tablespaces and templates created in this practice.
a) As the oracle OS user, connect to the orcl database, and then use the
drop_XYZ.sql script to drop the XYZ tablespace.
[oracle@host01 ~]$ sqlplus sys@orcl as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed May 2 09:41:59
2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Enter password: oracle_4U << password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 -
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @/home/oracle/labs/less_13/drop_XYZ.sql
SQL>
SQL> DROP TABLESPACE XYZ INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

SQL>
SQL> EXIT;
Disconnected from Oracle Database 11g Enterprise Edition
Release 11.2.0.3.0
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle@host01 ~]$

b) As the grid OS user, use asmcmd to remove the HOT_FILES template.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$ asmcmd
ASMCMD> rmtmpl -G DATA HOT_FILES
ASMCMD> exit
[grid@host01 ~]$


Practices for Lesson 14
In this practice you will create, register, and mount an ACFS file system. In addition, you
will manage ACFS snapshots.
Practi ce 14-1: Managi ng ACFS
In this practice, you will create, register, and mount an ACFS file system for general use.
You will see the ACFS modules that are loaded for ACFS. You will create, use, and
manage ACFS snapshots.
1) Open a terminal window on your first node and become the root user. Use the
lsmod command to list the currently loaded modules. Use the grep command to
display only the modules that have the ora string in them. Note the first three modules
in the list below. These modules are required to enable ADVM and ACFS. The
oracleasm module is loaded to enable ASMLib management of the ASM disks.
Check all three nodes.
/* on host01 */

[root@host01]# lsmod | grep ora
oracleacfs 787588 2
oracleadvm 177792 6
oracleoks 226784 2 oracleacfs,oracleadvm
oracleasm 46356 1


/* on host02 */
[root@host01]# ssh host02 lsmod | grep ora
root@host02's password: 0racle << password is not displayed
oracleacfs 787588 3
oracleadvm 177792 7
oracleoks 226784 2 oracleacfs,oracleadvm
oracleasm 46356 1

/* on host03 */

[root@host01 ~]# ssh host03 lsmod | grep ora
root@host03's password: 0racle << password is not displayed
oracleacfs 787588 0
oracleadvm 177792 0
oracleoks 226784 2 oracleacfs,oracleadvm
oracleasm 46356 1
[root@host01 ~]# exit
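The check above amounts to: every module in the ADVM/ACFS stack must appear in the lsmod output. A small sketch of that test against sample rows (the rows are copied from the host01 output above; this does not query a live kernel):

```shell
# Sample lsmod output, trimmed to the rows of interest.
mods='oracleacfs 787588 2
oracleadvm 177792 6
oracleoks 226784 2 oracleacfs,oracleadvm
oracleasm 46356 1'

missing=""
for m in oracleacfs oracleadvm oracleoks oracleasm; do
  # Each module name must start a line of the lsmod output.
  echo "$mods" | grep -q "^$m " || missing="$missing $m"
done
echo "missing modules:${missing:- none}"
```

On a real node you would feed `lsmod` output into the same loop instead of the sample variable.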

2) Scenario: Your database application creates a number of image files stored as
BFILEs and external tables. These must be stored on a shared resource. An ACFS
file system meets that requirement. First, create an ASM disk group strictly for ACFS
volumes. Create an ASM volume and the ACFS file system. The ACFS volume
should be 3 GB on the ACFS disk group. The mount point should be
/u01/app/oracle/acfsmount/images. These operations can be done with
ASMCA, ASMCMD, Enterprise Manager, or SQL*Plus. The ASMCMD solution is
shown here.
a) As the grid user, use ASMCA to create a disk group called ACFS.
Step Screen/Page Description Choices or Values
a. Disk Groups Click Create.
b. Create Disk Group

Enter ACFS in the Disk Group Name field.
Select Normal for the redundancy level.
Select ASMDISK11, ASMDISK12,
ASMDISK13, and ASMDISK14. Click
Show Advanced Options and set ASM
Compatibility to 11.2.0.0.0 and ADVM
Compatibility to 11.2.0.0.0. Click OK.
c. Disk Groups Check that ACFS is mounted on all three
nodes, then exit ASMCA.
b) Open another terminal window as the grid OS user on your first node.
[grid@host01]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
c) Start ASMCMD as the grid OS user.
[grid@host01]$ asmcmd
ASMCMD>
d) Create a volume using the volcreate command.
ASMCMD> volcreate -G ACFS -s 3G IMAGES
ASMCMD>
e) Find the volume device name. Use the volinfo command.
ASMCMD> volinfo -G ACFS -a
Diskgroup Name: ACFS

Volume Name: IMAGES
Volume Device: /dev/asm/images-408
State: ENABLED
Size (MB): 3072
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
ASMCMD>
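The `Volume Device` value just shown is exactly what the upcoming mkfs and mount commands need, so scripts often scrape it from volinfo rather than retype it. A sketch against the line above (the `-408` suffix is system-specific and varies per cluster):

```shell
# Sample volinfo line; the numeric suffix of the device differs per system.
LINE='Volume Device: /dev/asm/images-408'

# Third whitespace-separated field is the device path.
DEVICE=$(printf '%s\n' "$LINE" | awk '{print $3}')
echo "mkfs -t acfs $DEVICE"
```

On a live cluster the same extraction would run over `asmcmd volinfo -G ACFS IMAGES` output.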
f) As the root user, create an ACFS file system in the IMAGES volume, using
the terminal window that you used in step 1. When using the mkfs -t acfs
command, the volume device must be supplied. Use the volume device name that
you found in the previous step.
[root@host01 ~]# mkfs -t acfs /dev/asm/images-408
mkfs.acfs: version = 11.2.0.3.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/images-408
mkfs.acfs: volume size = 3221225472
mkfs.acfs: Format complete.
[root@host01 ~]#
g) Mount the volume. Using the terminal window that you used in step 1, as the
root user, create the mount point directory and mount the volume. The volume
device is used again in the mount command. Enter the mount command all on one
line. Repeat these commands on the second and third nodes of the cluster as the
root user.
/* On first node */

[root@host01 ~]# mkdir -p /u01/app/oracle/acfsmount/images
[root@host01 ~]# mount -t acfs /dev/asm/images-408
/u01/app/oracle/acfsmount/images

/* On second node */

[root@host01 ~]# ssh host02
root@host02's password: 0racle << password not displayed
[root@host02 ~]# mkdir -p /u01/app/oracle/acfsmount/images
[root@host02 ~]# mount -t acfs /dev/asm/images-408
/u01/app/oracle/acfsmount/images
[root@host02 ~]# exit
logout


Connection to host02 closed.
[root@host01 ~]#

/* On third node*/

[root@host01]# ssh host03
root@host03's password: 0racle << password not displayed
[root@host03]# mkdir -p /u01/app/oracle/acfsmount/images
[root@host03]# mount -t acfs /dev/asm/images-408
/u01/app/oracle/acfsmount/images
[root@host03]# exit
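The three per-node blocks above repeat one pattern, so on a larger cluster it is natural to loop. A minimal sketch that only assembles the commands rather than running them (node names and the device name are taken from this lab; set DRY_RUN=0 only on a real cluster with root ssh access):

```shell
DEVICE=/dev/asm/images-408                      # from volinfo; varies per system
MOUNTPOINT=/u01/app/oracle/acfsmount/images
DRY_RUN=1
ASSEMBLED=""
for NODE in host01 host02 host03; do
  CMD="ssh root@$NODE \"mkdir -p $MOUNTPOINT && mount -t acfs $DEVICE $MOUNTPOINT\""
  ASSEMBLED="$ASSEMBLED$CMD
"
  if [ "$DRY_RUN" = 1 ]; then
    echo "$CMD"        # just show what would run
  else
    eval "$CMD"        # actually create the mount point and mount on each node
  fi
done
```

Registering the file system in the ACFS mount registry (done later in this practice) is the supported way to make the mounts survive reboots; the loop only covers the first manual mount.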

h) Verify that the volume is mounted.
/* On first node */

[root@host01 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/xvda2           7.7G  3.8G  3.6G  52% /
/dev/xvda1            99M   40M   55M  42% /boot
tmpfs                1.1G  767M  325M  71% /dev/shm
/dev/xvdc1           2.0G  1.7G  248M  88% /home
/dev/xvdd1            15G   11G  4.1G  72% /u01
/dev/xvde1           7.7G  4.0G  3.4G  55% /stage
192.0.2.1:/share     2.9G  1.3G  1.5G  47% /share
/dev/asm/images-408  3.0G  109M  2.9G   4% /u01/app/oracle/acfsmount/images

/* On second node */

[root@host02 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/xvda2           7.7G  3.4G  3.9G  47% /
/dev/xvda1            99M   40M   55M  42% /boot
tmpfs                1.1G  767M  325M  71% /dev/shm
/dev/xvdc1           2.0G   62M  1.9G   4% /home
/dev/xvdd1            15G  9.8G  4.3G  70% /u01
/dev/xvde1           7.7G  4.0G  3.4G  55% /stage
host01:/var/NFS      7.7G  3.8G  3.6G  52% /var/NFS
192.0.2.1:/share     2.9G  1.3G  1.5G  47% /share
/dev/asm/images-408  3.0G  109M  2.9G   4% /u01/app/oracle/acfsmount/images


/* On third node */

[root@host03 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/xvda2           7.7G  3.2G  4.1G  44% /
/dev/xvda1            99M   40M   55M  42% /boot
tmpfs                1.1G  767M  325M  71% /dev/shm
/dev/xvdc1           2.0G   42M  1.9G   3% /home
/dev/xvdd1            15G  9.6G  4.5G  68% /u01
/dev/xvde1           7.7G  4.0G  3.4G  55% /stage
192.0.2.1:/share     2.9G  1.3G  1.5G  47% /share
/dev/asm/images-408  3.0G  109M  2.9G   4% /u01/app/oracle/acfsmount/images


i) Register the volume and mount point. As the root user, run the command (use
your actual device name):
acfsutil registry -a /dev/asm/images-nnn
/u01/app/oracle/acfsmount/images
[root@host01]# acfsutil registry -a /dev/asm/images-408
/u01/app/oracle/acfsmount/images
acfsutil registry: mount point
/u01/app/oracle/acfsmount/images successfully added to Oracle
Registry
j) As the root user, view the registry status of the volume with the acfsutil
registry -l command.
[root@host01]# acfsutil registry -l
Device : /dev/asm/images-407 : Mount Point :
/u01/app/oracle/acfsmount/images : Options : none : Nodes :
all : Disk Group : ACFS : Volume : IMAGES
[root@host01]#
3) An ACFS file system can be resized, and it will automatically resize the volume if
there is sufficient space in the disk group. The images file system is near capacity.
Increase the file system by 256 MB. As the root user, use the acfsutil size
+256M /u01/app/oracle/acfsmount/images command.
[root@host01]# acfsutil size +256M
/u01/app/oracle/acfsmount/images
acfsutil size: new file system size: 3489660928 (3328MB)
[root@host01 ~]#
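acfsutil reports the new size both in bytes and in MB, and the two figures are consistent (1 MB = 1048576 bytes). A one-line check of that arithmetic, using the values printed above:

```shell
# 3489660928 bytes from the acfsutil output above; 3 GB + 256 MB = 3328 MB exactly.
MB=$(awk 'BEGIN { print 3489660928 / 1048576 }')
echo "new file system size: ${MB} MB"
```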
4) As the grid user, check the size of the volume after the resize operation with
asmcmd volinfo.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ asmcmd volinfo -G ACFS IMAGES
Diskgroup Name: ACFS

Volume Name: IMAGES
Volume Device: /dev/asm/images-408
State: ENABLED
Size (MB): 3328
Resize Unit (MB): 32
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /u01/app/oracle/acfsmount/images

[grid@host01 ~]$
5) The IMAGES file system holds the image files for the orcl database owned by the
oracle user. As the root user, change the permissions on the mount point so that
the oracle user will own the file system on all three nodes.
[root@host01]# chown oracle:dba
/u01/app/oracle/acfsmount/images

/* On second node */

[root@host01 ~]# ssh host02 chown oracle:dba
/u01/app/oracle/acfsmount/images
root@host02's password: 0racle << password is not displayed

/* on third node */

[root@host01]# ssh host03 chown oracle:dba
/u01/app/oracle/acfsmount/images
root@host03's password: 0racle << password is not displayed

[root@host01 ~]#
6) As the oracle user, transfer a set of images to
/u01/app/oracle/acfsmount/images. Unzip the images in
/home/oracle/labs/less_14/images.zip to the IMAGES file system.
[oracle@host01]$ cd /home/oracle/labs/less_14
[oracle@host01 less_14]$ unzip images.zip -d
/u01/app/oracle/acfsmount/images
Archive: images.zip
creating: /u01/app/oracle/acfsmount/images/gridInstall/
inflating:
/u01/app/oracle/acfsmount/images/gridInstall/asm.gif
inflating:
/u01/app/oracle/acfsmount/images/gridInstall/bullet2.gif

Lines removed

inflating:
/u01/app/oracle/acfsmount/images/gridInstall/view_image.gif
extracting:
/u01/app/oracle/acfsmount/images/gridInstall/white_spacer.gif
[oracle@host01 less_14]$
7) Verify that the files have been extracted.
[oracle@host01 less_14]$ ls -R
/u01/app/oracle/acfsmount/images
/u01/app/oracle/acfsmount/images:
gridInstall lost+found

/u01/app/oracle/acfsmount/images/gridInstall:
asm.gif t20108.gif t30104.gif t30119d.gif
bullet2.gif t20109a.gif t30105.gif t30119.gif
bullet.gif t20109b.gif t30106.gif t30120a.gif
divider.gif t20110.gif t30107.gif t30120b.gif
gradient.gif t20111a.gif t30108a.gif t30121d.gif
MoveAllButton.gif t20111b.gif t30108.gif t30123a.gif
MoveButton.gif t20111c.gif t30109.gif t30123b.gif
rpm-oracleasm.gif t20111.gif t30110.gif t30123c.gif
show_me.gif t20112.gif t30111.gif t30201.gif
t10101.gif t20113.gif t30112a.gif t30202.gif
t10102.gif t20113h.gif t30112.gif t30203.gif
t10103.gif t20114c.gif t30113a.gif t30204a.gif
t10201.gif t20114login.gif t30113b.gif t30204.gif
t10202.gif t20114server.gif t30114a.gif t30205.gif
t10203.gif t20117add.gif t30114b.gif t30206.gif
t10204.gif t20117crtbs.gif t30114.gif t30207.gif
t10205.gif t20117emctl.gif t30115a.gif t30208.gif
t20101.gif t20117tbs.gif t30115.gif t40101.gif
t20102.gif t20119asm.gif t30116a.gif t40102.gif
t20103.gif t2017emctl.gif t30116b.gif t40104.gif
t20104.gif t30101a.gif t30116c.gif t40105a.gif
t20105.gif t30101b.gif t30116d.gif t40105b.gif
t20106.gif t30101c.gif t30118b.gif Thumbs.db
t20107a.gif t30102.gif t30119b.gif view_image.gif
t20107.gif t30103.gif t30119c.gif white_spacer.gif
ls: /u01/app/oracle/acfsmount/images/lost+found: Permission
denied
[oracle@host01]$
8) Create a snapshot of the IMAGES file system. As the root user, use the acfsutil
utility to execute the command:
/sbin/acfsutil snap create snap_001 \
/u01/app/oracle/acfsmount/images
[root@host01]# /sbin/acfsutil snap create snap_001
/u01/app/oracle/acfsmount/images
acfsutil snap create: Snapshot operation is complete.

9) Find the .ACFS directory and explore the entries. How much space does the
gridInstall directory tree use? How much space does the
.ACFS/snaps/snap_001/gridInstall directory tree use?
[root@host01]# cd /u01/app/oracle/acfsmount/images
[root@host01 images]# ls -la
total 88
drwxrwx--- 5 oracle dba       4096 May 7 23:31 .
drwxr-xr-x 4 root   root      4096 May 7 11:53 ..
drwxr-xr-x 2 oracle oinstall 12288 May 7 16:30 gridInstall
drwx------ 2 root   root     65536 May 7 15:04 lost+found
[root@host01]# du -h gridInstall
2.0M gridInstall
[root@host01]# ls .ACFS
repl snaps
[root@host01 images]# ls .ACFS/snaps
snap_001
[root@host01 images]# ls .ACFS/snaps/snap_001
gridInstall lost+found
[root@host01]# du -h .ACFS/snaps/snap_001/gridInstall
2.0M .ACFS/snaps/snap_001/gridInstall
10) Delete the asm.gif file from the IMAGES file system.
[root@host01 images]# rm gridInstall/asm.gif
rm: remove regular file `gridInstall/asm.gif'? y
11) Create another snapshot of the IMAGES file system.
[root@host01 images]# /sbin/acfsutil snap create snap_002
/u01/app/oracle/acfsmount/images
acfsutil snap create: Snapshot operation is complete.

12) How much space is being used by the snapshots and the files that are stored in the
IMAGES file system? Use the acfsutil info fs command to find this information.
[root@host01 images]# /sbin/acfsutil info fs
/u01/app/oracle/acfsmount/images
ACFS Version: 11.2.0.3.0
flags: MountPoint,Available
mount time: Thu May 3 03:29:51 2012
volumes: 1
total size: 3489660928
total free: 3310538752
primary volume: /dev/asm/images-408
label:
flags: Primary,Available,ADVM
on-disk version: 39.0
allocation unit: 4096
major, minor: 252, 208897
size: 3489660928
free: 3310538752
ADVM diskgroup: ACFS
ADVM resize increment: 33554432
ADVM redundancy: mirror
ADVM stripe columns: 4
ADVM stripe width: 131072
number of snapshots: 2
snapshot space usage: 2260992
replication status: DISABLED
[root@host01 images]#
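Because ACFS snapshots are sparse, copy-on-write images, the two snapshots above consume only 2260992 bytes (about 2.2 MB) between them rather than two full copies of the file system. Extracting and converting that figure from the sample line can be sketched as:

```shell
# Line copied from the acfsutil info fs output above.
LINE='    snapshot space usage:  2260992'

# Split on "colon plus spaces"; the second field is the byte count.
BYTES=$(printf '%s\n' "$LINE" | awk -F': *' '{print $2}')
MB=$(awk -v b="$BYTES" 'BEGIN { printf "%.1f", b / 1048576 }')
echo "snapshot space usage: $BYTES bytes (~$MB MB)"
```

On a live system the input line would come from `acfsutil info fs <mountpoint> | grep 'snapshot space usage'`.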
13) Restore the asm.gif file to the file system from the snapshot. This can be done with
OS commands or from Enterprise Manager. This solution uses the OS commands.
a) The snapshot is a sparse-file representation of the file system, so you can browse
the snapshot as if it were a full file system. All the OS file commands are
functional. Find the asm.gif file in the snapshot. The find command used is
shown in this solution. Perform this operation as the root user.
[root@host01]$ cd /u01/app/oracle/acfsmount/images
[root@host01 images]# find .ACFS -name asm.gif
find: .ACFS/snaps/snap_001/lost+found: Permission denied
.ACFS/snaps/snap_001/gridInstall/asm.gif
find: .ACFS/snaps/snap_002/lost+found: Permission denied
b) Restore the asm.gif file by copying from the snapshot to the original location.
[root@host01 images]$ cp
./.ACFS/snaps/snap_001/gridInstall/asm.gif
./gridInstall/asm.gif
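The copy-back above generalizes to any file: a snapshot mirrors the file system's directory tree under .ACFS/snaps/&lt;name&gt;, so the restore source is simply the snapshot root joined with the file's relative path. A sketch that assembles (but deliberately does not run) the restore command, using the paths from this practice:

```shell
FS_ROOT=/u01/app/oracle/acfsmount/images
SNAP=snap_001
REL_PATH=gridInstall/asm.gif

SRC="$FS_ROOT/.ACFS/snaps/$SNAP/$REL_PATH"   # file as it was at snapshot time
DST="$FS_ROOT/$REL_PATH"                     # live location to restore into
echo "cp $SRC $DST"
```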
14) Dismount the Images file system from all three nodes. This command must be
executed by the root user. If the directory is busy, use lsof to find the user that is
holding the directory open and stop that session.
[root@host01]# umount /u01/app/oracle/acfsmount/images
umount: /u01/app/oracle/acfsmount/images: device is busy
umount: /u01/app/oracle/acfsmount/images: device is busy
umount: /u01/app/oracle/acfsmount/images: device is busy
umount: /u01/app/oracle/acfsmount/images: device is busy

[root@host01]# lsof +d /u01/app/oracle/acfsmount/images
COMMAND   PID  USER FD  TYPE DEVICE   SIZE NODE NAME
lsof     5770  root cwd  DIR 252,5634 4096    2 /u01/app/oracle/acfsmount/images
lsof     5771  root cwd  DIR 252,5634 4096    2 /u01/app/oracle/acfsmount/images
bash    23971  root cwd  DIR 252,5634 4096    2 /u01/app/oracle/acfsmount/images
/* change directories in the root session, then continue */

[root@host01]# cd

[root@host01]# umount /u01/app/oracle/acfsmount/images

[root@host01 ~]# ssh host02 umount /u01/app/oracle/acfsmount/images
root@host02's password: 0racle  << password is not displayed

[root@host01 ~]# ssh host03 umount /u01/app/oracle/acfsmount/images
root@host03's password: 0racle  << password is not displayed

[root@host01 ~]#
15) Remove the IMAGES ACFS file system and volume using Enterprise Manager.
Step | Screen/Page Description | Choices or Values
a. | Enter the EM URL in the browser. | https://host01.example.com:1158/em
b. | Login | Name: SYS; Password: oracle_4U; Connect as SYSDBA; Click Login.
c. | Cluster Database: orcl.example.com Home | In the Instances section, click the link for +ASM1_host01.example.com.
d. | Automatic Storage Management: +ASM1_host01.example.com Home | Click the ASM Cluster File System tab.
e. | Automatic Storage Management Login | Username: SYS; Password: oracle_4U; Click Login.
f. | Automatic Storage Management: +ASM1_host01.example.com ASM Cluster File System | Select /u01/app/oracle/acfsmount/images; Click Deregister.
g. | ASM Cluster File System Host Credentials: host01.example.com | Username: grid; Password: 0racle; Click Continue.
h. | Deregister ASM Cluster File System: /dev/asm/images-408 | Click OK.
i. | Automatic Storage Management: +ASM1_host01.example.com ASM Cluster File System | Click the Disk Groups tab.
j. | Automatic Storage Management: +ASM1_host01.example.com Disk Groups | Click the link for the ACFS disk group.
k. | Disk Group: ACFS General | Click the Volumes tab.
l. | Disk Group: ACFS Volumes | Select the IMAGES volume; Click Delete.
m. | Confirmation | Click Yes.
n. | Disk Group: ACFS General |
o. | Disk Group: ACFS | Click Logout; Close the browser.


Optional Practice 14-2: Uninstalling the RAC Database with DEINSTALL
In this practice, you will uninstall the RAC database using the deinstall tool, leaving the
clusterware installed on three nodes.
1) Open a terminal window as the oracle OS user on your first node and set the
environment with the oraenv command. Enter the name of the database when
prompted.
[oracle@host01]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base has been set to /u01/app/oracle
2) Stop Enterprise Manager.
[oracle@host01]$ export ORACLE_UNQNAME=orcl
[oracle@host01]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
https://host01.example.com:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.
3) Change directories to the oracle user's home directory: /home/oracle.
[oracle@host01]$ cd /home/oracle
4) Start the deinstall tool. The tool automatically discovers the database parameters. A
later prompt allows you to change the discovered parameters; answer n.
[oracle@host01 ~]$ $ORACLE_HOME/deinstall/deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location
/u01/app/oracle/product/11.2.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real
Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location
/u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
/u01/app/11.2.0/grid
The following nodes are part of this cluster:
host01,host02,host03
Checking for sufficient temp space availability on node(s):
'host01,host02,host03'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location:
/u01/app/oraInventory/logs/netdc_check2012-05-04_10-10-04-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location:
/u01/app/oraInventory/logs/databasedc_check2012-05-04_10-10-08-AM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured in this
Oracle home [orcl]:

###### For Database 'orcl' ######

RAC Database
The nodes on which this database has instances: [host01,
host02, host03]
The instance names: [orcl1, orcl2, orcl3]
The local instance name on node: orcl1
The diagnostic destination location of the database:
/u01/app/oracle/diag/rdbms/orcl
Storage type used by the Database: ASM

The details of database(s) orcl have been discovered
automatically. Do you still want to modify the details of orcl
database(s)? [n]: n

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location:
/u01/app/oraInventory/logs/emcadc_check2012-05-04_10-11-43-AM.log

Checking configuration for database orcl
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location :
/u01/app/oraInventory/logs//ocm_check2515.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation
will be performed are: host01,host02,host03
Oracle Home selected for deinstall is:
/u01/app/oracle/product/11.2.0/dbhome_1
Inventory Location where the Oracle home registered is:
/u01/app/oraInventory
The following databases were selected for de-configuration :
orcl
Database unique name : orcl
Storage used : ASM
Will update the Enterprise Manager configuration for the
following database(s): orcl
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
host01 : Oracle Home exists with CCR directory, but CCR is not
configured
host02 : Oracle Home exists with CCR directory, but CCR is not
configured
host03 : Oracle Home exists with CCR directory, but CCR is not
configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to:
'/u01/app/oraInventory/logs/deinstall_deconfig2012-05-04_10-09-48-AM.out'
Any error messages from this session will be written to:
'/u01/app/oraInventory/logs/deinstall_deconfig2012-05-04_10-09-48-AM.err'
######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location:
/u01/app/oraInventory/logs/emcadc_clean2012-05-04_10-11-43-AM.log

Updating Enterprise Manager Database Control configuration for
database orcl
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location:
/u01/app/oraInventory/logs/databasedc_clean2012-05-04_10-12-43-AM.log
Database Clean Configuration START orcl
This operation may take few minutes.
Database Clean Configuration END orcl

Network Configuration clean config START

Network de-configuration trace file location:
/u01/app/oraInventory/logs/netdc_clean2012-05-04_10-20-00-AM.log

De-configuring Listener configuration file on all nodes...
Listener configuration file de-configured successfully.

De-configuring Naming Methods configuration file on all
nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on
all nodes...
Local Net Service Names configuration file de-configured
successfully.

De-configuring Directory Usage configuration file on all
nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location :
/u01/app/oraInventory/logs//ocm_clean2515.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1'
from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on
the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be
removed on local node. The directory is not empty.

Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1'
from the central inventory on the remote nodes 'host03,host02'
: Done

Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on
the remote nodes 'host02,host03' : Done

The Oracle Base directory '/u01/app/oracle' will not be
removed on node 'host02'. The directory is not empty.

The Oracle Base directory '/u01/app/oracle' will not be
removed on node 'host03'. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory
'/tmp/deinstall2012-05-04_10-09-17AM' on node 'host01'
Clean install operation removing temporary directory
'/tmp/deinstall2012-05-04_10-09-17AM' on node 'host02,host03'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Updated Enterprise Manager configuration for database orcl
Successfully de-configured the following database instances :
orcl
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR
configuration
CCR clean is finished
Successfully detached Oracle home
'/u01/app/oracle/product/11.2.0/dbhome_1' from the central
inventory on the local node.
Successfully deleted directory
'/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.
Successfully detached Oracle home
'/u01/app/oracle/product/11.2.0/dbhome_1' from the central
inventory on the remote nodes 'host03,host02'.
Successfully deleted directory
'/u01/app/oracle/product/11.2.0/dbhome_1' on the remote nodes
'host02,host03'.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'host03' at the
end of the session.
Oracle deinstall tool successfully cleaned up temporary
directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END ###########

[oracle@host01 ~]$

Copyright 2012, Oracle and/or its affiliates. All rights reserved.
DHCP and DNS Configuration for GNS
Objectives
After completing this lesson, you should be able to:
Configure or communicate DHCP configuration needs in
support of GNS
Configure or communicate DNS configuration needs in
support of GNS
In a static configuration, all addresses are assigned by administrative action and given names
that resolve with whatever name service is provided for the environment. This is universal
historic practice because there has been no realistic alternative. One result is significant
turnaround time to obtain the address, and to make the name resolvable. This is undesirable
for dynamic reassignment of nodes from cluster to cluster and function to function.
DHCP provides for dynamic configuration of the host IP address but does not provide a good
way to produce good names that are useful to external clients. As a result, it is rarely used in
server complexes because the point of a server is to provide service, and clients need to be
able to find the server. This is solved in the current release by providing a service (GNS) for
resolving names in the cluster, and defining GNS to the DNS service used by the clients.
To properly configure GNS to work for clients, it is necessary to configure the higher level
DNS to forward or delegate a subdomain to the cluster and the cluster must run GNS on an
address known to the DNS, by number. This GNS address is maintained as a VIP in the
cluster, which is run on a single node, and a GNS daemon (GNSD) process that follows that VIP around the
cluster and serves names in the subdomain. To fully implement GNS, you need four things:
DHCP service for the public network in question
A single assigned address in the public network for the cluster to use as the GNS VIP
A forward from the higher-level DNS for the cluster to the GNS VIP
A running cluster with properly configured GNS
GNS: Overview
In a static configuration, all addresses are assigned by
administrative action.
DHCP provides dynamic configuration of host IP
addresses.
DHCP does not provide a good way to produce good names
that are useful to external clients.
Because of this, it is rarely used in server complexes.
To configure GNS:
It is necessary to configure the higher-level DNS to forward or
delegate a subdomain to the cluster
The cluster must run GNS on an address known to the DNS
With DHCP, a host needing an address sends a broadcast message to the network. A DHCP
server on the network can respond to the request, and give back an address, along with other
information such as what gateway to use, what DNS server(s) to use, what domain to use,
what NTP server to use, and so on.
In the request, a host typically identifies itself to the server by sending the MAC address of the
interface in question. It can also send other values. When using VIPS, the client identifier sent
is a CRS resource name instead of a MAC address. Therefore, instead of sending the MAC
address 00:04:23:A5:B2:C0, something similar to ora.hostname.vip is sent. This lets
Clusterware easily move the address from one physical host to another, because it is not
bound to a particular hardware MAC address. There are three ways to get DHCP service:
It can be there already, provided by the network administrator.
You can provide it yourself from a host on the network.
It can be provided with an appliance.
In production environments, the DHCP service would most likely already be configured. In test
environments, you either set up your own server or use one that comes in a box. For this
course, you will concentrate on setting up your own DHCP server.
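The request described above can be sketched at the byte level. The following Python fragment is illustrative only: build_discover is our name, and the resource-name client identifier simply mirrors the ora.hostname.vip convention described above. It assembles a minimal DHCPDISCOVER whose option 61 carries a name instead of a MAC address:

```python
import struct

def build_discover(client_id: bytes, xid: int = 0x1234) -> bytes:
    """Build a minimal DHCPDISCOVER packet (RFC 2131/2132 layout)."""
    # Fixed header: op=1 (request), htype=1 (Ethernet), hlen=6, hops=0,
    # transaction id, secs=0, flags with the broadcast bit set.
    pkt = struct.pack("!BBBBIHH", 1, 1, 6, 0, xid, 0, 0x8000)
    pkt += b"\x00" * 16            # ciaddr, yiaddr, siaddr, giaddr
    pkt += b"\x00" * 16            # chaddr (no hardware address needed here)
    pkt += b"\x00" * 192           # sname + file fields (unused)
    pkt += b"\x63\x82\x53\x63"     # DHCP magic cookie
    pkt += bytes([53, 1, 1])       # option 53: message type = DISCOVER
    # Option 61: client identifier. Clusterware sends a resource name
    # (for example, ora.host01.vip) here rather than a MAC address.
    pkt += bytes([61, len(client_id)]) + client_id
    pkt += b"\xff"                 # end option
    return pkt

pkt = build_discover(b"ora.host01.vip")
# The client identifier travels as a TLV option in the options area.
assert bytes([61, 14]) + b"ora.host01.vip" in pkt
```

Because the server keys its lease on this identifier rather than on hardware, the same lease follows the VIP resource from node to node.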
DHCP Service
With DHCP, a host needing an address sends a broadcast
message to the network.
A DHCP server on the network responds to the request,
and assigns an address, along with other information such
as:
What gateway to use
What DNS servers to use
What domain to use
In the request, a host typically sends a client identifier,
usually the MAC address of the interface in question.
The identifier sent by Clusterware is not a MAC address,
but a VIP resource name such as ora.hostname.vip.
Because the IP address is not bound to a fixed MAC
address, Clusterware can move it between hosts as
needed.
When you use DHCP for the public network, you will need two addresses per host (host
address and VIP), plus three for clusterwide SCAN. The GNS VIP cannot be obtained from
DHCP, because it must be known in advance, so it must be statically assigned.
For this example, make the following assumptions:
The hosts have known addresses on the public network that are accessible by their host
name so that they may be reached when Clusterware is not running.
DHCP must provide four addresses, one per node, plus three for the SCAN.
You have a DHCP server. It is a single point of failure, so it must always be up.
The subnet and netmask on the interface to be serviced is
10.228.212.0/255.255.252.0.
The address range that the DHCP server will serve is from 10.228.212.10 through
10.228.215.254.
The gateway address is 10.228.212.1.
You have two name servers: M.N.P.Q and W.X.Y.Z.
The domain that your hosts are in for DNS search path purposes is us.example.com.
You have root access.
You have the RPM for the DHCP server if it is not already installed.
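The subnet arithmetic behind these assumptions can be sanity-checked with Python's standard ipaddress module (a quick check, not part of the course setup):

```python
import ipaddress

# 10.228.212.0 with netmask 255.255.252.0 is a /22 network.
net = ipaddress.ip_network("10.228.212.0/255.255.252.0")
print(net)                          # 10.228.212.0/22
print(net.broadcast_address)        # 10.228.215.255, matching the DHCP config

# Both ends of the DHCP pool must fall inside the subnet.
pool_start = ipaddress.ip_address("10.228.212.10")
pool_end = ipaddress.ip_address("10.228.215.254")
assert pool_start in net and pool_end in net

# The gateway lives in the same subnet.
assert ipaddress.ip_address("10.228.212.1") in net
```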
DHCP Configuration: Example
Assumptions about the environment:
The hosts have known addresses on the public network.
DHCP provides one address per node plus three for the SCAN.
The subnet and netmask on the interface to be serviced is
10.228.212.0/255.255.252.0.
The address range that the DHCP server will serve is
10.228.212.10 through 10.228.215.254.
The gateway address is 10.228.212.1.
The name server is known to the cluster nodes.
The domain your hosts are in is us.example.com.
You have root access.
You have the RPM for the DHCP server if it is not already
installed.
To install and configure DHCP, perform the following steps:
1. As the root user, install the DHCP rpm: # rpm -ivh dhcp-3.0.1-62.EL4.rpm
2. The DHCP configuration file is /etc/dhcpd.conf. In the current example, the minimal
configuration for the public network will look similar to the following:
subnet 10.228.212.0 netmask 255.255.252.0
{
default-lease-time 43200;
max-lease-time 86400;
option subnet-mask 255.255.252.0;
option broadcast-address 10.228.215.255;
option routers 10.228.212.1;
option domain-name-servers M.N.P.Q, W.X.Y.Z;
option domain-name "us.example.com";
pool {
range 10.228.212.10 10.228.215.254;
} }
3. Start the DHCP service: # /etc/init.d/dhcpd start
If you encounter any issues, check /var/log/messages for errors. You can adjust the
lease time to suit your needs within the subnet.
DHCP Configuration Example
The /etc/dhcpd.conf file:
subnet 10.228.212.0 netmask 255.255.252.0
{
default-lease-time 43200;
max-lease-time 86400;
option subnet-mask 255.255.252.0;
option broadcast-address 10.228.215.255;
option routers 10.228.212.1;
option domain-name-servers M.N.P.Q, W.X.Y.Z;
option domain-name "us.example.com";
pool {
range 10.228.212.10 10.228.215.254;
}
}
Host name resolution uses the gethostbyname family of library calls. These calls go through
a configurable search path of name space providers, typically including a local /etc/hosts
file, DNS service, and possibly other directories, such as NIS and/or LDAP. On Linux, these
are usually defined in /etc/nsswitch.conf. For example, the line: hosts: files dns
nis says to look in /etc/hosts, then consult DNS, and then NIS.
When doing a lookup for a non-qualified name, the library will look for the unadorned name in
all the name space providers listed in the slide. If no answer is found, it will then successively
apply domains from the search entry in the file /etc/resolv.conf. This also defines the
DNS servers consulted when there is a dns entry in /etc/nsswitch.conf, for example:
search us.example.com example.com
nameserver M.N.P.Q
nameserver W.X.Y.Z
Usually the machine's domain is the first entry in the search path, followed by others of
common use. DNS messages are sent as UDP packets, usually to the reserved port 53. A
query may ask for particular types of record, or all records for a name. Name resolution for
IPv4 uses A records for addresses and CNAME records for aliases (canonical names) that
must be re-resolved. For IPv6, addresses are four times as large and an AAAA record is used.
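This resolution path is easy to observe from any language that wraps the gethostbyname family. The short Python sketch below (which assumes a typical Linux host with localhost defined in /etc/hosts) shows both the classic call and its modern replacement:

```python
import socket

# gethostbyname() walks the providers listed in /etc/nsswitch.conf
# (e.g. "hosts: files dns nis"): /etc/hosts first, then DNS, then NIS.
addr = socket.gethostbyname("localhost")
print(addr)        # typically 127.0.0.1, served from the files provider

# getaddrinfo() is the modern equivalent; on a dual-stacked host it
# also returns AAAA (IPv6) results alongside the A records.
for family, _, _, _, sockaddr in socket.getaddrinfo("localhost", None):
    print(family.name, sockaddr[0])
```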
DNS Concepts
Host name resolution uses the gethostbyname family of
library calls.
These calls do a configurable search of name space
providers, typically including:
Local /etc/hosts file
DNS service
Other directory services such as NIS or LDAP
DNS traffic is sent as UDP packets, usually to port 53.
A query may ask for particular types of record, or all
records for a name.
Name resolution for IPv4 uses A records for addresses,
and CNAME records for aliases that must be re-resolved.
PTR records are used for reverse lookups (ask for the name, given an address).
SRV records are used to define service offerings, providing both address and port, and are
used in the DNS-SD protocol that you use for multicast discovery. TXT records contain
arbitrary information, and are used in DNS-SD to carry attributes of a service.
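Because the message format is compact, a query of the kind described here can be assembled by hand. The following Python sketch uses only the standard library and performs no network I/O; build_query is our helper, not an Oracle tool. It encodes a single A-record question of the sort a client sends to UDP port 53:

```python
import struct

# DNS record type codes from RFC 1035 / RFC 2782 / RFC 3596
A, CNAME, PTR, TXT, AAAA, SRV, ANY = 1, 5, 12, 16, 28, 33, 255

def build_query(name: str, qtype: int, qid: int = 0x1A2B) -> bytes:
    """Encode a single-question DNS query datagram."""
    # Header: id, flags (recursion desired), QDCOUNT=1, other counts 0
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; a zero byte terminates it
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split("."))
    question = qname + b"\x00" + struct.pack("!HH", qtype, 1)  # QCLASS=IN
    return header + question

q = build_query("prod.cluster01.us.example.com", A)
# sock.sendto(q, (gns_vip, 53)) would hand this datagram to GNS directly.
```

Swapping the qtype for PTR, SRV, or TXT produces the other query kinds mentioned above without changing the framing.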
For GNS to function properly, it needs to receive queries for all names in a subdomain (zone)
from anywhere in the corporation. This is done by having the corporate DNS delegate the
subdomain to the cluster. The two key pieces of data needed are the full name of the
subdomain being created and the address where GNS will listen for queries.
You typically use the cluster name as the subdomain. The GNS address must be statically
assigned and on the public subnet of the cluster. It is virtual, and will be moved from host to
host within the cluster to ensure the presence of the GNS service.
When using the common ISC BIND V8 name server, the network administrator sets up the
zone delegation with an entry in the server configuration that looks like the following:
zone cluster01.us.example.com{
type forward;
forward only;
forwarders {
10.228.212.2 port 53;
};
};
DNS Forwarding for GNS
To work properly, GNS needs to receive queries for all
names in a subdomain from anywhere in the corporation.
This is done by having the corporate DNS delegate the
subdomain to the cluster.
DNS must know the full name of the subdomain and the
address where GNS will listen for queries.
This clause from the /etc/named.conf file configures
zone delegation for cluster01.us.example.com.
zone cluster01.us.example.com{
type forward;
forward only;
forwarders {
10.228.212.2 port 53;
};
};
Here, the subdomain is cluster01.us.example.com. When the us.example.com DNS gets any
query for anything under cluster01.us.example.com, it will send it to the server at that address
and port, and return any results received. It is also likely to cache the returned result for the
time-to-live (TTL) in the returned answer.
This does not establish any address for the name cluster01.us.example.com. Rather, it
creates a way of resolving anything underneath, such as prod.cluster01.us.example.com.
In production sites, it is necessary to work with the network administrators to change the
configuration of the DNS system, wherever it may reside.
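Once the delegation is in place, it can be verified from any machine that uses the corporate DNS. The command below is illustrative only; prod.cluster01.us.example.com is a placeholder for a name that GNS actually serves in your cluster:

```
$ dig prod.cluster01.us.example.com
```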
If you would like to set up something for testing, you can configure your own DNS service.
You now go through the exercise of configuring a local name server that all the other
machines use for all addresses. Queries for the GNS domain will be sent to the GNS service,
and all others will be sent to the corporate DNS. This server will cache all intermediate
results, so it is configured almost identically to a caching-only DNS server.
You will require the full name of the subdomain being created and the address where GNS
will listen for queries. The GNS address in this example will be 10.228.212.2. Other
requirements are listed as follows:
You must have a machine to use as your DNS server. Remember, it is a single point of
failure, so it must always be up. The address for the new DNS server is 10.228.212.3.
Do not confuse this with the GNS address.
The parent name servers are M.N.P.Q and W.X.Y.Z.
You have access to the root account.
You have the RPM for the DNS server.
A summary of the steps required to install and configure the name server are listed as follows:
1. As root, install the BIND (DNS) rpm:
# rpm -ivh bind-9.2.4-30.rpm
DNS Configuration: Example
The following is assumed about the environment:
The cluster subdomain is cluster01.us.example.com.
The address GNS will listen on is 10.228.212.2 port 53.
The address for the new DNS server is 10.228.212.3.
The parent name servers are M.N.P.Q and W.X.Y.Z.
A summary of the steps is shown as follows:
1. As root, install the BIND (DNS) rpm: # rpm -ivh bind-9.2.4-30.rpm
2. Configure delegation in /etc/named.conf.
3. Populate the cache file: $ dig . ns > /var/named/db.cache
4. Populate /var/named/db.127.0.0 to handle reverse lookups.
5. Start the name service (named): # /etc/init.d/named start
6. Modify /etc/resolv.conf on all nodes.
2. Configure delegation in /etc/named.conf.
3. Populate the cache file:
$ dig . ns > /var/named/db.cache
4. Populate /var/named/db.127.0.0 to handle reverse lookups.
5. Start the name service (named):
# /etc/init.d/named start
6. Modify /etc/resolv.conf to correctly configure the name space search order on all
nodes.
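For example, each node's /etc/resolv.conf might then look like the following sketch. The search order and resolver options shown are assumptions based on the example addresses used in this appendix:

```
search cluster01.us.example.com us.example.com
nameserver 10.228.212.3
options attempts:2 timeout:1
```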
1. The first action you will take is to install the most current BIND DNS rpm for your kernel
version. This should be done as the root user. For example:
# rpm -ivh bind-9.2.4-30.rpm
2. Next, configure the behavior of the DNS server, named, by editing the
/etc/named.conf file. You will define the working directory, the root nameserver
cache file, reverse lookup configuration, and delegation for your cluster subdomain.
# vi /etc/named.conf
options {
    directory "/var/named";            ## The named working directory (default value) ##
    forwarders { M.N.P.Q; W.X.Y.Z; };  ## Where to resolve unknown addresses. These are the
    forward only;                      ## same corporate nameservers used in the DHCP example ##
};
DNS Configuration: Detail
1. As the root user, install the BIND DNS RPM.
2. Configure the behavior of the DNS server, named, by
editing the /etc/named.conf file. Define the:
Working directory
Nameserver cache file
Reverse lookup configuration
Delegation for your cluster sub-domain
# rpm -ivh bind-9.2.4-30.rpm ## Install the current RPM ##
zone "." in {
type hint;
file "db.cache"; ## Defines the cache for root nameservers as
/var/named/db.cache ##
};
zone "0.0.127.in-addr.arpa" in {
type master;
file "db.127.0.0"; ## localhost reverse lookup file ##
};
zone "cluster01.us.example.com" {
    type forward;
    forward only;
    forwarders {
        10.228.212.2 port 53;
    };
};
## This section defines the cluster GNS IP address to which requests for address
## resolution for the cluster subdomain cluster01.us.example.com are sent ##
3. Populate the cache file /var/named/db.cache.
The zone . section of named.conf establishes root name servers. You need to
populate this with data, and can do so with a simple lookup:
# dig . ns > /var/named/db.cache
The output of this command looks something like this:
[root@host01 ~]# dig ns .
...
;. IN NS
;; ANSWER SECTION:
. 288 IN NS a.root-servers.net.
. 288 IN NS b.root-servers.net.
. 288 IN NS c.root-servers.net.
. 288 IN NS d.root-servers.net.
. 288 IN NS e.root-servers.net.
...
;; ADDITIONAL SECTION:
a.root-servers.net. 459 IN A 198.41.0.4
a.root-servers.net. 459 IN AAAA 2001:503:ba3e::2:30
b.root-servers.net. 459 IN A 192.228.79.201
c.root-servers.net. 459 IN A 192.33.4.12
...
DNS Configuration: Detail
3. Populate the cache file /var/named/db.cache.
4. Populate /var/named/db.127.0.0.
5. Start the named server.
# dig . ns > /var/named/db.cache
$TTL 345600
@ IN SOA localhost. root.localhost. (
00 ; Serial
86400 ; Refresh
7200 ; Retry
2592000 ; Expire
345600 ) ; Minimum
IN NS localhost.
1 IN PTR localhost.
# /etc/init.d/named start
4. Populate /var/named/db.127.0.0.
The zone 0.0.127.in-addr.arpa section of named.conf handles reverse
lookups for the localhost. You need to create the /var/named/db.127.0.0 data file
for the 127.0.0.0 domain, which should contain:
$TTL 345600
@ IN SOA localhost. root.localhost. (
00 ; Serial
86400 ; Refresh
7200 ; Retry
2592000 ; Expire
345600 ) ; Minimum
IN NS localhost.
1 IN PTR localhost.
5. Start the name server named.
# /etc/init.d/named start
Cloning Grid Infrastructure
Objectives
After completing this lesson, you should be able to:
Describe the cloning process
Describe the clone.pl script and its variables
Perform a clone of Oracle Grid Infrastructure to a new
cluster
Extend an existing cluster by cloning
Cloning is a process that allows the copying of an existing Oracle Clusterware installation to a
different location, and then updating the copied installation to work in the new environment.
The cloned copy can be used to create a new cluster from a successfully installed cluster. To
add or delete Oracle Clusterware from the nodes in the cluster, use the addNode.sh and
rootcrs.pl scripts. The cloning procedure cannot be used to remove a node from an
existing cluster.
The cloning procedure is responsible for the work that would have been done by the Oracle
Universal Installer (OUI) utility. It does not automate the prerequisite work that must be done
on each node before installing the Oracle software.
This technique is very useful if a large number of clusters need to be deployed in an
organization. If only one or two clusters are being deployed, you should probably use the
traditional installation program to perform the installations.
Note: The Oracle Enterprise Manager administrative tool with the Provisioning Pack feature
installed has automated wizards to assist with cloning exercises.
What Is Cloning?
Cloning is the process of copying an existing Oracle
Clusterware installation to a different location. It:
Requires a successful installation as a baseline
Can be used to create new clusters
Cannot be used to remove nodes from the cluster
Does not perform the operating system prerequisites to an
installation
Is useful to build many clusters in an organization
Using the cloning procedure presented in this lesson has several benefits compared to the
traditional Oracle Universal Installer (OUI) installation method. The OUI utility is a graphical
program and must be executed from a graphical session. Cloning can be completed in silent
mode from a command-line Secure Shell (SSH) terminal session without the need to load a
graphical windows system. If the OUI program were to be used to install the software from the
original installation media, all patches that have been applied since the first installation would
have to be reapplied. The clone technique presented in this lesson includes all successfully
applied patches, and can be performed very quickly on a large number of nodes. When the
OUI utility performs the copying of files to remote servers, the job is executed serially on one
node at a time. With cloning, simultaneous transfers to multiple nodes can be achieved.
Finally, the cloning method is a guaranteed way of repeating the same installation on multiple
clusters to help avoid human error.
The cloned installation acts the same as the source installation. It can be patched in the future
and removed from a cluster if needed by using ordinary tools and utilities. Cloning is an
excellent way to instantiate test clusters or a development cluster from a successful base
installation.
Benefits of Cloning Grid Infrastructure
The following are some of the benefits of cloning Oracle Grid
Infrastructure. It:
Can be completed in silent mode from a Secure Shell
(SSH) terminal session
Contains all patches applied to the original installation
Can be done very quickly
Is a guaranteed method of repeating the same installation
on multiple clusters
This example shows the result when you successfully clone an installed Oracle Clusterware
environment to create a cluster. The environment on Node 1 in Cluster A is used as the
source, and Cluster B, Nodes 1 and 2 are the destination. The Clusterware home is copied
from Cluster A, Node 1, to Cluster B, Nodes 1 and 2. When completed, there will be two
separate clusters. The OCR and Voting disks are not shared between the two clusters after
you successfully create a cluster from a clone.
Creating a Cluster by Cloning
Grid Infrastructure
Node 1 Node 1
Node 2
Cluster A Cluster B
Clusterware installed
Clusterware cloned
The cloning method requires that an existing, successful installation of Oracle Clusterware be
already performed in your organization. Verify that all patch sets and one-off patches have
been applied before starting the clone procedure to minimize the amount of work that will
have to be performed for the cloning exercise.
To begin the cloning process, start by performing a shutdown of Oracle Clusterware on one of
the nodes in the existing cluster with the crsctl stop crs -wait command. The other
nodes in the existing cluster can remain active. After the prompt is returned from the
shutdown command, make a copy of the existing Oracle Clusterware installation into a
temporary staging area of your choosing. The disk space requirements in the temporary
staging area will be equal to the current size of the existing Oracle Clusterware installation.
The copy of the Oracle Clusterware files will not include files that are outside the main
installation directory such as /etc/oraInst.loc and the /etc/oracle directory. These
files will be created later by running various root scripts.
Preparing the Oracle Clusterware
Home for Cloning
The following procedure is used to prepare the Oracle
Clusterware home for cloning:
1. Install Oracle Clusterware on the first machine.
A. Use the Oracle Universal Installer (OUI) GUI interactively.
B. Install patches that are required (for example, 11.1.0.n).
C. Apply one-off patches, if necessary.
2. Shut down Oracle Clusterware.
3. Make a copy of the Oracle Clusterware home.
# crsctl stop crs -wait
# mkdir /stagecrs
# cp -prf /u01/app/11.2.0/grid /stagecrs
Each installation of Oracle Clusterware on local storage devices contains files and directories
that are applicable to only that node. These files and directories should be removed before
making an archive of the software for cloning. Perform the commands in the slide to remove
node-specific information from the copy that was made in step 3. Then create an archive of
the Oracle Clusterware copy. An example of using the Linux tar command followed by the
compress command to reduce the file size is shown in the slide. On Windows systems, use
the WinZip utility to create a ZIP file for the archive.
Note: Do not use the Java Archive (JAR) utility to copy and compress the Oracle Clusterware
home.
Preparing the Oracle Clusterware
Home for Cloning
4. Remove files that pertain only to the source node.
5. Create an archive of the source.
# cd /stagecrs/grid
# rm -rf log/<hostname>
# rm -rf gpnp/<hostname>
# find gpnp -type f -exec rm -f {} \;
# rm -rf root.sh*
# rm -rf gpnp/*
# rm -rf crs/init/*
# rm -rf cdata/*
# rm -rf crf/*
# rm -rf network/admin/*.ora
# find . -name '*.ouibak' -exec rm {} \;
# find . -name '*.ouibak.1' -exec rm {} \;
# find cfgtoollogs -type f -exec rm -f {} \;
# cd /stagecrs/grid
# tar zcvpf /tmp/crs111060.tgz .
Preparing the Oracle Clusterware
Home for Cloning
6. Restart Oracle Clusterware.
# crsctl start crs
The archive of the source files that was made when preparing the Oracle Clusterware home
for cloning will become the source file for an installation on the new cluster system and the
clone.pl script will be used to make it a working environment. These actions do not perform
the varied prerequisite tasks that must be performed on the operating system before an
Oracle Clusterware installation. The prerequisite tasks are the same as the ones presented in
the lesson titled Grid Infrastructure Installation in the topic about installing Oracle
Clusterware.
One of the main benefits of cloning is rapid instantiation of a new cluster. The prerequisite
tasks are manual in nature, but it is suggested that a shell script be developed to automate
these tasks. One advantage of the Oracle Enterprise Linux operating system is that an
oracle-validated-1.x.x.rpm file can be obtained that will perform almost all the
prerequisite checks, including the installation of missing packages with a single command.
This cloning procedure is used to build a new distinct cluster from an existing cluster. In step
1, prepare the new cluster nodes by performing the prerequisite setup and checks.
Cloning to Create a New Oracle
Clusterware Environment
The following procedure uses cloning to create a new cluster:
1. Prepare the new cluster nodes. (See the lesson titled Grid
Infrastructure Installation for details.)
Check system requirements.
Check network requirements.
Install the required operating system packages.
Set kernel parameters.
Create groups and users.
Create the required directories.
Configure installation owner shell limits.
Configure block devices for Oracle Clusterware devices.
Configure SSH and enable user equivalency.
Use the Cluster Verify Utility to check prerequisites.
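The last check above can be run with the Cluster Verification Utility before any software is deployed. This is a sketch; node1 and node2 are placeholder host names:

```
$ cluvfy stage -pre crsinst -n node1,node2 -verbose
```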
If cluvfy fails to execute because of user equivalence errors, the passphrase needs to be
loaded with the following commands before executing cluvfy:
exec /usr/bin/ssh-agent $SHELL
ssh-add
Note: Unlike traditional methods of installation, the cloning process does not validate your
input during the preparation phase. (By comparison, during the traditional method of
installation using OUI, various checks occur during the interview phase.) Thus, if you make
errors during the hardware setup or in the preparation phase, the cloned installation fails.
Step 2 is the deployment and extraction of the source archive to the new cluster nodes. If a
shared Oracle Cluster home on a Cluster File System (CFS) is not being utilized, extract the
source archive to each node's local file system. It is possible to change the operating system
owner to a different one from that of the source with recursive chown commands as illustrated
in the slide. If other Oracle products have been previously installed on the new nodes, the
Central Oracle Inventory directory may already exist. It is possible for this directory to be
owned by a different user than the Oracle Grid Infrastructure user; however, both should
belong to the same primary group oinstall.
Create a directory for the Oracle Inventory on the destination node and, if necessary, change
the ownership of all the files in the Oracle Grid Infrastructure home to be owned by the Oracle
Grid Infrastructure installation owner and by the Oracle Inventory (oinstall privilege) group. If
the Grid Infrastructure installation owner is oracle, and the Oracle Inventory group is oinstall,
then the following example shows the commands to do this on a Linux system:
[root@node1 crs]# mkdir -p /u01/app/oraInventory
[root@node1 crs]# chown oracle:oinstall /u01/app/oraInventory
[root@node1 crs]# chown -R oracle:oinstall /u01/app/11.2.0/grid
Cloning to Create a New Oracle
Clusterware Environment
2. Deploy Oracle Clusterware on each of the destination
nodes.
A. Extract the TAR file created earlier.
B. Change the ownership of files and create Oracle Inventory.
C. Remove any network files from
/u01/app/11.2.0/grid/network/admin.
# mkdir -p /u01/app/11.2.0/grid
# cd /u01/app/11.2.0
# tar zxvf /tmp/crs111060.tgz
# chown -R grid:oinstall /u01/app/11.2.0/grid
# mkdir -p /u01/app/oraInventory
# chown grid:oinstall /u01/app/oraInventory
$ rm /u01/app/11.2.0/grid/network/admin/*
When you run the last of the preceding commands on the Grid home, it clears setuid and
setgid information from the Oracle binary. It also clears setuid from the following binaries:
<Grid_home>/bin/extjob
<Grid_home>/bin/jssu
<Grid_home>/bin/oradism
Run the following commands to restore the cleared information:
# chmod u+s <Grid_home>/bin/oracle
# chmod g+s <Grid_home>/bin/oracle
# chmod u+s <Grid_home>/bin/extjob
# chmod u+s <Grid_home>/bin/jssu
# chmod u+s <Grid_home>/bin/oradism
The PERL-based clone.pl script is used in place of the graphical OUI utility to perform the
installation on the new nodes so that they may participate in the existing cluster or become
valid nodes in a new cluster. The script can be executed directly on the command line or at a
DOS command prompt for Windows platforms. The clone.pl script accepts several
parameters as input, typed directly on the command line. Because the clone.pl script is
sensitive to the parameters being passed to it, including the use of braces, single quotation
marks, and double quotation marks, it is recommended that a shell script be created to
execute the PERL-based clone.pl script to input the arguments. This will be easier to
rework if there is a syntax error generated. A total of seven arguments can be passed as
parameters to the clone.pl script. Four of them define environment variables, and the
remaining three supply processing options. If your platform does not include a PERL
interpreter, you can download one at:
http://www.perl.org
clone.pl Script
Cloning to a new cluster and cloning to extend an existing
cluster both use a PERL script. The clone.pl script is used,
which:
Can be used on the command line
Can be contained in a shell script
Accepts many parameters as input
Is invoked by the PERL interpreter
# perl <Grid_home>/clone/bin/clone.pl -silent $E01 $E02 \
$E03 $E04 $C01 $C02
The clone.pl script accepts four environment variables as command-line arguments
providing input. These variables would correlate to some of the questions presented by the
OUI utility during an installation. Each variable is associated with a symbol that is used for
reference purposes for developing a shell script. The choice of symbol is arbitrary. Each
variable is case-sensitive. The variables are as follows:
ORACLE_BASE: The location of the Oracle Base directory. A suggested value is
/u01/app/grid. The ORACLE_BASE value should be unique for each software owner.
ORACLE_HOME: The location of the Oracle Grid Infrastructure home. This directory
location must exist and be owned by the Oracle Grid Infrastructure software owner and
the Oracle Inventory group, typically grid:oinstall. A suggested value is
/u01/app/<version>/grid.
ORACLE_HOME_NAME: The name of the Oracle Grid Infrastructure home. This is stored
in the Oracle inventory and defaults to the name Ora11g_gridinfrahome1 when
performing an installation with OUI. Any name can be selected, but it should be unique
in the organization.
INVENTORY_LOCATION: The location of the Oracle Inventory. This directory location
must exist and must initially be owned by the Oracle operating system group: oinstall. A
typical location is /u01/app/oraInventory.
clone.pl Environment Variables
The clone.pl script accepts four environment variables as
input. They are as follows:
Symbol Variable Description
E01 ORACLE_BASE The location of the Oracle base directory
E02 ORACLE_HOME The location of the Oracle Grid Infrastructure
home. This directory location must exist and
must be owned by the Oracle operating
system group: oinstall
E03 ORACLE_HOME_NAME The name of the Oracle Grid Infrastructure
home
E04 INVENTORY_LOCATION The location of the Oracle Inventory
The clone.pl script accepts two command options as input. The command options are
case-sensitive and are as follows:
CLUSTER_NODES: The short node names for the nodes in the cluster
LOCAL_NODE: The short name of the local node
clone.pl Command Options
The clone.pl script accepts two required command options
as input. They are as follows:
# Variable Data Type Description
C01 CLUSTER_NODES String The short node names for the nodes
in the cluster
C02 LOCAL_NODE String The short name of the local node
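Because each -O option passes through one round of shell quoting before clone.pl parses it, the escaping is easy to get wrong. The following minimal sketch (node names are placeholders) shows what the shell actually hands to the perl command line:

```shell
#!/bin/sh
# Escaped exactly as in the sample clone script; node names are placeholders.
C01="-O'\"CLUSTER_NODES={node1,node2}\"'"

# After the shell strips the backslash escapes, the argument still carries
# the inner single and double quotes that clone.pl expects to see.
echo $C01    # -O'"CLUSTER_NODES={node1,node2}"'
```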
The clone.pl script requires you to provide many setup values to the script when it is
executed. You may enter the values interactively on the command line or create a script to
supply the input values. By creating a script, you will have the ability to modify it and execute
the script a second time if errors exist. The setup values to the clone.pl script are case-
sensitive and sensitive to the use of braces, single quotation marks, and double quotation
marks.
For step 3, create a shell script that invokes clone.pl supplying command-line input
variables. All the variables should appear on a single line.
Cloning to Create a New Oracle
Clusterware Environment
3. Create a shell script to invoke clone.pl supplying input.
#!/bin/sh
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/11.2.0/grid
THIS_NODE=`hostname -s`
E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=OraGridHome1
E04=INVENTORY_LOCATION=${ORACLE_BASE}/oraInventory
#C00="-O'-debug'"
C01="-O'\"CLUSTER_NODES={node1,node2}\"'"
C02="-O'\"LOCAL_NODE=${THIS_NODE}\"'"
perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 \
$E03 $E04 $C01 $C02
Step 4: Run the script you created in step 3 as the operating system user that installed Oracle
Clusterware. If you do not have a shared Oracle Grid Infrastructure home, run this script on
each node. The clone.pl command instantiates the crsconfig_params file in the next
step.
Step 5: Prepare the /u01/app/11.2.0/grid/install/crsconfig_params file using
the Configuration Wizard. You can copy the file from one node to all the other nodes. More
than 50 parameters are named in this file.
The Configuration Wizard helps you to prepare the crsconfig_params file, prompts you to
run the root.sh script (which calls the rootcrs.pl script), relinks Oracle binaries, and runs
cluster verifications. Start the Configuration Wizard as follows:
$ <Oracle_home>/crs/config/config.sh
Optionally, you can run the Configuration Wizard silently, as follows, providing a response file:
$ <Oracle_home>/crs/config/config.sh -silent -responseFile file_name
To use GNS, Oracle recommends that you add GNS to the cluster after your cloned cluster is
running. Add GNS using the srvctl add gns command.
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 17
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Cloning to Create a New Oracle
Clusterware Environment
4. Run the script created in step 3 on each node.
5. Prepare the crsconfig_params file.
6. Run the cluvfy utility to validate the installation.
$ /tmp/my-clone-script.sh
$ cluvfy stage -post crsinst -n all -verbose
Several log files are generated when the clone.pl script is executed and are useful in
diagnosing any errors that may occur. For a detailed log of the actions that occur during the
OUI part of the cloning, examine the log file:
<Central_Inventory>/logs/cloneActions/<timestamp>.log
If errors occurred during the OUI portion of the cloning process, examine the log file:
<Central_Inventory>/logs/oraInstall/<timestamp>.err
Other miscellaneous messages generated by OUI can be found in the output file:
<Central_Inventory>/logs/oraInstall/<timestamp>.out
The location of the central (Oracle) inventory is recorded in /etc/oraInst.loc on Linux
and AIX. On other UNIX platforms, it is recorded in /var/opt/oracle/oraInst.loc.
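The pointer file and log locations above can be combined into a short lookup. The following is a sketch, not part of the installed tooling; it assumes the standard oraInst.loc format (a line of the form inventory_loc=<path>) and the log paths shown above.

```shell
#!/bin/sh
# Sketch: read the central inventory location from oraInst.loc and
# list the most recent cloning logs, if any exist yet.
if [ -f /etc/oraInst.loc ]; then
  ORAINST=/etc/oraInst.loc             # Linux and AIX
else
  ORAINST=/var/opt/oracle/oraInst.loc  # other UNIX platforms
fi
# Extract the inventory path from the pointer file (may be empty
# on a machine with no Oracle inventory).
INV=`grep '^inventory_loc=' "$ORAINST" 2>/dev/null | cut -d= -f2`
# Newest clone-action log and OUI error log, if present.
ls -t "$INV"/logs/cloneActions*.log 2>/dev/null | head -1
ls -t "$INV"/logs/oraInstall*.err 2>/dev/null | head -1
```

On a machine without an inventory, the script simply prints nothing, which makes it safe to run as a first diagnostic step.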
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 18
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Log Files Generated During Cloning
The following log files are generated during cloning to assist
with troubleshooting failures.
Detailed log of the actions that occur during the OUI part:
Information about errors that occur when OUI is running:
Other miscellaneous messages generated by OUI:
/u01/app/oraInventory/logs/cloneActions<timestamp>.log
/u01/app/oraInventory/logs/oraInstall<timestamp>.err
/u01/app/oraInventory/logs/oraInstall<timestamp>.out
The cloning procedure using the clone.pl script can also be used to extend Oracle
Clusterware to more nodes within the same cluster using steps similar to the procedure for
creating a new cluster. The first step shown here is identical to the first step performed when
cloning to create a new cluster. These steps differ depending on the operating system used.
When you configure Secure Shell (SSH) and enable user equivalency, remember that the
authorized_keys and known_hosts files exist on each node in the cluster. You must
therefore update these files on the existing nodes with information about the new nodes to
which Oracle Clusterware will be extended.
Note: This procedure is not supported in release 11.2.0.1.
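The update of authorized_keys and known_hosts on the existing nodes can be scripted. The sketch below only prints the commands to run, so you can review them first; the node names (node1 through node3) and the key file name (id_rsa.pub) are hypothetical and must match your environment.

```shell
#!/bin/sh
# Sketch: build the commands that push a new node's SSH public key to
# each existing node's authorized_keys, and record the new node's host
# key in each existing node's known_hosts.
NEW_NODE=node3
CMDS=""
for NODE in node1 node2; do
  CMDS="$CMDS
ssh $NEW_NODE cat .ssh/id_rsa.pub | ssh $NODE 'cat >> .ssh/authorized_keys'
ssh $NODE 'ssh-keyscan $NEW_NODE >> .ssh/known_hosts'"
done
# Print for review instead of executing directly.
echo "$CMDS"
```

After reviewing the output, the commands can be executed as the Grid Infrastructure installation owner, then verified with a passwordless ssh from each existing node to the new node.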
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 19
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Cloning to Extend Oracle Clusterware
to More Nodes
The procedure is very similar to cloning to create new clusters.
1. Prepare the new cluster nodes. (See the lesson titled
Oracle Clusterware Installation for details. Consider
scripting these steps for reuse.)
A. Check system requirements.
B. Check network requirements.
C. Install the required operating system packages.
D. Set kernel parameters.
E. Create groups and users.
F. Create the required directories.
G. Configure installation owner shell limits.
H. Configure SSH and enable user equivalency.
I. Use the cluvfy utility to check prerequisites.
Step 2 is the deployment and extraction of the source archive to the new cluster nodes.
Create a directory for the Oracle Inventory on the destination node and, if necessary, change
the ownership of all the files in the Oracle Grid Infrastructure home to be owned by the Oracle
Grid Infrastructure installation owner and by the Oracle Inventory (oinstall privilege) group.
If the Oracle Grid Infrastructure installation owner is oracle, and the Oracle Inventory group is
oinstall, then the following example shows the commands to do this on a Linux system:
[root@node1 crs]# mkdir -p /u01/app/oraInventory
[root@node1 crs]# chown oracle:oinstall /u01/app/oraInventory
[root@node1 crs]# chown -R oracle:oinstall /u01/app/11.2.0/grid
Note: When you run the last of the preceding commands on the Grid home, it clears setuid
and setgid information from the Oracle binary. It also clears setuid from the following binaries:
Grid_home/bin/extjob
Grid_home/bin/jssu
Grid_home/bin/oradism
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 20
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Cloning to Extend Oracle Clusterware
to More Nodes
2. Deploy Oracle Clusterware on the destination nodes.
A. Extract the TAR file created earlier.
B. Change the ownership of files and create Oracle Inventory.
C. Run the following commands to restore cleared information:
# mkdir -p /u01/app/11.2.0/grid
# cd /u01/app/11.2.0/grid
# tar -zxvf /tmp/crs111060.tgz
# chown -R grid:oinstall /u01/app/11.2.0/grid
# mkdir -p /u01/app/oraInventory
# chown grid:oinstall /u01/app/oraInventory
# chmod u+s Grid_home/bin/oracle
# chmod g+s Grid_home/bin/oracle
# chmod u+s Grid_home/bin/extjob
# chmod u+s Grid_home/bin/jssu
# chmod u+s Grid_home/bin/oradism
Run the following commands to restore the cleared information:
# chmod u+s <Grid_home>/bin/oracle
# chmod g+s <Grid_home>/bin/oracle
# chmod u+s <Grid_home>/bin/extjob
# chmod u+s <Grid_home>/bin/jssu
# chmod u+s <Grid_home>/bin/oradism
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 21
For step 3, the quantity of input variables to the clone.pl procedure is greatly reduced
because the existing cluster has already defined many of the settings that are needed.
Creating a shell script to provide these input values is still recommended. For step 4, run the
shell script that you developed to invoke the clone.pl script.
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 22
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Cloning to Extend Oracle Clusterware
to More Nodes
3. Create a shell script to invoke clone.pl supplying input.
4. Run the shell script created in step 3 on each new node.
#!/bin/sh
# /tmp/my-clone-script.sh
E01=ORACLE_BASE=/u01/app
E02=ORACLE_HOME=/u01/app/11.2.0/grid
E03=ORACLE_HOME_NAME=OraCrs11g
C01="-O'\"sl_tableList={node3:node3-priv:node3-vip:N:Y}\"'"
C02="-O'INVENTORY_LOCATION=/u01/app/oraInventory'"
C03="-O'-noConfig'"
perl /u01/app/11.2.0/grid/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02 $C03
$ /tmp/my-clone-script.sh
For step 5, run the orainstRoot.sh script on each new node as the root user. For step 6,
run the addNode.sh script on the source node, not on the destination node. Because the
clone.pl scripts have already been run on the new nodes, this script only updates the
inventories of the existing nodes.
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 23
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Cloning to Extend Oracle Clusterware
to More Nodes
5. Run the orainstRoot.sh script on each new node.
6. Run the addNode script on the source node.
7. Copy the following files from node 1, on which you ran
addNode.sh, to node 2:
<Grid_home>/crs/install/crsconfig_addparams
<Grid_home>/crs/install/crsconfig_params
<Grid_home>/gpnp
# /u01/app/oraInventory/orainstRoot.sh
$ /u01/app/11.2.0/grid/oui/bin/addNode.sh -silent \
"CLUSTER_NEW_NODES={host02}" \
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={host02-vip}" \
"CLUSTER_NEW_VIPS={host02-vip}" -noCopy \
CRS_ADDNODE=true
Next, in step 8, run the root.sh script on each new node joining the cluster as the root
user. For step 12, which is the last step, run cluvfy to verify the success of the Oracle
Clusterware installation.
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 24
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Cloning to Extend Oracle Clusterware
to More Nodes
8. Run the <Grid_home>/root.sh script on the new node.
9. Navigate to the Oracle_home/oui/bin directory on the
first node and run the addNode.sh script:
10. Run the <Oracle_home>/root.sh script on node 2 as
root.
11. Run the <Grid_home>/crs/install/rootcrs.pl
script on node 2 as root.
12. Run cluvfy to validate the installation.
$ cluvfy stage -post nodeadd -n host02 -verbose
# /u01/app/11.2.0/grid/root.sh
$ ./addNode.sh -silent -noCopy "CLUSTER_NEW_NODES={host02}"
Answer: a
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 25
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Quiz
An Oracle Clusterware home that was created with cloning
techniques can be used as the source for additional cloning
exercises.
a. True
b. False
Answer: b
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 26
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Quiz
Which scripting language is used for the cloning script?
a. Java
b. PERL
c. Shell
d. Python
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM C - 27
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Summary
In this lesson, you should have learned how to:
Describe the cloning process
Describe the clone.pl script and its variables
Perform a clone of Oracle Grid Infrastructure to a new
cluster
Extend an existing cluster by cloning
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
RAC Concepts
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 2
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Objectives
After completing this lesson, you should be able to:
Explain the necessity of global resources
Describe global cache coordination
Explain object affinity and dynamic remastering
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 3
Oracle Real Application Clusters (RAC) enables high utilization of a cluster of standard, low-
cost modular servers such as blades.
RAC offers automatic workload management for services. Services are groups or
classifications of applications that comprise business components corresponding to
application workloads. Services in RAC enable continuous, uninterrupted database
operations and provide support for multiple services on multiple instances. You assign
services to run on one or more instances, and alternate instances can serve as backup
instances. If a primary instance fails, the Oracle server moves the services from the failed
instance to a surviving alternate instance. The Oracle server also automatically load-balances
connections across instances hosting a service.
RAC harnesses the power of multiple low-cost computers to serve as a single large computer
for database processing, and provides the only viable alternative to large-scale symmetric
multiprocessing (SMP) for all types of applications.
RAC, which is based on a shared disk architecture, can grow and shrink on demand without
the need to artificially partition data among the servers of your cluster. RAC also offers a
single-button addition of servers to a cluster. Thus, you can easily add a server to or remove a
server from the database.
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Benefits of Using RAC
High availability: Surviving node and instance failures
Scalability: Adding more nodes as you need them in the
future
Pay as you grow: Paying for only what you need today
Key grid computing features:
Growth and shrinkage on demand
Single-button addition of servers
Automatic workload management for services
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 4
If your application scales transparently on SMP machines, it is realistic to expect it to scale
well on RAC, without having to make any changes to the application code.
RAC eliminates the database instance, and the node itself, as a single point of failure, and
ensures database integrity in the case of such failures.
The following are some scalability examples:
Allow more simultaneous batch processes.
Allow larger degrees of parallelism and more parallel executions to occur.
Allow large increases in the number of connected users in online transaction processing
(OLTP) systems.
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Clusters and Scalability
SMP model RAC model
Cache
CPU CPU
Cache
CPU CPU
Memory
Cache coherency
SGA
BGP BGP
SGA
BGP BGP
Shared
storage
Cache fusion
BGP (background process)
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 5
Successful implementation of cluster databases requires optimal scalability on four levels:
Hardware scalability: Interconnectivity is the key to hardware scalability, which greatly
depends on high bandwidth and low latency.
Operating system scalability: Methods of synchronization in the operating system can
determine the scalability of the system. In some cases, potential scalability of the
hardware is lost because of the operating system's inability to handle multiple resource
requests simultaneously.
Database management system scalability: A key factor in parallel architectures is
whether the parallelism is affected internally or by external processes. The answer to
this question affects the synchronization mechanism.
Application scalability: Applications must be specifically designed to be scalable. A
bottleneck occurs in systems in which every session is updating the same data most of
the time. Note that this is not RAC specific and is true on single-instance systems too.
It is important to remember that if any of the preceding areas are not scalable (no matter how
scalable the other areas are), then parallel cluster processing may not be successful. A typical
cause for the lack of scalability is one common shared resource that must be accessed often.
This causes the otherwise parallel operations to serialize on this bottleneck. A high latency in
the synchronization increases the cost of synchronization, thereby counteracting the benefits
of parallelization. This is a general limitation and not a RAC-specific limitation.
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Levels of Scalability
Hardware: Disk input/output (I/O)
Internode communication: High bandwidth and low latency
Operating system: Number of CPUs
Database management system: Synchronization
Application: Design
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 6
Scaleup is the ability to sustain the same performance levels (response time) when both
workload and resources increase proportionally:
Scaleup = (volume parallel) / (volume original)
For example, if 30 users consume close to 100 percent of the CPU during normal processing,
then adding more users would cause the system to slow down due to contention for limited
CPU cycles. However, by adding CPUs, you can support extra users without degrading
performance.
Speedup is the effect of applying an increasing number of resources to a fixed amount of
work to achieve a proportional reduction in execution times:
Speedup = (time original) / (time parallel)
Speedup results in resource availability for other tasks. For example, if queries usually take
10 minutes to process and running in parallel reduces the time to five minutes, then additional
queries can run without introducing the contention that might occur if they were to run
concurrently.
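The two ratios can be checked with the figures from these examples. The numbers below are the hypothetical ones used in the text (30 users, a 10-minute query), not measurements:

```shell
#!/bin/sh
# Scaleup: workload and resources both triple, and throughput triples
# while response time stays constant.
VOLUME_ORIGINAL=30   # users supported on the original system
VOLUME_PARALLEL=90   # users supported after tripling the hardware
SCALEUP=`expr $VOLUME_PARALLEL / $VOLUME_ORIGINAL`
# Speedup: a fixed amount of work completes faster with more resources.
TIME_ORIGINAL=10     # minutes for the query on the original system
TIME_PARALLEL=5      # minutes when the query runs in parallel
SPEEDUP=`expr $TIME_ORIGINAL / $TIME_PARALLEL`
echo "Scaleup=$SCALEUP Speedup=$SPEEDUP"   # Scaleup=3 Speedup=2
```

A scaleup of 3 means triple the workload at the same response time; a speedup of 2 means the same work in half the time.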
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Scaleup and Speedup
Original system
100% of task
Cluster system scaleup
Up to
200%
of
task
Up to
300%
of
task
Time
Hardware
Time
Time
Cluster system speedup
Time/2
Hardware
Hardware
Hardware
Hardware
Time
Hardware
100%
of task
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 7
The type of workload determines whether scaleup or speedup capabilities can be achieved
using parallel processing.
Online transaction processing (OLTP) and Internet application environments are
characterized by short transactions that cannot be further broken down and, therefore, no
speedup can be achieved. However, by deploying greater amounts of resources, a larger
volume of transactions can be supported without compromising the response.
Decision support systems (DSS) and parallel query options can attain speedup, as well as
scaleup, because they essentially support large tasks without placing conflicting demands on
resources. The parallel query capability within the Oracle database can also be leveraged to
decrease the overall processing time of long-running queries and to increase the number of
such queries that can be run concurrently.
In an environment with a mixed workload of DSS, OLTP, and reporting applications, scaleup
can be achieved by running different programs on different hardware. Speedup is possible in
a batch environment, but may involve rewriting programs to use the parallel processing
capabilities.
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Speedup/Scaleup and Workloads
Workload Speedup Scaleup
OLTP and Internet No Yes
DSS with parallel query Yes Yes
Batch (mixed) Possible Yes
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 8
To make sure that a system delivers the I/O demand that is required, all system components
on the IO path need to be orchestrated to work together.
The weakest link determines the I/O throughput.
On the left, you see a high-level picture of a system. This is a system with four nodes, two
Host Bus Adapters (HBAs) per node, and two fibre channel switches, which are attached to
four disk arrays each. The components on the IO path are the HBAs, cables, switches, and
disk arrays. Performance depends on the number and speed of the HBAs, switch speed,
controller quantity, and speed of disks. If any one of these components is undersized, the
system throughput is determined by the undersized component. Assuming you have 2 Gb
HBAs, the nodes can read about 8 × 200 MB/s = 1.6 GB/s. However, assuming that each disk
array has one controller, all 8 arrays can also do 8 × 200 MB/s = 1.6 GB/s. Therefore, each of
the fibre channel switches also needs to deliver at least 2 Gb/s per port, for a total of 800 MB/s
throughput. The two switches will then deliver the needed 1.6 GB/s.
Note: When sizing a system, also take the system limits into consideration. For instance, the
number of bus slots per node is limited and may need to be shared between HBAs and
network cards. In some cases, dual port cards exist if the number of slots is exhausted. The
number of HBAs per node determines the maximal number of fibre channel switches. And the
total number of ports on a switch limits the number of HBAs and disk controllers.
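The sizing arithmetic in this example can be written out as a weakest-link check. This is a sketch using the figures above (4 nodes, 2 HBAs per node at 200 MB/s, 8 arrays at 200 MB/s per controller, 2 switches at 800 MB/s):

```shell
#!/bin/sh
# Aggregate bandwidth of each I/O-path layer, in MB/s.
HBA_TOTAL=`expr 4 \* 2 \* 200`      # 4 nodes x 2 HBAs x 200 MB/s
ARRAY_TOTAL=`expr 8 \* 200`         # 8 arrays x one 200 MB/s controller
SWITCH_TOTAL=`expr 2 \* 800`        # 2 switches x 800 MB/s
# The undersized layer bounds the whole system's throughput.
MIN=$HBA_TOTAL
[ "$ARRAY_TOTAL" -lt "$MIN" ] && MIN=$ARRAY_TOTAL
[ "$SWITCH_TOTAL" -lt "$MIN" ] && MIN=$SWITCH_TOTAL
echo "System throughput bound: $MIN MB/s"   # 1600 MB/s
```

Here all three layers match at 1600 MB/s, which is the balanced configuration the example aims for; shrinking any one figure immediately lowers the bound.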
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
I/O Throughput Balanced: Example
FC-switch
Disk
array 1
Disk
array 2
Disk
array 3
Disk
array 4
Disk
array 5
Disk
array 6
Disk
array 7
Disk
array 8
Each machine has 2 CPUs:
2 200 MB/s 4 = 1600 MB/s
Each machine has 2 HBAs:
8 200 MB/s = 1600 MB/s
Each switch needs to support 800 MB/s
to guarantee a total system throughput
of 1600 MB/s.
Each disk array
has one 2-Gbit
controller:
8 200 MB/s =
1600 MB/s
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 9
In discussions of I/O throughput, bits are often confused with bytes. This confusion
originates mainly from the fact that hardware vendors tend to describe a component's
performance in bits/s, whereas database vendors and customers describe their performance
requirements in bytes/s.
The following is a list of common hardware components with their theoretical performance in
bits/second and typical performance in bytes/second:
HBAs come in 1 or 2 Gb per second with a typical throughput of 100 or 200 MB/s.
A 16 Port Switch comes with 16 × 2-Gb ports. However, the total throughput is 8 times 2
Gb, which results in 1600 MB/s.
Fibre Channel cables have a 2-Gb/s throughput, which translates into 200 MB/s.
Disk Controllers come in 2-Gb/s throughput, which translates into about 200 MB/s.
GigE has a typical performance of about 80 MB/s whereas InfiniBand delivers about
160 MB/s.
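The rule of thumb behind these numbers is that a 2-Gb/s link delivers roughly 200 MB/s, because roughly 10 bits cross the wire per byte of payload once encoding overhead is included. A quick sanity check (the 10-bits-per-byte figure is an approximation, not an exact protocol constant):

```shell
#!/bin/sh
# Convert a nominal link rate in Gbit/s to approximate MB/s, using
# ~10 bits per payload byte (8 data bits plus encoding overhead).
GBITS=2
MBYTES=`expr $GBITS \* 1000 / 10`
echo "$GBITS Gb/s is roughly $MBYTES MB/s"   # 2 Gb/s is roughly 200 MB/s
```

The same arithmetic explains the other rows: a 1-Gb/s HBA or GigE NIC lands near 100 MB/s of theoretical payload, with real-world figures somewhat lower.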
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Performance of Typical Components
Throughput Performance
Component Theory (Bit/s) Maximal Byte/s
HBA 1 or 2 Gb/s 100/200 MB/s
16 Port Switch 8 × 2 Gb/s 1600 MB/s
Fibre Channel 2 Gb/s 200 MB/s
Disk Controller 2 Gb/s 200 MB/s
GigE NIC 1 Gb/s 80 MB/s
InfiniBand 10 Gb/s 890 MB/s
CPU 200-250 MB/s
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 10
In single-instance environments, locking coordinates access to a common resource such as a
row in a table. Locking prevents two processes from changing the same resource (or row) at
the same time.
In RAC environments, internode synchronization is critical because it maintains proper
coordination between processes on different nodes, preventing them from changing the same
resource at the same time. Internode synchronization guarantees that each instance sees the
most recent version of a block in its buffer cache.
Note: The slide shows what would happen in the absence of cache coordination. RAC
prevents this problem.
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Necessity of Global Resources
1008
SGA1 SGA2
1008
SGA1 SGA2
1008
1008
SGA1 SGA2
1008
SGA1 SGA2
1009 1008
1009
Lost
updates!
1 2
3 4
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 11
Cluster operations require synchronization among all instances to control shared access to
resources. RAC uses the Global Resource Directory (GRD) to record information about how
resources are used within a cluster database. The Global Cache Services (GCS) and Global
Enqueue Services (GES) manage the information in the GRD.
Each instance maintains a part of the GRD in its System Global Area (SGA). The GCS and
GES nominate one instance to manage all information about a particular resource. This
instance is called the resource master. Also, each instance knows which instance masters
which resource.
Maintaining cache coherency is an important part of RAC activity. Cache coherency is the
technique of keeping multiple copies of a block consistent between different Oracle instances.
GCS implements cache coherency by using what is called the Cache Fusion algorithm.
The GES manages all non-Cache Fusion interinstance resource operations and tracks the
status of all Oracle enqueuing mechanisms. The primary resources under GES control are
dictionary cache locks and library cache locks. The GES also performs deadlock detection on
all deadlock-sensitive enqueues and resources.
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Global Resources Coordination
LMON
LMD0
LMSx
DIAG
...
LCK0
Cache GRD Master
GES
GCS
LMON
LMD0
LMSx
DIAG
...
Cache
LCK0
GRD Master
GES
GCS
Node1
Instance1
Noden
Instancen
Cluster
Interconnect
Global
resources
Global Enqueue Services (GES) Global Cache Services (GCS)
Global Resource Directory (GRD)
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM D - 12
The scenario described in the slide assumes that the data block has been changed, or dirtied,
by the first instance. Furthermore, only one copy of the block exists clusterwide, and the
content of the block is represented by its SCN.
1. The second instance attempting to modify the block submits a request to the GCS.
2. The GCS transmits the request to the holder. In this case, the first instance is the holder.
3. The first instance receives the message and sends the block to the second instance.
The first instance retains the dirty buffer for recovery purposes. This dirty image of the
block is also called a past image of the block. A past image block cannot be modified
further.
4. On receipt of the block, the second instance informs the GCS that it holds the block.
Note: The data block is not written to disk before the resource is granted to the second
instance.
Global Cache Coordination: Example
[Slide graphic: Instance2 asks the GCS which instance masters the block (1); the block is mastered by instance 1, so the GCS forwards the request to Instance1 (2); Instance1 ships the block across the interconnect, retaining its dirty buffer (SCN 1008) as a past image while Instance2 receives the current version, SCN 1009 (3); Instance2 informs the GCS that it now has the current version of the block (4). No disk I/O is performed.]
The scenario described in the slide illustrates how an instance can perform a checkpoint at
any time or replace buffers in the cache as a response to free buffer requests. Because
multiple versions of the same data block with different changes can exist in the caches of
instances in the cluster, a write protocol managed by the GCS ensures that only the most
current version of the data is written to disk. It must also ensure that all previous versions are
purged from the other caches. A write request for a data block can originate in any instance
that has the current or past image of the block. In this scenario, assume that the first instance
holding a past image buffer requests that the Oracle server write the buffer to disk:
1. The first instance sends a write request to the GCS.
2. The GCS forwards the request to the second instance, which is the holder of the current
version of the block.
3. The second instance receives the write request and writes the block to disk.
4. The second instance records the completion of the write operation with the GCS.
5. After receipt of the notification, the GCS orders all past image holders to discard their
past images. These past images are no longer needed for recovery.
Note: In this case, only one I/O is performed to write the most current version of the block to
disk.
Write to Disk Coordination: Example
[Slide graphic: Instance1 needs to make room in its cache and sends a write request for its past image (SCN 1009) to the GCS (1); the GCS forwards the request to Instance2, which owns the current version, SCN 1010 (2); Instance2 flushes the block to disk (3) and records the completed write with the GCS (4); the GCS then tells all past image holders to discard their past images (5). Only one disk I/O is performed.]
When one instance departs the cluster, the GRD portion of that instance needs to be
redistributed to the surviving nodes. Similarly, when a new instance enters the cluster, the
GRD portions of the existing instances must be redistributed to create the GRD portion of the
new instance.
Instead of remastering all resources across all nodes, RAC uses an algorithm called lazy
remastering to remaster only a minimal number of resources during reconfiguration. This is
illustrated in the slide. For each instance, a subset of the GRD being mastered is shown along
with the names of the instances to which the resources are currently granted. When the
second instance fails, its resources are remastered on the surviving instances. As the
resources are remastered, they are cleared of any reference to the failed instance.
Dynamic Reconfiguration
[Slide graphic: before reconfiguration, Instance1 masters R1 and R2, Instance2 masters R3 and R4, and Instance3 masters R5 and R6, each resource carrying the list of instances to which it is currently granted. After Instance2 fails, R3 and R4 are remastered on the surviving instances, and every granted list is cleared of references to instance 2 (reconfiguration remastering).]
In addition to dynamic resource reconfiguration, the GCS, which is tightly integrated with the
buffer cache, enables the database to automatically adapt and migrate resources in the GRD.
This is called dynamic remastering. The basic idea is to master a buffer cache resource on
the instance where it is mostly accessed. To determine whether dynamic remastering is
necessary, the GCS essentially keeps track of the number of GCS requests on a per-instance
and per-object basis. This means that if one instance, compared to another, is heavily
accessing blocks from the same object, the GCS can decide to dynamically migrate all of that
object's resources to the instance that accesses the object most.
The upper part of the graphic shows the situation where the same object has master
resources spread over different instances. In that case, each time an instance needs to read a
block of that object whose master is on another instance, the reading instance must send
a message to the resource's master to ask permission to use the block.
The lower part of the graphic shows the situation after dynamic remastering has occurred. In
this case, blocks from the object have affinity to the reading instance, which no longer needs
to send GCS messages across the interconnect to ask for access permissions.
Note: The system automatically moves mastership of undo segment objects to the instance
that owns the undo segments.
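The effects of dynamic remastering can be observed from SQL. This is a sketch under the assumption that V$GCSPFMASTER_INFO exposes the current and previous master of a data object, as documented for 11g; the object ID used here is a made-up example, and column names should be checked against your release.

```sql
-- Show which instance currently masters the GCS resources of one
-- object and how often mastership has moved (DATA_OBJECT_ID 12345 is
-- hypothetical; look up real IDs in DBA_OBJECTS).
SELECT data_object_id, current_master, previous_master, remaster_cnt
FROM   v$gcspfmaster_info
WHERE  data_object_id = 12345;
```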
Object Affinity and Dynamic Remastering
[Slide graphic: before dynamic remastering, Instance1 on Node1 reads blocks of an object from disk while the object's resource masters reside on Instance2, so a GCS message is sent to the remote master for every read into the cache. After dynamic remastering, the object's masters reside on Instance1 itself, and no messages are sent to the remote node when reading into the cache.]
Global dynamic performance views retrieve information about all started instances accessing
one RAC database. In contrast, standard dynamic performance views retrieve information
about the local instance only.
With a few exceptions, each available V$ view has a corresponding GV$ view. In
addition to the V$ information, each GV$ view has an additional column named
INST_ID. The INST_ID column displays the instance number from which the associated V$
view information is obtained. You can query GV$ views from any started instance.
GV$ views use a special form of parallel execution. The parallel execution coordinator runs on
the instance that the client connects to, and one slave is allocated on each instance to query
the underlying V$ view for that instance.
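As a minimal example, the following query can be run from any started instance and returns one row per instance, with INST_ID distinguishing them:

```sql
-- Cluster-wide instance status; each row is gathered by a slave on the
-- corresponding instance and returned to the coordinating instance.
SELECT inst_id, instance_name, host_name, status
FROM   gv$instance
ORDER  BY inst_id;
```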
Global Dynamic Performance Views
- Retrieve information about all started instances
- Have one global view for each local view
- Use one parallel slave on each instance
[Slide graphic: V$INSTANCE on Node1/Instance1 through Noden/Instancen, combined across the cluster into GV$INSTANCE.]
Oracle supports efficient row-level locks. These row-level locks are created when data
manipulation language (DML) operations, such as UPDATE, are executed by an application.
These locks are held until the application commits or rolls back the transaction. Any other
application process is blocked if it requests a lock on the same row.
Cache Fusion block transfers operate independently of these user-visible row-level locks. The
transfer of data blocks by the GCS is a low-level process that can occur without waiting for
row-level locks to be released. Blocks may be transferred from one instance to another while
row-level locks are held.
The GCS provides access to data blocks, allowing multiple transactions to proceed in parallel.
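A two-session sketch of this behavior follows. The EMP table and employee numbers are hypothetical, and whether two rows share a block depends on the data; the point is the locking pattern, not the specific rows.

```sql
-- Session A, connected to instance 1:
UPDATE emp SET sal = sal * 1.1 WHERE empno = 7369;
-- Row 7369 is now locked until session A commits or rolls back.

-- Session B, connected to instance 2, touches a different row
-- (assumed here to be in the same block):
UPDATE emp SET sal = sal * 1.1 WHERE empno = 7499;
-- Proceeds immediately: the GCS ships the block even though session A
-- still holds its row lock; no block-level lock is taken.

-- Session B then touches the same row as session A:
UPDATE emp SET sal = 0 WHERE empno = 7369;
-- Waits here until session A issues COMMIT or ROLLBACK.
```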
Efficient Internode Row-Level Locking
[Slide graphic: four steps across Node1/Instance1 and Node2/Instance2 — (1) an UPDATE on instance 1, (2) an UPDATE on instance 2, (3) the block transferred between instances with no block-level lock while the row locks remain held, (4) a COMMIT releasing the row lock.]
Oracle's cost-based optimizer incorporates parallel execution considerations as a
fundamental component in arriving at optimal execution plans.
In a RAC environment, intelligent decisions are made with regard to intranode and internode
parallelism. For example, if a particular query requires six query processes to complete the
work and six parallel execution slaves are idle on the local node (the node that the user
connected to), then the query is processed by using only local resources. This demonstrates
efficient intranode parallelism and eliminates the query coordination overhead across multiple
nodes. However, if only two parallel execution servers are available on the local node,
then those two and four from another node are used to process the query. In this manner, both
internode and intranode parallelism are used to speed up query operations.
In real-world decision support applications, queries are not perfectly partitioned across the
various query servers. Therefore, some parallel execution servers complete their processing
and become idle sooner than others. The Oracle parallel execution technology dynamically
detects idle processes and assigns work to them from the queue tables of the
overloaded processes. Thus, the Oracle server efficiently redistributes the query workload
across all processes. Real Application Clusters extends these efficiencies to clusters
by enabling the redistribution of work across all the parallel execution slaves of a cluster.
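A small illustration of requesting a degree of parallelism (the SALES table is hypothetical; the PARALLEL hint itself is standard SQL hint syntax):

```sql
-- Ask for six parallel execution servers; the coordinator runs on the
-- instance the session is connected to, and slaves are taken from the
-- local node first, spilling to other nodes only if too few are idle.
SELECT /*+ PARALLEL(s, 6) */ COUNT(*)
FROM   sales s;
```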
Parallel Execution with RAC
- Execution slaves have node affinity with the execution coordinator but will expand if needed.
[Slide graphic: four nodes (Node 1 through Node 4) on shared disks; the execution coordinator on Node 1 directs parallel execution servers, which run on the local node first and expand to the other nodes when needed.]
Summary
In this lesson, you should have learned how to:
- Explain the necessity of global resources
- Describe global cache coordination
- Explain object affinity and dynamic remastering