
Oracle Clusterware 11GR2

Presented By :
Qari Kamran Siddique
Senior Database Consultant
CGI
What is a CLUSTER?
Enables Servers to Communicate with each other as a
COLLECTIVE UNIT.
Software that enables clustered hardware to run
multiple instances against ONE database.
Database files are stored on disks that are either
physically or logically connected to each node.
Cluster Software hides the structure.
Disks are available for read and write by all nodes.
Operating system is the same on each machine.
This architecture enables users and applications to
benefit from the processing power of multiple
machines.
If one node or instance crashes, applications can
still connect through the surviving nodes.
Benefits
Scalability of applications
Use of less expensive commodity hardware
Ability to fail over
Ability to increase capacity over time by adding servers
Ability to program the startup of applications in a
planned order
Ability to monitor processes and restart them if they
stop
Resource Control
More Benefits
Eliminate unplanned downtime due to hardware
failures.
Reduce or eliminate planned downtime for software
maintenance.
Increase throughput for cluster-aware applications
Reduce the total cost of ownership
Basic RAC Components
(Oracle 10g R1/R2, 11g R1)
Oracle Clusterware
Shared Storage
Oracle RAC Database
Basic RAC Components

(Oracle 11g R2)


Grid Infrastructure

RAC Database
Oracle Clusterware Hardware
Concepts and Requirements
One or more servers connected with each other with a network,
called INTERCONNECT
At least two network interface cards: one for a public network and
one for a private network
The interconnect network is a private network using a switch (or
multiple switches) that only the nodes in the cluster can access
Crossover cables are not supported for the interconnect
At least two network interfaces for the public network, bonded to
provide one address
At least two network interfaces for the private interconnect
network
Oracle Clusterware supports NFS, iSCSI, Direct Attached Storage
(DAS), Storage Area Network (SAN) storage, and Network
Attached Storage (NAS).
Oracle Clusterware Hardware
Concepts and Requirements
(Continued)
Consider the I/O requirements of the entire cluster when choosing
your storage subsystem.
At least one local disk that is internal to the server
This disk is used for the operating system and the Oracle software binaries
Increases HA: a corrupted binary on one node does not affect the others
Allows rolling upgrades, which reduce downtime.
Oracle Clusterware Operating System Concepts and
Requirements
(Product Certification)
Software Concepts
Voting Disks
Oracle Clusterware uses voting disk files to determine
which nodes are members of a cluster.
Can be configured on Oracle ASM or on shared storage
(raw volumes).
With ASM, the disk group redundancy level determines the
number of voting disks.
Without ASM => a minimum of THREE voting disks for HA,
or rely on the external redundancy of the storage array.
Do not use more than five voting disks.
The maximum number of voting disks supported is 15.
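On a live cluster, the voting disk configuration can be inspected (and, in 11g R2, moved into ASM) with `crsctl`; a minimal sketch, where the `+DATA` disk group name is an assumption:

```shell
# List the voting disks currently in use (run from the Grid home)
crsctl query css votedisk

# 11gR2 only: relocate all voting disks into an ASM disk group
# (+DATA is a placeholder for your own disk group)
crsctl replace votedisk +DATA
```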
Software Concepts
Oracle Cluster Registry (OCR)
Stores and manages information about the components
that Oracle Clusterware controls, e.g. RAC databases,
listeners, virtual IP addresses (VIPs), services, and
applications.
Can be configured on Oracle ASM or on shared storage
(raw volumes)
Stores configuration information in a series of key-value
pairs in a tree structure.
Multiple OCR locations (multiplexing) should be defined
You can have up to five OCR locations
Each OCR location must reside on shared storage that is
accessible by all of the nodes in the cluster
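OCR integrity and locations are checked with the standard Clusterware tools; a sketch against a hypothetical cluster (the `+DATA2` mirror location is an assumption):

```shell
# Verify OCR integrity and show the configured OCR locations
ocrcheck

# Add a second (mirror) OCR location on shared storage -- run as root
ocrconfig -add +DATA2

# List the automatic OCR backups that Clusterware takes every four hours
ocrconfig -showbackup
```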
Software Concepts
Virtual Internet Protocol Address (VIP)

Oracle RAC requires a virtual IP address for
each server in the cluster.
It is an unused IP address on the same subnet
as the Local Area Network (LAN).
This address is used by applications to connect
to the RAC database (not in 11g R2, where clients
use the SCAN).
If a node fails, the virtual IP is failed over to
another node in the cluster to provide an
immediate node-down response to connection
requests.
Oracle Clusterware Network
Configuration Concepts
Grid Infrastructure simplifies administration through
self-management of the cluster's network requirements.
Oracle Clusterware 11g release 2 (11.2) supports the use
of dynamic host configuration protocol (DHCP) for all
private interconnect addresses, as well as for most of the
VIP addresses.
DHCP provides dynamic configuration of the host's IP
addresses.
The Oracle Grid Naming Service (GNS) is added to
the cluster in 11g R2 Clusterware.
Oracle Clusterware Network Configuration Concepts
(Continued)

Grid Naming Service (GNS)
Linked to the corporate Domain Name Service (DNS)
Clients can easily connect to the cluster.
Requires DHCP service on the public network.
Obtain an IP address on the public network for the GNS VIP.
DNS uses the GNS VIP to forward requests to the cluster.
Delegate a subdomain in the network to the cluster.
The subdomain forwards all requests for addresses in the subdomain to the GNS VIP.
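Once GNS is configured, its state can be checked with `srvctl`; a sketch:

```shell
# Show the GNS configuration (subdomain served and GNS VIP)
srvctl config gns

# Check whether the GNS resource is running, and on which node
srvctl status gns
```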
Grid Naming Service (GNS)

Reference

DNS and DHCP Setup Example for Grid
Infrastructure GNS [ID 946452.1]
Network Configuration Concepts
(Continued)
Single Client Access Name (SCAN)

A single virtual hostname for all clients connecting to the cluster (as
opposed to the VIP hostnames in 10g and 11g R1).
Domain name registered to at least one and up to three IP addresses,
either in the domain name service (DNS) or the Grid Naming Service
(GNS).
By default, the name used as the SCAN is also the name of the cluster.
For installation to succeed, the SCAN must resolve to at least one
address.
Do not configure SCAN VIP addresses in the hosts file. If you use
the hosts file to resolve the SCAN name, you can have only one SCAN IP
address.
If the hosts file is used, the Cluster Verification Utility reports a
failure at the end of installation.
Network Configuration Concepts
(Continued)
DNS Round Robin resolution to three addresses
RECOMMENDED
Add/remove nodes without reconfiguring clients
Adds location independence for the databases, so that client
configuration does not have to depend on which nodes are
running a particular database.
A local listener (LISTENER) on each node listens on the local VIP, and
SCAN listeners, LISTENER_SCAN1 through LISTENER_SCAN3 (up to three
cluster-wide), listen on the SCAN VIP(s)

system/manager@cgi1-scan:1521/apps
jdbc:oracle:thin:@cgi-scan:1521/apps
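Round-robin resolution of the SCAN can be observed from any client; a sketch using the `cgi-scan` name from the examples above (the domain is an assumption):

```shell
# Repeated lookups should return the (up to) three SCAN addresses,
# with DNS rotating their order on each query
nslookup cgi-scan.cgi.com
nslookup cgi-scan.cgi.com
```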
Network Configuration Concepts
(Continued)

Sample TNS entry for SCAN

TEST.CGI.COM =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=SCAN-TEST.CGI.COM)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=11GR2TEST.CGI.COM))
)

Sample TNS entry without SCAN

TEST.CGI.COM =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=TEST1-vip.CGI.COM)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=TEST2-vip.CGI.COM)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=11GR2TEST.CGI.COM))
)
Network Configuration Concepts
(Continued)

The node VIP and the three SCAN VIPs are obtained from the
DHCP server when using GNS. If a new server joins the
cluster, then Oracle Clusterware dynamically obtains the
required VIP address from the DHCP server, updates the
cluster resource, and makes the server accessible through
GNS.
$ srvctl config scan
SCAN name: cgi-scan, Network:192.168.182.0/255.255.255.0/
SCAN VIP name: scan1, IP: /192.168.182.109
SCAN VIP name: scan2, IP: /192.168.182.110
SCAN VIP name: scan3, IP: /192.168.182.108
Node Name   Instance Name   Database Name
cginode1    cgirac1         cgi.dbservices.ca
cginode2    cgirac2

Node Name   Public IP       Private IP    VIP
cginode1    192.168.1.151   192.168.2.1   192.168.1.153
cginode2    192.168.1.152   192.168.2.2   192.168.1.154

SCAN NAME   IP
SCAN VIP1   192.168.2.201
SCAN VIP2   192.168.2.202
SCAN VIP3   192.168.2.203
Oracle Clusterware startup sequence

Do not worry... that is the Clusterware's job!

(image from the Oracle Clusterware
Administration and Deployment Guide)
Oracle Grid Infrastructure
Grid HOME

Grid Infrastructure home => Oracle ASM + Oracle Clusterware
Single Oracle home for both
OCR and voting disk files can be placed on Oracle ASM, on
a cluster file system, or on an NFS system
Installing Oracle Clusterware files on raw or block devices is no
longer supported
Oracle Grid Infrastructure
Grid HOME

Oracle Clusterware and Oracle ASM are installed
into a single home directory, which is called the Grid
Home.

# su - grid
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
# Specifies the directory containing the Oracle Grid Infrastructure software.
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
Oracle Automatic Storage Management Cluster
File System (Oracle ACFS)

A new multi-platform, scalable file system and storage
management solution
Provides dynamic file system resizing
Improved performance
Provides storage reliability through the mirroring and
parity protection that Oracle ASM provides.
Cluster Time Synchronization Service

Ensures that there is a time synchronization service in the
cluster.
If Network Time Protocol (NTP) is not found during
cluster configuration, then CTSS is configured to
ensure time synchronization.
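Whether CTSS is running in active mode or in observer mode (deferring to NTP) can be checked with `crsctl`; a sketch:

```shell
# Report the Cluster Time Synchronization Service state on this node
crsctl check ctss
```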
Mandatory OS Users and Groups

Oracle Inventory group (typically, oinstall) => must
be the primary group for Oracle software installation
owners.
Oracle software owner => typically oracle
OSDBA group => typically dba, for database
authentication (SYSDBA + SYSASM)
Recommended Approach for OS Users and Groups
Reference: Oracle Grid Infrastructure Installation Guide

Grid Infrastructure software owner => GRID
Oracle RAC software owner => ORACLE
Separate group for Oracle ASM => OSASM group, the Oracle
Automatic Storage Management group (typically asmadmin);
members of this group connect to ASM as SYSASM by using
OS authentication
ASM Database Administrator group (OSDBA for ASM, typically
asmdba) => members are granted read and write access to
files managed by Oracle ASM
OSOPER for Oracle ASM group (typically asmoper) =>
members of this group are granted access to a subset of
the SYSASM privileges.
Example of Creating Role-allocated Groups, Users, and
Paths
# groupadd -g 1000 oinstall
# groupadd -g 1020 asmadmin
# groupadd -g 1021 asmdba
# groupadd -g 1031 dba1
# groupadd -g 1041 dba2
# groupadd -g 1022 asmoper
# useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
# useradd -u 1101 -g oinstall -G dba1,asmdba oracle1
# useradd -u 1102 -g oinstall -G dba2,asmdba oracle2
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01
# mkdir -p /u01/app/oracle1
# chown oracle1:oinstall /u01/app/oracle1
# mkdir -p /u01/app/oracle2
# chown oracle2:oinstall /u01/app/oracle2
Oracle Base Directory path

# mkdir -p /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/11.2.0/grid
# chmod -R 775 /u01/app/11.2.0/grid

# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
Storage Options
What's Next!!!
Administering Oracle Clusterware, ASM and RAC databases
Oracle RAC Backup and Recovery
RAC Services
RAC , Oracle Clusterware and ASM tuning
Adding and Deleting RAC Nodes
Patch Management in RAC
Oracle Clusterware Cloning
Application high availability with clusterware
Oracle Clusterware utilities usage
Whole clusterware stack upgrade to 11g R2
RAC + Clusterware + ASM tips & tricks..and
Troubleshooting
Questions ???
