
How to determine the CRS Version?

Filed under: Clusterware Tags: Clusterware syedracdba @ 4:33 pm

The active version and the software version of the Clusterware (CRS) can be verified with the following commands.
These version details are required when upgrading a cluster.
To check the Active Version
Run the following command on the local node.
$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.3.0]
Note: The active version is the lowest software version running in a cluster.
To check the Software Version
Run the following command on the local node.
$ crsctl query crs softwareversion
CRS software version on node [racnod01] is [10.2.0.3.0]
Note: The software version is the binary version of the software on a particular cluster node.
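The same query can also be run against a specific node by appending the node name (a quick sketch; racnod02 is a hypothetical node name):
$ crsctl query crs softwareversion racnod02
The output is analogous to the local-node example above and is useful for confirming that all nodes run the same binary version before an upgrade.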

What is Oracle RAC One Node?


Oracle introduced a new option called RAC One Node with the release of 11gR2 in late 2009.
This option is available with Enterprise Edition only. Essentially, it provides a
cold failover solution for Oracle databases. It is a single instance of Oracle RAC running on
one node of the cluster while the second node is in cold standby mode. If the instance fails
for some reason, RAC One Node detects it and first tries to restart the instance on the
same node. The instance is relocated to the second node only when there is a failure or fault on the first
node and the instance cannot be restarted on the same node. The benefit of this feature is
that it automates instance relocation without downtime and does not need manual
intervention. It uses a technology called Omotion, which facilitates the instance
migration/relocation. RAC One Node is Oracle's answer to OS clustering solutions
such as Veritas Storage Foundation, Sun Solaris Cluster, IBM HACMP, and HP Serviceguard.
Purpose
It is Oracle's attempt to tie customers to a single vendor by eliminating the need to buy third-party
OS cluster solutions. Oracle first introduced Oracle Clusterware with 10g, removing the
need to rely on third-party cluster software, and now it intends to win over those who are
still using HACMP, Sun Solaris Cluster and the like for cold failover.
Benefits

Oracle RAC One Node provides the following benefits:

Built-in cluster failover for high availability
Rolling patches for single instance database
Proactive migration / failover of the instance
Live migration of instances across servers
Online upgrade to RAC

Rolling upgrades are really useful. Upgrades to the OS and database can be done without
any downtime, unless the upgrade requires scripts to be run against the database. With
RAC One Node, DBAs and sysadmins can be proactive and migrate/fail over the
instance to another node to perform any critical maintenance activity.
What it's not suited for
In my view, RAC One Node is not a viable or recommended solution in the following
scenarios:

As a load-balancing solution, unlike regular RAC
As a true high availability solution
As a DR solution; Data Guard better fits the bill
For mission-critical applications

Cost
It is definitely not free. Oracle has priced RAC One Node on a par with Active Data Guard. RAC
One Node is priced separately and costs $10,000 per processor, as against $23,000 for
regular RAC. The license is required for ONE node only (in a two-node setup). RAC One
Node is eligible for the 10-day rule, allowing a customer to run on another node for up to 10 days
in a calendar year without buying an additional license. People who argue against paying a
license fee for resources they are not using will still lament.
Conclusion
I am still not fully convinced of the usefulness of RAC One Node. I think customers invest in
RAC for their mission-critical applications, to achieve high availability and load balancing
at the same time. Those who don't go for RAC rely on Data Guard and now, with 11g, on
Active Data Guard. So I don't see a huge requirement for RAC One Node except for seamless failover
within a data center. The licensing is a bit disappointing; they are making clients pay $10K.
Moreover, RAC is free with Standard Edition, though one doesn't get enterprise features and
is limited to 4 CPU sockets. So it would be wrong to assume RAC One Node will be popular among
customers who are currently using Standard Edition and want to switch to Enterprise Edition.
However, this is still a very new feature, and as more people adopt it, we will get more
clarity on its usability. I am planning to do a POC on it and will publish the installation
steps and findings (good things and not-so-good things) from my POC.

A cluster comprises multiple interconnected servers or computers that appear as one single server to end users and applications.

What is RAC?

RAC stands for Real Application Clusters. It allows multiple nodes in a clustered system to
mount and open a single database that resides on shared disk storage. Should a single
system (node) fail, the database service will still be available on the remaining nodes.
A RAC database comprises multiple instances, but there is only one database.
A non-RAC database is only available on a single system. If that system fails, the database
service will be down (a single point of failure).
Oracle Database 10g Real Application Clusters (RAC) enables the clustering of the Oracle
Database. A RAC database comprises multiple instances residing on different computers
that access a common database residing on shared storage.

Why Real Application Clusters?


The basic principle behind Real Application Clusters is greater throughput and scalability
due to the combined power of multiple instances running on multiple servers.
Real Application Clusters provides high availability and scalability for all application types.
The RAC infrastructure is also a key component for implementing the Oracle enterprise grid
computing architecture. Having multiple instances access a single physical database prevents the
server from being a single point of failure. Smaller servers can be combined into a cluster
to create a scalable environment that supports mission-critical business applications.
Real Application Clusters uses Oracle Clusterware as the infrastructure to bind multiple
servers so that they operate as a single system.
Oracle Clusterware is a portable cluster management solution that is integrated with the
Oracle database. Oracle Clusterware is also a required component for using Real
Application Clusters.
In Oracle Database 10g, Oracle provides you with an integrated software solution that
addresses cluster management, event management, application management, connection
management, storage management, load balancing and availability. These capabilities are
provided while hiding the complexity through simple-to-use management tools and
automation.
Oracle Real Application Clusters 10g provides an integrated clusterware layer that delivers a
complete environment for applications.
Oracle Real Application Clusters (RAC) uses Oracle Clusterware as the infrastructure to bind
multiple servers so that they operate as a single system.
Main Components of Oracle Real Application Clusters 10g
It comprises two main components:
1. Oracle Clusterware
2. Oracle software
In RAC 10g, the Clusterware is called the CRS layer, which resides below the Oracle software layer.
The second layer is the Oracle software itself.

Oracle Real Application Clusters (RAC) is the Oracle Database option that provides a single
system image for multiple servers to access one Oracle database. In RAC, each Oracle
instance usually runs on a separate server.
Oracle Clusterware is software that enables the servers to be bound together to operate as if
they were one server. Oracle Clusterware comprises two clusterware components: a
voting disk to record node membership information and the Oracle Cluster Registry (OCR) to
record cluster configuration information. In Oracle Clusterware, each node is connected to a
private network by way of a private interconnect.
Oracle Clusterware comprises several background processes that facilitate cluster
operations, such as Cluster Synchronization Services (CSS) and Event Management (EVM).
What are the Real Application Clusters main processes?
The main processes involved in Oracle RAC are primarily used to maintain database
coherency among the instances. They manage what are called the global resources.

LMON: Global Enqueue Service Monitor
LMD0: Global Enqueue Service Daemon
LMSx: Global Cache Service Processes, where x can range from 0 to j
LCK0: Lock Process
DIAG: Diagnosability Process
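A simple way to confirm these processes on a running instance is to query V$BGPROCESS (a minimal sketch; the filter lists only the typical 10g RAC process names, and the number of LMS processes varies by system):
SELECT name, description FROM v$bgprocess
WHERE paddr <> '00'
AND name IN ('LMON','LMD0','LMS0','LCK0','DIAG');
At the operating system level the same processes show up as ora_lmon_<SID>, ora_lms0_<SID> and so on in the ps output.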


There are several tools that are used to manage the various resources available on the
cluster at a global level. Some of the tools used are Server Control (SRVCTL), DBCA and
Enterprise Manager.
Oracle Clusterware is a portable cluster management solution that is integrated with the
Oracle database. The Oracle Clusterware enables you to create a clustered pool of storage
to be used by any combination of single-instance and RAC databases.
Oracle Clusterware is the only clusterware that you need for most platforms on which RAC
operates. You can also use clusterware from other vendors if the clusterware is certified for
RAC.
The combined processing power of the multiple servers can provide greater throughput and
scalability than is available from a single server. RAC is the Oracle Database option that
provides a single system image for multiple servers to access one Oracle database. In RAC,
each Oracle instance runs on a separate server.
RAC is a unique technology that provides high availability and scalability for all application
types. The RAC infrastructure is also a key component for implementing the Oracle
enterprise grid computing architecture. Having multiple instances access a single database
prevents the server from being a single point of failure. RAC enables you to combine smaller
commodity servers into a cluster to create scalable environments that support mission-critical
business applications.

What are the Storage Principles for RAC Software and CRS?
The Oracle 10g Real Application Clusters installation is a two-phase installation. In the
first phase, you install CRS. In the second phase, you install the Oracle Database
software with the RAC components and create a cluster database.
The Oracle home that you use for the CRS software must be different from the one that is
used for the RAC software. The CRS and RAC software can be installed on cluster shared storage, although this is not the usual approach.
Note: The Clusterware and the Oracle software are usually installed on a regular file system
that is local to each node. This permits online patch upgrades without shutting down the
database and also eliminates the software as a single point of failure.
Do you need special hardware to run RAC?
RAC requires the following hardware components:

A dedicated network interconnect - this might be as simple as a fast network connection between nodes
A shared disk subsystem

RAC and Shared Storage Technologies

1. Supported shared storage for Oracle grids:
Network attached Storage
Storage Area Network
2. Supported file systems for Oracle grids:
Raw Volumes
Cluster file system
ASM

Oracle Real Application Clusters & High Availability


Oracle Real Application Clusters (RAC) is a cluster database. Compared to the traditional shared-nothing
architecture of a single-instance database, Oracle Real Application Clusters uses a shared-cache
architecture combined with a shared-disk approach to provide a highly available and scalable
solution for business applications.
One of the key components of the Oracle enterprise grid architecture is Real Application Clusters (RAC).
Oracle RAC is a unique technology that provides high availability and scalability for all
application types. The Oracle RAC infrastructure is also a key component for implementing
the Oracle enterprise grid computing architecture. Having multiple instances access a single
database prevents the server from being a single point of failure. Oracle RAC enables you to
combine smaller commodity servers into a cluster to create scalable environments that
support mission critical business applications. Applications that you deploy on Oracle RAC
databases can operate without code changes.

A unique feature called Fast Application Notification (FAN) in Oracle Real
Application Clusters helps with load balancing when the status of an available service changes. The
service status can change during a scheduled outage for patching or any other regular
maintenance task. It can also change due to unexpected faults such as a node
reboot or database service unavailability.
Oracle Real Application Clusters also offers cost-effective options for scaling your application up or
down. In the past, businesses normally had to pre-plan the scale of their applications a few years
in advance. Although they did not make use of
the full capacity at the initial stage of deployment, the hardware had to be in place in
anticipation of high growth in the future. This kind of planning added cost to
businesses in planning their hardware and also their software licensing. A miscalculation
of capacity added further cost for moving the application to higher-capacity servers at a
later stage. Once an application is deployed and running, deciding to
migrate to a different server costs additional overhead in terms of building servers,
outages, manpower and so on. So migrating an application is not an easy exercise, and if your
business is cost conscious (all businesses are cost conscious, I guess) this type of migration
related to scaling up or scaling down should be avoided.
So what is the cost-effective solution that avoids these overheads? That is where Oracle Real
Application Clusters comes to your rescue.

Why Oracle Real Application Clusters?


To accommodate unplanned, unanticipated growth in any business application, an Oracle Real
Application Clusters system can be built from standard, commodity-priced servers, with standard
network and storage components. Since Oracle Real Application Clusters is built on the very
foundation of grid computing, when you require more processing power or wish to scale up,
you simply add another similar commodity-priced server. Adding a new server does not require you
to bring down the database; it can be done without interrupting the service, with users
still accessing the database. Oracle Real Application Clusters supports up to 100 nodes in any
given cluster configuration.
If your business decides to scale down the application for whatever reason,
you can decommission some of the servers without bringing down the database and
without any interruption to the users still accessing it.

What is a Cluster?
A cluster comprises multiple interconnected servers or computers that appear as
one single server to end users and applications.
Oracle Database 10g Real Application Clusters (RAC) enables the clustering of the Oracle
Database. A RAC database comprises multiple instances residing on different computers
that access a common database residing on shared storage.
The basic principle behind Real Application Clusters is greater throughput and scalability
due to the combined power of multiple instances running on multiple servers.

Some of the main benefits of Oracle Real Application Clusters (RAC)

Scalability: Service capacity can be expanded simply by adding servers to the existing
cluster.
Availability round the clock (24/7): Zero downtime for database applications.
Relatively lower computing cost: Cost can be reduced by using low-cost
commodity hardware.
Grid computing: The very foundation of Oracle grid computing is Oracle Real
Application Clusters (RAC).
Main Components of Oracle Real Application Clusters 10g
It comprises two main components:
1. Oracle Clusterware
2. Oracle software
In RAC 10g, the Clusterware is called the CRS layer, which resides below the Oracle software layer.
The second layer is the Oracle software itself.
RAC is the Oracle Database option that provides a single system image for multiple servers
to access one Oracle database. In Oracle Real Application Clusters, each Oracle instance
usually runs on a separate server.
However, when it comes to managing and looking after your production Real Application
Clusters system, it may not be practical to find the commands you need at the right time, and
you probably don't want to keep searching for the right command and tips when you have a
major production issue and the users cannot access the database.
This book, "Oracle Real Application Cluster Field DBA Admin Handbook", describes how to
administer the Oracle Clusterware and Oracle Real Application Clusters (Oracle RAC)
architecture and provides an overview of these products. It describes services and storage
and how to use RAC scalability features to add and delete instances and nodes in RAC
environments.
The book also describes how
to use the Server Control (SRVCTL) utility to start and stop the database and instances,
manage configuration information, and delete, add or move instances and services.
The troubleshooting section describes how to interpret the content of various RAC-specific log
files and how to search on Metalink, and includes a useful reference section with relevant Metalink
document references and web links.

Storage in Oracle Real Application Clusters

Storage for Real Application Clusters databases must be shared. In other words, datafiles
must reside on a cluster file system, on shared raw devices or in an Automatic Storage Management
(ASM) disk group.
The shared storage must include the datafiles, the undo tablespaces for each instance and the
online redo log files. Oracle highly recommends using a server parameter file
(SPFILE) instead of a parameter file (PFILE).

Shared Storage Technologies and RAC

1. Supported shared storage for Oracle grids:
Network attached Storage
Storage Area Network
2. Supported file systems for Oracle grids:
Raw Volumes
Cluster file system
ASM

Storage Area Network (SAN) represents the evolution of data storage technology.
Traditionally, on client server systems, data was stored on devices either inside or directly
attached to the server. Next in the evolution scale came Network Attached Storage (NAS)
that took the storage devices away from the server and connected them directly to the
network.
In RAC deployment, choosing the appropriate file system is critical. Because traditional file
systems do not support simultaneous mounting by more than one system, you must store
files in either raw volumes without any file system, or on a file system that supports
concurrent access by multiple systems.

Oracle Cluster File System

Oracle Cluster File System (OCFS) is a shared file system designed specifically for Oracle Real
Application Clusters.

Automatic Storage Management (ASM)

Automatic and high-performance cluster file system

Manages Oracle Database files

Data spread across disks to balance load

Integrated mirroring across disks

Solves many storage management Challenges


Automatic Storage Management (ASM) is a new feature in Oracle Database 10g. It
provides a vertical integration of the file system and the volume manager that is specifically
built for Oracle Database files.
ASM distributes the I/O load across all available resources to optimize performance while
removing the need for manual I/O tuning. ASM facilitates management of a dynamic database
environment by allowing DBAs to increase the database size without having to shut down
the database to adjust the storage allocation.
ASM can maintain redundant copies of data to provide fault tolerance.

Note: ASM is Oracle's strategic and stated direction for where Oracle database files
should be stored. However, OCFS will continue to be developed and supported for those who
are using it.
Comparison between raw devices and CFS

Using CFS:
1. Simple management
2. Use of OMF with RAC
3. Single Oracle software installation
4. Autoextend
Using raw devices:
1. Performance
2. Use when a CFS is not available
3. Cannot be used for archive log files

You can use a cluster file system or place files on raw devices.

Cluster file systems provide the following advantages:

Greatly simplified installation and administration of RAC
Use of Oracle Managed Files with RAC
Single Oracle software installation
Autoextend enabled on Oracle data files
Uniform accessibility to archive logs in case of physical node failure

Raw device implications

Raw devices are always used when a CFS is not available or not supported by Oracle.
Raw devices offer the best performance, with no intermediate layer between Oracle
and the disk.

What is Automatic Storage Management?

Automatic Storage Management (ASM) is a new feature in Oracle Database 10g.
It integrates the file system and the Logical Volume Manager (LVM); in ASM, the volume
manager is specifically built for Oracle database files. ASM can provide management for
single SMP machines or across multiple nodes of a cluster for Oracle Real Application
Clusters support.
Automatic Storage Management (ASM) simplifies administration of Oracle-related files by
allowing the administrator to reference disk groups rather than individual disks and files,
which are managed by ASM.
Manual I/O tuning can be eliminated, as ASM distributes the input/output (I/O) load across all
available resources to optimize performance.

ASM has the flexibility of maintaining redundant copies of data to provide fault
tolerance, or it can be built on top of vendor-supplied reliable storage mechanisms. Data
management in ASM is done by choosing the desired reliability and performance
characteristics for classes of data rather than through human interaction on a per-file basis.
Automatic Storage Management gives time back to DBAs by increasing their ability to
manage larger databases, and more of them, with increased efficiency.
Automatic Storage Management (ASM) is a feature in Oracle Database 10g/11g that
provides the database administrator with a simple storage management interface that is
consistent across all server and storage platforms. ASM provides the performance of async
I/O with the easy management of a file system.

Some of the Key features of ASM

Stripes files rather than logical volumes

Enables online disk reconfiguration and dynamic rebalancing

Provides adjustable rebalancing speed

Provides file based redundancy

Supports only Oracle files

It is cluster-aware

Why ASM ?
Some of the storage management features with ASM include

Striping

Mirroring

Asynchronous I/O

Direct I/O

SAME and Load Balancing

Is automatically installed as part of the base code set


ASM includes striping and mirroring to provide balanced and secure storage. The level of
redundancy and the granularity of the striping can be controlled using templates. The new
ASM functionality can be used in combination with existing raw and cooked file systems,
along with OMF and manually managed files.

Direct I/O
By making use of direct I/O, a higher cache hit ratio can be achieved. Buffered I/O consumes
important resources such as CPU and memory: with buffered I/O, Oracle blocks are cached
both in the SGA and in the file system buffer cache.

Buffered I/O fills up the file system cache with Oracle data, whereas using direct I/O
allows non-Oracle data to be cached in the file system much more efficiently.

Key Features and Benefits of ASM

The ASM functionality is controlled by an ASM instance.
The main components of ASM are disk groups, each of which comprises several physical
disks that are controlled as a single unit. The physical disks are known as ASM disks, while
the files that reside on the disks are known as ASM files.
ASM divides a file into pieces and spreads them evenly across all the disks. ASM
uses an indexing technique to track the placement of each piece, whereas traditional striping techniques
use mathematical functions to stripe complete logical volumes. ASM includes mirroring
protection without the need to purchase a third-party Logical Volume Manager. One unique
advantage of ASM is that mirroring is applied on a per-file basis rather than on a volume basis.
Therefore, the same disk group can contain a combination of files protected by mirroring and files that are
not protected at all.
ASM supports data files, log files, control files, archive logs, Recovery Manager (RMAN)
backup sets, and other Oracle database file types. ASM supports Real Application
Clusters (RAC) and eliminates the need for a cluster Logical Volume Manager or a cluster file
system.
Note: ASM is shipped with the database and is available as part of the base code set; there is
no need to go through a separate installation in the custom tree installation. It is available
in both the Enterprise Edition and Standard Edition installations.
One of the flexible features of ASM is that it does not eliminate any existing database functionality
that uses non-ASM files. Existing databases are able to operate as they always have.
New files may be created as ASM files, while existing ones are administered in the old way
or can be migrated to ASM.
In ASM, at the top of the new hierarchy, you find what are called ASM disk groups. Any
single ASM file is contained in only one disk group. However, a disk group may contain files
belonging to several databases, and a single database may use storage from multiple disk
groups.
ASM files are always spread across all ASM disks in the disk group.
The ASM disks are partitioned into allocation units (AU) of one megabyte each. An AU is the
smallest contiguous disk space that ASM allocates. ASM does not allow physical blocks
to split across AUs.
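As a minimal sketch of how a disk group and an ASM-resident datafile might be created (the device paths, the disk group name DATA and the tablespace name are assumptions, not taken from the original text):
CREATE DISKGROUP data NORMAL REDUNDANCY
FAILGROUP fg1 DISK '/dev/raw/raw1'
FAILGROUP fg2 DISK '/dev/raw/raw2';
CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 500M;
The tablespace statement only names the disk group; ASM picks the file name and spreads its allocation units across all disks in the group.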

ASM General Architecture

To use ASM, you must start a special instance, called an ASM instance, before you start your
database instance.

ASM instances manage the metadata needed to make ASM files available to ordinary
database instances. Both ASM instances and database instances have access to a common
set of disks called a disk group. Database instances access the contents of ASM files directly,
communicating with an ASM instance only to get information about the layout of these files.
An ASM instance is like any other database instance except that it contains two new background
processes. The first one coordinates rebalance activity for disk groups and is called RBAL. The
second one performs the actual rebalance activity for AU movements. At any given time
there can be many of these, and they are called ARB0, ARB1, and so on. An ASM instance
also has most of the same background processes as an ordinary database instance (SMON,
PMON, LGWR, and so on).
Each database instance using ASM has two new background processes called ASMB and
RBAL. RBAL performs global opens of the disks in the disk groups. At database instance
startup, ASMB connects as a foreground process into the ASM instance. All communication
between the database and ASM instances is performed via this bridge. This includes physical
file changes such as data file creation and deletion. Over this connection, periodic messages
are exchanged to update statistics and to verify that both instances are healthy.
It is quite possible to cluster ASM instances and run them as RAC, using the existing Global
Cache Services (GCS) infrastructure. There is one ASM instance per node in a cluster.
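On any one node you can connect to the local ASM instance and list its disk groups (a sketch; the instance name +ASM1 and the SYSDBA connection follow common 10g defaults and are assumptions here):
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;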

Storage in Oracle Real Application Clusters

Storage for RAC databases must be shared. In other words, datafiles must reside in an
Automatic Storage Management (ASM) disk group, on a cluster file system, or on shared
raw devices. This must include space for an undo tablespace for each instance if you are
using automatic undo management. Additionally, for each instance you must create at least
two redo log files that reside on shared storage. Oracle recommends that you use one
shared server parameter file (SPFILE) with instance-specific entries. Alternatively, you can use a local
file system to store client-side parameter files (PFILEs).
If you do not use ASM, if your platform does not support a cluster file system, or if you do
not want to use a cluster file system to store datafiles, then create additional raw devices as
described in your platform-specific Oracle Real Application Clusters installation and
configuration guide.

Automatic Storage Management in Real Application Clusters

ASM automatically optimizes storage to maximize performance by managing the storage
configuration across the disks. ASM does this by evenly distributing the storage load across
all of the available storage within your cluster database environment. ASM partitions your
total disk space requirement into uniformly sized units across all disks in a disk group. ASM
can also automatically mirror data to prevent data loss. Because of these added features, ASM
significantly reduces administrative overhead.
As in single-instance Oracle databases, to use ASM in RAC, select ASM as your storage
option when you create your database with the Database Configuration Assistant (DBCA).
Note: Using ASM in RAC does not require I/O tuning.

Automatic Storage Management Components in RAC

When you create your database, Oracle creates one ASM instance on each node in your RAC
environment if one does not already exist. Each ASM instance has either an SPFILE or a PFILE
as its parameter file.
The shared disk requirement is the only substantial difference between using ASM in a RAC
database and using it in a single-instance Oracle database. ASM automatically rebalances the storage load after you add or delete a disk or disk group.
In a cluster, each ASM instance manages its node's metadata updates to the disk groups. In
addition, each ASM instance coordinates disk group metadata with the other nodes in the
cluster. As in single-instance Oracle databases, you can use Enterprise Manager, DBCA,
SQL*Plus, and the Server Control Utility (SRVCTL) to administer disk groups for ASM in RAC.
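For example, adding a disk to an existing disk group triggers an automatic rebalance whose progress can be watched from the ASM instance (a sketch; the disk group name and device path are assumptions):
ALTER DISKGROUP data ADD DISK '/dev/raw/raw5';
SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;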

Automatic Storage Management

ASM automatically optimizes storage to maximize performance by rebalancing the storage
configuration across the disks that ASM manages. ASM spreads the storage load across all
available storage within your cluster database environment for optimal performance. ASM
partitions your total disk space into uniformly sized units across all disks in a disk group.
ASM functionality is controlled by an ASM instance.
The main components of ASM are disk groups, each of which comprises several physical
disks that are controlled as a single unit. The physical disks are known as ASM disks, while
the files that reside on the disks are known as ASM files.

What is a raw device?

A raw device is a disk drive that does not yet have a file system set up. Raw devices are
used for Real Application Clusters because they enable the sharing of disks.
The term raw device applies to the character-oriented disk device files (as opposed to the
block-oriented ones) normally found in /dev. These device files are part of the interface
between the hardware disks and the UNIX system software.
Raw devices are character devices. A utility called raw can be used to bind a raw device to
an existing block device. These "existing block devices" may be disks or CD-ROMs/DVDs.
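On Linux, the binding mentioned above might look like this (a sketch; /dev/sdb1 is a hypothetical partition):
# raw /dev/raw/raw1 /dev/sdb1
# raw -qa
The first command binds the character device /dev/raw/raw1 to the block device /dev/sdb1, and raw -qa lists all current bindings.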
Raw partition:
A raw partition is a portion of a physical disk that is accessed at the lowest possible level. A
raw partition is created when an extended partition is created and logical partitions are
assigned to it without any formatting. Once formatting is complete, it is called a cooked
partition.
SCSI, SAN and NAS, iSCSI

Although not directly related to CFS and raw devices, questions arise around the storage
technologies being used.
SCSI:
Disk drives are connected individually to the host machine by small computer system
interfaces (SCSI) through one of a number of disk controllers.
SAN:
Storage Area Network is a shared dedicated high-speed network connecting storage
elements and the backend of the servers.
NAS:
Network Attached Storage is a special purpose server with its own embedded software that
offers cross platform file sharing across the network.
iSCSI:
Another form of network attached storage that communicates in block mode over Ethernet
(Gigabit Ethernet) to special storage subsystems. Like NFS attached storage, iSCSI uses
standard hardware and software to communicate - although a private network is
recommended. Because it operates in block mode, use of iSCSI with RAC requires either a
cluster file system or use of raw volumes.
Raw devices are suitable for complex applications such as database management systems that
typically do their own caching, because a raw device offers a more "direct" route to the
physical device and allows an application more control over the timing of I/O to that physical
device. A raw device can be bound to an existing block device (for example, a disk) and be
used to perform "raw" I/O with that existing block device. Such "raw" I/O bypasses the
caching that is normally associated with block devices.
In most UNIX systems it is a performance advantage to use raw device files for data
storage. By using raw devices, the UNIX file system is bypassed and the operating system is
able to perform more effective I/O.
File size is fixed by the size of the partition, so file size is constrained by
the size of the partition. If the partition becomes full, the raw device file must be moved to
a larger partition. In the worst case, the disk must be reformatted in order to create a larger
partition.

What is a Cluster File System (CFS)?

A cluster file system (CFS) is a file system that may be accessed (read and write) by all
members of a cluster at the same time. This implies that all members of a cluster have the
same view.
If your platform supports an Oracle-certified cluster file system, you can store the files that
Real Application Clusters requires directly on the cluster file system.

A clustered file system is a file system which is simultaneously mounted on multiple


servers. There are several approaches to clustering, most of which do not employ a
clustered file system. While many computer clusters don't use clustered file systems, unless
servers are underpinned by a clustered file system the complexity of the underlying storage
environment increases as servers are added.
Distributed file system - the generic term for a client/server or "network" file system
where the data isn't locally attached to a host.
Global file system - this refers to the namespace, so that all files have the same name
and path name when viewed from all hosts. This obviously makes it easy to share data
across machines and users in different parts of the organization.
OCFS2 (Oracle Cluster File System 2) is a free, open source, general-purpose, extent-based
clustered file system which Oracle developed and contributed to the Linux community, and
which was accepted into Linux kernel 2.6.16.
OCFS2 provides an open source, enterprise-class alternative to proprietary cluster file
systems, and provides both high performance and high availability. OCFS2 provides local file
system semantics and it can be used with any application. Cluster-aware applications can
leverage parallel I/O for higher performance, and other applications can make use of the file
system to provide a fail-over setup to increase availability.
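As a rough illustration of preparing OCFS2 on Linux (a sketch only; the device, label and mount point are assumptions, and a real deployment also needs the cluster membership defined in /etc/ocfs2/cluster.conf):
# mkfs.ocfs2 -L "oradata" /dev/sdc1
# mount -t ocfs2 /dev/sdc1 /u02/oradata
The same file system can then be mounted concurrently on every node of the cluster.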
Cluster file system - a distributed file system that is not a single server with a set of
clients, but instead a cluster of servers that all work together to provide high performance
service to their clients. To the clients the cluster is transparent - it is just "the file system",
but the file system software deals with distributing requests to elements of the storage
cluster.

Shared-disk cluster file system


The most common type of clustered file system is the shared disk file system, in which two
or more servers are connected to a single shared storage subsystem, such as a stand-alone
RAID array or SAN.
Symmetric file system - A symmetric file system is one in which the clients also run the
metadata manager code; that is, all nodes understand the disk structures.
Asymmetric file system - An asymmetric file system is one in which there are one or
more dedicated metadata managers that maintain the file system and its associated disk
structures.
Shared-nothing clustered file system
Another clustered file system approach is to have each node use its own local storage, and
communicate data changes to other nodes via some network or bus. In this case disks are
not shared amongst nodes, but are instead dedicated to a single node and made readable
and writable to other servers
Parallel file system - file systems with support for parallel applications, all nodes may be
accessing the same files at the same time, concurrently reading and writing. Data for a

single file is striped across multiple storage nodes to provide scalable performance to
individual files.
SAN file system - These provide a way for hosts to share Fibre Channel storage, which is
traditionally carved into private chunks bound to different hosts. To provide sharing, a block-level
metadata manager controls access to different SAN devices. A SAN file system mounts
storage natively on only one node but connects all nodes to that storage and distributes
block addresses to other nodes. Scalability is often an issue because blocks are a low-level
way to share data, placing a big burden on the metadata managers and requiring large
network transactions in order to access data.

Oracle Clusterware
Oracle Clusterware is a portable cluster management solution that is integrated with Oracle
Database. Oracle Real Application Clusters (Oracle RAC) uses Oracle Clusterware as the
infrastructure that binds together multiple nodes, which then operate as a single server. Oracle
Clusterware includes a high availability framework for managing any application that runs on
your cluster. The voting disk and the OCR are created on shared storage during the Oracle
Clusterware installation process.
Oracle Clusterware includes two important components: the voting disk and the Oracle
Cluster Registry (OCR). The voting disk is a file that manages information about node
membership, and the OCR is a file that manages cluster and Oracle Real Application Clusters
(Oracle RAC) database configuration information.
1. Voting disk: Manages cluster membership by way of a health check and arbitrates
cluster ownership among the instances in case of network failures. RAC uses the voting disk
to determine which instances are members of a cluster. The voting disk must reside on
shared disk. For high availability, Oracle recommends that you have multiple voting disks;
Oracle Clusterware supports multiple voting disks.
Note: If you define a single voting disk, then you should use external mirroring to provide
redundancy.
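The configured voting disks can be listed from any node with crsctl (the exact output format varies by release):
$ crsctl query css votedisk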
2. OCR file: Cluster configuration information is maintained in the Oracle Cluster Registry file.
OCR relies on a distributed shared-cache architecture to optimize queries against the
cluster repository. Each node in the cluster maintains an in-memory copy of the OCR, along with
an OCR process that accesses its OCR cache.
When OCR client applications need to update the OCR, they communicate through their local
OCR process to the OCR process that is performing input/output (I/O) for writing to the
repository on disk.
The OCR client applications are Oracle Universal Installer (OUI), SRVCTL, Enterprise Manager
(EM), Database Configuration Assistant (DBCA), Database Upgrade Assistant (DBUA), NetCA
and the Virtual Internet Protocol Configuration Assistant (VIPCA). OCR also maintains
dependency and status information for application resources defined within CRS, specifically
databases, instances, services and node applications.
Note: The name of the configuration file is ocr.loc and the configuration variable within it is
ocrconfig_loc.
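The registry location and its health can be checked with ocrcheck (a sketch; on Linux the pointer file usually lives at /etc/oracle/ocr.loc, while the path differs on other platforms):
$ ocrcheck
$ cat /etc/oracle/ocr.loc
ocrcheck reports the OCR version, the space used, the device or file name and the result of an integrity check.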

Oracle Cluster Registry (OCR): Maintains cluster configuration information as well as
configuration information about any cluster database within the cluster. The OCR also
manages information about processes that Oracle Clusterware controls. The OCR stores
configuration information in a series of key-value pairs within a directory tree structure. The
OCR must reside on a shared disk that is accessible by all of the nodes in your cluster.
Oracle Clusterware can multiplex the OCR, and Oracle recommends that you use this feature
to ensure cluster high availability.
Note: You can replace a failed OCR online, and you can update the OCR through supported
tools such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database
Configuration Assistant (DBCA).

srvctl
srvctl is the Oracle-recommended tool for DBAs to use to interact with CRS and the cluster
registry. There are a number of other tools which can be used to interface with the cluster registry
and CRS; however, they are undocumented and intended only for use by Oracle Support.
srvctl is a well-documented tool and it is also easy to use.
srvctl must be run from the $ORACLE_HOME of the RAC database you are administering.
The basic format of a srvctl command is
srvctl command target [options]
where command is one of
enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
and the target, or object, can be a database, instance, service, ASM instance, or the
nodeapps.
The options extend the use of the preceding command and target combination.
To see the online command syntax and options for each SRVCTL command, enter:
srvctl command target -h
SRVCTL for Administering Oracle Real Application Clusters
The Server Control (SRVCTL) utility is installed on each node by default. You can use
SRVCTL to start and stop the database and instances, manage configuration information,
and move or remove instances and services. You can also use SRVCTL to add services.
SRVCTL also manages configuration information.
Some SRVCTL operations store configuration information in the Oracle Cluster Registry
(OCR). SRVCTL performs other operations, such as starting and stopping instances, by
sending requests to the Oracle Clusterware process (CRSD), which then starts or stops the
Oracle Clusterware resources.
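For instance, a new service with one preferred and one available instance could be defined and started roughly as follows (a sketch; RACDB, OLTP and the instance names are hypothetical):
> srvctl add service -d RACDB -s OLTP -r racinst1 -a racinst2
> srvctl start service -d RACDB -s OLTP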
Some of the srvctl commands are summarized in this table:
Srvctl Commands
Command - Description
srvctl add :- Adds a database, instance, service or nodeapps
srvctl remove :- Removes a database, instance, service or nodeapps
srvctl modify :- Modifies a database, instance, service or nodeapps
srvctl disable :- Disables a database, database instance, ASM instance or service
srvctl enable :- Enables a database, database instance, ASM instance or service
srvctl start :- Starts a database, database instance, ASM instance, service or nodeapps
srvctl stop :- Stops a database, database instance, ASM instance, service or nodeapps
srvctl status :- Displays the status of a database, database instance, ASM instance, service or nodeapps
As you can see, srvctl is a powerful utility. srvctl -help displays a basic usage message, and
srvctl -h displays full usage information for every possible srvctl command.
To see help for all SRVCTL commands, enter the following from the command line:
srvctl -h
To see the command syntax and a list of options for each SRVCTL command:
srvctl command object -h
To see the SRVCTL version number:
srvctl -V
For example:
To add named instances to a database:
> srvctl add instance -d racdb -i racinst1 -n mynode1
> srvctl add instance -d racdb -i racinst2 -n mynode2
> srvctl add instance -d racdb -i racinst3 -n mynode3
To display a configured database:
> srvctl config database -d RACDB
where RACDB is the name of the database.
To stop the database and all or named instances, the syntax is:
> srvctl stop database -d database_name [-o stop_options] [-c connect_string]
> srvctl stop instance -d database_name -i instance_name[,instance_name_list] [-o stop_options] [-c connect_string]
To stop the database and all instances:
> srvctl stop database -d RACDB
To stop named instances:
> srvctl stop instance -d RACDB -i racinst1
Summary: A few guidelines for using SRVCTL in Real Application Clusters

Always use SRVCTL from the ORACLE_HOME of the database that you are
administering.
Only run one SRVCTL command at a time for each database, service, or other object,
because SRVCTL does not support concurrent execution of commands on the same
object.
To change your Oracle RAC database configuration, log in to the database as the
oracle user.

The most common and useful srvctl commands

srvctl start database -d database_name
srvctl stop database -d database_name
srvctl start asm -n node_name
srvctl stop asm -n node_name
srvctl start nodeapps -n node_name
srvctl stop nodeapps -n node_name
srvctl status service -d database_name -s service_name

For help with all commands:
srvctl -h
For specific help:
srvctl start database -h
srvctl add service -h
srvctl relocate service -h
How to check the current configuration of various RAC components:
srvctl config database :- Displays the configuration information of the cluster database.
srvctl config service :- Displays the configuration information for the services.
srvctl config nodeapps :- Displays the configuration information for the node applications.
srvctl config asm :- Displays the configuration for the ASM instances on the node.

Shutdown and Startup of a Real Application Clusters Database

You can start up and shut down instances with Enterprise Manager, SQL*Plus or SRVCTL:

Starting up and Shutting down with Enterprise Manager
Starting up and Shutting down with SQL*Plus
Starting up and Shutting down with SRVCTL

You can only perform certain operations when the database is in a NOMOUNT or MOUNT
state. Performing other operations requires that the database be OPEN. In addition, some
operations require that only one instance be in the required state, while other operations
require that all of the instances be in an identical state.

Starting up and Shutting down with Enterprise Manager


To access a cluster database instance from the Enterprise Manager home page, click the
Targets tab, then click the cluster database name. On the Cluster Database Home page, the
cluster database instances are displayed at the bottom of the page. Click an instance name to go
to the Cluster Database Instance Home page, where you can start or stop the cluster
database instance.
Starting up and Shutting down with SQL*Plus
If you want to start or stop just one instance and you are connected to your local node, you
should first ensure that your current environment includes the SID for the local instance.
To start or shutdown your local instance, initiate a SQL*Plus session and connect with the
SYSDBA or SYSOPER privilege and then issue the required command. For example to start
and mount an instance on your local node, execute the following commands within your
SQL*Plus session:
CONNECT / AS SYSDBA
STARTUP MOUNT

You can start more than one instance from a single SQL*Plus session on one node by way of
Oracle Net Services.
For example, you can use a SQL*Plus session on a local node to perform a transactional
shutdown for two instances on remote nodes by connecting to each in turn using the
instance's individual alias name. Assume the alias name for the first instance is db1 and that
the alias for the second instance is db2. Connect to the first instance and shut it down as
follows:
CONNECT /@db1 AS SYSDBA
SHUTDOWN TRANSACTIONAL
Then connect to and shut down the second instance by entering the following from your
SQL*Plus session:
CONNECT /@db2 AS SYSDBA
SHUTDOWN TRANSACTIONAL
Starting up and Shutting down with SRVCTL
The following SRVCTL syntax can be run from the command line, providing the required
database name and instance name, or including more than one instance name to start more
than one specific instance:
srvctl start instance -d db_name -i "inst_name_list" [-o start_options] [-c connect_str | -q]
Note that this command will also start all enabled and non-running services that have the
listed instances either as preferred or available instances.
To stop one or more instances, use the following SRVCTL syntax from the command line:
srvctl stop instance -d db_name -i "inst_name_list" [-o stop_options] [-c connect_str | -q]
Starting and stopping Real Application Clusters instances with srvctl:
$ srvctl start instance -d RACDB -i RACINS1,RACINS2
$ srvctl stop instance -d RACDB -i RACINS1,RACINS2
$ srvctl start database -d RACDB -o open
To start or stop your entire cluster database, that is, all of the instances and their enabled
services, enter the following SRVCTL commands:
srvctl start database -d db_name [-o start_options] [-c connect_str | -q]
srvctl stop database -d db_name [-o stop_options] [-c connect_str | -q]
This content is a part of the practical field guide and handbook on Oracle Real Application Clusters.

The Oracle Real Application Cluster Field DBA Admin Handbook is a practical guide for
Oracle DBAs working on Oracle RAC high availability.
This book provides practical steps to administer Oracle Real Application Clusters (RAC) 10g.
It's a field guide with practical issues and commands for the job, with syntax and
explanations, written to help DBAs handle the day-to-day challenges on the job.

Init.ora Parameters in RAC

The initialisation parameter file is used by the Oracle server at startup. The parameters in the
initialisation parameter file specify the name of the database, the amount of memory to
allocate, the location of the control files, and various limits and other system parameters. In
Oracle Database 10g, the initialisation file can be replaced with an SPFILE, which allows
many Oracle parameters to be changed dynamically.

Initialisation Parameter File in Real Application Clusters (RAC)

A RAC system is a multi-instance, single-database system, and in general each of the
instances can have its own parameter values.

An SPFILE is created if you use DBCA
The SPFILE must be created on a shared volume or shared raw device
All instances use the same SPFILE
If the database was created manually, then create an SPFILE from the PFILE

Note: All instances in the cluster database use the same SPFILE at startup. Because the
SPFILE is a binary file, do not edit it. Instead, parameters in the SPFILE can be changed
using Enterprise Manager or the ALTER SYSTEM SQL statement.
In Real Application Clusters (RAC), there are a few advantages to using an SPFILE instead of a
traditional PFILE. Using an SPFILE simplifies administration, keeps parameter settings consistent,
and guarantees that parameter settings persist across database
shutdown and startup. In addition, you can configure RMAN to back up your SPFILE.

Real Application Clusters (RAC) uses a traditional PFILE only if an SPFILE does not exist or if you
specify PFILE in your STARTUP command.
To make all instances use the same SPFILE, each instance uses its own PFILE that
contains only one parameter, called SPFILE. The SPFILE parameter points to the shared SPFILE
on your shared storage.
Each instance calls its own PFILE init<SID>.ora, and by putting these files in the $ORACLE_HOME/dbs
directory of each node, a STARTUP command uses the shared SPFILE.
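For example, on node 1 the one-line PFILE might look like this (a sketch; the file name and the +DATA disk group path are assumptions):
$ cat $ORACLE_HOME/dbs/initRACDB1.ora
SPFILE='+DATA/RACDB/spfileRACDB.ora'
Every other instance carries an identical pointer, so a STARTUP on any node reads the same shared SPFILE.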
You can alter SPFILE settings with Enterprise Manager or by using the SET clause of
the ALTER SYSTEM statement. In addition, the ALTER SYSTEM command enables you to
override the effects of SPFILE settings that you made manually, and this can be done at the
memory level as well as at the file level.
Configuring Initialization Parameters for an Oracle RAC Database
By default, Oracle Database sets most parameters to a default value that is the
same across all instances. However, many initialization parameters can also have different
values on different instances. Other parameters must either be unique or identical across
instances.
There are three types of initialization parameters in the Real Application Clusters
environment:
* Parameters that must be Identical across all Instances
* Parameters that must be Unique across all Instances
* Multi-Valued Parameters
Parameters that must be Identical across all Instances
These parameters are generally critical directives. Some are specified at the time of
database creation, and some are specified (or modified) while the RAC system is running.
They must always be common in order for the clustered instances to function.
Specify these parameter values in the SPFILE, or within the individual PFILEs for each
instance. The following list contains the parameters that must be identical on every
instance:

ACTIVE_INSTANCE_COUNT

ARCHIVE_LAG_TARGET

CLUSTER_DATABASE

CLUSTER_DATABASE_INSTANCES

COMPATIBLE

CONTROL_FILES

DB_BLOCK_SIZE

DB_DOMAIN

DB_FILES

DB_NAME

DB_RECOVERY_FILE_DEST

DB_RECOVERY_FILE_DEST_SIZE

DB_UNIQUE_NAME

INSTANCE_TYPE (RDBMS or ASM)

PARALLEL_MAX_SERVERS

REMOTE_LOGIN_PASSWORD_FILE

UNDO_MANAGEMENT
The setting for DML_LOCKS must be identical on every instance only if set to zero.
Unique Parameters (across instances)
Parameters in this category are uniquely identified for a particular instance. They specify
the identifiers of the individual instance and give independent characteristics to an
instance:

instance_name :- Unique name of the instance.

instance_number:- unique number that maps the instance

rollback_segments :- assigns one or more rollback segments by name to this instance.

thread :- number of the redo thread to be used by an instance.

undo_tablespace:- undo tablespace to be used when an instance starts up.


Note: If your SPFILE contains instance-specific settings, then these settings take
precedence over settings made with ALTER SYSTEM commands.
SPFILE parameter values and RAC
You can change parameter settings using the
ALTER SYSTEM SET command from any instance:
ALTER SYSTEM SET parameter = value SCOPE=MEMORY SID='sid';
To remove an entry from your SPFILE:
ALTER SYSTEM RESET parameter SCOPE=SPFILE SID='sid';
This content is a part of the practical field guide and handbook on Oracle Real Application Clusters.
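As a concrete illustration (the parameter, value and instance name are hypothetical):
ALTER SYSTEM SET db_cache_size = 512M SCOPE=BOTH SID='RACINS1';
ALTER SYSTEM RESET db_cache_size SCOPE=SPFILE SID='RACINS1';
The first statement changes the buffer cache only for instance RACINS1, both in memory and in the shared SPFILE; the second removes that instance-specific entry again.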

Adding Node to RAC Cluster
