MAA: Getting There With Less Downtime
Maximum Availability Architecture:
MAA, and the MAA Team
• Oracle-recommended best practices for High Availability
• Based on proven technologies
• Enhanced and validated with new Oracle versions
• Goal: reduce the complexity of implementing MAA while minimizing downtime
• Best practices available through white papers and Oracle documentation
• Implemented by the MAA Team
• HA engineering experts in Oracle's core development group
• Deep domain expertise in designing, developing, and deploying HA architectures using Oracle and system technologies, and in supporting them at customer sites worldwide
MAA for EBS: Target Architecture
Minimizing Outage to Implement MAA
Initial Configuration
[Diagram: clients connect to Oracle E-Business Suite on NAS storage, with a single-node Oracle Database on SAN]
MAA Configuration
[Diagram: a primary site and a disaster recovery site, each with clients and SAN storage]
[Diagram: the migration path from the initial configuration, through a single-node RAC configuration and a two-node RAC configuration, to the full MAA configuration spanning a primary site and a disaster recovery site, each with clients, Oracle E-Business Suite, NAS storage, and SAN]
Phase 1: Local Cluster Creation
[Diagram: the initial configuration, with clients, Oracle E-Business Suite, and the Oracle Database on SAN, becomes a single-node RAC on ASM running on a new database node, with the original node out of service; disk and tape storage back both configurations]
Phase 1 – Establish Single Node RAC with ASM
[Flow diagram — Apps node: clone the Apps software, prepare the Apps for the new database, then switch over. Database node: clone the database software, prepare the production database for RAC and ASM, back up the database, establish Clusterware with the cluster and shared storage, and create the standby database instance on ASM]
Single Node RAC Configuration:
Switchover to Single Instance RAC
• Be sure you are up to date with redo apply
• Shut down the apps
• [0:43] Switch to the local standby
• [0:01] Enable flashback
• [0:05] Open the new primary database instance
• [0:02] Remove the old application topology
• [1:34] Run AutoConfig on the database server
• [0:02] Bounce the DB listener to get the correct services
• [2:50] Run AutoConfig on the middle tiers (in parallel)
• Start the application, pointing to your single-node RAC instance
• Add the single instance to the Clusterware configuration
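For orientation, the switchover and flashback steps above map to a few core SQL*Plus commands. This is a hedged sketch of a 10g physical standby switchover, not the exact AutoConfig-wrapped procedure described in the MAA white papers:

```sql
-- On the current primary, once redo apply is caught up:
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

-- On the local standby, which becomes the new primary:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE FLASHBACK ON;  -- the database is still mounted at this point
ALTER DATABASE OPEN;
```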
Single Node RAC Configuration
[Diagram: clients connect to Oracle E-Business Suite on NAS storage; the database now runs on the new database node over disk and tape storage, with the original node out of service]
Phase 2: Two Node RAC
[Diagram: the single-node RAC on ASM, with the original node out of service, becomes a two-node RAC on ASM that brings the original node back in; clients, Oracle E-Business Suite, the Oracle Database on SAN, and disk and tape storage appear in both configurations]
Phase 2 – Add Secondary RAC Instance Using the Original Node
[Flow diagram — Apps node: perform a rolling Apps restart to recognize the new node. Database node: clone the database software, join the node to the Oracle Clusterware cluster with shared storage and ASM, and add the new production RAC database instance]
Two Node RAC Configuration
[Diagram: clients connect to Oracle E-Business Suite on NAS storage; the Oracle Database runs as a two-node RAC on ASM over SAN]
Phase 3: Full MAA Architecture
[Diagram: a primary site and a disaster recovery site, each with clients, Oracle E-Business Suite, the Oracle Database, SAN, and disk and tape storage]
Phase 3 – Establish Disaster Recovery Site Utilizing Oracle Data Guard
[Flow diagram — Apps nodes: clone the primary Apps software to the DR Apps nodes and prepare the Apps software for the DR database. Database nodes: clone the primary database software to the DR database nodes, back up the database, establish Oracle Clusterware on the DR Solaris nodes, and establish the new standby database]
MAA Configuration
Full MAA: Establish Target Environment
• Build the DR site hardware platform (ideally mimicking production: multiple middle tiers and a RAC database server cluster)
• Install the operating system
• Install and configure Solaris Cluster
• Configure shared storage
• Install Oracle Clusterware and ASM
MAA Configuration
Full MAA: Configure Prod Database
• Add TNS entries for standby communications between the sites
• Configure failover across nodes, not load balancing
• Set database parameters for standby operations. Same as for the local standby, except:
• Use only permanent sites in log_archive_config
• Use TNS entries for the FAL_CLIENT and FAL_SERVER parameters
• Use TNS entries for log_archive_dest_2
• Assumed already in place: standby redo logs, extra undo tablespace(s), redo threads, cluster catalog in the database, password files, SQL*Net access control, …
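As an illustration, the standby-related parameters on the production database might look like the fragment below. The DB_UNIQUE_NAMEs and TNS aliases (VIS_PRI, VIS_DR) are assumptions, not names from this deck:

```
# Hypothetical Data Guard parameter fragment; all names are illustrative.
log_archive_config='dg_config=(VIS_PRI,VIS_DR)'   # permanent sites only
log_archive_dest_2='service=VIS_DR valid_for=(online_logfiles,primary_role) db_unique_name=VIS_DR'
fal_server='VIS_DR'   # TNS entries, not raw hostnames
fal_client='VIS_PRI'
```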
MAA Configuration
Full MAA: Clone Prod DB Software
• Run the Apps pre-clone utility
• Copy the database ORACLE_HOME to the DR database servers
• Run adcfgclone.pl dbTechStack on each DR database server
MAA Configuration
Full MAA: Generate Core init.ora
• Edit the context files to correct the topology information (incorrect, as the DB is not yet up):
• Instance_number
• Instance_thread
• Undo_tablespace
• Move or remove init<sid>.ora and <sid>_APPS_BASE.ora so AutoConfig regenerates them
• Run AutoConfig
• Adjust the database configuration for when this environment is primary, when it is standby, and for RMAN
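Illustrative only: the regenerated per-instance settings might look like this pfile fragment, where the SIDs (VIS1, VIS2) and undo tablespace names are assumptions:

```
# Hypothetical per-instance init.ora settings for a two-node RAC;
# SID and tablespace names are illustrative.
VIS1.instance_number=1
VIS1.thread=1
VIS1.undo_tablespace=UNDOTBS1
VIS2.instance_number=2
VIS2.thread=2
VIS2.undo_tablespace=UNDOTBS2
```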
MAA Configuration
Full MAA: Configure Standby TNS
• TNS configuration
• Copy the production <context>_ifile.ora to the standby <context>_ifile.ora to add the "failover" services
• Listener configuration
• Add the ability to listen on the physical machine name to the list of addresses, using include files
• Bounce the listener on each node at the DR site
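A hedged example of the kind of "failover" TNS entry the ifile might carry, matching the earlier guidance to fail over across nodes rather than load-balance; hostnames and the service name are assumptions:

```
# Hypothetical tnsnames.ora ifile entry; hosts and service are illustrative.
VIS_FAILOVER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = OFF)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = drdb1.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = drdb2.example.com)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = VIS))
  )
```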
MAA Configuration
Full MAA: Clone the Database
• Using RMAN, back up the production database, including archived logs, and the production control file "as standby"
• Using RMAN, restore the database to the DR site using one of the configured instances
• Start managed recovery
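The three bullets above correspond roughly to the following sketch; channel configuration, backup destinations, and tags are omitted, and the exact commands in the MAA white papers may differ:

```sql
-- RMAN, on the production database:
BACKUP DATABASE PLUS ARCHIVELOG;
BACKUP CURRENT CONTROLFILE FOR STANDBY;

-- RMAN, on a DR instance (after restoring the standby control file and mounting):
RESTORE DATABASE;

-- SQL*Plus, on the standby instance, to start managed recovery:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```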
MAA Configuration
Full MAA: Clone Application Software
• Run the pre-clone step, copy the software, and run adclonectx.pl and adcfgclone.pl on each DR site middle tier server
• Ignore the error when running adcfgclone.pl appsTier that occurs because there is no connection to the database
• Edit the context file to point the Tools OH TWO_TASK, iAS OH TWO_TASK, and Apps JDBC Connect Alias to the appropriate load-balancing service
MAA Configuration
[Diagram: a primary site and a disaster recovery site, each with clients, an Oracle RAC database, and SAN storage]
Ongoing Switchover and Failover Testing
• Periodically verify the viability of the DR environment
• Practice the steps so the process flows easily if disaster strikes
• Use the DR environment to provide application services when performing platform or site maintenance
MAA Configuration
Test Failover: Requires a Brief Outage
• Be sure you are up to date with redo apply
• Shut down the app and all but one RAC instance on each site
• Switch the standby to primary, enable flashback, open it, and start the other instances
• Run AutoConfig on the database tier, then the middle tiers
• Do the "topology dance" on the DB tier first
• Start the Apps
• Use Flashback Database to start the original database as a standby of the new production database
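The last bullet, reinstating the old primary with Flashback Database, might look like the sketch below on a 10gR2 system; the SCN value is a placeholder for the standby_became_primary_scn recorded at failover, and the exact steps should be taken from the MAA documentation:

```sql
-- On the old primary, after the failover:
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN 1234567;  -- placeholder SCN
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```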
DR Testing Procedure Using Flashback Database
Oracle Clusterware
Oracle Clusterware Provides
• VIP resource
• Provides application VIPs
• HA framework
• Extends Oracle Clusterware HA protection to applications
• HA API
• An interface that allows customers to change, at run time, how Oracle Clusterware manages their applications
Oracle Clusterware with ASM Enables Consolidated Clustered Storage
[Diagram: RAC and single-instance databases (ERP, CRM, HR), each with its own ASM instance, run on clustered servers over a clustered pool of storage]
Partnering with Sun
Solaris Cluster with Oracle
Solaris Cluster: Quorum and I/O Fencing for Data Integrity
• Solid implementation of the quorum algorithm to prevent split-brain
• I/O fencing prevents access to shared storage by a node that is not part of the cluster
• Guarantees no data corruption through non-cluster nodes accessing shared data
• Node time synchronization
Solaris Cluster: Heartbeats, Interconnects and Traffic Striping
• Implements cluster heartbeats in "interrupt context"
• Not subject to scheduling problems due to high load or resource starvation
• All interconnect links are used, with automatic failover built in
• Up to six links supported
• Separate networks for each private interconnect mean redundancy even at the switch level
• All traffic is striped over the private interconnects, resulting in higher throughput and lower latency
Solaris 10 Operating System
Sun Fire T2000 Servers with CoolThreads Technology
Sun Fire X4200 Servers
Sun StorageTek 5320 NAS Appliance
Sun StorageTek 6540 Array
Sun StorageTek Tape Storage
Software Components from Oracle and Sun
For More Information
http://search.oracle.com
maximum availability architecture
or
http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm