

Transitioning Oracle E-Business Suite to the Maximum Availability Architecture on Sun Platforms
Oracle MAA Team and Sun Market Development
The following is intended to outline our general
product direction. It is intended for information
purposes only, and may not be incorporated into any
contract. It is not a commitment to deliver any
material, code, or functionality, and should not be
relied upon in making purchasing decisions.
The development, release, and timing of any
features or functionality described for Oracle’s
products remain at the sole discretion of Oracle.

2
MAA: Getting There With Less Downtime
• MAA
• Philosophy and Flow
• Phase 1 – Local Cluster Creation
• Phase 2 – Two Node RAC
• Phase 3 – Full MAA Platform
• Ongoing Switchover and Failover Testing
• Partnering with Sun

3
Maximum Availability Architecture:
MAA, and the MAA Team
• Oracle-recommended best practices for High Availability
  • Based on proven technologies
  • Enhanced and validated with new Oracle versions
• Goal: reduce the complexity of implementing MAA while minimizing downtime
  • Best practices available through white papers and Oracle documentation
• Implemented by the MAA Team
  • HA engineering experts in Oracle’s core development group
  • Deep domain expertise designing, developing, and deploying HA architectures using Oracle and system technologies, and supporting them at customer sites worldwide

5
MAA for EBS: Target Architecture

• Redundancy for local hardware failures
  • Solaris Cluster, Oracle Clusterware, Oracle RAC, ASM
• Protection against operator error
  • Flashback Database
• Redundancy for site-level failures
  • Data Guard Redo Apply

But must we suffer an outage to implement MAA?

6
MAA: Getting There With Less Downtime
• MAA
• Philosophy and Flow
• Phase 1 – Local Cluster Creation
• Phase 2 – Two Node RAC
• Phase 3 – Full MAA Platform
• Ongoing Switchover and Failover Testing
• Partnering with Sun

7
Minimizing Outage to Implement MAA

• Stage all the changes, then switch
  • Clone the file systems: Applications software, tech stack
  • Clone the database to create a physical standby of production
  • Stage as many configuration changes as possible
  • Switch over, complete the configuration

8
Initial Configuration
[Diagram: clients connecting to Oracle E-Business Suite on NAS storage, backed by a single-node Oracle database on SAN with disk and tape storage.]

9
MAA Configuration
[Diagram: a primary site and a disaster recovery site, each with clients, Oracle E-Business Suite on NAS storage, an Oracle RAC database on SAN, and disk and tape storage.]

10
[Diagram: the configurations side by side – Initial Configuration, Single Node RAC Configuration (original node out of service, new database node running single-node RAC on ASM), Two Node RAC Configuration (two-node RAC on ASM), and the MAA Configuration (primary site plus disaster recovery site, each with Oracle E-Business Suite on NAS storage and an Oracle RAC database on SAN with disk and tape storage).]

11
MAA: Getting There With Less Downtime
• MAA
• Philosophy and Flow
• Phase 1 – Local Cluster Creation
• Phase 2 – Two Node RAC
• Phase 3 – Full MAA Platform
• Ongoing Switchover and Failover Testing
• Partnering with Sun

12
Phase 1: Local Cluster Creation
[Diagram: the Initial Configuration beside the Single Node RAC Configuration – the original database node is taken out of service and a new database node runs single-node RAC on ASM; clients, the Oracle E-Business Suite tier on NAS storage, the SAN, and disk and tape storage are unchanged.]

13
Phase 1 – Establish Single Node RAC with ASM
[Flow diagram – Apps node: clone the apps software, prepare the apps for the new database, switch over the apps. Existing database node: prepare the production database for RAC and ASM, clone the database software, back up the database. New database node: establish Solaris Cluster and shared storage, establish Oracle Clusterware and ASM, prepare the new database instance, establish the new standby database on ASM, switch over to the new RAC database.]

14
Single Node RAC Configuration
Local Cluster Creation: Prep Target Server
• Patch OS to current recommended levels
• Install Solaris Cluster
• Install and configure shared disk
• Create shared logical volumes
• Create OCR, Voting, and ASM spfile disk groups – these can each be 1 GB
• Create Data and Flash Recovery disk groups (see the sketch below)
• Install Oracle Clusterware and ASM
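
A minimal sketch of creating the Data and Flash Recovery ASM disk groups mentioned above, run against the ASM instance as SYSDBA. The disk group names, redundancy level, and Solaris DID device paths are illustrative assumptions, not values from this configuration:

    SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
           DISK '/dev/did/rdsk/d5s6', '/dev/did/rdsk/d6s6';  -- placeholder devices; asm_diskstring must cover them
    SQL> CREATE DISKGROUP FLASH EXTERNAL REDUNDANCY
           DISK '/dev/did/rdsk/d7s6';
    SQL> ALTER SYSTEM SET asm_diskgroups = 'DATA','FLASH' SCOPE=SPFILE;  -- mount them on ASM restart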

15
Single Node RAC Configuration
Local Cluster Creation: Prep Current Database for RAC
• Add redo threads for the new instance(s)
• Add undo tablespace(s) for the new instance(s)
• Add the clustering tables to the data dictionary by running $ORACLE_HOME/rdbms/admin/catclust.sql

Do these steps ahead of time in production, not using DBCA, to reduce and simplify the steps required during the downtime (see the sketch below).
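
A sketch of these preparation steps as they might be run on the production database as SYSDBA; the thread, group, file, and tablespace names and sizes are illustrative:

    SQL> ALTER DATABASE ADD LOGFILE THREAD 2
           GROUP 4 ('/u01/oradata/VIS/redo_t2_g4.log') SIZE 100M,
           GROUP 5 ('/u01/oradata/VIS/redo_t2_g5.log') SIZE 100M;
    SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
    SQL> CREATE UNDO TABLESPACE UNDOTBS2
           DATAFILE '/u01/oradata/VIS/undotbs2_01.dbf' SIZE 2000M;
    SQL> @?/rdbms/admin/catclust.sql  -- adds the clustering views to the data dictionary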

16
Single Node RAC Configuration
Local Cluster Creation: Prep Current DB for Data Guard
• Enable “force logging” to ensure all activity is written to the redo logs
• Add standby redo logs (see the sketch below)
• Create database password files
  • Create them for your final configuration – all instance names
• Grant SQL*Net access to other database nodes for redo traffic
  • 11i10 enables SQL*Net access control by default
  • Use OAM to add all appropriate interfaces for your new database nodes, local and remote
  • Run AutoConfig to generate the new sqlnet.ora file
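
A sketch of this Data Guard preparation on the production database; the group numbers, file names, and instance name are illustrative, and standby redo logs should be created for every thread (typically one more group per thread than the online redo logs):

    SQL> ALTER DATABASE FORCE LOGGING;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
           GROUP 10 ('/u01/oradata/VIS/srl_t1_g10.log') SIZE 100M;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
           GROUP 11 ('/u01/oradata/VIS/srl_t2_g11.log') SIZE 100M;
    $ orapwd file=$ORACLE_HOME/dbs/orapwVIS1 password=<sys_password>  -- repeat for each instance name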

17
Single Node RAC Configuration
Local Cluster Creation: Prep DB Configuration Files
• Make configuration changes using the “include” file, to avoid conflicts with AutoConfig
• For the temporary local standby database, we used EZConnect to simplify network configuration, for example:
  • sqlplus sys/manager@ha1db:1521/VIS
• We set fewer parameters than for a normal standby scenario, as this is a temporary setup (see the sketch below)
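
A sketch of what the temporary standby-related entries in the primary’s parameter include file might look like when using EZConnect; the host, port, and service values are illustrative, and the destination stays deferred until switchover:

    # init<SID>_ifile.ora additions on the production database (values illustrative)
    log_archive_dest_2='SERVICE=ha2db:1521/VIS VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
    log_archive_dest_state_2=DEFER
    standby_file_management=AUTO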

18
Single Node RAC Configuration
Local Cluster Creation: Clone the DB Oracle Home
• Run the Apps pre-clone utility against the production database
• Copy the software to a new directory on the target server (named differently than the original)
  • E.g., /u01/appltop in production; /u01/visdbRAC on target
• Use adcfgclone.pl dbTechStack on the target server to define the new topology (see the sketch below)
  • You will point to the standby, so it will not successfully connect to a database
• Configure and restart the listener
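
A sketch of this cloning flow; the directory paths and context name are illustrative:

    # On the production database node (source)
    $ cd $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>
    $ perl adpreclone.pl dbTier
    # Copy the ORACLE_HOME to the differently named directory on the target server
    # (e.g. /u01/appltop on the source, /u01/visdbRAC on the target), then on the target:
    $ cd <new ORACLE_HOME>/appsutil/clone/bin
    $ perl adcfgclone.pl dbTechStack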

19
Single Node RAC Configuration
Local Cluster Creation: Establish the Local Standby
• Using RMAN, back up the production database, then restore it to the new environment
• Start managed recovery (see the sketch below):
  • On the primary: set log_archive_dest_state_2 = enable
  • On the standby: start managed recovery
• Validate that redo is being shipped and applied
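
A sketch of starting and validating managed recovery once the standby has been restored; run as SYSDBA:

    -- On the primary
    SQL> ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    -- On the standby (mounted)
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    -- Validate on the standby: the applied sequence should keep pace with the primary
    SQL> SELECT thread#, MAX(sequence#) FROM v$archived_log
           WHERE applied = 'YES' GROUP BY thread#;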

20
Single Node RAC Configuration
Local Cluster Creation: Clone App Tier Software
• Clone the Application tier software to a new directory structure on the current middle tier(s), so configuration can be ready ahead of downtime
  • Run the pre-clone utility
  • Copy the software to a new directory
  • Run adclonectx.pl to define the new topology
  • Run adcfgclone.pl appsTier, pointing to the new context file created above (see the sketch below)
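
A sketch of the application tier cloning flow; paths and context file names are illustrative:

    # On each existing middle tier (source)
    $ cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
    $ perl adpreclone.pl appsTier
    # Copy APPL_TOP, COMMON_TOP, and the tech stack ORACLE_HOMEs to the new directory
    # structure, then create the new context and configure against it:
    $ cd <new COMMON_TOP>/clone/bin
    $ perl adclonectx.pl contextfile=<source_context_file>
    $ perl adcfgclone.pl appsTier <new_context_file>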

At this point, all possible configuration changes are staged, and the environment is ready for switchover.

21
Single Node RAC Configuration
Switchover to Single Instance RAC
• Be sure you are up to date with redo apply
• Shut down the apps
• [0:43] Switch to the local standby
• [0:01] Enable flashback
• [0:05] Open the new primary database instance
• [0:02] Remove the old application topology
• [1:34] Run AutoConfig on the database server
• [0:02] Bounce the DB listener to get the correct services
• [2:50] Run AutoConfig on the middle tiers (in parallel)
• Start the application, pointing to your single-node RAC instance
• Add the single instance to the Clusterware configuration (the switchover commands are sketched below)
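
A sketch of the Data Guard switchover and flashback portion of this flow, run as SYSDBA; the flash recovery area must already be configured before enabling flashback:

    -- On the current primary (original node)
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    -- On the local standby (new node), once the end-of-redo has been applied
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    SQL> ALTER DATABASE FLASHBACK ON;   -- the database is still mounted at this point
    SQL> ALTER DATABASE OPEN;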

22
Single Node RAC Configuration
[Diagram: clients, Oracle E-Business Suite on NAS storage, and a single-node RAC database on ASM over the SAN; the original database node is out of service and a new database node is in place, with disk and tape storage.]

23
MAA: Getting There With Less Downtime
• MAA
• Philosophy and Flow
• Phase 1 – Local Cluster Creation
• Phase 2 – Two Node RAC
• Phase 3 – Full MAA Platform
• Ongoing Switchover and Failover Testing
• Partnering with Sun

24
Phase 2: Two Node RAC
[Diagram: the Single Node RAC Configuration beside the Two Node RAC Configuration – the original database node (previously out of service) rejoins, giving a two-node RAC database on ASM behind the Oracle E-Business Suite tier.]

25
Phase 2 – Add Secondary RAC Instance Using the Original Node
[Flow diagram – Apps nodes: rolling restart of the apps tier to recognize the new database node. Original database node: establish Solaris Cluster and shared storage, establish Oracle Clusterware and ASM, clone the database software from the new production node, prepare the new database instance, add the node to the RAC cluster.]

26
Two Node RAC Configuration
Prep Original Node for Cluster: Hardware, OS, Storage
• Add in any hardware required for cluster
operations
• Apply OS patches as necessary
• Change the server name to be cluster-friendly (e.g.,
ha1db to ha1db02)
• Install Solaris Cluster and add the node to the cluster
• Configure access to shared disk
• Add this node to the cluster for Oracle Clusterware
and ASM

27
Two Node RAC Configuration
Prep Original Node for Cluster: Clone, Configure DB Software
• Clone the DB software from the production RAC DB oracle_home to the original server
• Start the new DB instance on the original server
  • Configure the new DB instance using AutoConfig and the DB parameter include file
  • Run AutoConfig on the production RAC DB server to regenerate the TNS configuration there

28
Two Node RAC Configuration
Prep Original Node for Cluster: Configure Middle Tier
• Using OAM’s Context Editor, set the Tools and iAS TWO_TASK values to point to entries in the generated tnsnames.ora file:
  • To load-balance Forms sessions: set Tools OH TWO_TASK to the <database name>_806_balance alias
  • To load-balance self-service connections: set iAS OH TWO_TASK to the <database name>_balance alias
• Run AutoConfig on the apps tier servers. Bounce them when desired to take advantage of the new database instance

29
Two Node RAC Configuration
Prep Original Node for Cluster: Add Node to Clusterware
• To be able to use srvctl to control the new cluster, add the resources to Clusterware via srvctl (see the sketch below):
  • Add the database
  • Add all database instances
• Add listeners to Clusterware:
  • Point to the Apps’ TNS_ADMIN directory in $OH/bin/racgwrap
  • Make sure the listener is running
  • Run netca, cluster configuration, choose local node. Run on all nodes.
  • Run AutoConfig again to overwrite the listener.ora file created by netca.
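
A sketch of registering the RAC resources with srvctl; the database, instance, and node names are illustrative (listeners are then registered by netca as described above):

    $ srvctl add database -d VIS -o $ORACLE_HOME
    $ srvctl add instance -d VIS -i VIS1 -n ha2db
    $ srvctl add instance -d VIS -i VIS2 -n ha1db02
    $ srvctl start instance -d VIS -i VIS2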

30
Two Node RAC Configuration
[Diagram: clients, Oracle E-Business Suite on NAS storage, and a two-node RAC database on ASM over the SAN, with disk and tape storage.]

31
MAA: Getting There With Less Downtime
• MAA
• Philosophy and Flow
• Phase 1 – Local Cluster Creation
• Phase 2 – Two Node RAC
• Phase 3 – Full MAA Platform
• Ongoing Switchover and Failover Testing
• Partnering with Sun

32
Phase 3: Full MAA Architecture
[Diagram: the Two Node RAC Configuration beside the MAA Configuration – a disaster recovery site is added that mirrors the primary site’s Oracle E-Business Suite tier on NAS storage and Oracle RAC database on SAN, each site with disk and tape storage.]

33
Phase 3 – Establish Disaster Recovery Site Utilizing Oracle Data Guard
[Flow diagram – Primary apps nodes: clone the apps software, prepare the apps software for the DR database. Primary database nodes: clone the database software, back up the database. DR database nodes: establish Solaris Cluster and shared storage, establish Oracle Clusterware and ASM, prepare the new database instance, establish the DR standby database.]

34
MAA Configuration
Full MAA: Establish Target Environment
• Build DR site hardware platform
(best: mimic production – multiple middle tiers, RAC
database server cluster)
• Install the operating system
• Install and configure Solaris Cluster
• Configure shared storage
• Install Oracle Clusterware and ASM

35
MAA Configuration
Full MAA: Configure Prod Database
• Add TNS entries for standby communications between sites
  • Configure failover across nodes, not load balancing
• Set database parameters for standby operations – the same as for the local standby, except (see the sketch below):
  • Use only permanent sites in log_archive_config
  • Use TNS entries for the FAL_CLIENT and FAL_SERVER parameters
  • Use TNS entries for log_archive_dest_2
• Assuming already in place: standby redo logs, extra undo tablespace(s), redo threads, cluster catalog in the database, password files, SQL*Net access control, …
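
A sketch of how these standby-related parameters might look in the primary’s database parameter include file; the db_unique_names and TNS aliases are illustrative, and the DR site carries the mirror-image settings:

    log_archive_config='DG_CONFIG=(VIS_ha1,VIS_ha2)'
    log_archive_dest_2='SERVICE=VIS_ha2 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=VIS_ha2'
    fal_server='VIS_ha2'
    fal_client='VIS_ha1'
    standby_file_management=AUTO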

36
MAA Configuration
Full MAA: Clone Prod DB Software
• Run the Apps pre-clone utility
• Copy database ORACLE_HOME to DR database
servers
• Run adcfgclone.pl dbTechStack on each DR
database server

37
MAA Configuration
Full MAA: Generate Core init.ora
• Edit the context files to correct the topology information (incorrect, as the DB is not yet up)
  • Instance_number
  • Instance_thread
  • Undo_tablespace
• Move or remove init<sid>.ora and <sid>_APPS_BASE.ora so AutoConfig regenerates them
• Run AutoConfig
• Adjust the database configuration for when this environment is primary and when it is standby, and for RMAN

38
MAA Configuration
Full MAA: Configure Standby TNS
• TNS configuration
  • Copy the production <context>_ifile.ora to the standby <context>_ifile.ora, to add the “failover” services (see the sketch below)
• Listener configuration
  • Add the ability to listen on the physical machine name to the list of addresses, using include files
  • Bounce the listener on each node at the DR site
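
A sketch of the kind of cross-site “failover” alias carried in the TNS include file; the alias, hosts, port, and service name are illustrative:

    VIS_DR_FAILOVER=
      (DESCRIPTION=
        (ADDRESS_LIST=
          (FAILOVER=ON)(LOAD_BALANCE=OFF)
          (ADDRESS=(PROTOCOL=TCP)(HOST=drdb01)(PORT=1521))
          (ADDRESS=(PROTOCOL=TCP)(HOST=drdb02)(PORT=1521)))
        (CONNECT_DATA=(SERVICE_NAME=VIS)))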

39
MAA Configuration
Full MAA: Clone the Database
• Using RMAN, back up the production database including archivelogs, and the production control file “as standby”
• Using RMAN, restore the database to the DR site using one of the configured instances
• Start managed recovery (see the sketch below)
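
A sketch of this RMAN backup-and-restore flow; the backup locations are illustrative and assume the backup pieces are visible at the same path on the DR database node:

    RMAN> BACKUP DATABASE PLUS ARCHIVELOG FORMAT '/backup/VIS/df_%U';
    RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/backup/VIS/scf_%U';
    # Transfer /backup/VIS to the DR site, then on one configured DR instance:
    RMAN> STARTUP NOMOUNT;
    RMAN> RESTORE STANDBY CONTROLFILE FROM '/backup/VIS/scf_<piece>';
    RMAN> ALTER DATABASE MOUNT;
    RMAN> RESTORE DATABASE;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;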

40
MAA Configuration
Full MAA: Update Clusterware with Standby DB
• Update the Oracle Clusterware
configuration on the standby site:
• Add the database
• Add all instances
• Add listeners
• Run AutoConfig once more to restore the base
listener.ora files

41
MAA Configuration
Full MAA: Clone Application Software
• Run the pre-clone step, copy the software, and run adclonectx.pl and adcfgclone.pl on each DR-site middle tier server
  • Ignore the error when running adcfgclone.pl appsTier, which occurs because there is no connection to the database
• Edit the context file to point Tools OH TWO_TASK, iAS OH TWO_TASK, and Apps JDBC Connect Alias to the appropriate load-balancing service

42
MAA Configuration
[Diagram: primary site and disaster recovery site – clients, Oracle E-Business Suite on NAS storage, Oracle RAC database on SAN, and disk and tape storage at both sites.]

At this point, all possible configuration changes are staged, and the environment is ready for switchover.

43
MAA: Getting There With Less Downtime
• MAA
• Philosophy and Flow
• Phase 1 – Local Cluster Creation
• Phase 2 – Two Node RAC
• Phase 3 – Full MAA Platform
• Ongoing Switchover and Failover Testing
• Partnering with Sun

44
Ongoing Switchover and Failover
Testing
• Periodically verify viability of DR environment
• Practice steps so the process flows easily if
disaster strikes
• Use the DR environment to provide application
services when performing platform or site
maintenance

45
MAA Configuration
Test Failover Requires a Brief Outage
• Be sure you are up to date with redo apply
• Shut down the app and all but one RAC instance on each site
• Switch the standby to primary, enable flashback, open, start the other instances
• Run AutoConfig on the database tier, then the middle tier
  • Do the “topology dance” on the DB tier first
• Start the Apps
• Use Flashback Database to start the original database as a standby of the new production database

46
DR Testing Procedure Using Flashback Database

• Create a database restore point on the DR standby database
• Open the standby database, complete the configuration
• Perform testing
• Flash the standby back to the restore point, resume recovery as a standby (see the sketch below)
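
A sketch of this test cycle on the 10gR2 physical standby using a guaranteed restore point; the restore point name is illustrative, and redo shipping from the primary should be deferred while the standby is open read/write:

    -- On the DR standby, as SYSDBA
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> CREATE RESTORE POINT before_dr_test GUARANTEE FLASHBACK DATABASE;
    SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
    SQL> STARTUP MOUNT FORCE;
    SQL> ALTER DATABASE OPEN;
    -- ... perform testing, then return to the restore point and resume standby recovery
    SQL> STARTUP MOUNT FORCE;
    SQL> FLASHBACK DATABASE TO RESTORE POINT before_dr_test;
    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    SQL> STARTUP MOUNT FORCE;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    SQL> DROP RESTORE POINT before_dr_test;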

47
MAA: Getting There With Less Downtime
• MAA
• Philosophy and Flow
• Phase 1 – Local Cluster Creation
• Phase 2 – Two Node RAC
• Phase 3 – Full MAA Platform
• Ongoing Switchover and Failover Testing
• Partnering with Sun

48
Oracle Clusterware

• In Oracle RAC 10g, various Oracle resources are configured to be managed by Oracle Clusterware:
• ONS (Oracle Notification Service)
• VIP (Virtual IP Address)
• LISTENERS
• DATABASE INSTANCES
• SERVICES

49
Oracle Clusterware Provides

• VIP resource
  • Provides application VIPs
• HA framework
  • Extends Oracle Clusterware HA protection to applications
• HA API
  • Interface that allows customers to change – at run time – how Oracle Clusterware manages customer applications
50
Oracle Clusterware with ASM Enables Consolidated Clustered Storage
[Diagram: RAC and single-instance databases (ERP, CRM, HR), each with its own ASM instance, running on clustered servers over a clustered pool of storage organized into ASM disk groups.]

51
Partnering with Sun

• Part of MAA means proving and testing our best practices, and working closely with Sun to ensure our joint solutions work well together

52
Solaris Cluster with Oracle

• Oracle Clusterware and Solaris Cluster work together to provide a reliable joint HA solution for Oracle 10g RAC on Sun platforms
• Proven and mature Sun Cluster framework
  • I/O fencing and data integrity
  • Interconnect failover and application traffic striping
  • Shared storage support, APIs, and more
• End-to-end Sun technology stack for better integration
  • Integrated cluster file system and volume manager
• Supports up to 8-node RAC configurations
• SPARC and AMD x64
  • More choice with lower total cost of ownership

53
Solaris Cluster: Quorum and I/O
Fencing for Data Integrity
• Solid implementation of quorum algorithm
to prevent split-brain
• I/O fencing prevents access to shared
storage by a node that is not part of the
cluster
• Guarantees no data corruption through
non-cluster nodes accessing shared data
• Node time synchronization

54
Solaris Cluster: Heartbeats, Interconnects and Traffic Striping
• Implements cluster heartbeats in
“interrupt context”
• Not subject to scheduling problems due to
high load or resource starvation
• All interconnect links are used with
automatic failover built-in
• Up to six links supported
• Separate networks for each private
interconnect means redundancy even
at switch level
• All traffic is striped over private
interconnects, resulting in higher
throughput and lower latency

55
Solaris 10 Operating System

• Offers over 600 exciting new features


• Supports horizontal or vertical scaling
• Provides relentless availability
• Delivers extreme performance
• Provides unparalleled security
• Facilitates leveraging of low cost hardware
• Enables standardization on a single OS
• Offers interoperability with Linux, Windows

56
Sun Fire T2000 Servers with
CoolThreads Technology

• Used in the MAA application tier, running Oracle EBS apps 11.5.10
• Designed for Web, application tier, and
multithreaded workloads
• Utilize an innovative design
• Incorporate UltraSPARC T1 processors with
CoolThreads technology
• Deliver breakthrough performance
• Provide massive thread-level parallelism
• Increase application throughput
• Offer dramatic space and power efficiency
• Configured with an 8 core, 1.2 GHz
UltraSPARC T1 processor, 32 GB memory,
two 73 GB disk drives

57
Sun Fire X4200 Servers

• Used in the MAA database tier, running the Oracle RAC database on Solaris 10 x64
• Support up to two single or dual-core
AMD Opteron processors
• Deliver fast network performance with
four Gigabit Ethernet ports, up to five
64-bit PCI-X slots
• Virtually eliminate I/O bottlenecks with
AMD’s HyperTransport technology
• Provide redundant power supplies, fans,
hard disk drives
• Bring extreme performance and a new level
of energy efficiency to the x86 market
• Configured with 8 GB memory, two 73 GB
disk drives

58
Sun StorageTek 5320
NAS Appliance

• Used in the MAA application tier


• Easy to deploy and manage
• Scales to 336 TB
• Maximizes security with a closed operating system
• Ensures regulatory compliance with the Sun StorageTek
Compliance Archiving software
• Increases availability and reliability with dual redundant RAID
controllers, journalling file system, and checkpointing
• Handles multiple protocols for UNIX and Windows clients

59
Sun StorageTek 6540 Array

• Used in the MAA database tier


• Provides online, data-in-place expansion
• Scales to 168 TB in a small footprint
• Uses a high availability architecture and
data protection software
• Enables configuration and management
over the network

60
Sun StorageTek Tape Storage

• Manage and protect data with tape libraries


• Gain control of information and make it
manageable with tape virtualization
technology
• Take advantage of price, capacity and
performance without straining budgets
with tape drives
• Centrally authorize, secure and manage
encryption keys with tape encryption
technology
• Improve the efficiency and productivity of
automated tape libraries with a full range
of tape management software

61
Software Components from Oracle
and Sun

• Oracle E-Business Suite 11.5.10.2
• Oracle RAC database, ASM, and Clusterware 10.2.0.2
• Oracle Enterprise Manager 10g Grid Control
• Solaris™ 10 Operating System (Update 3)
• Solaris Cluster 3.2 Advanced Edition for Oracle RAC
• Sun N1™ System Manager software

62
For More Information

http://search.oracle.com (search for “maximum availability architecture”)

or
http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm

63