7/8/13
Agenda
Application Continuity (AC)
3. Cost-effective Workload Management
Using Oracle Real Application Clusters (RAC), Grid Infrastructure (GI), Automatic Storage Management (ASM), and Oracle Clusterware (OCW)
A standardized and improved deployment and management
A familiar and matured HA stack
[Figure: the Oracle RAC / Flex ASM stack]
High Availability
[Figure: Oracle RAC, Oracle RAC One Node, and Oracle Restart, each running database instances (DB A) on Oracle Grid Infrastructure (GI)]
High Availability
Online (rolling) patch and PSU application
[Figure: database instances (DB A) on Oracle GI across Oracle RAC and Oracle RAC One Node]
Flexibility
Online upgrade
Re-configuration to enable a cluster
[Figure: Oracle RAC, Oracle RAC One Node, and Oracle Restart databases (DB A) on Oracle GI]
Consolidation
[Figure: multiple databases (DB A) consolidated on Oracle RAC, Oracle RAC One Node, and Oracle Restart, all running on Oracle GI]
Agenda
Application Continuity
Outages often lead to:
User pains (duplicate submissions)
Rebooting mid-tiers
Developer pains
Transaction Guard
A reliable protocol and API that returns the outcome of the last transaction
Application Continuity
Safely attempts to replay in-flight work following outages and planned operations
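The guarantee Transaction Guard provides can be illustrated with a small, self-contained sketch. This is not Oracle's implementation: `MiniDatabase`, `commit`, and `get_outcome` are hypothetical stand-ins for a server that records a logical transaction ID (LTXID) atomically with each commit and can report its outcome afterwards (Oracle exposes the real outcome query server-side, e.g. via the DBMS_APP_CONT package).

```python
# Conceptual sketch (not Oracle's implementation) of the Transaction Guard
# idea: every submission carries a logical transaction ID (LTXID) that is
# recorded atomically with the commit, so after an outage the client can
# ask for the outcome of its last transaction instead of blindly retrying.

class MiniDatabase:
    def __init__(self):
        self.balance = 0
        self.committed_ltxids = set()  # outcome log, written with the commit

    def commit(self, ltxid, delta):
        """Apply a change and record the LTXID in the same 'atomic' step."""
        if ltxid in self.committed_ltxids:
            return "already committed"  # a duplicate submission is rejected
        self.balance += delta
        self.committed_ltxids.add(ltxid)
        return "committed"

    def get_outcome(self, ltxid):
        """Did the transaction with this LTXID commit? (hypothetical API)"""
        return ltxid in self.committed_ltxids


db = MiniDatabase()
db.commit("ltxid-1", delta=100)

# The connection is lost; the client cannot know whether the commit landed.
# With the outcome API it checks before resubmitting:
if not db.get_outcome("ltxid-1"):
    db.commit("ltxid-1", delta=100)  # only runs if the work never committed
```

The key point the slide makes is exactly this known outcome: without it, the client's only options after an ambiguous failure are to resubmit (risking a duplicate) or give up.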
Application Continuity
Masks unplanned and planned outages
Replays in-flight (DML) work on recoverable errors
[Figure: Application Continuity between the mid-tier application and the database]
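The replay behavior Application Continuity describes can be sketched conceptually. The names here (`ReplayableSession`, `RecoverableError`, `insert`) are invented for illustration: the real feature lives in the Oracle replay-capable JDBC driver and the database, which additionally validate that replay is actually safe before re-running anything.

```python
# Conceptual sketch (not the Oracle replay driver): remember the in-flight
# requests on a session and, when a *recoverable* error occurs, reconnect
# and replay them so the application never observes the outage.

class RecoverableError(Exception):
    """An outage-class error (e.g. instance failure); safe to retry after."""

class ReplayableSession:
    def __init__(self, connect):
        self.connect = connect          # factory for a fresh connection
        self.conn = connect()
        self.replay_log = []            # in-flight calls since last commit

    def execute(self, fn, *args):
        self.replay_log.append((fn, args))
        try:
            return fn(self.conn, *args)
        except RecoverableError:
            self.conn = self.connect()  # fail over to a fresh connection
            result = None
            for logged_fn, logged_args in self.replay_log:
                result = logged_fn(self.conn, *logged_args)  # replay work
            return result

    def commit(self):
        self.replay_log.clear()         # completed work is never replayed


# Demo: a toy 'insert' that fails once, simulating an instance crash.
calls = {"n": 0}

def connect():
    return {"rows": []}

def insert(conn, row):
    if calls["n"] == 0:                 # first attempt dies mid-request
        calls["n"] += 1
        raise RecoverableError()
    conn["rows"].append(row)
    return len(conn["rows"])

session = ReplayableSession(connect)
result = session.execute(insert, "order-42")  # the outage is masked
session.commit()
```

After the simulated crash, the session reconnects and replays the logged call, so `"order-42"` ends up inserted exactly once and the caller sees a normal return value rather than an error.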
[Figure: end users connect through application servers and network switches to database servers]
Customer challenges
Large databases required considerable storage management
Best performance required raw storage
NFS solutions, while simple, did not perform as well as raw
For RAC, cluster file systems were not available
ASM Overview
Simplify the stack
[Figure: a file system layer on each server]
ASM Overview
Oracle Database 11.2 or earlier: a one-to-one mapping of ASM instances to servers
[Figure: a RAC cluster with database instances DBA, DBB, and DBC; each of Node1 through Node5 runs its own ASM instance]
One-to-one mapping of ASM instances to servers
[Figure: Node1 through Node5, each with a local ASM instance serving DBA, DBB, and DBC]
Databases share ASM instances
[Figure: the same database instances served by a smaller number of shared ASM instances]
Flex ASM
Remote Access
Flex ASM
Other Flex ASM features:
Increases the maximum number of disk groups to 511 (the previous limit was 63)
Adds a command for renaming an ASM disk
ASM instance patch-level verification (disabled during rolling patches)
Replicated physical metadata improves reliability (virtual metadata has always been replicated with ASM mirroring)
Agenda
Policy-Managed Databases
Highly available workload management: allocate resources as demand requires
Better high availability for any cluster: improve HA by choosing servers from the least important server pool
Policy-Managed Databases
Customer quote: "Policy managed: it's all about the workload."
Policy logic defines: availability, service levels, maintenance windows, performance, PCI requirements, regional/business priorities, and version.
Server pools are dynamically adjusted. Uniform services don't care where instances are or what they are named; it's all about capacity and workload. Instances are controlled by Min/Max combined with services: no more add/drop instance. QoS is critical to our management.

Policy      Min  Max  Importance
Americas    1    3    High
EMEA        1    3    High
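The Min/Max/Importance semantics behind a table like the one above can be sketched as a simple placement routine: satisfy every pool's minimum in importance order, then grow pools toward their maximum with whatever servers remain. This is an illustration of the assumed semantics, not clusterware's actual placement algorithm; `place_servers` and the integer importance values are hypothetical.

```python
# Illustrative sketch (assumed semantics, not the clusterware algorithm) of
# min/max/importance server-pool placement: meet every pool's Min in
# descending importance order, then hand out remaining servers up to Max.

def place_servers(pools, servers):
    """pools: dicts with name/min/max/importance. Returns {pool_name: [servers]}."""
    assignment = {p["name"]: [] for p in pools}
    free = list(servers)
    ranked = sorted(pools, key=lambda p: -p["importance"])
    # Pass 1: meet minimum sizes, most important pool first.
    for p in ranked:
        while len(assignment[p["name"]]) < p["min"] and free:
            assignment[p["name"]].append(free.pop(0))
    # Pass 2: grow pools toward their maximum with leftover servers.
    for p in ranked:
        while len(assignment[p["name"]]) < p["max"] and free:
            assignment[p["name"]].append(free.pop(0))
    return assignment


pools = [
    {"name": "Americas", "min": 1, "max": 3, "importance": 2},
    {"name": "EMEA",     "min": 1, "max": 3, "importance": 2},
    {"name": "free",     "min": 0, "max": 9, "importance": 0},
]
layout = place_servers(pools, ["srv1", "srv2", "srv3", "srv4"])
```

With four servers, both high-importance pools first receive their minimum of one server each, and the remaining capacity flows to the most important pool that still has headroom; this is why losing a server shrinks the least important pool first.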
Action: move a server between pools. The cluster administrator view:

[grid@LnxRAC12Srv1 bin]$ ./srvctl config srvpool
...
Server pool name: mail
Importance: 0, Min: 1, Max: 3
Candidate server names:
...
Server pool name: prod
Importance: 0, Min: 1, Max: 2
...

>> crsctl eval relocate server lnxrac12srv1 -to ora.mail -f
Stage Group 1:
------------------------------------------------------------------------------
Stage Number  Required  Action
------------------------------------------------------------------------------
1             Y         Server 'lnxrac12srv1' will be moved from pools
                        [ora.prod] to pools [ora.mail]
2             Y         Resource 'ora.rac.db' (1/1) will be in state
                        [OFFLINE]
------------------------------------------------------------------------------
Agenda
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Consolidation
[Figure: Oracle RAC One Node databases consolidated on Oracle GI]
Services
[Figure: services mapped to database instances on their servers]
[Figure: a service on RAC Instance 1 moving between Node 1 and Node 2]
[Figure: a CDB spanning Node1 and Node2]
Oracle RAC and Virtualization
Increasing Consolidation
[Figure: two bare-metal servers, each running a hypervisor with Dom-0 and guest VMs]
Live Migration
[Figure: a guest VM live-migrating from one virtualized bare-metal server to another]
[Figure: a guest running a database (DB A) is live-migrated between hypervisors, leaving the DBA asking what happens to the database]
Hub Nodes
Lightweight cluster stack on leaf nodes
[Figure: hub nodes running databases DB A and DB B, with leaf nodes attached]
www.oracle.com/goto/clusterware