D63697
Edition 2.1
D50081GC21
September 2010
Volume I Student Guide
Oracle Database 11g: New Features for Administrators
Authors
Copyright © 2009, 2010, Oracle and/or its affiliates. All rights reserved.
Contents
I Introduction
Overview I-2
Oracle Database Innovation I-3
Enterprise Grid Computing I-4
THESE eKIT MATERIALS ARE FOR YOUR USE IN THIS CLASSROOM ONLY. COPYING eKIT MATERIALS FROM THIS COMPUTER IS STRICTLY PROHIBITED
2 Installation Enhancements
Objectives 2-2
Oracle Database 11g Installation: Changes 2-3
Part Two: Installing the Oracle Database Software 2-6
Choosing the Type of Installation 2-7
Choosing Grid Installation Options 2-8
Choosing Language Settings 2-9
Choosing the Database Edition 2-10
Specifying Installation Location 2-11
Choosing Operating System Groups 2-12
Performing Prerequisite Checks 2-13
Installation Summary Page 2-14
Install Product Page 2-15
3 Oracle Restart
Objectives 3-2
Oracle Restart 3-3
Oracle Restart Process Startup 3-5
Controlling Oracle Restart 3-6
Choosing the Correct SRVCTL Utility 3-8
Oracle Restart Configuration 3-9
Using the SRVCTL Utility 3-10
Obtaining Help for the SRVCTL Utility 3-11
Starting Components by Using the SRVCTL Utility 3-12
Stopping Components by Using the SRVCTL Utility 3-13
Viewing Component Status 3-14
Displaying the Oracle Restart Configuration for a Component 3-15
Manually Adding Components to the Oracle Restart Configuration 3-16
Quiz 3-17
Summary 3-18
Practice 3-1: Overview 3-19
4 ASM Enhancements
Objectives 4-2
Without ASM Fast Mirror Resync 4-3
5 Storage Enhancements
Objectives 5-2
Supporting 4 KB Sector Disks 5-3
7 Oracle SecureFiles
Objectives 7-2
Managing Enterprise Information 7-3
11 Database Replay
Objectives 11-2
Why Use Database Replay? 11-3
Using Database Replay 11-4
The Big Picture 11-5
System Architecture: Capture 11-6
System Architecture: Preprocessing the Workload 11-7
System Architecture: Replay 11-8
Prechange Production System 11-9
Supported Workloads 11-10
Capture Considerations 11-11
Replay Considerations 11-12
Replay Options 11-14
Workload Replay Filters 11-16
Replay Analysis 11-18
Quiz 11-20
Database Replay Workflow in Enterprise Manager 11-21
Accessing Database Replay 11-22
Packages and Procedures 11-23
Database Replay: PL/SQL Example 11-24
Data Dictionary Views: Database Replay 11-26
Calibrating Replay Clients 11-27
Summary 11-28
Practice 11: Overview 11-29
14 Diagnosability Enhancements
Objectives 14-2
Oracle Database 11g Fault Management 14-3
Ease Diagnosis: Automatic Diagnostic Workflow 14-4
Automatic Diagnostic Repository 14-5
ADRCI: The ADR Command-Line Tool 14-7
V$DIAG_INFO 14-8
Location for Diagnostic Traces 14-9
Viewing the Alert Log Using Enterprise Manager 14-10
Viewing the Alert Log Using ADRCI 14-11
Quiz 14-13
Problems and Incidents 14-14
Incident Packaging Service (IPS) 14-17
Incident Packages 14-18
Enterprise Manager Support Workbench: Overview 14-20
Enterprise Manager Support Workbench Roadmap 14-21
View Critical Error Alerts in Enterprise Manager 14-22
Package and Upload Diagnostic Data to Oracle Support 14-23
16 Performance Enhancements
Objectives 16-2
Using the DBMS_ADDM Package 16-3
Advisor Named Findings and Directives 16-6
Introduction
Overview
This course is designed to introduce you to the new features of Oracle Database 11g that are
applicable to the work usually performed by database administrators and related personnel. The
course does not attempt to provide every detail about a feature or cover aspects of a feature that
were available in previous releases (except when defining the context for a new feature or
comparing past behavior with current behavior). Consequently, the course is most useful to you
if you have administered other versions of Oracle databases, particularly Oracle Database 10g.
Even with this background, you should not expect to be able to implement all of the features
discussed in the course without supplemental reading, especially the Oracle Database 11g
documentation.
The course consists of instructor-led lessons and demonstrations, plus many hands-on practices
that enable you to see for yourself how certain new features behave. As with the course content
in general, these practices are designed to introduce you to the fundamental aspects of a feature.
They are not intended to test your knowledge of unfamiliar syntax or to provide an opportunity
for you to examine every nuance of a new feature. The length of this course precludes such
activity. Consequently, you are strongly encouraged to use the provided scripts to complete the
practices rather than struggle with unfamiliar syntax.
[Slide graphic: Oracle Database focus areas (manageability, availability, performance, business intelligence and data warehousing, and information management, including content management, XML, and Oracle Text), with management automation spanning auto-tuning, instrumentation, recovery, replication, RAC, memory, schema, applications/SQL, backup, and storage.]
Management Automation
Oracle Database 11g continues the effort begun in Oracle9i Database and carried on through
Oracle Database 10g to dramatically simplify and, ultimately, fully automate the tasks that DBAs
must perform. What is new in Oracle Database 11g is Automatic SQL Tuning with self-learning
capabilities. Other new capabilities include automatic, unified tuning of both SGA and PGA
memory buffers, and new advisors for partitioning, database repair, streams performance, and
space management. Enhancements to Oracle Automatic Database Diagnostic Monitor (ADDM)
give it a better global view of performance in Oracle Real Application Clusters (RAC)
environments and improved comparative performance analysis capabilities.
Further Information
Suggested Schedule
Topic area            Lessons    Day
Installation          I, 1-3     1
Manage Change         9-11       3
Manage Availability   18-20      5
Suggested Schedule
The lessons in this guide are arranged in the order in which you will probably study them in the
class. The lessons are grouped into topic areas, but they are also organized by other criteria,
including the following:
A feature is introduced in an early lesson and then referenced in later lessons.
Topics alternate between difficult and easy to facilitate learning.
Lessons are supplemented with hands-on practices throughout the course to provide regular
opportunities for you to explore what you are learning.
If your instructor teaches the class in the sequence in which the lessons are printed in this guide,
the class should run approximately as shown in the schedule. Your instructor, however, may vary
the sequence of the lessons for a number of reasons, including:
Customizing material for a specific audience
Covering a topic in a single day instead of splitting the material across two days
Maximizing the use of course resources (such as hardware and software)
Objectives
Memory requirements:
1 GB for the database instance with Database Control
1.5 GB for the ASM instance and Oracle Restart
Disk space requirements:
Checking the System Requirements
Installation Scenario
As part of the Oracle Grid Infrastructure installation, the steps to configure the ASM disk
groups are shown and Oracle Restart is configured. The Oracle Grid Infrastructure is installed
first so that the database created after the Oracle Database software installation will be able to
use the ASM disk groups and be automatically registered with Oracle Restart.
Part One: Installing the Oracle Grid Infrastructure for Stand-Alone Server
To install the Oracle Grid Infrastructure software by using OUI, log on to your computer as a
member of the administrative group that is authorized to install the Oracle Grid Infrastructure
software, and create and manage the database. Insert the distribution CD for the clusterware
into your CD drive, or navigate to the Oracle clusterware staging location. From that location,
enter ./runInstaller to start the Oracle Universal Installer (OUI). The Select Installation
Option page is displayed. Select the Install and Configure Grid Infrastructure for a
Standalone Server option and click Next.
Creating Inventory
If an Oracle Inventory directory does not exist on the host machine, the "Create Inventory"
page is displayed. In the "Inventory Directory" field, enter the desired location for the
oraInventory directory. The recommended location for the oraInventory directory is one
level above the ORACLE_BASE directory; in the example, this is /u01/app/oraInventory.
Select the oraInventory group name from the selection list. The recommended name is
oinstall. Click the Next button to proceed with the installation.
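The recommended layout can be sketched with plain shell commands. This is a minimal illustration only: it uses a scratch prefix under /tmp rather than the real /u01 path so it is safe to run anywhere, and it omits the root-owned chgrp to oinstall that a real host would need.

```shell
# Illustrative sketch of the recommended layout: oraInventory sits one
# level above ORACLE_BASE. The /tmp prefix is an assumption for safety.
BASE=/tmp/oui_demo/u01/app
mkdir -p "$BASE/oracle"        # ORACLE_BASE
mkdir -p "$BASE/oraInventory"  # inventory directory, one level above
ls "$BASE"
```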
Important Instructions
To restore the OLR, use the following commands as root:
crsctl stop crs
ocrconfig -local -restore file_name
ocrcheck -local
crsctl start crs
Quiz
Answer: 2
[Slide graphic: the ACFS storage stack, showing user-space and kernel components: the ASM instance, the ASM Cluster File System (ACFS), ACFS snapshots, and the ASM Dynamic Volume Manager.]
[Slide graphic: an ASM dynamic volume is built from 64 MB volume extents, each allocated from primary allocation units (AUs); a 640 KB ACFS file is striped across volume extents 1 through 4 using the default 128 KB stripe width and 4 stripe columns.]
ADVM Restrictions
Although ADVM provides a standard disk device interface for dynamic volumes, the
following restrictions should be noted:
Device partitions are not supported on Oracle ADVM dynamic volumes. Dynamic
volumes supersede traditional device partitioning. Each volume is individually named and
may be configured for a single file system. Oracle ADVM volumes may be created on
demand from ASM disk group storage and dynamically resized as required. These
attributes make Oracle ADVM volumes far more flexible than physical devices and
associated partitioning schemes.
On Linux platforms, Oracle ADVM volume devices are created as block devices
regardless of the configuration of the storage underpinning the ASM disk group. Do not
use raw(8) to map Oracle ADVM volume block devices into raw volume devices.
You should not create multipath devices over Oracle ADVM volume devices.
Multipathing should be applied over the disk devices that are initially consumed by ASM
to construct the disk group underpinning an ADVM volume.
You should not use ASMLIB over an ADVM volume device. You cannot layer ASM over
ASM in a recursive fashion because this serves no useful purpose and is not supported.
Add a volume:
SQL> ALTER DISKGROUP DGROUPA
ADD VOLUME asmvol1 SIZE 10G;
Resize a volume:
SQL> ALTER DISKGROUP DGROUPA
RESIZE VOLUME asmvol1 SIZE 15G;
Drop a volume:
SQL> ALTER DISKGROUP DGROUPA
DROP VOLUME asmvol1;
Enable a volume:
SQL> ALTER DISKGROUP DGROUPA
ENABLE VOLUME asmvol1;
Disable a volume:
SQL> ALTER DISKGROUP DGROUPA
DISABLE VOLUME asmvol1;
Modify a volume:
SQL> ALTER DISKGROUP DGROUPA
MODIFY VOLUME asmvol1 USAGE 'acfs';
ACFS Architecture
[Slide graphic: ACFS architecture. An ASM instance manages a disk group containing a dynamic volume; ACFS and third-party file systems, clusterware files, and database files sit on top of the dynamic volume.]
ACFS Architecture
ACFS extends the architecture of ASM. The ACFS architecture introduces a new ASM file
type, known as a dynamic volume. A dynamic volume is essentially an ASM file that can be
presented as a volume to a file system rather than as a data file to the Oracle database. The
ASM Dynamic Volume Manager (ADVM) and its associated device driver provide the interface
between dynamic volumes and the ASM cluster file system. You can even run other file
systems, such as NT File System (NTFS), over an ASM dynamic volume.
Linux/UNIX Extensions
The administration commands have extensions to handle the differences between ACFS and
other file systems. Most of these extensions use the familiar file system type and file system
specific option parameters just as other file systems do.
Create an ACFS file system:
mkfs [-vf] -t acfs [-b blksz] [-n name ] device [blocks]
Mount an ACFS file system:
mount [-v] -t acfs [-o options] device dir
Unmount an ACFS file system:
umount [-v] device|dir
Check and repair an ACFS file system:
fsck [-avnf] -t acfs [info] device
The example commands in the gray boxes show the most common usage. Each command is
typed on one line.
Note: The examples assume that the volume that was created was named asmvol1-216,
where 216 is a number unique to the disk group.
Windows Extensions
Create an ACFS file system:
acfsformat [/vf] [/b blksz] [/n name] device [blocks]
Mount an ACFS file system:
acfsmountvol [/all] [/v]
Unmount an ACFS file system:
acfsdismount [/v] [/p] [/P] path
Check and repair an ACFS file system:
acfschkdsk [/a] [/v] [/n] [/f] [info] device
ACFS Platform-Independent Commands
Command            Function
acfsutil info      Displays ACFS file and file system information (ACFS, snapshots, registry, size).
acfsutil snapshot  Creates and displays ACFS snapshots.
ACFS:
Cannot be used for root file system or bootable
partition
Cannot be used for the Grid Infrastructure home
Creating an ASM Cluster File System
ACFS Snapshots
An Oracle ACFS snapshot is an online, read-only, point-in-time copy of an Oracle ACFS file
system. The snapshot copy is initially sparse and merely references the storage allocation
information maintained by the file system. Before an Oracle ACFS file extent is modified or
deleted, its current value is copied to the snapshot using a copy-on-write (COW) technique to
maintain the snapshot's point-in-time view of the file system.
Oracle ACFS snapshots are immediately available for use after they are created. They are
always online while the file system is mounted. As a result, an Oracle ACFS snapshot can
support the online recovery of files that are inadvertently modified or deleted from a file
system. With up to 63 snapshot views supported for each file system, flexible online file
recovery solutions spanning multiple views can be employed. An Oracle ACFS snapshot can
also be used as the source of a file system backup because it can be created on demand to
deliver a current, consistent, online view of an active file system.
Oracle ACFS snapshot storage is maintained within the file system, eliminating the need to
manage separate storage pools for file systems and snapshots. Oracle ACFS file systems can
be dynamically resized to accommodate additional file and snapshot storage requirements.
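The snapshot operations described above are driven by the platform-independent acfsutil command. The following is a hedged sketch, not from the course: the snapshot name and mount point are assumptions, and the commands require a host with a mounted ACFS file system.

```
$ acfsutil snap create snap_1 /u01/app/oracle/acfsmounts/acfs1
$ acfsutil snap info /u01/app/oracle/acfsmounts/acfs1
$ acfsutil snap delete snap_1 /u01/app/oracle/acfsmounts/acfs1
```

On Linux, the snapshot contents appear under the hidden .ACFS/snaps directory of the mount point, from which individual files can be copied back for online recovery.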
Quiz
Answer: 1
Summary
Objectives
Quiz
Answer: 1
[Slide graphic: direct database upgrade paths to Oracle Database 11g Release 2 from releases 9.2.0.8, 10.2.0.2, and 11.1.0.6.]
Deprecated Features in Oracle Database 11g Release 1 and Release 2
The USER_DUMP_DEST, BACKGROUND_DUMP_DEST, and CORE_DUMP_DEST initialization parameters are deprecated, replaced by DIAGNOSTIC_DEST.
UNDO_MANAGEMENT not set implies AUTO mode.
[Slide graphic: the Direct NFS views V$DNFS_FILES and V$DNFS_CHANNELS join on the SVR_ID column; mount information is read from /etc/mtab.]
Quiz
Answers: 1, 3, and 4
Summary
Oracle Restart
Objectives
Oracle Restart
Oracle Restart is designed to improve the availability of your Oracle database. It implements a high
availability solution for single-instance (nonclustered) environments only. For Oracle Real
Application Clusters (Oracle RAC) environments, the functionality to automatically restart
components is provided by Oracle Clusterware. Oracle Restart can monitor the health of, and
automatically restart, the following components:
Database instances
Oracle Net listener
Database services
ASM instance
ASM disk groups
Oracle Notification Services (ONS/eONS): Service for sending Fast Application Notification
(FAN) events to integrated clients upon failover. The eONS is used by Oracle Enterprise
Manager to receive notification of change in status of components managed by Oracle Restart.
Restarting an ASM disk group means mounting it. The ability to restart ONS is applicable only in
Oracle Data Guard installations for automatic failover of connections between primary and standby
databases through FAN.
Refer to the Oracle Database Administrator's Guide for a list of options for each of the above objects.
Add a database:
$ srvctl add database -d orcl
-o /u01/app/oracle/acfsmounts/acfs_db11
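Once the database is registered, the same utility controls and inspects it. The following is a brief sketch; the commands mirror standard srvctl usage, and the database name orcl is carried over from the example above:

```
$ srvctl start database -d orcl
$ srvctl status database -d orcl
$ srvctl config database -d orcl
$ srvctl stop database -d orcl
```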
Quiz
Answers: 1, 2, and 4
Summary
Objectives
Note
In this lesson, the term ASM data extent is shortened to extent.
[Slide graphic: with ASM normal redundancy, each extent has a primary (P) and a secondary (S) allocation unit copy, placed in different failure groups.]
Setup
On the first instance:
ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEA
On the second instance:
ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEB
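The same setup can be applied dynamically from a running ASM instance. A minimal sketch, assuming the two ASM SIDs are +ASM1 and +ASM2 (these names are assumptions, not from the course):

```sql
-- Hedged example: each ASM instance prefers reads from its local
-- failure group of the DATA disk group.
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA'
  SID = '+ASM1';
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEB'
  SID = '+ASM2';
```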
[Slide graphic: with only two failure groups (one for each instance), each instance reads its local copy; with a maximum of four failure groups (two for each instance), reads are still served from the local failure groups.]
Quiz
You can set up the ASM fast mirror resync feature on a per
disk group basis.
1. True
2. False
Answer: 1
ASM Scalability and Performance Enhancements
SYSASM Privilege
Oracle Database 11g includes the SYSASM privilege that is specifically intended for
performing ASM administration tasks. Using the SYSASM privilege instead of the SYSDBA
privilege improves security by separating ASM administration from database administration.
The OSASM group, a new OS-privileged group, is used exclusively for ASM. Members of this
group can connect as SYSASM using OS authentication and have full access to ASM.
You also have the ability to use the combination of CREATE USER and GRANT SYSASM SQL
statements from an ASM instance to create a new SYSASM user. This can be useful for remote
or local ASM administration. These commands update the password file of each ASM
instance. Similarly, you can revoke the SYSASM role from a user by using the REVOKE
command, and you can drop a user from the password file by using the DROP USER
command.
The V$PWFILE_USERS view includes a new column called SYSASM, which indicates
whether the user can connect with SYSASM privileges (TRUE) or not (FALSE).
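The statements described above can be sketched as follows; the user name and password are illustrative assumptions, not from the course:

```sql
-- Create a dedicated ASM administrator and grant the SYSASM privilege
-- (run while connected to the ASM instance).
CREATE USER asm_admin IDENTIFIED BY MyPassword1;
GRANT SYSASM TO asm_admin;

-- Check the password file entries, including the new SYSASM column.
SELECT username, sysasm FROM v$pwfile_users;

-- Remove the privilege and drop the user from the password file.
REVOKE SYSASM FROM asm_admin;
DROP USER asm_admin;
```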
ASM Disk Group Compatibility
The compatibility of each disk group is separately
controllable:
ASM compatibility controls ASM metadata on-disk
structure.
RDBMS compatibility controls the minimum consumer
client level.
ASM Disk Group Attributes
Name     Property  Values               Description
au_size  C         1|2|4|8|16|32|64 MB  Size of allocation units in the disk group
(Property codes: A = settable with ALTER DISKGROUP; C = settable only at CREATE DISKGROUP)
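Because au_size carries the C property, it must be supplied when the disk group is created. A hedged sketch; the disk group name and disk paths are assumptions:

```sql
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/sdb1', '/dev/sdc1'
  ATTRIBUTE 'au_size'          = '4M',
            'compatible.asm'   = '11.2',
            'compatible.rdbms' = '11.2';
```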
[Slide graphic: the md_backup command backs up disk group metadata, including user-created directories, templates, disk group compatibility, the disk group name, and disk names and failure groups.]
Quiz
Answer: 1
ASM ACLs
ASM ACLs provide optional protection for ASM files. The objective of the ASM access
control list is not security, but separation of duties to prevent accidental file damage. Without
ACLs, any user with the SYSDBA privilege may access ASM files in the mounted disk group,
including removing them.
To set up Oracle ASM File Access Control, you must create separate operating system groups
for the OSASM, OSDBA for ASM, and OSDBA for database groups. The OSDBA group for the
database must be different for each database instance using the same ASM instance.
Each ASM file is created by a DBUSER. This DBUSER is usually an owner of a database
instance. The ASM files created for that database are owned by that DBUSER. The OS user ID
of the database software owner identifies the DBUSER to the ASM instance. Access is limited
by the OS effective user ID number of the DBUSER. The operating system user of a
running database instance is automatically added to a disk group when the database instance
accesses that disk group and creates files.
Each DBUSER can create access control lists.
The ASM ACL includes a user group, a list of DBUSERS, but not any passwords. ASM trusts
the OS authentication mechanisms. A DBUSER is a member of the OSDBA group for ASM.
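ASM File Access Control is administered with ALTER DISKGROUP. The following is a hedged sketch in which the disk group, OS user, user group, and file names are all assumptions:

```sql
-- Register an OS user and a user group with the disk group.
ALTER DISKGROUP data ADD USER 'oracle1';
ALTER DISKGROUP data ADD USERGROUP 'test_grp' WITH MEMBER 'oracle1';

-- Restrict an ASM file: owner read/write, group read-only, others none.
ALTER DISKGROUP data SET PERMISSION
  OWNER = read write, GROUP = read only, OTHER = none
  FOR FILE '+data/orcl/datafile/example.266.123456789';
```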
[Slide graphic: marking ASM files requires the disk group compatibility attributes COMPATIBLE.ASM >= 11.2 and COMPATIBLE.RDBMS >= 11.2.]
Quiz
Answer: 1
Summary
Practice 4: Overview
Objectives
[Slide graphic: 512-byte emulation mode maps eight 512-byte logical blocks (LBA0 through LBA7) onto a single 4 KB physical sector.]
Specify the sector size for disk drives with the new
SECTOR_SIZE and BLOCKSIZE clauses of the following
commands:
CREATE DISKGROUP
No extra work
No GUI change
Default sector size based on hardware (not the earlier 512-byte sectors)
Specifying BLOCKSIZE
Specify BLOCKSIZE to override the operating system-dependent sector size. The examples in the
slide show the ALTER DATABASE and the CREATE CONTROLFILE commands.
When you add a redo log file to a 512-byte sector disk or to a 4 KB native disk, the block size of
the new file must be the original platform base block size.
When you add a redo log file to a 4 KB sector disk with 512-byte emulation, then you can
specify either 512, 1024 (or 1K), or 4096 (or 4K) as the block size, depending on your platform.
All logs within a log group must have the same block size. Two log groups created on separate disks
can have different block sizes. However, the mixed configuration introduces overhead at every log
switch. Oracle recommends that you create all log files with the same block size.
This clause is useful when the 4 KB sector size is in use, and you want to optimize disk space use
rather than performance. In such a case, you can override the operating system sector size by
specifying BLOCKSIZE 512 or, for HP-UX, BLOCKSIZE 1024.
Reminder: Use the CREATE CONTROLFILE command very carefully. Oracle recommends that
you perform a full backup of all files in the database before using this command. For more
information, see Oracle Database Backup and Recovery User's Guide.
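A minimal sketch of the ALTER DATABASE form described above; the group number, file name, and size are assumptions:

```sql
-- Add a redo log group with an explicit 4 KB block size
-- (valid on a 4 KB sector disk running in 512-byte emulation).
ALTER DATABASE ADD LOGFILE GROUP 5
  ('/u01/app/oracle/oradata/orcl/redo05.log')
  SIZE 100M BLOCKSIZE 4096;
```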
Postmigration steps:
1. Query the V$LOG view to ensure that all 512-byte block size logs have the INACTIVE status.
SQL> SELECT group#, blocksize, sequence#, status FROM v$log;
GROUP# BLOCKSIZE SEQUENCE# STATUS
---------- ---------- ---------- ----------------
1 512 1 INACTIVE
2 512 2 INACTIVE
3 4096 3 ACTIVE
4 4096 4 CURRENT
2. Drop the INACTIVE 512-byte block size log groups.
3. Query the V$LOG view to ensure that only 4 KB block size logs are available for your
database.
SQL> SELECT group#, blocksize, sequence#, status FROM v$log;
GROUP# BLOCKSIZE SEQUENCE# STATUS
---------- ---------- ---------- ----------------
3 4096 3 INACTIVE
4 4096 4 CURRENT
Quiz
Answer: 1
[Slide graphic: OLTP compression. Inserts into a block are initially uncompressed; when the block reaches PCTFREE, its data is compressed, reclaiming free space for further uncompressed inserts.]
Compressing Table Data
Using OLTP Compression
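Enabling OLTP compression is a single clause on the table DDL. A hedged sketch; the table and column names are assumptions:

```sql
-- Create a table compressed for OLTP workloads (11.2 syntax; in 11.1
-- the equivalent clause was COMPRESS FOR ALL OPERATIONS).
CREATE TABLE sales_history (
  prod_id   NUMBER,
  sale_date DATE,
  amount    NUMBER )
  COMPRESS FOR OLTP;

-- Or enable it on an existing table for newly inserted data:
ALTER TABLE sales_history COMPRESS FOR OLTP;
```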
[Slide graphic: SQL Access Advisor answers the question "What partitions, indexes, and MVs do I need to optimize my entire workload?" Workload sources (the SQL cache or a hypothetical workload) feed the SQL Access Advisor, which produces a recommended solution.]
Possible Recommendations
SQL Access Advisor carefully considers the overall impact of recommendations and makes
recommendations by using only the known workload and supplied information. Two workload
analysis methods are available:
Comprehensive: With this approach, SQL Access Advisor addresses all aspects of tuning
partitions, materialized views, indexes, and materialized view logs. It assumes that the workload
contains a complete and representative set of application SQL statements.
Limited: Unlike the comprehensive workload approach, a limited workload approach assumes
that the workload contains only problematic SQL statements. Thus, advice is sought for
improving the performance of a portion of an application environment.
When comprehensive workload analysis is chosen, SQL Access Advisor forms a better set of global
tuning adjustments, but the effect may be a longer analysis time. As shown in the table, the chosen
workload approach determines the type of recommendations made by the advisor.
Note: Partition recommendations can work on only those tables that have at least 10,000 rows, and
workloads that have some predicates and joins on columns of NUMBER or DATE type. Partitioning
advice can be generated only on these types of columns. In addition, partitioning advice can be
generated only for single-column interval and hash partitions. Interval partitioning recommendations
can be output as range syntax, but interval is the default. Hash partitioning is done only to leverage
partition-wise joins.
[Slide graphic: DBMS_ADVISOR workflow. Step 1: CREATE_TASK, UPDATE_TASK_ATTRIBUTES, DELETE_TASK. Step 2: SET_TASK_PARAMETER, RESET_TASK. Step 3: ADD_STS_REF, DELETE_STS_REF, EXECUTE_TASK, INTERRUPT/CANCEL_TASK, MARK_RECOMMENDATION, and generation of the report/scripts.]
-- Step 1
BEGIN
  dbms_advisor.create_task(dbms_advisor.sqlaccess_advisor, 'MYTASK');
END;
/
-- Step 2
BEGIN
  dbms_advisor.set_task_parameter('MYTASK', 'ANALYSIS_SCOPE', 'ALL');
END;
/
-- Step 3
BEGIN
  dbms_advisor.add_sts_ref('MYTASK', 'SH', 'MYSTS');
  dbms_advisor.execute_task('MYTASK');
  dbms_output.put_line(dbms_advisor.get_task_script('MYTASK'));
END;
/
DBA_TEMP_FREE_SPACE
This dictionary view reports temporary space usage information at the tablespace level. The
information is derived from various existing views.
Quiz
Answers: 2 and 4
Summary
Objectives
New feature: preprocessing of files for external tables
Windows Example
[C:\TEMP]type C:\DATA\input.dat
hello world
[C:\TEMP]type C:\TEMP\bar.bat
@cat %1 | tr hw HW
SQL> @ex1
SQL> connect / as sysdba
Connected.
SQL> create or replace directory def_dir1 as 'C:\DATA';
Directory created.
SQL> create or replace directory execdir as 'C:\TEMP';
Directory created.
SQL> grant read, write on directory def_dir1 to scott;
Grant succeeded.
SQL> grant read, write, execute on directory execdir to scott;
Grant succeeded.
SQL> connect scott/tiger
(Windows uses a batch file rather than a shell script to preprocess the data.)
Windows Example
In this Windows example, the table to be loaded has a single VARCHAR2 column. The batch file will
convert the string hello world (the contents of the input data) to Hello World.
Windows uses a batch file rather than shell script to preprocess the data. Secondly, note the @
character used in the batch file. This will prevent the preprocessing command from echoing what
would then be considered input data. Lastly, note that Windows batch files use the % character
rather than the UNIX style $ to provide access to arguments.
Example of Preprocessing
With Oracle Database 11g Release 2, you can specify a program to be executed (such as zcat) that
will process files (such as compressed data files) and enable Oracle Database to use the output (such
as uncompressed data files). This enables you to load large amounts of compressed data without first
uncompressing it on a disk.
This example shows an external table named SALES_TRANSACTIONS_EXT, representing the
structure of the complete sales transaction data in the external file sh_sales.dat.gz. The
product department is especially interested in a cost analysis on product and time, so a fact table
named COSTS is created in the sales history schema. The operational source data is the same as for
the SALES fact table. However, because you are not investigating all the dimensional information
that is provided, the data in the COSTS fact table has a coarser granularity than in the SALES fact
table, for example, all different distribution channels are aggregated.
You cannot load the data into the COSTS fact table without applying the previously mentioned
aggregation of the detailed information, due to the suppression of some of the dimensions.
The external table framework offers a solution to solve this. Unlike SQL*Loader, where you would
have to load the data before applying the aggregation, you can combine the loading and
transformation within a single SQL DML statement, as shown in the following. You do not have to
stage the data temporarily before inserting into the target table.
Note that the preprocessor file is in a separate directory from the data files and log files.
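The external table described above can be sketched as follows. This is a hedged reconstruction: the column definitions, directory object names, and field format are assumptions where the course text does not state them, while the table name, file name, and zcat preprocessor come from the text.

```sql
CREATE TABLE sales_transactions_ext (
  prod_id    NUMBER,
  cust_id    NUMBER,
  time_id    DATE,
  unit_cost  NUMBER,
  unit_price NUMBER )
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'zcat'          -- decompress on read
    FIELDS TERMINATED BY '|' )
  LOCATION ('sh_sales.dat.gz') )
REJECT LIMIT UNLIMITED;
```

An INSERT ... SELECT from this table can then aggregate the rows into the COSTS fact table in a single DML statement, without staging uncompressed data on disk.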
Security Implications
To use the preprocessing feature, you must specify the PREPROCESSOR clause in the access
parameters of the ORACLE_LOADER access driver. The PREPROCESSOR must be a directory
object, and the user accessing the external table must have EXECUTE privileges for the directory
object.
Because the access driver needs to run the preprocessor that will modify the data in the data files, the
database user must be prevented from specifying, either accidentally or maliciously, a program that
could damage or corrupt the database or the operating system. Because the program that is being
executed is run under the OS credentials of the OS user ORACLE, the preprocessor can also access
any files that can be accessed by the OS user ORACLE.
By forcing the preprocessor to live in a directory object, the DBA and the OS system manager
control who gets to access the preprocessor. The OS system manager needs to create the directory
corresponding to the directory object and needs to make sure that the OS user ORACLE has access to
the directory. The DBA needs to make sure that only the required database users are allowed access
to the directory object associated with the directory path. Although multiple database users can have
access to a directory object, only those with the EXECUTE privilege can run a preprocessor in that
directory. No existing database user with READ/WRITE privileges to a directory object will be able
to use the preprocessing feature. A DBA can prevent preprocessors from ever being used by never
granting the EXECUTE privileges to a directory object.
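The privilege model above comes down to two GRANT decisions. A hedged sketch, in which the directory path, directory object, and user names are assumptions:

```sql
-- The OS path must already exist and be accessible to the oracle OS user.
CREATE OR REPLACE DIRECTORY exec_dir AS '/u01/app/preprocess';

-- READ/WRITE alone does NOT allow running a preprocessor there;
-- only EXECUTE on the directory object permits that.
GRANT READ ON DIRECTORY exec_dir TO sh;
GRANT EXECUTE ON DIRECTORY exec_dir TO sh;
```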
Quiz
Answer: 1
[Slide graphic: automatic degree of parallelism. (1) The optimizer determines the DOP and generates a serial plan with its cost. (4) The estimated parallel elapsed time is compared with the estimated serial elapsed time; if parallel is worse, the statement runs serially; if better, it runs in parallel.]
Using PARALLEL_MIN_TIME_THRESHOLD
One new concept related to simplified parallelism is the minimum threshold for parallelism. Even if
a SQL statement is deemed a candidate for automatic determination of parallelism, does the
expected gain justify the overhead of running it in parallel?
The PARALLEL_MIN_TIME_THRESHOLD parameter limits parallelism to only those SQL
statements for which it is worth the effort. Any SQL statement whose estimated elapsed time is
below this threshold executes serially.
This new parameter specifies the threshold (expressed in seconds) that limits the scope of automatic
parallel execution. Only if the estimated serial execution elapsed time of a statement is greater than
the threshold is the statement a candidate for automatic parallel execution. The parameter accepts
any numerical value greater than 0, or the keyword AUTO. When it is set to AUTO, the threshold is
computed by the system.
Note: PARALLEL_MIN_PERCENT works as in past releases if the estimated elapsed time exceeds
PARALLEL_MIN_TIME_THRESHOLD.
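As a minimal sketch of the behavior described above (the threshold value 30 is an arbitrary assumption):

```sql
-- Automatic DOP applies only to statements estimated to run longer than
-- 30 seconds; anything cheaper executes serially.
ALTER SYSTEM SET parallel_degree_policy = AUTO;
ALTER SYSTEM SET parallel_min_time_threshold = 30;
```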
Using PARALLEL_DEGREE_POLICY
MANUAL Mode: MANUAL mode exists for backward compatibility. In this mode, it is the
responsibility of the DBA to specify the degree of parallelism by decorating objects with PARALLEL
attributes if a statement is to be executed in parallel. Statements touching objects decorated with
parallel attributes are executed with a DOP derived directly from those attributes.
This mode is enabled when PARALLEL_DEGREE_POLICY is set to MANUAL, which is the default.
LIMITED Mode: The degree of parallelism is derived from the elapsed time of the most costly
operation during the first serial compilation. To derive the DOP, the optimizer uses a unit of work
that can be performed by one slave, expressed in seconds. (The default is 10 seconds and is
controlled by an underscore parameter.)
To be granted another slave, an operation needs at least that unit of work above the threshold. For
example, if the unit is 10 seconds, the threshold is also 10 seconds, and the estimated elapsed time
of the operation is 43 seconds, the ideal degree of parallelism is 4 (43/10, rounded down).
The optimizer still uses response time to compute the best parallel plan. The DOP is derived from the
expected response time of a single operation, divided by the unit of work that a slave should cover to
justify its cost:
DOP = ElapsedTime(operation) / parallel_time_unit
This mode is enabled when the PARALLEL_DEGREE_POLICY parameter is set to LIMITED.
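A minimal sketch of LIMITED mode; the table name sales is an illustrative assumption:

```sql
-- Under LIMITED, only statements touching objects decorated with a PARALLEL
-- attribute get an optimizer-computed DOP; other statements run as in MANUAL mode.
ALTER SYSTEM SET parallel_degree_policy = LIMITED;
ALTER TABLE sales PARALLEL;  -- DEFAULT parallelism: DOP decided by the optimizer
```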
Using PARALLEL_DEGREE_LIMIT
Another new concept related to simplified parallelism is maximum degree of parallelism. By
default, the maximum degree of parallelism for a statement is capped by the default DOP. In some
cases, this DOP might be too high. There is a need to have a limit on the maximum degree of
parallelism for all parallel statements in the system to prevent overloading of the system.
A new parameter, PARALLEL_DEGREE_LIMIT, allows you to specify the maximum DOP.
The allowable values for the parameter are any numerical value greater than or equal to 1, CPU, IO,
or AUTO. If the parameter is set to CPU, IO, or AUTO, the degree limit is computed automatically
by the system, based on the corresponding system resource constraint. You can also set the limit to
a fixed value if you want to cap the degree of parallelism below a certain value for any reason.
When PARALLEL_DEGREE_LIMIT is set to AUTO, the threshold is derived by the optimizer.
When PARALLEL_DEGREE_LIMIT is set to IO, the threshold is derived from I/O resource
constraints.
The default value for this parameter is CPU, calculated as follows:
threads_per_cpu * total number of CPUs in the cluster
This control is applicable to LIMITED mode only. The DOP derived either from object decoration
directly under manual mode or by the optimizer under limited mode would never exceed the degree
limit specified in the control.
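A one-line sketch of a fixed cap (the value 8 is an arbitrary assumption):

```sql
-- Cap the DOP at 8 instead of the default CPU-derived limit
-- (threads_per_cpu * total number of CPUs in the cluster).
ALTER SYSTEM SET parallel_degree_limit = 8;
```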
Using PARALLEL_FORCE_LOCAL
Within a cluster, the instance with the highest number of CPUs is used to determine the degree limit.
Example: In a cluster with two 4-CPU machines, one 2-CPU machine, and one 8-CPU machine, 8 is
the highest number and is used to compute the degree limit with:
threads_per_cpu * #cpu
PARALLEL_FORCE_LOCAL restricts the parallel server processes of a statement to the instance to
which the user session is connected, instead of spreading them across the cluster. You can enable
this behavior by setting PARALLEL_FORCE_LOCAL to TRUE.
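As a sketch:

```sql
-- Keep all parallel server processes of a statement on the local RAC instance.
ALTER SYSTEM SET parallel_force_local = TRUE;
```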
Oracle Partitioning
The slide summarizes ten years of partitioning development at Oracle.
Note: REF partitioning enables pruning and partitionwise joins against child tables. While
performance seems to be the most visible improvement, do not forget about the rest. Partitioning
must address all business-relevant areas of performance, manageability, and availability.
Partitioning Enhancements
Interval partitioning
System partitioning
Composite partitioning enhancements
Virtual column-based partitioning
Reference partitioning
Partitioning Enhancements
Partitioning is an important tool for managing large databases. Partitioning allows the DBA to
employ a divide and conquer methodology for managing database tables, especially as those tables
grow. Partitioned tables allow a database to scale for very large data sets while maintaining
consistent performance, without unduly impacting administrative or hardware resources.
Partitioning enables faster data access within an Oracle database. Whether a database has 10 GB or
10 TB of data, partitioning can speed up data access by orders of magnitude.
Oracle Database 11g includes the following partitioning enhancements:
Addition of interval partitioning
Addition of system partitioning
Composite partitioning enhancements
Addition of virtual column-based partitioning
Addition of reference partitioning
Interval Partitioning
Before the introduction of interval partitioning, the DBA was required to explicitly define the range
of values for each partition. The problem with this approach is that explicitly defining the bounds for
each partition does not scale as the number of partitions grows.
Interval partitioning is an extension of range partitioning, which instructs the database server to
automatically create partitions of a specified interval when data inserted into the table exceeds all of
the range partitions. You must specify at least one range partition. The range partitioning key value
determines the high value of the range partitions, which is called the transition point, and the
database server creates interval partitions for data beyond that transition point.
Interval partitioning fully automates the creation of range partitions. Managing the creation of new
partitions can be a cumbersome and highly repetitive task. This is especially true for predictable
additions of partitions covering small ranges, such as adding new daily partitions. Interval
partitioning automates this operation by creating partitions on demand.
When using interval partitioning, consider the following restrictions:
You can specify only one partitioning key column, and it must be of NUMBER or DATE type.
Interval partitioning is not supported for index-organized tables.
You cannot create a domain index on an interval-partitioned table.
[Slide figure: range partitions P1, P2, P3 hold data below the transition point (< 01/01/07); when an INSERT INTO orders_interval exceeds the transition point, interval partitions (SYS_Px, SYS_Py, SYS_Pz, SYS_Pt) are created automatically; intervals with no data are not yet materialized.]
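A sketch of the orders_interval table shown in the slide; the column names and the monthly interval are assumptions:

```sql
-- Monthly interval partitions; columns are illustrative assumptions.
CREATE TABLE orders_interval (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  -- The high value of the last explicit range partition is the transition point.
  PARTITION p_before_2007 VALUES LESS THAN (DATE '2007-01-01')
);

-- Inserting past the transition point creates a SYS_Pn partition on demand.
INSERT INTO orders_interval VALUES (1, DATE '2010-09-15');
```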
System Partitioning
System partitioning:
Enables application-controlled partitioning for selected tables
Provides the benefits of partitioning, but the partitioning and actual data placement are controlled by the application
System Partitioning
System partitioning enables application-controlled partitioning for arbitrary tables. This is mainly
useful when you develop your own partitioned domain indexes. The database simply provides the
ability to break down a table into meaningless partitions. All other aspects of partitioning are
controlled by the application. System partitioning provides the well-known benefits of partitioning
(scalability, availability, and manageability), but the partitioning and actual data placement are
controlled by the application.
The most fundamental difference between system partitioning and other methods is that system
partitioning does not have any partitioning keys. Consequently, the distribution or mapping of the
rows to a particular partition is not implicit. Instead, the user specifies the partition to which a row
maps by using partition-extended syntax when inserting a row.
Because system-partitioned tables do not have a partitioning key, the usual performance benefits of
partitioned tables are not available for system-partitioned tables. Specifically, there is no support for
traditional partition pruning, partitionwise joins, and so on. Partition pruning is achieved by
accessing the same partitions in the system-partitioned tables as those that were accessed in the base
table.
System-partitioned tables provide the manageability advantages of equipartitioning. For example, a
nested table can be created as a system-partitioned table that has the same number of partitions as the
base table. A domain index can be backed up by a system-partitioned table that has the same number
of partitions as the base table.
Because of the peculiar requirements of system partitioning, the following operations are not
supported for system partitioning:
Unique local indexes are not supported because they require a partitioning key.
CREATE TABLE AS SELECT is not supported because there is no partitioning method. It is
not possible to distribute rows to partitions. Instead, you should first create the table and then
insert rows into each partition.
SPLIT PARTITION operations are not supported.
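A minimal sketch of a system-partitioned table; the table, columns, and partition names are illustrative assumptions:

```sql
-- Two partitions with no partitioning key; the database does not decide placement.
CREATE TABLE docs_sys (
  doc_id NUMBER,
  body   VARCHAR2(4000)
)
PARTITION BY SYSTEM (
  PARTITION p1,
  PARTITION p2
);

-- The application routes each row explicitly with partition-extended syntax.
INSERT INTO docs_sys PARTITION (p1) VALUES (1, 'first document');
```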
Reference Partitioning
Reference partitioning provides the ability to partition a table based on the partitioning scheme of the
table referenced in its referential constraint. The partitioning key is resolved through an existing
parent/child relationship, which is enforced by active primary key and foreign key constraints. The
benefit of this is that tables with a parent/child relationship can be logically equipartitioned by
inheriting the partitioning key from the parent table without duplicating the key columns. The logical
dependency also automatically cascades partition maintenance operations, making application
development easier and less error prone.
[Slide figure: parent table ORDERS is partitioned by Range(ORDER_DATE) with primary key ORDER_ID; child table ORDER_ITEMS, with foreign key ORDER_ID, inherits the Range(ORDER_DATE) partitioning. New composite combinations such as Interval-Hash are available, with LIST, RANGE, and HASH as top-level methods.]
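A sketch of the parent/child pair from the slide; the column lists and partition bound are assumptions:

```sql
-- Parent: range-partitioned on ORDER_DATE.
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION q1_2007 VALUES LESS THAN (DATE '2007-04-01')
);

-- Child: inherits the parent's partitioning through its foreign key.
CREATE TABLE order_items (
  order_id NUMBER NOT NULL,   -- the FK column must be NOT NULL
  item_no  NUMBER,
  CONSTRAINT fk_oi_orders FOREIGN KEY (order_id) REFERENCES orders
)
PARTITION BY REFERENCE (fk_oi_orders);
```

Note that ORDER_ITEMS is equipartitioned with ORDERS without duplicating the ORDER_DATE column.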
Summary
Practice 6: Overview
Oracle SecureFiles
Objectives
Oracle SecureFiles
Oracle Database 11g completely reengineers the LOB data type as Oracle SecureFiles,
dramatically improving the performance, manageability, and ease of application development.
The new implementation also offers advanced, next-generation functionality such as intelligent
compression and transparent encryption.
With SecureFiles, chunks vary in size from Oracle data block size up to 64 MB. The Oracle
Database server attempts to colocate data in physically adjacent locations on disk, thereby
minimizing internal fragmentation. By using variable chunk sizes, SecureFiles avoids versioning
of large, unnecessary blocks of LOB data.
SecureFiles also offers a new client/server network layer that allows high-speed data transfer
between the client and server, supporting significantly higher read and write performance.
SecureFiles automatically determines the most efficient way for generating redo and undo,
eliminating user-defined parameters. SecureFiles automatically determines whether to generate
redo and undo for only the change, or create a new version by generating a full redo record.
SecureFiles is designed to be intelligent and self-adaptable: it maintains different in-memory
statistics that help with efficient memory and space allocation. This provides easier manageability
through fewer tunable parameters, which would otherwise be hard to tune under unpredictable
workloads.
Note: The LOB implementation available in prior database versions is now referred to as BasicFiles.
Altering the RETENTION setting with the ALTER TABLE statement affects only space created after
the statement is executed.
For SecureFiles, you no longer need to specify CHUNK or PCTVERSION. Because SecureFiles are
stored in ASSM tablespaces, FREEPOOLS, FREELISTS, and FREELIST GROUPS cannot be
specified. For compatibility with existing scripts, these clauses are parsed but not interpreted.
Creating SecureFiles
You create SecureFiles with the storage keyword SECUREFILE in the CREATE TABLE
statement with a LOB column. When you add a LOB column to a table, you can specify whether
it should be created as SecureFiles or BasicFiles. If you do not specify the storage type, the LOB
is created as BasicFiles to ensure backward compatibility.
In the first example in the slide, you create a table called FUNC_SPEC to store documents as
SecureFiles. Here you are specifying that you do not want duplicates stored for the LOB, that the
LOB should be cached when read, and that redo should not be generated when updates are
performed to the LOB. In addition, you are specifying that the documents stored in the DOC
column should be encrypted using the AES128 encryption algorithm. KEEP_DUPLICATES is
the opposite of DEDUPLICATE, and can be used in an ALTER statement.
In the second example you are creating a table called TEST_SPEC that stores documents as
SecureFiles. For this table you have specified that duplicates may be stored, and that the LOBs
should be stored in compressed format and should be cached but not logged. Additional
information on compression is provided later in the lesson.
The third and fourth examples achieve the same result: creating a table with a SecureFiles
LOB column using the default AES192 encryption.
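The slide's code is not reproduced in these notes. A hedged reconstruction of the first two examples, with assumed column lists and storage options derived only from the description above:

```sql
-- Reconstruction; column names and exact option spelling are assumptions.
CREATE TABLE func_spec (
  id  NUMBER,
  doc CLOB
)
LOB (doc) STORE AS SECUREFILE (
  DEDUPLICATE               -- do not store duplicate LOB content
  CACHE READS               -- cache LOB data when read
  NOLOGGING                 -- no redo for LOB updates
  ENCRYPT USING 'AES128'
);

CREATE TABLE test_spec (
  id  NUMBER,
  doc CLOB
)
LOB (doc) STORE AS SECUREFILE (
  KEEP_DUPLICATES           -- duplicates may be stored
  COMPRESS                  -- store LOBs in compressed format
  CACHE READS NOLOGGING
);
```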
Altering SecureFiles
ALTER TABLE t1 MODIFY LOB(a) ( KEEP_DUPLICATES );               -- Disable deduplication.
ALTER TABLE t1 MODIFY LOB(a) ( DEDUPLICATE LOB );               -- Enable deduplication.
ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) ( DEDUPLICATE LOB );  -- Enable deduplication for one partition.
ALTER TABLE t1 MODIFY LOB(a) ( NOCOMPRESS );                    -- Disable compression.
Altering SecureFiles
Using the DEDUPLICATE option, you can specify that LOB data that is identical in two or more
rows in a LOB column should share the same data blocks. The opposite of this is
KEEP_DUPLICATES. Oracle uses a secure hash index to detect duplication and combines
LOBs with identical content into a single copy, reducing storage and simplifying storage
management. The LOB keyword is optional and is for syntactic clarity only.
The COMPRESS or NOCOMPRESS keywords enable or disable LOB compression, respectively.
All LOBs in the LOB segment are altered with the new compression setting. Additional
information on compression is provided later in the lesson.
The ENCRYPT or DECRYPT keyword turns on or off LOB encryption using Transparent Data
Encryption (TDE). All LOBs in the LOB segment are altered with the new setting. A LOB
segment can be altered only to enable or disable LOB encryption. That is, ALTER cannot be
used to update the encryption algorithm or the encryption key. The encryption algorithm or
encryption key can be updated using the ALTER TABLE REKEY syntax. Encryption is done at
the block level allowing for better performance (smallest encryption amount possible) when
combined with other options.
Note: For a full description of the options available for the ALTER TABLE statement, see the
Oracle Database SQL Reference.
SecureFile Compression
SecureFile compression provides maximized storage utilization by compressing unstructured data stored within the database.
Compression Modes
In Oracle Database 11g Release 1, there were two modes of SecureFile compression: HIGH and
MEDIUM. HIGH compression provides maximum compression at the cost of more CPU and
relatively higher latency. MEDIUM compression provides lower compression than HIGH mode,
with correspondingly lower latency. MEDIUM is the default compression mode if none is specified.
In Oracle Database 11g Release 2, a new mode of SecureFile compression is introduced: LOW.
This mode of compression is recommended for customers who want faster performance during
reads and writes.
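A sketch of switching to the new LOW mode; the table and column names are the assumed ones used earlier in this lesson:

```sql
-- LOW (new in 11.2) trades compression ratio for faster reads and writes.
ALTER TABLE func_spec MODIFY LOB(doc) (COMPRESS LOW);
```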
Configuration Parameters
To optimize performance, you can force the I/O to be written to disk and avoid file system
buffering by setting the FILESYSTEMIO_OPTIONS parameter to SETALL.
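As a sketch:

```sql
-- FILESYSTEMIO_OPTIONS is static: the change takes effect after a restart.
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;
```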
[Slide: the DBMS_LOB package and DBMS_SPACE.SPACE_USAGE support SecureFiles.]
Migrating to SecureFiles
A superset of LOB interfaces allows easy migration from BasicFile LOBs. The two
recommended methods for migration to SecureFiles are partition exchange and online
redefinition.
Partition Exchange
Needs additional space equal to the largest of the partitions in the table
Can maintain indexes during the exchange
Can spread the workload out over several smaller maintenance windows
Requires the table or partition to be offline to perform the exchange
Online Redefinition (recommended practice)
No need to take the table or partition offline
Can be done in parallel
Requires additional storage equal to the entire table and all LOB segments to be available
Requires that any global indexes be rebuilt
These solutions generally mean using twice the disk space used by the data in the input LOB
column. However, using partitioning and taking these actions on a partition-by-partition basis
may help lower the disk space required.
DECLARE
  error_count PLS_INTEGER := 0;
BEGIN
  dbms_redefinition.start_redef_table('scott','tab1','tab1_tmp','id id, c c');
  dbms_redefinition.copy_table_dependents('scott','tab1','tab1_tmp',1,
      TRUE, TRUE, TRUE, FALSE, error_count);
  dbms_redefinition.finish_redef_table('scott','tab1','tab1_tmp');
END;
/
SecureFiles Monitoring
Summary
Practice 7: Overview
Objectives
Password Configuration
By default:
Default password profile is enabled
Account is locked after 10 failed login attempts
In upgrade:
Audited Privileges
The privileges listed on the slide are audited for all users ON SUCCESS and FAILURE, and BY
ACCESS.
Tablespace Encryption
Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on
read. The data is not encrypted in memory. The only encryption penalty is associated with I/O. The
SQL access paths are unchanged and all data types are supported. To use tablespace encryption, the
encryption wallet must be open.
The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption
properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify
USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid
algorithms are 3DES168, AES128, AES192, and AES256. The default is AES128. You can view
the properties in the V$ENCRYPTED_TABLESPACES view.
The encrypted data is protected during operations such as JOIN and SORT. This means that the data
is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected.
Encrypted tablespaces are transportable if the platforms have the same endianness and the same wallet.
Restrictions
Temporary and undo tablespaces cannot be encrypted as such (although blocks written to them from encrypted tablespaces remain encrypted).
BFILEs and external tables are not encrypted.
Transporting tablespaces across platforms with different endianness is not supported.
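A sketch of the clause described above; the tablespace name, file path, and algorithm choice are illustrative, and the encryption wallet must already be open:

```sql
-- ENCRYPTION sets the properties; the ENCRYPT storage parameter activates them.
CREATE TABLESPACE secure_ts
  DATAFILE '/u01/app/oracle/oradata/orcl/secure_ts01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```

The resulting properties are visible in V$ENCRYPTED_TABLESPACES.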
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'us-oracle-com-permissions.xml',
    description => 'oracle.com permissions',  -- remaining arguments are assumed
    principal   => 'SCOTT', is_grant => TRUE, privilege => 'connect');
END;
The same IPv6 address can be written in several equivalent notations:
2001:0DB8:0000:0000:0000:0000:200C:417A
2001:0DB8:0:0::200C:417A
2001:DB8::200C:417A
A connect descriptor whose host name (host01-server) resolves to the IPv6 address 2001:0DB8::200C:417A:
host01=
 (DESCRIPTION=
   (ADDRESS=(PROTOCOL=tcp)(HOST=host01-server)(PORT=1521)))
Other example IPv6 addresses:
FEDC:BA98:7654:3210:FEDC:BA98:7654:3210
1080:0:0:0:8:800:200C:417A
Summary
Practice 8: Overview
Objectives
[Slide figure: testing lifecycle — set up the test system (provisioning automation from the production database), make the change, test with realistic workloads (for example, taken from the cursor cache), then diagnose and resolve problems.]
Step 3: Build Before-Change Performance Data
Step 4: Implement Planned Change and Step 5: Build After-Change Performance Data
The fourth step is to make the changes to the test system. Rerun the SQL statements to assess the
impact of the changes on the SQL performance as step five.
Use SQL execution frequency to define a weight of importance.
Detect improvements, regressions, and unchanged performance.
Detect changes in execution plans.
Recommend running SQL Tuning Advisor to tune regressed SQL statements.
Analysis results can be used to seed SQL Plan Management baselines.
Step 6: Compare and Analyze Performance and Step 7: Tune Regressed SQL
The comparison is based on the execution statistics, such as elapsed time, CPU time, and buffer gets.
Enterprise Manager provides the tools to make a full comparison of performance data, including
execution statistics such as elapsed time, CPU time, and buffer gets. If the SQL performance has
regressed in some of the cases, you must then run SQL Tuning Advisor to tune the SQL statements
either immediately or at a scheduled time. As with any tuning strategy, it is recommended that only
one change be implemented at a time and retested before making further changes.
You can use SQL Tuning Advisor or Access Advisor against the identified statements and then
implement those recommendations. Alternatively, you can seed SQL Plan Management (SPM) with
plans captured in step 3 to guarantee that the plans remain the same.
Testing Database Upgrades: Oracle9i Database and Oracle Database 10g Release 1
SQL Performance Analyzer supports testing database upgrades from Oracle9i and Oracle Database
10g Release 1 to Oracle Database 10g Release 2 and later releases by executing the SQL tuning set
on the upgraded database remotely over a database link. Because SQL Performance Analyzer
accepts only a set of SQL statements stored in a SQL tuning set as its input source, and SQL tuning
sets are not supported in Oracle9i, a SQL tuning set must be constructed separately so that it can be
used as an input source for SQL Performance Analyzer if you are upgrading from Oracle9i.
The production system from which you are upgrading should be running Oracle9i or Oracle Database
10g Release 1. The test system to which you are upgrading should be running Oracle Database 10g
Release 2 (10.2.0.2) or a newer release. If you are upgrading to Oracle Database 10g release
10.2.0.2, 10.2.0.3, or 10.2.0.4, you also need to install a one-off patch before proceeding.
Set up a separate system for SQL Performance Analyzer running Oracle Database 11g Release 1
(11.1.0.7) or a later release. Use this system to build a SQL tuning set and to run SQL Performance
Analyzer. You do not need your production data or schema on this system, because the SQL tuning
set will be built using statistics stored in the SQL trace files from the production system. SQL
Performance Analyzer tasks will be executed remotely on the test system to generate the execution
plan and statistics for the SQL trial over a database link that you specify. The database link must be a
public database link that connects to a user with the EXECUTE privilege for the DBMS_SQLPA
package and the ADVISOR privilege on the test system. You should also drop any existing
PLAN_TABLE from the user's schema on the test system.
Testing Database Upgrades: Oracle9i Database and Oracle Database 10g Release 1
(continued)
To use SQL Performance Analyzer in a database upgrade from Oracle9i or Oracle Database 10g
Release 1 to a later release, perform the following steps:
1. Enable the SQL Trace facility on the production system. Consider enabling SQL Trace for only
a subset of the sessions, for as long as required, to capture all important SQL statements at least
once.
2. On the production system, create a mapping table that will be used to convert the user and object
identifier numbers in the SQL trace files to their string equivalents.
3. Move the SQL trace files and the mapping table from the production system to the SQL
Performance Analyzer system.
4. On the SQL Performance Analyzer system, construct a SQL tuning set by using the SQL trace
files. The SQL tuning set will contain the SQL statements captured in the SQL trace files, along
with their relevant execution context and statistics.
Testing Database Upgrades: Oracle9i Database and Oracle Database 10g Release 1
(continued)
5. On the SQL Performance Analyzer system, use SQL Performance Analyzer to create a SQL
Performance Analyzer task and convert the contents in the SQL tuning set into a preupgrade
SQL trial that will be used as a baseline for comparison. Remotely test execute the SQL
statements on the test system over a database link to build a postupgrade SQL trial. You can
access the SQL Performance Analyzer through Enterprise Manager or by using the
DBMS_SQLPA package.
6. Compare SQL performance and fix any regressed SQL statements.
Repeat the process of executing the SQL tuning set and comparing its performance to a previous
execution to test any changes made until you are satisfied with the results.
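Step 5 can be sketched with the DBMS_SQLPA package. The tuning-set name, database link, and execution name below are illustrative assumptions:

```sql
-- upgrade_sts, testdb.example.com, and post_upgrade_trial are assumptions.
VARIABLE tname VARCHAR2(64)

EXEC :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'upgrade_sts');

BEGIN
  -- Point the task at the remote test system.
  DBMS_SQLPA.SET_ANALYSIS_TASK_PARAMETER(:tname, 'DATABASE_LINK',
                                         'testdb.example.com');
  -- Remotely test execute the statements to build a SQL trial.
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => :tname,
                                   execution_type => 'TEST EXECUTE',
                                   execution_name => 'post_upgrade_trial');
END;
/
```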
Testing Database Upgrades: Oracle Database 10g Release 2 and Later Releases
You can use SQL Performance Analyzer to test the impact on SQL response time of a database
upgrade from Oracle Database 10g Release 2 or a later release to any later release by capturing a
SQL tuning set on the production system, then executing it twice remotely over a database link on a
test system: first to create a prechange SQL trial, and then again to create a postchange SQL trial.
The production system from which you are upgrading should be running Oracle Database 10g
Release 2 or a later release. Initially, the test system should also be running the same release. To
ensure that the analysis made by SQL Performance Analyzer is accurate, the test system should
contain an exact copy of the production data found on the production system. The hardware
configuration on the test system should be as similar to the production system as possible.
Set up a separate system for SQL Performance Analyzer running Oracle Database 11g Release 1
(11.1.0.7) or a later release. Use this system to build a SQL tuning set and to run SQL Performance
Analyzer. You do not need your production data or schema on this system, because the SQL tuning
set will be built using statistics stored in the SQL trace files from the production system. SQL
Performance Analyzer tasks will be executed remotely on the test system to generate the execution
plan and statistics for the SQL trial over a database link that you specify. The database link must be a
public database link that connects to a user with the EXECUTE privilege for the DBMS_SQLPA
package and the ADVISOR privilege on the test system. You should also drop any existing
PLAN_TABLE from the user's schema on the test system.
Testing Database Upgrades: Oracle Database 10g Release 2 and Later Releases
(continued)
To use SQL Performance Analyzer in a database upgrade from Oracle Database 10g Release 2 and
later releases to a newer release, perform the following steps:
1. On the production system, capture the SQL workload that you intend to analyze and store it in a
SQL tuning set.
2. Set up the test system so that it matches the production environment as closely as possible.
3. Transport the SQL tuning set to the SQL Performance Analyzer system.
4. On the SQL Performance Analyzer system, create a SQL Performance Analyzer task using the
SQL tuning set as its input source. Remotely test execute the SQL statements in the SQL tuning
set on the test system over a database link to build a preupgrade SQL trial that will be used as a
baseline for comparison.
Testing Database Upgrades: Oracle Database 10g Release 2 and Later Releases
(continued)
5. Upgrade the test system.
6. Remotely test execute the SQL statements a second time on the upgraded test system over a
database link to build a postupgrade SQL trial.
7. Compare SQL performance and fix any regressed SQL statements.
Repeat the process of executing the SQL tuning set and comparing its performance to a previous
execution to test any changes made until you are satisfied with the results.
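Step 7, the comparison, might look like this with DBMS_SQLPA; the task name 'SPA_TASK_1' and execution name are illustrative assumptions, and by default the comparison uses the two most recent trials of the task:

```sql
-- SPA_TASK_1 is an assumed task name created earlier with CREATE_ANALYSIS_TASK.
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'SPA_TASK_1',
    execution_type => 'COMPARE PERFORMANCE',
    execution_name => 'compare_trials');
END;
/
```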
Summary
Practice 9: Overview
Objectives
[Slide figure: SQL Plan Management automatic capture — each repeatable SQL statement is recorded in the statement log; its plans accumulate in the plan history, and accepted plans form the plan baseline; a SQL profile can also be associated with the statement.]
[Slide figure: the Automatic SQL Tuning task verifies plans in the plan history before integrating them into the plan baseline.]
[Slide figure: manual plan loading — LOAD_PLANS_FROM_CURSOR_CACHE loads plans from the cursor cache into the plan baseline, and LOAD_PLANS_FROM_SQLSET loads plans from a SQL tuning set; a staging table can be used to move plans between databases.]
DBA
Plan history
ba
an se
Automatic Pl lin
e
HJ
DBA GB
HJ
HJ
>?
SQL
Tuning
Advisor
[Slide figure: plans in the plan history that have not yet been verified are enabled but not accepted; plans in the plan baseline are enabled and accepted.]
ORIGIN attribute values:
AUTO-CAPTURE: Automatically captured
MANUAL-LOAD: Manually loaded
MANUAL-SQLTUNE: Evolved by SQL Tuning Advisor
[Slide figure: plan selection flow — if the statement is tracked in the plan history, the optimizer checks whether its cost-based plan is part of the plan baseline; if yes, that plan is used; if not, the best accepted plan from the baseline is used instead. Whether a baseline plan was used is visible through DBMS_XPLAN.DISPLAY with the 'BASIC +NOTE' format, or in PLAN_TABLE(other_xml).]
[Slide figures: moving baselines — well-tuned plans from an Oracle Database 10g development database are exported through a baseline staging table and loaded into the plan baseline of the upgraded database, so that no plan regressions occur; alternatively, regressions after an upgrade can be avoided by capturing plans with optimizer_features_enable set to the prior release. The DBA_SQL_MANAGEMENT_CONFIG view shows the SQL management base configuration.]
Summary
Database Replay
Objectives
[Slide figure: Database Replay workflow — clients and application servers drive the production database while the workload is recorded into a capture directory and a database backup is taken; the captured workload is preprocessed; it is then replayed on a test system containing the planned changes, against a test database restored from the backup.]
Supported Workloads
Supported:
All SQL (DML, DDL, PL/SQL) with practically all types of
binds
Full LOB functionality (cursor-based and direct OCI)
Local transactions
Logins and logoffs
Supported Workloads
The slide shows supported and nonsupported database operations.
The following types of client requests are not captured in a workload:
Direct path load of data from external files using utilities such as SQL*Loader
Advanced replication streams
Non-PL/SQL-based Advanced Queuing (AQ)
Flashback queries
Oracle Call Interface (OCI)-based object navigations
Non-SQL-based object access
Distributed transactions (any distributed transactions that are captured are replayed as local
transactions)
Note: SQL-based XML manipulations are also captured. The system captures only explicit SQL
statements (SQL statements issued by clients). Implicit calls generated by the database itself, such
as auditing actions and background process activity, are not captured.
Capture Considerations
Planning:
Adequate disk space for captured workload (binary files)
Database restart:
Only way to guarantee authentic replay
Startup restrict
Capture will un-restrict
May not be necessary depending on the workload
Capture Considerations
You perform the following tasks in the planning phase of the workload recording:
Check the database backup strategy, ensuring that the database can be restored to StartSCN
when the recording starts.
Plan the capture period by selecting it based on the application and the peak periods. You can
use existing manageability features such as Automatic Workload Repository (AWR) and Active
Session History (ASH) to select an appropriate period based on workload history. The starting
time for capture should be carefully planned because it is recommended that you shut down and
restart the database before starting the capture.
Specify the location of the workload capture data. You must set up a directory that is to be used
to store the workload capture data. You should provide ample disk space because the recording
stops if there is insufficient disk space. However, everything captured up to that point is usable
for replay.
Define capture filters for user sessions that are not to be captured. You can specify a recording
filter to skip sessions that should not be captured.
No new privileges or user roles are introduced with the Database Replay functionality. The
recording user and replay user must have either the SYSDBA or SYSOPER privilege. This is
because only a user having SYSOPER or SYSDBA can start up or shut down the database to start
the recording. Correct operating system (OS) privileges should also be assigned so that the user
is able to access the recording, replay directories, and manipulate the files under those
directories.
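The planning steps above can be sketched with the DBMS_WORKLOAD_CAPTURE package. The filter name, user name, directory object, and capture name below are illustrative examples, not values from the course environment:

BEGIN
  -- Skip sessions of a monitoring user (filter name and value are examples).
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(
    fname      => 'monitor_filter',
    fattribute => 'USER',
    fvalue     => 'SYSMON');
  -- Record into an existing directory object for two hours (7,200 seconds).
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'peak_capture',
    dir      => 'CAPTURE_DIR',
    duration => 7200);
END;
/
-- End the capture earlier if needed:
EXEC DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE();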
Replay Considerations
Replay Considerations
The preprocess phase is a once-only required action for the specified database version. After the
necessary metadata has been created, you can replay the workload as many times as required.
You must restore the replay database to match the capture database at the start of the workload
capture. A successful replay depends on the application transactions accessing application data
identical to that on the capture system. You can choose to restore the application data by using
point-in-time recovery, flashback, or import/export.
Replay Options
Synchronized replay:
Ensures minimal data divergence
Commit-based synchronization
Unsynchronized replay:
Useful for load/stress testing
Replay Options
The following replay options can be modified while replaying your workload:
The SYNCHRONIZATION parameter determines whether synchronization will be used during
workload replay.
- If this parameter is set to SCN, the COMMIT order in the captured workload will be
preserved during replay and all replay actions will be executed only after all dependent
COMMIT actions have completed. SCN is the default.
- If this parameter is set to OBJECT_ID, all replay actions will be executed only after all
relevant COMMIT actions have completed. Relevant COMMIT actions are COMMIT actions
that are issued before the given action in the workload capture and that modify at least one
of the database objects that the given action references, either implicitly or explicitly.
- If this parameter is set to OFF, the option is disabled and the replay will likely yield
significant data divergence. This may be acceptable if the workload consists primarily of
independent transactions.
The CONNECT_TIME_SCALE parameter scales the elapsed time from when the workload
capture started to when the session connects with the specified value; it is interpreted as a
percentage. Use this option to manipulate the session connect time during replay. The default
value is 100.
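As a sketch, both options are supplied through DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY, which is called on the test system after INITIALIZE_REPLAY; the values shown here are the defaults discussed above:

BEGIN
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(
    synchronization    => 'SCN',   -- or 'OBJECT_ID' or 'OFF'
    connect_time_scale => 100);    -- percentage of the captured connect time
END;
/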
Replay Analysis
Data divergence
Number of rows compared for each call (queries, DML)
Error divergence:
New errors
Replay Analysis
There may be some divergence of the replay compared to what was recorded. For example, when
replaying on a newer version of the RDBMS, a new algorithm may cause specific requests to be
faster, resulting in divergence appearing as a faster execution. This is considered a desirable
divergence. Another example of a divergence is when a SQL statement returns fewer rows during
replay than those returned during recording. This is clearly undesirable.
For data divergence, the result of an action can be considered as:
The result set of a SQL query
An update to persistent database state
A return code or error code
Performance divergence is useful in determining how new algorithms introduced in the replay
system may affect overall performance. There are numerous factors that can cause replay divergence.
Though some of them cannot be controlled, others can be mitigated. It is the task of the DBA to
understand the workload run-time operations and take the necessary actions to reduce the level of
record-and-replay divergence.
Online divergence should aid the decision to stop a replay that has diverged significantly. The results
of the replay before the divergence may still be useful, but further replay would not produce reliable
conclusions. Offline divergence reporting is used to determine how successful the replay was after
the replay has finished.
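For offline divergence reporting, a replay report can be generated with DBMS_WORKLOAD_REPLAY.REPORT; the replay ID below is illustrative and would normally be taken from DBA_WORKLOAD_REPLAYS:

DECLARE
  rep CLOB;
BEGIN
  rep := DBMS_WORKLOAD_REPLAY.REPORT(
           replay_id => 1,
           format    => DBMS_WORKLOAD_REPLAY.TYPE_TEXT);
  DBMS_OUTPUT.PUT_LINE(rep);  -- long reports may exceed the PUT_LINE limit
END;
/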
Quiz
Answer: 1
Packages and Procedures
DBMS_WORKLOAD_CAPTURE: START_CAPTURE, FINISH_CAPTURE, ADD_FILTER
DBMS_WORKLOAD_REPLAY: PROCESS_CAPTURE, INITIALIZE_REPLAY, PREPARE_REPLAY
Summary
Objectives
Summary of Automation in Oracle Database 11g
[Figure: ADDM and AWR identify the top SQL workload; the DBA runs SQL Tuning Advisor (step 3) and reviews the Automatic SQL Tuning reports (step 4).]
1. Pull the top queries from the past week into four buckets:
   - Top for the past week
   - Top for any day in the past week
   - Top in any single hour
   - Top by average single execution
2. Combine the four buckets into one (assigning weights).
3. Cap at 150 queries per bucket.
[Figure: Automatic SQL Tuning flow in the maintenance window — pick a candidate SQL statement, tune it, and test the new profile; if the new profile shows at least a 3X benefit over the existing plan, accept (replace) the profile, otherwise ignore it; statements with stale statistics are referred to GATHER_STATS_JOB.]
Autotask configuration:
On/off switch
Maintenance windows running tuning task
CPU resource consumption of tuning task
BEGIN
  dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK',
    'LOCAL_TIME_LIMIT', 1400);
  dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK',
    -- (the remainder of this second call is truncated in the source)
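The on/off switch mentioned above can be sketched with the DBMS_AUTO_TASK_ADMIN package; passing NULL for operation and window_name disables the client in all maintenance windows:

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/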
Quiz
Answers: 1, 2, and 3
Summary
Enhancements
Objectives
DBA_HIST_DB_CACHE_ADVICE
DBA_HIST_DISPATCHER
DBA_HIST_DYN_REMASTER_STATS
DBA_HIST_IOSTAT_DETAIL
Baseline Templates
Baseline Templates
Creating baselines for future time periods allows you to mark time periods that you know will
be interesting. For example, you may want the system to automatically generate a baseline for
every Monday morning for the whole year, or you can ask the system to generate a baseline for
an upcoming holiday weekend if you suspect that it is a high-volume weekend.
Previously, you could create baselines only on snapshots that already existed. With Oracle
Database 11g, a nightly MMON task goes through all the templates for baseline generation and
checks to see if any time ranges have changed from the future to the past within the last day.
For the relevant time periods, the MMON task then creates a baseline for the time period.
DBMS_WORKLOAD_REPOSITORY Package
DBMS_WORKLOAD_REPOSITORY Package
The slide shows the set of PL/SQL interfaces offered by Oracle Database 11g in the
DBMS_WORKLOAD_REPOSITORY package for administration and filtering.
MODIFY_BASELINE_WINDOW_SIZE enables you to modify the size of the
SYSTEM_MOVING_WINDOW.
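For example, the moving window could be widened to 30 days (as a sketch; the window size must not exceed the AWR retention period):

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE(window_size => 30);
END;
/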
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE (
    start_time    => to_date('21-JUN-2010','DD-MON-YYYY'),
    end_time      => to_date('21-SEP-2010','DD-MON-YYYY'),
    baseline_name => 'FALL10',
    template_name => 'FALL10',
    expiration    => NULL );
END;
/
Creating a Repeating
Baseline Template
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE (
    day_of_week => 'SATURDAY',
    hour_in_day => 6,
Baseline Views
Baseline Views
The data dictionary views supporting the AWR baselines have changed.
DBA_HIST_BASELINE: Modified View
DBA_HIST_BASELINE has been modified to support the SYSTEM_MOVING_WINDOW
baseline and the baselines generated from templates. Additional information includes the date
created, time of last statistics calculation, and type of baseline.
DBA_HIST_BASELINE_DETAILS: New View
DBA_HIST_BASELINE_DETAILS displays information that allows you to determine the
validity of a given baseline, such as whether there was a shutdown during the baseline period
and the percentage of the baseline period that is covered by the snapshot data.
DBA_HIST_BASELINE_TEMPLATE: New View
DBA_HIST_BASELINE_TEMPLATE holds the baseline templates. This view provides the
information needed by MMON to determine when a baseline will be created from a template and
when the baseline should be removed.
For details, see Oracle Database Reference 11g.
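As a quick sketch, the baselines known to the system can be listed from the modified view:

SELECT baseline_name, baseline_type, creation_time
FROM   dba_hist_baseline;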
Quiz
Answers: 1 and 2
Maintenance Windows
Maintenance Windows
Oracle Database 10g introduced the execution of automated maintenance tasks during a
maintenance window. The automated tasks are statistics collection, segment advisor, and
Automatic SQL Tuning.
With Oracle Database 11g, the Automated Maintenance Tasks feature relies on the Resource
Manager being enabled during the maintenance windows. Thus the resource plan associated
with the window is automatically enabled when the window opens. The goal is to prevent
maintenance work from consuming excessive amounts of system resources. Each maintenance
window is associated with a resource plan that specifies how the resources will be allocated
during the window duration.
In Oracle Database 11g, WEEKNIGHT_WINDOW and WEEKEND_WINDOW (defined in Oracle
Database 10g) are replaced with daily maintenance windows. Automated tasks are assigned to
specific windows. All daily windows belong to MAINTENANCE_WINDOW_GROUP by default.
You may define other maintenance windows as well as change start times and durations for the
daily maintenance windows. Likewise, any maintenance windows that are deemed
unnecessary can be disabled or removed. The operations can be performed by using Enterprise
Manager or Scheduler interfaces.
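A change to a daily window's start time and duration can be sketched with the Scheduler interface. The repeat interval and duration below are examples; the window is disabled before the change and re-enabled afterward:

BEGIN
  DBMS_SCHEDULER.DISABLE(name => 'SYS.MONDAY_WINDOW');
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'REPEAT_INTERVAL',
    value     => 'freq=daily;byday=MON;byhour=20;byminute=0;bysecond=0');
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'DURATION',
    value     => numtodsinterval(6, 'hour'));
  DBMS_SCHEDULER.ENABLE(name => 'SYS.MONDAY_WINDOW');
END;
/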
NAME
--------------------------------
DEFAULT_MAINTENANCE_PLAN
[Figure: The Autotask Background Process (ABP) schedules maintenance jobs (Job1 ... Jobn) within the maintenance window at urgent, high, and medium priorities.]
DBA_AUTOTASK_TASK
Quiz
Answer: 1
Usage notes:
Only users with the SYSDBA privilege can run this procedure.
V$IOSTAT_FUNCTION
AWR and EM
V$IOSTAT_CONSUMER_GROUP
exec dbms_resource_manager.create_plan_directive( -
  plan                  => 'db_consolidation_plan', -
  group_or_subplan      => 'App_1', -
  mgmt_p1               => 50, -
  max_utilization_limit => 60);
Instance Caging
Instance Caging
Instance caging is a method that uses the CPU_COUNT initialization parameter to limit the
number of CPUs that an instance can use. In addition, Oracle Database Resource Manager is
employed to allocate the CPUs for the database sessions based on the instance resource plan.
Instance caging approaches are as follows:
Over-provisioning: With this approach, the sum of the CPU limit for each instance
exceeds the actual number of CPUs on the server. This approach is appropriate for
non-critical databases and low-load, non-critical production systems. Although the instances
impact each other's performance, at any given time one or more of the instances may be
idle or experiencing a low load.
Partitioning: With this approach, CPU limits are set such that the sum for all instances
equals the number of CPUs on the server. This approach is appropriate for critical production
systems. It prevents the instances from interfering with each other and provides
predictable performance.
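A minimal sketch of enabling instance caging; the plan name and CPU limit shown are examples, and both parameters can be set dynamically:

-- Enable the Resource Manager with a plan, then cage the instance to 4 CPUs.
ALTER SYSTEM SET resource_manager_plan = 'default_plan';
ALTER SYSTEM SET cpu_count = 4;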
[Figure: Instance caging examples on a server with 16 CPUs — over-provisioning (the CPU_COUNT values of the instances sum to more than 16) versus partitioning (the CPU_COUNT values sum to exactly 16).]
Quiz
Answers: 1 and 3
Default Plan
Default Plan
The slide shows the properties of DEFAULT_PLAN. Note that there are no limits for its
thresholds. As you can see, Oracle Database 11g introduces two new I/O limits that you can
define as thresholds in a Resource Plan.
Summary