
Slide 1

Oracle Database Administration L1


Slide 2

Document Control
Change Record
Date       Author               Version  Change Reference
10/09/09   Venugopal Tadepalli  1.0
10/09/09   Ponnilavan           1.1      DB Performance Monitoring

Reviewers
Name               Position
Indraneel Bose
Parag K Bhayana

1
Slide 3

General Information
This document contains the course notes for Oracle Database Administration L1
Scope
- The scope of the training will cover areas of Oracle Database Administration as needed by a
DBA of L1 level.

Training Prerequisites
- None

Conventions
(N) Navigation
(B) Button
(M) Menu
(T) Tab

Notes
- Please refer to the notes attached to the slides for detailed explanations and navigation paths

2
Slide 4

Course Objectives
Understanding Database Security
Managing database objects including indexes, tables, clusters, and partitions
Managing users, resources, privileges, and roles
Understanding Oracle network configuration
Understanding various Oracle utilities such as Exp/Imp, Data Pump, LogMiner, SQL*Loader, and DBVERIFY
Using the Database Resource Manager
Detecting and repairing data block corruption
Oracle DB performance monitoring and analysis

3
Slide 5

SlNo  Chapter Name                                  From page  To page
1     Understanding of basic administration tasks   6          11
2     Understanding Oracle DB Architecture          12         44
3     Creating a Database                           45         62
4     Managing the Control File                     63         73
5     Maintaining Redo Log Files                    74         89
6     Managing Tablespaces and Datafiles            90         114
7     Storage and Relationship Structure            115        119
8     Undo Management                               120        152
9     Database Audit Concepts                       153        164
10    Managing Password Security                    165        180
11    Managing Users                                181        192
12    Managing Privileges                           193        209
13    Managing Roles                                210        228

4
Slide 6

SlNo  Chapter Name                                              From page  To page
14    Managing database objects                                 229        251
15    Networking Overview                                       252        266
16    Naming Method Configuration                               267        291
17    Understanding Oracle DB Utilities                         292        332
18    Using the Database Resource Manager                       333        357
19    Detecting and Recovering From Datafile Block Corruption   358        373
20    Oracle Backup and Recovery Concepts                       374        380
21    Overview of RMAN Backups                                  381        412
22    Oracle DB Performance Monitoring and analysis             413        456

5
Slide 7

Understanding of basic administration tasks

6
Slide 8

Basic Database Administrative tasks


After completing this lesson, you should be able to do the
following:

Understand the Basic Database Administrative tasks

Instructor Notes

Explain all the basic database administrative tasks given in the subsequent slides.
Slide 9

Understanding of basic administration tasks


Install Oracle software
Create Oracle databases
Perform database upgrades
Upgrade Oracle software to new releases
Be familiar with the different methods of starting up and shutting down the Oracle database
Manage the database's storage structures
Manage users and security
Manage schema objects, such as tables, indexes, and views
Proactively monitor the database's health and take preventive or corrective action as required

8
Slide 10

Understanding of basic administration tasks


Monitor the database
Tune the database for optimal performance
Add users
Back up the database
Be familiar with the various recovery methods
Export all or part of a database
Import all or part of a database
Create and manage partitions
Schedule tasks using the Scheduler
Manage Oracle redo logs
Interact with Oracle Corporation for technical support

9
Slide 11

Understanding of basic administration tasks


Capacity planning
Ability to work as part of a team
Ability to work in a 24x7 environment
Provide consultation to development teams

10
Slide 12

Basic Administration Tasks


Summary

In this lesson you have covered the following concepts:

The difference between a DBA and a Developer
What is meant by Database Administration
The various tasks performed by a DBA

11
Slide 13

Oracle Database Architecture

Understanding Oracle DB
Architecture

12
Slide 14

Objectives

After completing this lesson, you should be able to do


the following:

Outline the Oracle architecture and its main components

10g Feature : Automatic Shared Memory Management

13

Objectives
This lesson introduces the Oracle server architecture by examining the physical, memory,
process, and logical structures involved in establishing a database connection, creating a
session, and executing SQL commands.
Slide 15

Overview of Primary Components

[Diagram: A user process connects through a server process to an Oracle Instance. The instance consists of the SGA (Shared Pool with Library Cache and Data Dictionary Cache, Database Buffer Cache, Redo Log Buffer, Streams Pool, Java Pool, and Large Pool), the background processes PMON, SMON, DBWR, LGWR, CKPT, and others, and the PGA of the server process. The database consists of datafiles, control files, and redo log files; the parameter file, password file, and archived log files sit alongside the database.]

14

Overview of Primary Components


The Oracle architecture includes a number of primary components, which are discussed further
in this lesson.

The Oracle Server


The Oracle server is an object-relational database management system that provides an
integrated approach to information management. An Oracle server consists of an Oracle
database and an Oracle server instance.

An Oracle Instance
Every time a database is started, a system global area (SGA) is allocated and Oracle
background processes are started. The system global area is an area of memory used for
database information shared by the database users. The combination of the background
processes and memory buffers is called an Oracle instance.
Slide 16

Oracle Instance

An Oracle Instance:
Is a means to access an Oracle database
Always opens one and only one database
Consists of memory and background process structures

[Diagram: An Oracle Instance is made up of memory structures (the SGA: Shared Pool with Library Cache and Data Dictionary Cache, Database Buffer Cache, Redo Log Buffer, Java Pool, and Large Pool) and background process structures (PMON, SMON, DBWR, LGWR, CKPT, and others).]

15

Oracle Instance
An Oracle Instance consists of the System Global Area (SGA) memory structure and the
background processes used to manage a database. An instance is identified by using methods
specific to each operating system. The instance can open and use only one database at a time.
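
As a quick check, the instance name and its current state can be queried from a dynamic performance view. A minimal sketch, assuming a session with SELECT privilege on V$INSTANCE:

```sql
-- Show which instance this session is connected to and its state
SELECT instance_name, status, database_status
FROM   v$instance;
```

STATUS typically reports STARTED, MOUNTED, or OPEN, mirroring the startup stages covered later in the course.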
Slide 17

Establishing a Connection and Creating a Session


Connecting to an Oracle Instance:
Establishing a user connection
Creating a session

[Diagram: A user process on the client establishes a connection to a server process on the Oracle server; a session is then created for the database user.]

16

Establishing a Connection and Creating a Session


Before users can submit SQL statements to an Oracle database, they must connect to an
instance.
The user starts a tool such as SQL*Plus or runs an application developed using a tool such
as Oracle Forms. This application or tool is executed as a user process.
In the most basic configuration, when a user logs on to the Oracle server, a process is
created on the computer running the Oracle server. This process is called a server
process. The server process communicates with the Oracle Instance on behalf of the user
process that runs on the client. The server process executes SQL statements on behalf of
the user.

Connection:
A connection is a communication pathway between a user process and an Oracle server. A
database user can connect to an Oracle server in one of three ways:
The user logs on to the operating system running the Oracle Instance and starts an
application or tool that accesses the database on that system. The communication
pathway is established using the interprocess communication mechanisms available on
the host operating system.
Slide 18

Oracle Database
An Oracle database:
Is a collection of data that is treated as a unit
Consists of three file types

[Diagram: An Oracle database consists of datafiles, control files, and redo log files; the parameter file, password file, and archived log files sit alongside the database.]

17

Oracle Database
The general purpose of a database is to store and retrieve related information. An Oracle
database has a logical and a physical structure. The physical structure of the database is the set
of operating system files in the database. An Oracle database consists of three file types.
Datafiles containing the actual data in the database
Redo log files containing a record of changes made to the database to enable recovery of the
data in case of failures
Control files containing information necessary to maintain and verify database integrity

Other key file structures


The Oracle server also uses other files that are not part of the database:
The parameter file defines the characteristics of an Oracle Instance. For example, it contains
parameters that size some of the memory structures in the SGA.
The password file authenticates users privileged to start up and shut down an Oracle
Instance.
Archived redo log files are offline copies of the redo log files that may be necessary to
recover from media failures.
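
The three database file types can be listed from the dynamic performance views. A sketch, assuming a privileged session; the paths returned depend on your installation:

```sql
-- Datafiles, control files, and online redo log members
SELECT name   FROM v$datafile;
SELECT name   FROM v$controlfile;
SELECT member FROM v$logfile;
```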
Slide 19

Physical Structure
The physical structure includes three types of files:
Control files
Datafiles
Redo log files

[Diagram: The physical structure: control files, datafiles (including the data dictionary), and online redo log files; each file begins with a header.]

18

Physical Database Structures


The following sections explain the physical database structures of an Oracle database,
including datafiles, redo log files, and control files.
Datafiles
Every Oracle database has one or more physical datafiles. The datafiles contain all the database
data. The data of logical database structures, such as tables and indexes, is physically stored in
the datafiles allocated for a database.

Redo Log Files


Every Oracle database has a set of two or more redo log files. The set of redo log files for a
database is collectively known as the database's redo log. A redo log is made up of redo entries
(also called redo records).
The primary function of the redo log is to record all changes made to data. If a failure prevents
modified data from being permanently written to the datafiles, the changes can be obtained
from the redo log so work is never lost.
Slide 20

Memory Structure

Oracle's memory structure consists of two memory areas known as:
System Global Area (SGA): Allocated at instance startup,
and is a fundamental component of an Oracle Instance
Program Global Area (PGA): Allocated when the server
process is started

19
Slide 21

System Global Area

The SGA consists of several memory structures:


Shared Pool
Database Buffer Cache
Redo Log Buffer
Other structures (for example, lock and latch
management, statistical data)
There are two additional memory structures that
can be configured within the SGA:
Large Pool
Java Pool

20

System Global Area (SGA)


The SGA is also called the shared global area. It is used to store database information that is
shared by database processes. It contains data and control information for the Oracle server and
is allocated in the virtual memory of the computer where Oracle resides.
The following statement can be used to view SGA memory allocations:
SQL> SHOW SGA
Total System Global Area 36437964 bytes
Fixed Size 6543794 bytes
Variable Size 19521536 bytes
Database Buffers 16777216 bytes
Redo Buffers 73728 bytes
Slide 22

System Global Area


SGA is dynamic
Sized by the SGA_MAX_SIZE parameter and SGA_TARGET
parameter
No separate resizing for each memory area is required in 10g.
Important dynamic performance view V$SGA_TARGET_ADVICE
and V$PARAMETER

21

System Global Area (continued)


Unit of Allocation:
A granule is a unit of contiguous virtual memory allocation. The size of a granule depends on
the estimated total SGA size whose calculation is based on the value of the parameter
SGA_MAX_SIZE.
4 MB if estimated SGA size is < 128 MB
16 MB otherwise

The components (Database Buffer Cache and Shared Pool) are allowed to grow and shrink
based on granule boundaries. For each component which owns granules, the number of
granules allocated to the component, any pending operations against the component (for
example, allocation of granules through ALTER SYSTEM, freeing of granules through ALTER
SYSTEM, corresponding self-tuning), and target size in granules will be tracked and displayed
by the V$BUFFER_POOL view. At instance startup, the Oracle server allocates granule
entries, one for each granule to support SGA_MAX_SIZE bytes of address space. As startup
continues, each component acquires as many granules as it requires. The minimum SGA
configuration is three granules (one granule for fixed SGA [includes redo buffers]; one granule
for Database Buffer Cache; one granule for Shared Pool).
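
In 10g, the granule size actually in use, along with the current sizes of the SGA components, can be observed through V$SGAINFO. A sketch, assuming a 10g instance and a privileged session:

```sql
-- Granule size and current SGA component sizes
-- (look for the row named 'Granule Size')
SELECT name, bytes, resizeable
FROM   v$sgainfo;
```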
Slide 23

Shared Pool
Used to store:
Most recently executed SQL statements
Most recently used data definitions
It consists of two key performance-related memory
structures:
Library Cache
Data Dictionary Cache
Sized by the parameter
SHARED_POOL_SIZE

[Diagram: The Shared Pool contains the Library Cache and the Data Dictionary Cache. Its size can be changed dynamically, for example:]

ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;

22

Shared Pool
The Shared Pool environment contains both fixed and variable structures. The fixed structures
remain relatively the same size, whereas the variable structures grow and shrink based on user
and program requirements. The actual sizing for the fixed and variable structures is based on an
initialization parameter and the work of an Oracle internal algorithm.

Sizing the Shared Pool:


Since the Shared Pool is used for objects that can be shared globally, such as reusable SQL
execution plans; PL/SQL packages, procedures, and functions; and cursor information, it must
be sized to accommodate the needs of both the fixed and variable areas. Memory allocation for
the Shared Pool is determined by the SHARED_POOL_SIZE initialization parameter. It can be
dynamically resized using ALTER SYSTEM SET. After performance analysis, this can be
adjusted but the total SGA size cannot exceed SGA_MAX_SIZE.
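
The dynamic resize described above can be paired with a check of free memory in the pool. A sketch; the 64M value is only illustrative:

```sql
-- Grow the Shared Pool, then check its free memory
ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;

SELECT pool, name, bytes
FROM   v$sgastat
WHERE  pool = 'shared pool'
AND    name = 'free memory';
```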
Slide 24

Library Cache
Stores information about the most recently used SQL and PL/SQL
statements
Enables the sharing of commonly used statements
Is managed by a least recently used (LRU) algorithm
Consists of two structures:
Shared SQL area
Shared PL/SQL area
Size determined by the Shared Pool sizing

23

Library Cache
The Library Cache size is based on the sizing defined for the Shared Pool. Memory is allocated
when a statement is parsed or a program unit is called. If the size of the Shared Pool is too
small, statements are continually reloaded into the Library Cache, which affects performance.
The Library Cache is managed by an LRU algorithm. As the cache fills, less recently used
execution paths and parse trees are removed from the Library Cache to make room for the new
entries. If the SQL or PL/SQL statements are not reused, they eventually are aged out.

The Library Cache consists of two structures:


Shared SQL: The Shared SQL stores and shares the execution plan and parse tree for SQL
statements run against the database. The second time that an identical SQL statement is
run, it is able to take advantage of the parse information available in the shared SQL to
expedite its execution. To ensure that SQL statements use a shared SQL area whenever
possible, the text, schema, and bind variables must be exactly the same.

Shared PL/SQL: The Shared PL/SQL area stores and shares the most recently executed
PL/SQL statements. Parsed and compiled program units and procedures (functions,
packages, and triggers) are stored in this area.
Slide 25

Data Dictionary Cache

A collection of the most recently used definitions in the database


Includes information about database files, tables, indexes, columns,
users, privileges, and other database objects
During the parse phase, the server process looks at the data
dictionary for information to resolve object names and validate
access
Caching data dictionary information into memory improves response
time on queries and DML
Size determined by the Shared Pool sizing

24

Data Dictionary Cache


The Data Dictionary Cache is also referred to as the dictionary cache or row cache. Caching
data dictionary information in both the Database Buffer Cache and the Shared Pool
improves performance. Information about the database (user account
data, data file names, segment names, extent locations, table descriptions, and user privileges)
is stored in the data dictionary tables. When this information is needed by the server, the data
dictionary tables are read, and the data that is returned is stored in the Data Dictionary Cache.

Sizing the data dictionary:


The overall size is dependent on the size of the Shared Pool and is managed internally by
the database. If the Data Dictionary Cache is too small, then the database has to query the data
dictionary tables repeatedly for information needed by the server. These queries are called
recursive calls and are slower than the direct queries on the Data Dictionary Cache as direct
queries do not use SQL.
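
The effectiveness of the Data Dictionary Cache can be gauged from V$ROWCACHE; a high ratio of misses to gets suggests the Shared Pool is undersized. A sketch:

```sql
-- Overall dictionary cache miss ratio (lower is better)
SELECT SUM(getmisses) / SUM(gets) AS miss_ratio
FROM   v$rowcache;
```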
Slide 26

Database Buffer Cache


Stores copies of data blocks that have been retrieved from the
datafiles
Enables great performance gains when you obtain and update data
Managed through an LRU algorithm
DB_CACHE_SIZE sizes the default buffer cache

[Diagram: Database Buffer Cache]

25

Database Buffer Cache


When a query is processed, the Oracle server process looks in the Database Buffer Cache for
any blocks it needs. If the block is not found in the Database Buffer Cache, the server process
reads the block from the datafile and places a copy in the Database Buffer Cache. Because
subsequent requests for the same block may find the block in memory, the requests may not
require physical reads. The Oracle server uses an LRU algorithm to age out buffers that have
not been accessed recently to make room for new blocks in the Database Buffer Cache.
Slide 27

Database Buffer Cache


Consists of independent sub-caches:
DB_CACHE_SIZE
DB_KEEP_CACHE_SIZE
DB_RECYCLE_CACHE_SIZE
Can be dynamically resized

DB_CACHE_ADVICE set to gather statistics for


predicting different cache size behavior
Statistics displayed by V$DB_CACHE_ADVICE
ALTER SYSTEM SET DB_CACHE_SIZE = 96M;

26

Database Buffer Cache


Sizing the Database Buffer Cache:
The size of each buffer in the Database Buffer Cache is equal to the size of an Oracle block,
and it is specified by the DB_BLOCK_SIZE parameter. The Database Buffer Cache consists of
independent subcaches for buffer pools and for multiple block sizes. The parameter
DB_BLOCK_SIZE determines the primary block size, which is used for the SYSTEM
tablespace.
Three parameters define the sizes of the Database Buffer Caches:
DB_CACHE_SIZE: Sizes the default buffer cache only; it always exists and cannot be set to
zero
DB_KEEP_CACHE_SIZE: Sizes the keep buffer cache, which is used to retain blocks in
memory that are likely to be reused
DB_RECYCLE_CACHE_SIZE: Sizes the recycle buffer cache, which is used to eliminate
blocks from memory that have little chance of being reused
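
The cache advisory workflow from the slide can be sketched as follows; the parameter values are illustrative:

```sql
-- Enable advisory statistics, then review predicted I/O
-- at different candidate cache sizes
ALTER SYSTEM SET DB_CACHE_ADVICE = ON;

SELECT size_for_estimate, estd_physical_read_factor
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    advice_status = 'ON';
```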
Slide 28

Redo Log Buffer

Records all changes made to the database data blocks


Primary purpose is recovery
Changes recorded within are called redo entries
Redo entries contain information to reconstruct or redo changes
Size defined by LOG_BUFFER

[Diagram: Redo Log Buffer]

27

Redo Log Buffer


The Redo Log Buffer is a circular buffer that contains changes made to datafile blocks. This
information is stored in redo entries. Redo entries contain the information necessary to
reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE,
ALTER, or DROP operations.

Sizing the Redo Log Buffer:


The size of the Redo Log Buffer is defined by the initialization parameter LOG_BUFFER.
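
Whether LOG_BUFFER is adequately sized can be inferred from the 'redo buffer allocation retries' statistic, which counts how often a session had to wait for space in the buffer. A sketch:

```sql
-- Waits for space in the Redo Log Buffer; should stay near zero
SELECT name, value
FROM   v$sysstat
WHERE  name = 'redo buffer allocation retries';
```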
Slide 29

Large Pool

An optional area of memory in the SGA


Relieves the burden placed on the Shared Pool
Used for:
Session memory (UGA) for the Shared Server
I/O server processes
Backup and restore operations (RMAN)
Parallel execution message buffers
PARALLEL_AUTOMATIC_TUNING set to TRUE
Does not use an LRU list
Sized by LARGE_POOL_SIZE

28

Large Pool
By allocating session memory from the Large Pool for Shared Server, Oracle XA, or parallel
query buffers, Oracle can use the Shared Pool primarily for caching shared SQL statements,
relieving the burden on areas within the Shared Pool. The Shared Pool does not have to
give up memory for caching SQL parse trees in favor of Shared Server session information,
I/O, and backup and recovery processes. The performance gain comes from the reduced
overhead of growing and shrinking the shared SQL cache.

Backup and restore:


Recovery Manager (RMAN) uses the Large Pool when I/O slaves are configured (for
example, BACKUP_TAPE_IO_SLAVES = TRUE). If the Large Pool is configured but
is not large enough, the allocation of memory from the Large Pool fails. RMAN writes an error
message to the alert log file and does not use I/O slaves for backup or restore.

Parallel execution:
The Large Pool is used if PARALLEL_AUTOMATIC_TUNING is set to TRUE, otherwise,
these buffers are allocated to the Shared Pool.
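
Configuring and then verifying the Large Pool can be sketched as below; the 32M figure is illustrative:

```sql
-- Size the optional Large Pool and confirm its allocations
ALTER SYSTEM SET LARGE_POOL_SIZE = 32M;

SELECT pool, name, bytes
FROM   v$sgastat
WHERE  pool = 'large pool';
```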
Slide 30

Java Pool

Services parsing requirements for Java commands


Required if installing and using Java
Sized by JAVA_POOL_SIZE parameter

29

Java Pool
The Java Pool is an optional setting but is required if installing and using Java. Its size is set, in
bytes, using the JAVA_POOL_SIZE parameter. In Oracle9i, the default size of the Java Pool is
24 MB.
Slide 31

10g New Feature: Streams Pool


Oracle Streams Pool is used for
Data replication
Advanced message queuing
Event management and Notification
Data warehouse loading
Data protection

30
Slide 32

Program Global Area


Memory reserved for each user process connecting to an Oracle database
Allocated when a process is created
Deallocated when the process is terminated
Used by only one process
The PGA_AGGREGATE_TARGET parameter governs the memory allocation

[Diagram: The PGA is attached to the server process serving the user process.]

31

Program Global Area (PGA)


The Program Global Area or Process Global Area (PGA) is a memory region that contains data
and control information for a single server process or a single background process. The PGA is
allocated when a process is created and deallocated when the process is terminated. In contrast
to the SGA, which is shared by several processes, the PGA is an area that is used by only one
process.

Contents of PGA:
The contents of the PGA memory vary, depending on whether the instance is running in a
Dedicated Server or Shared Server configuration. Generally, the PGA memory includes these
components:
Private SQL area: Contains data such as bind information and runtime memory structures.
Each session that issues a SQL statement has a private SQL area. Each user that submits
the same SQL statement has his or her own private SQL area that uses a single shared
SQL area. Thus, many private SQL areas can be associated with the same shared SQL
area. The private SQL area of a cursor is divided into two areas:
Persistent area: Contains bind information, and is freed only when the cursor is closed
Run-time area: Created as the first step of an execute request. For INSERT, UPDATE,
and DELETE commands, this area is freed after the statement has been executed.
For queries, this area is freed only after all rows are fetched or the query is
canceled.
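
Aggregate PGA usage against the PGA_AGGREGATE_TARGET setting can be tracked through V$PGASTAT. A sketch, assuming a privileged session:

```sql
-- Key PGA statistics: target, currently allocated, and hit ratio
SELECT name, value, unit
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'cache hit percentage');
```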
Slide 33

Process Structure
Oracle takes advantage of various types of processes:
User process: Started at the time a database user requests
connection to the Oracle server
Server process: Connects to the Oracle Instance and is started
when a user establishes a session
Background processes: Started when an Oracle Instance is
started

32
Slide 34

User Process
A program that requests interaction with the Oracle server
Must first establish a connection
Does not interact directly with the Oracle server

[Diagram: The user process establishes a connection to a server process on behalf of the database user.]

33

Program Global Area (PGA)


The Program Global Area (PGA) is a memory buffer that contains data and control
information for a server process. A PGA is created by Oracle when a server process is
started. The information in a PGA depends on the Oracle configuration.
Process Architecture
A process is a "thread of control" or a mechanism in an operating system that can execute
a series of steps.
An Oracle server has two general types of processes: user processes and Oracle processes.
User (Client) Processes
A user process is created and maintained to execute the software code of an application
program. The user process also manages the communication with the server processes.
Oracle Process Architecture
Oracle processes are called (invoked) by other processes to perform functions on behalf of
the invoking process.
Server Processes
Oracle creates server processes to handle requests from connected user processes. A server
process is in charge of communicating with the user process and interacting with Oracle to
carry out requests of the associated user process.
Slide 35

Server Process
A program that directly interacts with the Oracle server
Fulfills calls generated and returns results
Can be Dedicated or Shared Server

[Diagram: A connection is established and a session created between the user process and a server process on the Oracle server.]

34

Server Process
Once a user has established a connection, a server process is started to handle the user
process's requests. A server process can be either a Dedicated Server process or a Shared
Server process. In a Dedicated Server environment, the server process handles the request of a
single user process. Once a user process disconnects, the server process is terminated. In a
Shared Server environment, the server process handles the request of several user processes.
The server process communicates with the Oracle server using the Oracle Program Interface
(OPI).
Slide 36

Background Processes

Maintain and enforce relationships between physical and memory structures
Mandatory background processes:
DBWn PMON CKPT
LGWR SMON MMON

Optional background processes:


ARCn LMDn RECO CJQ0
LMON Snnn Dnnn Pnnn
LCKn QMNn RVWR MMAN

35

Background Processes
The Oracle architecture has a core set of mandatory background processes (DBWn, LGWR,
SMON, PMON, CKPT, and, from Oracle 10g, MMON) that are discussed further in this lesson.
In addition, Oracle has many optional background processes that are started when their option
is being used. These optional processes are not within the scope of this course, with the
exception of the background process ARCn. Following is a list of some optional background
processes:
RECO: Recoverer
QMNn: Advanced Queuing
ARCn: Archiver
LCKn: RAC Lock Manager (Instance Locks)
LMON: RAC DLM Monitor (Global Locks)
LMDn: RAC DLM Monitor (Remote Locks)
CJQ0: Coordinator Job Queue background process
Dnnn: Dispatcher
Snnn: Shared Server
Pnnn: Parallel Query Slaves
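
Which of these background processes are actually running on a given instance can be listed from V$BGPROCESS; rows with a non-zero process address are active. A sketch:

```sql
-- Background processes currently running in this instance
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> HEXTORAW('00')
ORDER  BY name;
```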
Slide 37

Database Writer (DBWn)

DBWn writes when:
A checkpoint occurs
Dirty buffers reach a threshold
There are no free buffers
A timeout occurs
A RAC ping request is made
A tablespace is taken OFFLINE
A tablespace is made READ ONLY
A table is dropped or truncated
A tablespace is placed in BEGIN BACKUP mode

[Diagram: DBWn writes dirty buffers from the Database Buffer Cache in the SGA to the datafiles.]

36

Database Writer (DBWn)


The server process records changes to undo and data blocks in the Database Buffer Cache.
DBWn writes the dirty buffers from the Database Buffer Cache to the datafiles. It ensures that
a sufficient number of free buffers (buffers that can be overwritten when server processes need
to read in blocks from the datafiles) are available in the Database Buffer Cache. Database
performance is improved because server processes make changes only in the Database Buffer
Cache.
DBWn defers writing to the datafiles until one of the following events occurs:
Incremental or normal checkpoint
The number of dirty buffers reaches a threshold value
A process scans a specified number of blocks when scanning for free buffers and cannot
find any
Timeout occurs
A ping request in Real Application Clusters (RAC) environment
Placing a normal or temporary tablespace offline
Placing a tablespace in read-only mode
Dropping or truncating a table
ALTER TABLESPACE tablespace_name BEGIN BACKUP
Slide 38

Log Writer (LGWR)

LGWR writes:
At commit
When the Redo Log Buffer is one-third full
When there is 1 MB of redo
Every three seconds
Before DBWn writes

[Diagram: LGWR writes from the Redo Log Buffer in the SGA to the redo log files.]

37

Log Writer (LGWR)


LGWR performs sequential writes from the Redo Log Buffer to the redo log file under the
following situations:
When a transaction commits
When the Redo Log Buffer is one-third full
When there is more than 1 MB of changes recorded in the Redo Log Buffer
Before DBWn writes modified blocks in the Database Buffer Cache to the datafiles
Every three seconds
Because the redo is needed for recovery, LGWR confirms the commit operation only after the
redo is written to disk.
LGWR can also call on DBWn to write to the datafiles.
Slide 39

System Monitor (SMON)


Responsibilities:
Instance recovery:
Rolls forward changes in the redo logs
Opens the database for user access
Rolls back uncommitted transactions
Coalesces free space
Deallocates temporary segments

[Diagram: SMON operates on the SGA, datafiles, control files, and redo log files.]

38

System Monitor (SMON)


If the Oracle Instance fails, any information in the SGA that has not been written to disk is lost.
For example, the failure of the operating system causes an instance failure. After the loss of the
instance, the background process SMON automatically performs instance recovery when the
database is reopened. Instance recovery consists of the following steps:

1. Rolling forward to recover data that has not been recorded in the datafiles but that has
been recorded in the online redo log. This data has not been written to disk because of the
loss of the SGA during instance failure. During this process, SMON reads the redo log
files and applies the changes recorded in the redo log to the data blocks. Because all
committed transactions have been written to the redo logs, this process completely
recovers these transactions.
2. Opening the database so that users can log on. Any data that is not locked by unrecovered
transactions is immediately available.
3. Rolling back uncommitted transactions. They are rolled back by SMON or by the
individual server processes as they access locked data.
SMON also performs some space maintenance functions:
It combines, or coalesces, adjacent areas of free space in the datafiles.
It deallocates temporary segments to return them as free space in datafiles.
Slide 40

Process Monitor (PMON)

Cleans up after failed processes by:
Rolling back the transaction
Releasing locks
Releasing other resources
Restarting dead dispatchers

[Diagram: PMON cleans up the PGA area of the failed process.]

39

Process Monitor (PMON)


The background process PMON cleans up after failed processes by:
Rolling back the user's current transaction
Releasing all currently held table or row locks
Freeing other resources currently reserved by the user
Restarting dead dispatchers
Slide 41

Manageability Monitor (MMON)

MMON collects Automatic Workload Repository (AWR) snapshot information.
MMON also issues alerts when database metrics violate their threshold values.

[Diagram: MMON runs within the instance alongside the SGA.]

40

Slide 42

Memory Monitor (MMAN)

MMAN coordinates the sizing of the memory components.
It observes the system and workload in order to determine the ideal distribution of memory.

[Diagram: MMAN runs within the instance alongside the SGA.]

41

Slide 43

Checkpoint (CKPT)

Responsible for:
Signaling DBWn at checkpoints
Updating datafile headers with checkpoint information
Updating control files with checkpoint information

[Diagram: CKPT works alongside DBWn and LGWR, recording checkpoint information in the datafile headers and control files.]

42

Checkpoint (CKPT)
Every three seconds the CKPT process stores data in the control file to identify the place in the
redo log file where recovery is to begin; this is called a checkpoint. The purpose of a
checkpoint is to ensure that all of the buffers in the Database Buffer Cache that were modified
prior to a point in time have been written to the datafiles. This point in time (called the
checkpoint position) is where database recovery is to begin in the event of an instance failure.
DBWn will already have written all of the buffers in the Database Buffer Cache that were
modified prior to that point in time. Prior to Oracle9i, this was done at the end of the redo log.
In the event of a log switch CKPT also writes this checkpoint information to the headers of the
datafiles.
Checkpoints are initiated for the following reasons:
To ensure that modified data blocks in memory are written to disk regularly so that data is
not lost in case of a system or database failure
To reduce the time required for instance recovery. Only the redo log entries following the
last checkpoint need to be processed for recovery to occur
To ensure that all committed data has been written to the datafiles during shutdown
Checkpoint information written by CKPT includes checkpoint position, system change
number, location in the redo log to begin recovery, information about logs, and so on.
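The checkpoint information described above can be observed directly. As an illustrative sketch (these queries are not part of the original slides, but the views and columns are standard Oracle dynamic performance views), compare the checkpoint SCN held in the control file with the SCNs that CKPT writes into the datafile headers:

```sql
-- Checkpoint SCN recorded in the control file for the whole database
SELECT checkpoint_change# FROM v$database;

-- Checkpoint SCN written into each datafile header;
-- in a cleanly checkpointed database these match v$database
SELECT file#, checkpoint_change#, checkpoint_time
FROM v$datafile_header;
```

If the two sets of SCNs differ at startup, the database knows instance or media recovery is required.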
Slide 44

Archiver (ARCn)

Optional background process


Automatically archives online redo logs when ARCHIVELOG mode is set
Preserves the record of all changes made to the database


43

Archiver (ARCn)
All other background processes are optional, depending on the configuration of the database;
however, one of them, ARCn, is crucial to recovering a database after the loss of a disk. As
online redo log files get filled, the Oracle server begins writing to the next online redo log file.
The process of switching from one redo log to another is called a log switch. The ARCn
process initiates backing up, or archiving, of the filled log group at every log switch. It
automatically archives the online redo log before the log can be reused, so that all of the
changes made to the database are preserved. This enables the DBA to recover the database to
the point of failure even if a disk drive is damaged.
Archiving redo log files:
One of the important decisions that a DBA has to make is whether to configure the database to
operate in ARCHIVELOG or in NOARCHIVELOG mode.
NOARCHIVELOG mode: In NOARCHIVELOG mode, the online redo log files are
overwritten each time a log switch occurs. LGWR does not overwrite a redo log group until the
checkpoint for that group is complete. This ensures that committed data can be recovered if
there is an instance crash. During the instance crash, only the SGA is lost. There is no loss of
disks, only memory. For example, an operating system crash causes an instance crash.
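A quick way to check which mode a database is currently operating in is to query the standard dynamic performance views (an illustrative sketch, not taken from the course material):

```sql
-- Current archiving mode: ARCHIVELOG or NOARCHIVELOG
SELECT log_mode FROM v$database;

-- Whether the ARCn archiver process is started for this instance
SELECT archiver FROM v$instance;
```

In SQL*Plus, the ARCHIVE LOG LIST command shows the same information along with the archive destination and sequence numbers.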
Slide 45

Summary

In this lesson, you should have learned how to:


Explain database files: datafiles, control files, online redo logs
Explain SGA memory structures: Database Buffer Cache, Shared Pool,
and Redo Log Buffer
Explain primary background processes:
DBWn, LGWR, CKPT, PMON, SMON
Explain the use of the background process ARCn
Identify optional and conditional background processes
Explain logical hierarchy

The Oracle server consists of an instance and a database
Datafiles, control files, and online redo logs form the database
An instance is the combination of memory structures (Database Buffer Cache, Shared Pool, and Redo Log Buffer) and background processes (DBWn, LGWR, CKPT, PMON, SMON)
The background process ARCn is optional
Logical hierarchy: Database, Tablespaces, Segments, Extents, and Oracle Blocks
Oracle uses various types of processes: user process, server process, and background processes

44
Slide 46

Creating a Database
Slide 47

Objectives

After completing this lesson, you should be able to do the following:


Understand the prerequisites necessary for database creation
Create a database using Oracle Database Configuration Assistant
Create a database manually
Create a database using Oracle Managed Files

46
Slide 48

Managing and Organizing a Database

Planning for your database is the first step in managing a database system
Define the purpose of the database
Define the type of the database
Outline a database architectural design
Choose the database name
Create your database
Oracle Data Migration Assistant is used to migrate from an earlier
version of the database

47

Planning and Organizing a Database


Planning for your database is the first step in organizing and implementing a database system.
First define how the database will be used. This will determine what type of database you need
to create that will meet the needs of your business, for example, data warehousing, high online
transaction processing, or general purpose. Once you have determined the purpose and type,
you need to outline the database architecture that will be used. For example: How will
datafiles, control files, and redo log files be organized and stored? Oracle's Optimal Flexible
Architecture can help you to organize your database file structure and locations. After defining
your architecture, you must choose a database and a system identification name for your new
database.

Creating your database is a task that prepares several operating system files and is needed only
once no matter how many datafiles the database will use.

During migration from an older version of Oracle, database creation is necessary only if an
entirely new database is needed. Otherwise you can use a migration utility. The Oracle Data
Migration Assistant is a tool designed to assist you in migrating your current database system.
Slide 49

Optimal Flexible Architecture (OFA)

Oracle's recommended standard database architecture layout
OFA involves three major rules:
Establish a directory structure where any database file can be
stored on any disk resource.
Separate objects with different behavior into different
tablespaces.
Maximize database reliability and performance by separating
database components across different disk resources.

48

Optimal Flexible Architecture (OFA)


Installation and configuration on all supported platforms complies with Optimal Flexible
Architecture (OFA). OFA organizes database files by type and usage. Binary files, control files,
log files, and administrative files can be spread across multiple disks.

Consistent naming convention provides the following benefits:


Database files can be easily differentiated from other files.
It is easy to identify control files, redo log files, and datafiles.
Administration of multiple Oracle homes on the same machine by separating files on
different disks and directories becomes easier.
Better performance is achieved by decreasing disk contention among datafiles, binary files,
and administrative files which can now reside on separate directories and disks.
Slide 50

Oracle Software and File Locations

Software:
oracle_base
  /product
    /release_number
      /bin
      /dbs
      /rdbms
      /sqlplus
  /admin
    /inst_name
      /pfile

Files:
oradata/
  db01/
    system01.dbf
    control01.ctl
    redo0101.log
    ...
  db02/
    system01.dbf
    control01.ctl
    redo0101.log
    ...

49

Oracle Software Locations


The directory tree above shows an example of an OFA-compliant database.

Optimal Flexible Architecture:


Another important issue during installation and creation of a database is organizing the file
system so that it is easy to administer growth by adding data into an existing database, adding
users, creating new databases, adding hardware, and distributing input/output (I/O) load
sufficiently across many drives.
Slide 51

Creation Prerequisites

To create a new database, you must have the


following:
A privileged account authenticated by one of the
following:
Operating system
Password file
Sufficient memory to start the instance
Sufficient disk space for the planned database

50

Creation Prerequisites
SYSDBA privileges are required to create a database. These are granted either using operating
system authentication or password file authentication.
Before you create the database, make sure that the memory for the SGA, the Oracle executable,
and the processes is sufficient. Refer to your operating system installation and administration
guides.
Calculate the necessary disk space for the database, including online redo log files, control
files, and datafiles.
Slide 52

Authentication Methods for Database


Administrators

Remote database administration:
  Do you have a secure connection?
    Yes: Do you want to use OS authentication?
      Yes: Use OS authentication
      No: Use a password file
    No: Use a password file

Local database administration:
  Do you want to use OS authentication?
    Yes: Use OS authentication
    No: Use a password file
51

Authentication Methods for Database Administrators


Depending on whether you want to administer your database locally on the same machine on
which the database resides or to administer many different database servers from a single
remote client, you can choose either operating system or password file authentication to
authenticate database administrators.
Slide 53

Using Password File Authentication


Create the password file using the password utility
Set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE in the initialization parameter file
Add users to the password file
Assign appropriate privileges to each user

$ orapwd file=$ORACLE_HOME/dbs/orapwU15 password=admin entries=5

GRANT SYSDBA TO HR;

52

How to Use Password File Authentication


Oracle provides a password utility, orapwd, to create a password file. When you connect
using SYSDBA privilege, you are connecting as SYS schema and not the schema associated
with your username. For SYSOPER, you are connected to the PUBLIC schema.

Access to the database using the password file is provided by special GRANT commands issued
by privileged users.
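To verify which users have been granted administrative privileges through the password file, the standard V$PWFILE_USERS view can be queried (an illustrative sketch, not part of the original slides):

```sql
-- Users present in the password file and their administrative privileges
SELECT username, sysdba, sysoper FROM v$pwfile_users;
```

A user granted SYSDBA as on the slide (GRANT SYSDBA TO HR) would appear here with SYSDBA = TRUE.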
Slide 54

Creating a Database

An Oracle database can be created by:


Oracle Universal Installer
Oracle Database Configuration Assistant
Graphical user interface
Java-based
Launched by the Oracle Universal Installer
Can be used as a standalone
The CREATE DATABASE command

53

Creating a Database
Creating a database can be done one of three ways: automatically created as part of the
Oracle9i installation using the Oracle Universal Installer, using the Oracle Database
Configuration Assistant (DBCA), or by creating a SQL script using the CREATE DATABASE
command.

The Database Configuration Assistant is a graphical user interface that interacts with the Oracle
Universal Installer, or can be used standalone, to simplify the creation of a database. The
DBCA is Java-based and can be launched from any platform with a Java engine.

During the installation of the Oracle Server, DBCA is launched by the Oracle Universal
Installer and can automatically create a starter database for you. You have the option of using
or not using DBCA, and you also have the option to create a starter database. You also have the
option to launch DBCA later as a standalone application to create a database.

You can also migrate or upgrade an existing database if you are using a previous version of
Oracle software.
Slide 55

Operating System Environment


Set the following environment variables:
ORACLE_BASE
ORACLE_HOME
ORACLE_SID
ORA_NLS33
PATH
LD_LIBRARY_PATH

54

Operating System Environment


Before creating a database manually or with the Database Configuration Assistant the
operating system environment must be properly configured.
ORACLE_BASE: Specifies the directory at the top of the Oracle software
Example: /u01/app/oracle
ORACLE_HOME: Specifies the directory where the Oracle software is installed. The OFA-
recommended value is $ORACLE_BASE/product/release
Example: /u01/app/oracle/product/9.1.1
ORACLE_SID: Specifies the instance name and must be unique for Oracle instances running
on the same machine
ORA_NLS33: Required when creating a database with a character set other than US7ASCII
Example: $ORACLE_HOME/ocommon/nls/admin/data
PATH: Specifies the directories that the operating system searches to find executables, such as
SQL*Plus. The Oracle9i executables are located in $ORACLE_HOME/bin, which needs to be
added to the PATH variable.
LD_LIBRARY_PATH: Specifies the directories for the operating system and Oracle library
files. Example: $ORACLE_HOME/lib
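The variables above would typically be set in a login script. A minimal sketch for a Bourne-style shell follows; the paths are assumptions based on the OFA examples in this lesson, not required values, and must be adjusted for the actual installation:

```shell
# Example values only; adjust for your own installation
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/9.1.1
export ORACLE_SID=orcl
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
echo "$ORACLE_HOME"   # prints /u01/app/oracle/product/9.1.1
```

After sourcing such a script, tools like SQL*Plus resolve from $ORACLE_HOME/bin and connect by default to the instance named by ORACLE_SID.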
Slide 56

Database Configuration Assistant


The Database Configuration Assistant allows you to:
Create a database
Configure database options
Delete a database
Manage templates
Create new template using pre-defined template settings
Create new template from an existing database
Delete database template

55

Database Configuration Assistant


Managing templates is new to Oracle9i. Some predefined templates are available for use. You
can also use the existing database as a copy to create a new database or template. The database
parameters are stored in XML format.
Benefits of using templates:

Saves time in creating database


Templates can be shared
Database options can be changed, if necessary
Slide 57

Create a Database Using Database Configuration Assistant

Select type of database to be created from predefined templates


Specify global database name and SID
Select features to use in your database
Identify any scripts to be run after database creation
Select mode you want the database to operate in

56

Create a Database Using Database Configuration Assistant


Launch the Database Configuration Assistant: Programs > Oracle-OraHome90 >
Configuration and Migration Tools > Database Configuration Assistant.
Select the Create a Database option.
Select the type of database you want to create from the list of predefined templates.
Data warehouse
General purpose
New database
Transaction processing
Use the Show Details option to view what will be created. The templates can be created
with or without datafiles.
Without datafiles: Contains only the structure of the database. You can specify and
change all database parameters.
With datafiles: Contains both the structure and the physical datafiles of the database. All
logfiles and control files are automatically created for the database and you can
add/remove control files, log groups, change the destination and name of datafiles.
You cannot add or remove datafiles, tablespaces, or rollback segments. Initialization
parameters cannot be changed.
Slide 58

Create a Database Using Database Configuration Assistant

Specify options for memory, archiving, database sizing, and file locations
Define database storage parameters
Change file location variables as needed
Select a database creation option to complete database creation

57

Create a Database Using Database Configuration Assistant


Specify the following options
Memory
Choose a Typical or Custom Database
Typical creates a database with minimal user input. With typical option,
you can specify one of the following environments to operate the database: Online
Transaction Processing (OLTP), Multipurpose, and Data Warehousing.
Custom allows you customize the creation of your database. This option is
only for database administrators experienced with advanced database creation
procedures.
Archive
This option places the database in ARCHIVELOG mode and enables redo log files to be
archived before being reused.
DB Sizing
This helps to define the block size and sort area size for the database. Data block size of a
database can be specified only at the time of database creation. SORT_AREA_SIZE
is the maximum amount of memory used for sort operations.
Slide 59

Creating a Database Manually

Choose a unique instance and database name.


Choose a database character set.
Set operating system variables.
Create the initialization parameter file.
Start the instance in NOMOUNT stage.
Create and execute the CREATE DATABASE command.
Open the database.
Run scripts to generate the data dictionary and accomplish
post creation steps.
Create additional tablespaces as needed.

58

Creating a Database Manually


Choose a unique instance and database name.
Choose a database character set.
A database character set must be defined. An optional national character set can also be
defined. For example:
Character set AL32UTF8
National character set AL16UTF16

Set operating system variables.


Four environment variables need to be set: ORACLE_HOME, ORACLE_SID, PATH,
LD_LIBRARY_PATH.
ORACLE_HOME: The top directory in which the Oracle9i server is installed.
ORACLE_SID: A user definable name assigned to an instance of a database. Used to
distinguish different database instances running on one machine
PATH: Defines the directories the operating system searches to find executables.
LD_LIBRARY_PATH: Defines the directories in which required library files are stored.
Slide 60

Creating the Database


create database orcl
logfile group 1 ('/u01/app/oracle/oradata/orcl/redo1.log') size 10M,
group 2 ('/u01/app/oracle/oradata/orcl/redo2.log') size 10M,
group 3 ('/u01/app/oracle/oradata/orcl/redo3.log') size 10M
character set WE8ISO8859P1
national character set utf8
datafile '/u01/app/oracle/oradata/orcl/system.dbf'
size 50M
autoextend on
next 10M maxsize unlimited
extent management local
sysaux datafile '/u01/app/oracle/oradata/orcl/sysaux.dbf'
size 10M
autoextend on
next 10M
maxsize unlimited
undo tablespace undotbs
datafile '/u01/app/oracle/oradata/orcl/undo.dbf'
size 10M
default temporary tablespace temp
tempfile '/u01/app/oracle/oradata/orcl/temp.dbf'
size 10M;

59

Creating the Database


To create a database, use the following SQL command:
CREATE DATABASE [database]
[CONTROLFILE REUSE]
[LOGFILE [GROUP integer] filespec
[MAXLOGFILES integer]
[MAXLOGMEMBERS integer]
[MAXLOGHISTORY integer]
[MAXDATAFILES integer]
[MAXINSTANCES integer]
[ARCHIVELOG|NOARCHIVELOG]
[CHARACTER SET charset]
[NATIONAL CHARACTER SET charset]
[DATAFILE filespec [autoextend_clause]
Slide 61

Summary

In this lesson, you should have learned to:

Identify the prerequisites for creating a database
Create a database using the Oracle Database Configuration Assistant
Create a database manually using the CREATE DATABASE command
Create a database using Oracle Managed Files (OMF)

After creation, the database contains two users: SYS and SYSTEM

60
Slide 62

Lab
1. Installing Oracle Software
2. Using orapwd utility to create the password file.
3. Create the database through DBCA. Check the services which are running after
database creation.
4. Select columns OWNER, TABLE_NAME, and TABLESPACE_NAME from data
dictionary view DBA_TABLES.
5. Spool the initialization parameters to a file.
6. Locate the alert log file.
7. Locate the adump, bdump, cdump directories.
8. List the non default parameters.
9. Shutdown the database. Mount the database. Try to connect to HR schema from
other session. See what happens. Open the database. Connect to HR schema from
other session.
10. Create pfile from spfile. Locate the newly created pfile. Note the naming convention of
pfile.
11. Shutdown the database using immediate option.

61
Slide 63

Lab
12. Increase the sga_max_size parameter by 5 MB and sga_target parameter by 3 MB .
Verify whether the values have been changed or not. ( Hint : Make changes in pfile.
Connect to the idle instance. Create spfile from pfile. Startup the database. Use
v$parameter view to verify the values of parameters.)
13. Shutdown the database and open it in read-only mode. Connect as user HR and
insert the following into the REGIONS table.
1. INSERT INTO regions VALUES (5, 'Mars');
See what happens. Put the database back in read-write mode.
14. Make sure the database is started. Keep the two SQL*Plus sessions open, one session as user
SYS AS SYSDBA and one as user HR. As user SYS AS SYSDBA enable restricted session. As
user HR select from the regions table. Is the select successful? In the HR session exit SQL*Plus
then reconnect as HR. What happens? Disable restricted session.
15. Turn off the Cluster Synchronization service.
16. Turn off the Oracle Enterprise Manager service.

62
Slide 64

Managing the Control File


Slide 65

Objectives

After completing this lesson, you should be able to


do the following:
Explain the uses of the control file
List the contents of the control file
Multiplex and manage the control file
Manage the control file with Oracle Managed Files
(OMF)
Obtain control file information
Explain the purpose of online redo log files
Outline the structure of online redo log files
Control log switches and checkpoints
Multiplex and maintain online redo log files
Manage online redo logs files with OMF

64
Slide 66

Control File
A small binary file
Defines current state of physical database
Maintains integrity of database
Required:
At MOUNT state during database startup
To operate the database
Linked to a single database
Loss may require recovery
Sized initially by CREATE DATABASE

65

Control File
The control file is a small binary file necessary for the database to start and operate
successfully. Each control file is associated with only one Oracle database. Before a database is
opened, the control file is read to determine if the database is in a valid state to use.
A control file is updated continuously by the Oracle server during database use, so it must be
available for writing whenever the database is open. The information in the control file can be
modified only by the Oracle server; no database administrator or end user can edit the control
file.
If for some reason the control file is not accessible, the database does not function properly. If
all copies of a database's control files are lost, the database must be recovered before it can be
opened.
Slide 67

Control File Contents


A control file contains the following entries:
Database name and identifier
Time stamp of database creation
Tablespace names
Names and locations of datafiles and redo log files
Current redo log file sequence number
Checkpoint information
Begin and end of undo segments
Redo log archive information
Backup information

66

Control File Contents


The information in the control file includes the following:
Database name is taken from either the name specified by the initialization parameter
DB_NAME or the name used in the CREATE DATABASE statement.
Database identifier is recorded when the database is created.
Time stamp of database creation is also recorded at database creation.
Names and locations of associated datafiles and online redo log files are updated when a
datafile or redo log is added to, renamed in, or dropped from the database.
Tablespace information is updated as tablespaces are added or dropped.
Redo log history is recorded during log switches.
Location and status of archived logs are recorded when archiving occurs.
Location and status of backups are recorded by the Recovery Manager utility.
Current log sequence number is recorded when log switches occur.
Checkpoint information is recorded as checkpoints are made.
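The record sections summarized above can be inspected directly. As an illustrative sketch (standard dynamic performance views, not part of the original slides):

```sql
-- Name, identifier, and creation time recorded in the control file
SELECT name, dbid, created FROM v$database;

-- Size and usage of each control file record section
SELECT type, record_size, records_total, records_used
FROM v$controlfile_record_section;
```

The second query shows, for example, how many datafile or log history records the control file can currently hold before it must grow.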
Slide 68

Multiplexing the Control File

CONTROL_FILES=
$HOME/ORADATA/u01/ctrl01.ctl, $HOME/ORADATA/u02/ctrl02.ctl

Disk 1 (u01): ctrl01.ctl
Disk 2 (u02): ctrl02.ctl

67

Multiplexing the Control File


To safeguard against a single point of failure of the control file, it is strongly recommended that
the control file be multiplexed, storing each copy on a different physical disk. If a control file is
lost, a multiplexed copy of the control file can be used to restart the instance without database
recovery.
Control files can be multiplexed up to eight times by:
Creating multiple control files when the database is created by including the control file
names and full path in the initialization parameter file:
CONTROL_FILES=$HOME/ORADATA/u01/ctrl01.ctl,
$HOME/ORADATA/u02/ctrl02.ctl
Adding a control file after the database is created
Backing up the control files:
Because the control file records the physical structure of the database, you should immediately
make a backup of your control file after making changes to the physical structure of the
database.
Slide 69

Multiplexing the Control File


When Using SPFILE
1. Alter the SPFILE:

ALTER SYSTEM SET control_files =


'$HOME/ORADATA/u01/ctrl01.ctl',
'$HOME/ORADATA/u02/ctrl02.ctl' SCOPE=SPFILE;
2. Shutdown the database:

shutdown immediate

3. Create additional control files:

cp $HOME/ORADATA/u01/ctrl01.ctl
$HOME/ORADATA/u02/ctrl02.ctl

4. Start the database:


startup

68

Multiplexing the Control File When Using SPFILE

Alter the SPFILE: Using the ALTER SYSTEM SET command alter the SPFILE to
include a list of all control files to be used: main control file and multiplexed copies.

Shutdown the database: Shutdown the database in order to create the additional control
files on the operating system.

Create additional control files: Using the operating system copy command, create the
additional control files as required and verify that the files have been created in the
appropriate directories.

Start the database: When the database is started the SPFILE will be read and the Oracle
server will maintain all the control files listed in the CONTROL_FILES parameter.
Slide 70

Multiplexing the Control File


When Using PFILE
1. Shut down the database:

shutdown immediate

2. Create additional control files by copying an existing control file to the new location:

cp $HOME/ORADATA/u01/ctrl01.ctl $HOME/ORADATA/u02/ctrl02.ctl

3. Add control file names to the PFILE:

CONTROL_FILES = (/DISK1/control01.ctl,
/DISK3/control02.ctl)

4. Start the database:

startup

69

Multiplexing the Control File When Using PFILE

Shut down the database: Shutdown the database in order to create the additional control
files on the operating system.

Create additional control files: Using the operating system copy command, create the
additional control files as required and verify that the files have been created in the
appropriate directories.

Add control file names to PFILE: Alter the PFILE to include a listing of all of the control
files.

Start the database: When the database is started the PFILE will be read and the Oracle
server will maintain all the control files listed in the CONTROL_FILES parameter.
Slide 71

Obtaining Control File Information

Information about control file status and locations can


be retrieved by querying the following views.
V$CONTROLFILE: Lists the name and status of all control files
associated with the instance
V$PARAMETER: Lists status and location of all parameters
V$CONTROLFILE_RECORD_SECTION: Provides information about
the control file record sections
SHOW PARAMETER CONTROL_FILES: Lists the name, status, and
location of the control files

70

Obtaining Control File Information


To obtain the location and names of the control files, query the V$CONTROLFILE view.
SELECT name FROM V$CONTROLFILE;
NAME
------------------------------------
/u01/home/db03/ORADATA/u01/ctrl01.ctl
/u01/home/db03/ORADATA/u02/ctrl02.ctl
2 rows selected.
The V$PARAMETER view can also be used.
SELECT name, value from V$PARAMETER
WHERE name = 'control_files';
NAME Value
--------------------------------------------------
control_files /u01/home/db03/ORADATA/u01/ctrl01.ctl
Slide 72

Creating Control File


You can back up the control file using the following command.
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Command to create the control file.


CREATE CONTROLFILE REUSE DATABASE "INFOSYS" NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/u01/app/oracle/oradata/infosys/infosys/redo01.log' SIZE 50M,
GROUP 2 '/u01/app/oracle/oradata/infosys/infosys/redo02.log' SIZE 50M,
GROUP 3 '/u01/app/oracle/oradata/infosys/infosys/redo03.log' SIZE 50M
DATAFILE
'/u01/app/oracle/oradata/infosys/infosys/system01.dbf',
'/u01/app/oracle/oradata/infosys/infosys/undotbs01.dbf',
'/u01/app/oracle/oradata/infosys/infosys/sysaux01.dbf',
'/u01/app/oracle/oradata/infosys/infosys/apps01.dbf'
CHARACTER SET WE8ISO8859P1
;
71
Slide 73

Summary

In this lesson, you should have learned how to:


Multiplex the control file when using an SPFILE
Multiplex the control file when using an init.ora
Manage the control files using OMF

Control File is a small binary file


It defines the current state of the physical database
Control File Contains: Time stamp of database creation, Tablespace
names, Names and locations of datafiles and redo log files, Current redo
log file sequence number, Checkpoint information, Begin and end of
undo segments, Redo log archive information, Backup information
Multiplex the control file to avoid a single point of failure

72
Slide 74

Lab
1. Where is the existing control file located and what is the name? Hint: Query the dynamic
performance view V$CONTROLFILE or V$PARAMETER, or execute the SHOW
PARAMETER command to display the name and the location of the control file.
2. Multiplex the existing control file and name the new control file control04.ctl.
Hints:
- Before shutting down the database alter the SPFILE (SCOPE=SPFILE) to add
the new control file to the initialization file.
- Shut down the database, and copy any of the existing control file to a new file with the
name control04.ctl in the directory where currently all the control files are residing.
- Start up the database.
- Query the dynamic view V$CONTROLFILE or V$PARAMETER, or use
the SHOW PARAMETER command to confirm that both control files are being used.
- Create pfile from spfile. What changes do you see in the pfile?
3. What is the initial sizing of the data file section in your control file?
Hint: Query the Dynamic View
V$CONTROLFILE_RECORD_SECTION.
4. Backup the control file to trace.

73
Slide 75

Maintaining Redo Log Files

74
Slide 76

Using Redo Log Files


Redo log files have the following characteristics:
Record all changes made to data
Provide a recovery mechanism
Can be organized into groups
At least two groups required

Redo
Log
files

75

Using Redo Log Files


Redo log files provide the means to redo transactions in the event of a database failure. Every
transaction is written synchronously to the Redo Log Buffer, then gets flushed to the redo log
files in order to provide a recovery mechanism in case of media failure. (With exceptions such
as direct load inserts in objects with the NOLOGGING clause enabled.) This includes
transactions that have not yet been committed, undo segment information, and schema and
object management statements. Redo log files are used in a situation such as an instance failure
to recover committed data that has not been written to the datafiles. The redo log files are used
only for recovery.
Slide 77

Structure of Redo Log Files

Each group has identical members, with the members of a group placed on separate disks:

Disk 1: Group 1 member, Group 2 member, Group 3 member
Disk 2: Group 1 member, Group 2 member, Group 3 member

76

Structure of the Redo Log Files


The database administrator can set up the Oracle database to maintain copies of online redo log
files to avoid losing database information due to a single point of failure.

Online redo log file groups:


A set of identical copies of online redo log files is called an online redo log file group.
The LGWR background process concurrently writes the same information to all online redo
log files in a group.
The Oracle server needs a minimum of two online redo log file groups for the normal
operation of a database.

Online redo log file members:


Each online redo log file in a group is called a member.
All members in a group have identical log sequence numbers and are the same size. The
log sequence number is assigned each time that the Oracle server writes to a log group to
uniquely identify each redo log file. The current log sequence number is stored in the
control file and in the header of all datafiles.
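The group and member layout of an existing database can be listed by joining the standard V$LOG and V$LOGFILE views (an illustrative sketch, not part of the course slides):

```sql
-- One row per redo log member, with its group, sequence, size, and status
SELECT l.group#, l.sequence#, l.bytes, l.status, f.member
FROM v$log l JOIN v$logfile f ON l.group# = f.group#
ORDER BY l.group#, f.member;
```

Members of the same group share one sequence number and byte size, which is how the multiplexed copies described above can be identified.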
Slide 78

How Redo Log Files Work


Redo log files are used in a cyclic fashion.
When a redo log file is full, LGWR will move to the next log
group.
Called a log switch
Checkpoint operation also occurs
Information written to the control file

77

How Redo Log Files Work


The Oracle server sequentially records all changes made to the database in the Redo Log
Buffer. The redo entries are written from the Redo Log Buffer to one of the online redo log file
groups called the current online redo log file group by the LGWR process. LGWR writes under
the following situations:
When a transaction commits
When the Redo Log Buffer becomes one-third full
When there is more than a megabyte of changed records in the Redo Log Buffer
Before the DBWn writes modified blocks in the Database Buffer Cache to the datafiles
Redo log files are used in a cyclic fashion. Each redo log file group is identified by a log
sequence number; the group's contents are overwritten each time the log is reused, and a new
log sequence number is assigned.
Log switches:
LGWR writes to the online redo log files sequentially. When the current online redo log file
group is filled, LGWR begins writing to the next group. This is called a log switch.
When the last available online redo log file is filled, LGWR returns to the first online redo log
file group and starts writing again.
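The cyclic behavior can be observed from SQL*Plus. The sketch below is illustrative (group and sequence numbers will differ on your database): it forces a log switch and shows the CURRENT status advancing to the next group.

```sql
-- Check which group LGWR is currently writing to
SELECT group#, sequence#, status FROM v$log;

-- Force a log switch; LGWR moves to the next group
ALTER SYSTEM SWITCH LOGFILE;

-- The CURRENT status has moved to the next group,
-- which now carries a new, higher sequence#
SELECT group#, sequence#, status FROM v$log;
```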
Slide 79

Forcing Log Switches and Checkpoints

Forcing a log switch:

ALTER SYSTEM SWITCH LOGFILE;

Checkpoints can be forced by using:


Setting FAST_START_MTTR_TARGET parameter

FAST_START_MTTR_TARGET = 600

ALTER SYSTEM CHECKPOINT command

ALTER SYSTEM CHECKPOINT;

78

Forcing Log Switches and Checkpoints


Log switches and checkpoints are automatically performed at certain points in the operation of
the database, as identified previously. However, a DBA can force a log switch or a checkpoint
to occur.

Forcing checkpoints:
FAST_START_MTTR_TARGET parameter replaces the deprecated parameters:
FAST_START_IO_TARGET
LOG_CHECKPOINT_TIMEOUT
These deprecated parameters must not be used if the parameter
FAST_START_MTTR_TARGET is used.

In the example above, the FAST_START_MTTR_TARGET parameter has been set so that
instance recovery should not take more than 600 seconds. The database will adjust the other
parameters to this goal.
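The effect of FAST_START_MTTR_TARGET can be monitored through the V$INSTANCE_RECOVERY view. A minimal sketch (exact columns vary slightly across Oracle versions):

```sql
-- Set a recovery target of 600 seconds
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 600;

-- Compare the target against the current estimated recovery time
SELECT target_mttr, estimated_mttr
FROM   v$instance_recovery;
```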
Slide 80

Adding Online Redo Log File Groups

ALTER DATABASE ADD LOGFILE GROUP 3


('$HOME/ORADATA/u01/log3a.rdo',
'$HOME/ORADATA/u02/log3b.rdo')
SIZE 1M;

[Diagram: Group 1 (log1a.rdo, log1b.rdo), Group 2 (log2a.rdo, log2b.rdo), Group 3 (log3a.rdo, log3b.rdo)]

79

Adding Online Redo Log File Groups


In some cases you might need to create additional log file groups. For example, adding groups
can solve availability problems. To create a new group of online redo log files, use the
following SQL command:

ALTER DATABASE [database]


ADD LOGFILE [GROUP integer] filespec
[, [GROUP integer] filespec]...]

You specify the name and location of the members with the file specification. The value of the
GROUP parameter can be selected for each redo log file group. If you omit this parameter, the
Oracle server generates its value automatically.
Slide 81

Adding Online Redo Log File Members

ALTER DATABASE ADD LOGFILE MEMBER


'$HOME/ORADATA/u04/log1c.rdo' TO GROUP 1,
'$HOME/ORADATA/u04/log2c.rdo' TO GROUP 2,
'$HOME/ORADATA/u04/log3c.rdo' TO GROUP 3;

[Diagram: Group 1 (log1a.rdo, log1b.rdo, log1c.rdo), Group 2 (log2a.rdo, log2b.rdo, log2c.rdo), Group 3 (log3a.rdo, log3b.rdo, log3c.rdo)]

80

Adding Online Redo Log File Members


You can add new members to existing redo log file groups using the following ALTER
DATABASE ADD LOGFILE MEMBER command:
ALTER DATABASE [database]
ADD LOGFILE MEMBER
[ 'filename' [REUSE]
[, 'filename' [REUSE]]...
TO {GROUP integer
|('filename'[, 'filename']...)
}
]...
Use the fully specified name of the log file members; otherwise the files are created in a default
directory of the database server.
If the file already exists, it must have the same size, and you must specify the REUSE option.
You can identify the target group either by specifying one or more members of the group or by
specifying the group number.
Slide 82

Dropping Online Redo Log File Groups

ALTER DATABASE DROP LOGFILE GROUP 3;

[Diagram: Group 1 (log1a.rdo), Group 2 (log2a.rdo), Group 3 (log3a.rdo); Group 3 is dropped]

81

Dropping Online Redo Log File Groups


To increase or decrease the size of online redo log file groups, add new online redo log file
groups (with the new size) and then drop the old ones.
An entire online redo log file group can be dropped with the following ALTER DATABASE
DROP LOGFILE command:
ALTER DATABASE [database]
DROP LOGFILE {GROUP integer|('filename'[, 'filename']...)}
[,{GROUP integer|('filename'[, 'filename']...)}]...

Restrictions:
An instance requires at least two groups of online redo log files.
An active or current group cannot be dropped.
When an online redo log file group is dropped, the operating system files are not deleted.
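Given these restrictions, it is safest to check the group status before dropping. An illustrative sequence:

```sql
-- Only INACTIVE (or UNUSED) groups can be dropped
SELECT group#, status FROM v$log;

-- If the target group is CURRENT, force a switch first
ALTER SYSTEM SWITCH LOGFILE;

-- Drop the group once it is no longer active or current
ALTER DATABASE DROP LOGFILE GROUP 3;
```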
Slide 83

Dropping Online Redo Log File Members

ALTER DATABASE DROP LOGFILE MEMBER


'$HOME/ORADATA/u04/log3c.rdo';

[Diagram: Group 1 and Group 2 shown with their remaining members after member log3c.rdo is dropped]

82

Dropping Redo Log File Members


You may want to drop an online redo log file member because it is invalid. Use the following
ALTER DATABASE DROP LOGFILE MEMBER command if you want to drop one or more
specific online redo log file members:
ALTER DATABASE [database]
DROP LOGFILE MEMBER 'filename'[, 'filename']...
Restrictions:
If the member you want to drop is the last valid member of the group, you cannot drop that
member.
If the group is current, you must force a log file switch before you can drop the member.
If the database is running in ARCHIVELOG mode and the log file group to which the
member belongs is not archived, then the member cannot be dropped.
When an online redo log file member is dropped, the operating system file is not deleted
unless you are using the Oracle Managed Files (OMF) feature.
Slide 84

Relocating or Renaming
Online Redo Log Files
Relocate or rename online redo log files in one of
the two following ways:
ALTER DATABASE RENAME FILE command
Copy the online redo log files to the new location
Execute the command

Add new members and drop old members

ALTER DATABASE RENAME FILE

'$HOME/ORADATA/u01/log2a.rdo'
TO '$HOME/ORADATA/u02/log2a.rdo';

83

Relocating or Renaming Online Redo Log Files


The locations of online redo log files can be changed by renaming the online redo log files.
Before renaming the online redo log files, ensure that the new online redo log file exists. The
Oracle server changes only the pointers in the control files, but does not physically rename or
create any operating system files.

The following ALTER DATABASE RENAME FILE command changes the name of the online
redo log file:
SQL> ALTER DATABASE [database]
  2  RENAME FILE 'filename'[, 'filename']...
  3  TO 'filename'[, 'filename']...;
Slide 85

Online Redo Log File Configuration

[Diagram: Group 1, Group 2, and Group 3 with members distributed across Disk 1, Disk 2, and Disk 3]

84

Online Redo Log File Configuration


To determine the appropriate number of online redo log files for a database instance, you have
to test different configurations.
In some cases, a database instance may require only two groups. In other situations, a database
instance may require additional groups to guarantee that the groups are always available to
LGWR. For example, if messages in the LGWR trace file or in the alert file indicate that
LGWR frequently has to wait for a group because a checkpoint has not completed or a group
has not been archived, you need to add groups.
Although with the Oracle server multiplexed groups can contain different numbers of
members, try to build up a symmetric configuration. An asymmetric configuration should only
be the temporary result of an unusual situation such as a disk failure.

Location of online redo log files:


When you multiplex the online redo log files, place members of a group on different disks. By
doing this, even if one member is not available but other members are available, the instance
does not shut down.
Separate archive log files and online redo log files on different disks to reduce contention
between the ARCn and LGWR background processes.
Slide 86

Obtaining Group and Member Information

Information about a group and its members can be


obtained by querying the following views:
V$LOG
V$LOGFILE

85

Obtaining Group and Member Information


V$LOG view:
The following query returns information about the online redo log file from the control file:
SQL> SELECT group#, sequence#, bytes, members, status
2 FROM v$log;
GROUP#  SEQUENCE#    BYTES  MEMBERS  STATUS
------  ---------  -------  -------  --------
     1        688  1048576        1  CURRENT
     2        689  1048576        1  INACTIVE
2 rows selected.
The following items are the most common values for the STATUS column:
UNUSED: Indicates that the online redo log file group has never been written to. This is the
state of an online redo log file that was just added.
CURRENT: Indicates the current online redo log file group. This implies that the online
redo log file group is active.
ACTIVE: Indicates that the online redo log file group is active but is not the current online
redo log file group. It is needed for crash recovery. It may be in use for block recovery. It
may or may not be archived.
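A convenient way to see each group together with its member files is to join V$LOG and V$LOGFILE (illustrative; output depends on your configuration):

```sql
SELECT l.group#, l.sequence#, l.status, f.member
FROM   v$log l
JOIN   v$logfile f ON f.group# = l.group#
ORDER  BY l.group#, f.member;
```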
Slide 87

Archived Redo Log Files

Filled online redo log files can be archived.


There are two advantages in running the database in ARCHIVELOG
mode and archiving redo log files:
Recovery: A database backup together with online and archived
redo log files can guarantee recovery of all committed
transactions.
Backup: This can be performed while the database is open.
By default, a database is created in NOARCHIVELOG mode.

86

Archived Redo Log Files


One of the important decisions that a database administrator (DBA) has to make is whether the
database is configured to operate in ARCHIVELOG mode or in NOARCHIVELOG mode.
NOARCHIVELOG mode:
In NOARCHIVELOG mode, the online redo log files are overwritten each time an online redo
log file is filled, and log switches occur. LGWR does not overwrite a redo log file group until
the checkpoint for that group is completed.
ARCHIVELOG mode:
If the database is configured to run in ARCHIVELOG mode, inactive groups of filled online
redo log files must be archived. Because all changes made to the database are recorded in the
online redo log files, the database administrator can use the physical backup and the archived
online redo log files to recover the database without losing any committed data.
There are two ways in which online redo log files can be archived:
Manually
Automatically (recommended method)
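Switching a database to ARCHIVELOG mode must be done while the database is mounted but not open. A minimal sketch, run as SYSDBA (in Oracle9i, automatic archiving is additionally controlled by the LOG_ARCHIVE_START parameter):

```sql
-- The database must be cleanly shut down first
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

-- Enable ARCHIVELOG mode and reopen the database
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Verify the log mode
SELECT log_mode FROM v$database;
```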
Slide 88

Archived Redo Log Files

Accomplished automatically by ARCn


Accomplished manually through SQL statements
When successfully archived:
An entry in the control file is made
Records: archive log name, log sequence number, and high and
low system change number (SCN)
Filled redo log file cannot be reused until:
A checkpoint has taken place
File has been archived by ARCn
Can be multiplexed
Maintained by the DBA

87

Archived Redo Log Files


Information about archived logs can be obtained from V$INSTANCE.
SQL> SELECT archiver
2 FROM v$instance;
ARCHIVE
---------
STOPPED
1 row selected.
Slide 89

Summary
In this lesson, you should have learned how to:
Explain the use of online redo log files
Obtain redo log file information
Control log switches and checkpoints
Multiplex and maintain online redo log files
Manage online redo log files with OMF

Online redo log files record the changes made to the database
Redo log files are used in a cyclic fashion and entries are recorded
sequentially.
Minimum of 2 redo log groups and 1 member in each redo log group
Redo log groups and members can be added and dropped
Multiplex and maintain online redo log files to avoid single-point failure
Manage online redo log files with OMF
Archived log files are copies of filled redo log files, also called offline redo log
files.
The two advantages of archiving are Recovery and Backup

88
Slide 90

Lab
1. List the number and location of existing log files and display the number of redo log file groups and
members your database has. Hints: Query the dynamic view V$LOGFILE. Use the dynamic
view V$LOG.
2. Add a redo log member to each group in your database located on u04, using the following naming
conventions: Add member to Group 1: log01b.rdo. Add member to Group 2: log02b.rdo
Verify the result.
Hints: Execute the ALTER DATABASE ADD LOGFILE MEMBER command to add a redo log
member to each group. Query the dynamic performance view V$LOGFILE to verify the result.
3. Add a redo log group in your database with two members with following naming conventions:
Add Group 4: log04a.rdo and log04b.rdo . Verify the result .
Hints: Execute the ALTER DATABASE ADD LOGFILE command to create a new group. Query
the Dynamic View V$LOGFILE to display the name of the new members of the new group. Query
the Dynamic View V$LOG to display the number of redo log file groups and members.
4. Make sure that the newly added online redo log is the current online redo log. [Hint: Perform log
switch]
5. Drop the redo log group created in step 3.
Hints: Perform log switch. Execute the ALTER DATABASE DROP LOGFILE GROUP command
to remove the log group. Query the Dynamic View V$LOG to verify the result. Remove the
operating system files for the group.
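One possible solution sketch for the lab (paths and group numbers are illustrative and must be adapted to your environment):

```sql
-- 1. List existing groups and members
SELECT group#, members, status FROM v$log;
SELECT group#, member FROM v$logfile;

-- 2. Add a member to each existing group
ALTER DATABASE ADD LOGFILE MEMBER
  '/u04/oradata/log01b.rdo' TO GROUP 1,
  '/u04/oradata/log02b.rdo' TO GROUP 2;

-- 3. Add a new group with two members
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u04/oradata/log04a.rdo', '/u04/oradata/log04b.rdo') SIZE 1M;

-- 4. Make the new group current
--    (repeat until GROUP 4 shows CURRENT in V$LOG)
ALTER SYSTEM SWITCH LOGFILE;

-- 5. Switch away again, then drop the group once it is INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 4;
SELECT group#, status FROM v$log;
```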

89
Slide 91

Managing Tablespaces and


Datafiles
Slide 92

Objectives
After completing this lesson, you should be able to do the following:

Describe Tablespaces and Datafiles


Types of Tablespaces
Create, alter and drop Tablespaces and Datafiles
Resize tablespaces and datafiles
Space Management in Tablespaces
Taking a Tablespace Offline
Change storage settings

91
Slide 93

Tablespaces and Datafiles

Oracle stores data logically in tablespaces and physically in datafiles.
Tablespaces:
Can belong to only one database at a time
Consist of one or more datafiles
Are further divided into logical units of storage
Datafiles:
Can belong to only one tablespace and one database
Are a repository for schema object data

[Diagram: a Database contains a Tablespace, which consists of Datafiles]

92

Tablespaces and Datafiles


A database is divided into one or more logical storage units called tablespaces.
Tablespaces are divided into logical units of storage called segments, which are further
divided into extents. Extents are a collection of contiguous blocks.
A tablespace in an Oracle database consists of one or more physical datafile(s).
A datafile can be associated with only one tablespace and only one database.
When a datafile is first created, the allocated disk space is formatted but does not contain
any user data. However, Oracle reserves the space to hold the data for future segments of
the associated tablespace--it is used exclusively by Oracle. As the data grows in a
tablespace, Oracle uses the free space in the associated datafiles to allocate extents for the
segment.
The data associated with schema objects in a tablespace is physically stored in one or more
of the datafiles that constitute the tablespace. Note that a schema object does not
correspond to a specific datafile; rather, a datafile is a repository for the data of any
schema object within a specific tablespace. Oracle allocates space for the data associated
with a schema object in one or more datafiles of a tablespace. Therefore, a schema object
can span one or more datafiles. Unless table striping is used (where data is spread across
more than one disk), the database administrator and end users cannot control which
datafile stores a schema object.
Slide 94

Types of Tablespaces
SYSTEM tablespace
Created with the database
Contains the data dictionary
Contains the SYSTEM undo segment
Non-SYSTEM tablespace
Separate segments
Eases space administration
Controls amount of space allocated to a user

93

Types of Tablespaces
The DBA creates tablespaces for increased control and ease of maintenance. The Oracle server
perceives two types of tablespaces: SYSTEM and Non-System (all others).
SYSTEM tablespace:
Every Oracle database contains a tablespace named SYSTEM, which Oracle creates
automatically when the database is created.
The SYSTEM tablespace is always online when the database is open.
The SYSTEM tablespace always contains the data dictionary tables for the entire database.
The data dictionary tables are stored in datafile 1.
It also contains the system UNDO segment .
Non-SYSTEM tablespaces:
They are used to store user data
Separate undo, temporary, application data and application index segments
Separate dynamic and static data
Control the amount of space allocated to users' objects
Slide 95

Creating Tablespaces
A tablespace is created using the command:
CREATE TABLESPACE

CREATE TABLESPACE userdata


DATAFILE '/u01/oradata/userdata01.dbf' SIZE 100M
AUTOEXTEND ON NEXT 5M MAXSIZE 200M;

94

Creating Tablespaces
You create a tablespace with the CREATE TABLESPACE command:
CREATE TABLESPACE tablespace
[DATAFILE clause]
[MINIMUM EXTENT integer[K|M]]
[BLOCKSIZE integer [K]]
[LOGGING|NOLOGGING]
[DEFAULT storage_clause ]
[ONLINE|OFFLINE]
[PERMANENT|TEMPORARY]
[extent_management_clause]
[segment_management_clause]
where:
Tablespace: This is the name of the tablespace to be created
DATAFILE: This specifies the datafile or datafiles that make up the tablespace
MINIMUM EXTENT: This ensures that every used extent size in the tablespace is a multiple
of the integer. Use K or M to specify this size in kilobytes or megabytes.
BLOCKSIZE: BLOCKSIZE specifies a nonstandard block size for the tablespace. In order to
specify this clause, you must have the DB_CACHE_SIZE and at least one
DB_nK_CACHE_SIZE parameter set, and the integer you specify in this clause must
correspond with the setting of one DB_nK_CACHE_SIZE parameter setting.
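For example, to create a tablespace with a nonstandard 16 KB block size, a matching buffer cache must exist first. A hedged sketch (the datafile path and sizes are illustrative):

```sql
-- A cache for 16K blocks must exist before the tablespace can be created
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 16M;

CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/ts16k01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```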
Slide 96

Space Management in Tablespaces

Locally managed tablespace:


Free extents managed in the tablespace
Bitmap is used to record free extents
Each bit corresponds to a block or group of blocks
Bit value indicates free or used

95

Space Management in Tablespaces


Tablespaces allocate space in extents. Tablespaces can be created to use one of the two
different methods of keeping track of free and used space:
Locally managed tablespaces:
A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of
the free or used status of blocks in that datafile. Each bit in the bitmap corresponds to a block
or a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap
values to show the new status of the blocks. These changes do not generate rollback
information because they do not update tables in the data dictionary (except for special cases
such as tablespace quota information).
Dictionary-managed tablespaces:
For a tablespace that uses the data dictionary to manage its extents, Oracle updates the
appropriate tables in the data dictionary whenever an extent is allocated or freed for reuse.
Oracle also stores rollback information about each update of the dictionary tables. Because
dictionary tables and rollback segments are part of the database, the space that they occupy is
subject to the same space management operations as all other data.
Starting with Oracle9i, the default for extent management when creating a tablespace is locally
managed. However, you can explicitly specify that you want to create a dictionary-managed
tablespace. For dictionary-managed tablespaces, Oracle updates the appropriate tables in the
data dictionary whenever an extent is allocated, or freed for reuse.
Slide 97

Locally Managed Tablespaces

Reduced contention on data dictionary tables


No undo generated when space allocation or deallocation occurs

CREATE TABLESPACE userdata


DATAFILE '/u01/oradata/userdata01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

96

Locally Managed Tablespaces


The LOCAL option of the EXTENT MANAGEMENT clause specifies that a tablespace is to
be locally managed. By default a tablespace is locally managed.
extent_management_clause:
[ EXTENT MANAGEMENT [ DICTIONARY | LOCAL
[ AUTOALLOCATE | UNIFORM [SIZE integer[K|M]] ] ] ]
where:
DICTIONARY: Specifies that the tablespace is managed using dictionary tables.
LOCAL: Specifies that the tablespace is locally managed via bitmaps. If you specify LOCAL,
you cannot specify DEFAULT storage_clause, MINIMUM EXTENT, or TEMPORARY.
AUTOALLOCATE: Specifies that the tablespace is system managed. Users cannot specify an
extent size. This is the default.
UNIFORM: Specifies that the tablespace is managed with uniform extents of SIZE bytes. Use
K or M to specify the extent size in kilobytes or megabytes. The default size is 1 MB.
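Whether a tablespace is locally or dictionary managed, and whether it uses UNIFORM or system-managed (AUTOALLOCATE) extents, can be checked in the DBA_TABLESPACES view:

```sql
SELECT tablespace_name, extent_management, allocation_type
FROM   dba_tablespaces;
```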
Slide 98

Undo Tablespace
Used to store undo segments
Cannot contain any other objects
Extents are locally managed
Can only use the DATAFILE and EXTENT MANAGEMENT clauses

CREATE UNDO TABLESPACE undo1


DATAFILE '/u01/oradata/undo01.dbf' SIZE 40M;

97

Undo Tablespace
Oracle strongly recommends operating in automatic undo management mode. The database
server can manage undo more efficiently, and automatic undo management mode is less
complex to implement and manage.
An undo tablespace is used with Automatic Undo Management.
CREATE UNDO TABLESPACE tablespace
[DATAFILE clause]
Slide 99

Temporary Tablespaces

Used for sort operations


Cannot contain any permanent objects
Locally managed extents recommended

CREATE TEMPORARY TABLESPACE temp


TEMPFILE '/u01/oradata/temp01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;

98

Temporary Tablespaces
You can manage space for sort operations more efficiently by designating temporary
tablespaces exclusively for sorts. Doing so effectively eliminates serialization of space
management operations involved in the allocation and deallocation of sort space.

All operations that use sorts--including joins, index builds, ordering (ORDER BY), the
computation of aggregates (GROUP BY), and the ANALYZE statement for collecting
optimizer statistics--benefit from temporary tablespaces. The performance gains are
significant with Oracle9i Real Application Clusters.

CREATE TEMPORARY TABLESPACE Command


Although the ALTER/CREATE TABLESPACE...TEMPORARY command can be used to
create a temporary tablespace, it is recommended that the CREATE TEMPORARY
TABLESPACE command be used.
Slide 100

Default Temporary Tablespace

Specifies a database-wide default temporary tablespace


Eliminates using SYSTEM tablespace for storing
temporary data
Can be created by using:
CREATE DATABASE
ALTER DATABASE

ALTER DATABASE
DEFAULT TEMPORARY TABLESPACE temp;

99

Default Temporary Tablespace


If you have not created a temporary tablespace for a user, Oracle must have somewhere to
store temporary data for that user. Historically, Oracle has used SYSTEM for default temporary
data storage.
In Oracle9i you are encouraged to define a default temporary tablespace when creating the
database. If you do not, SYSTEM will still be used for default temporary storage. However,
you will receive a warning in alert.log saying that a default temporary tablespace is
recommended and will be necessary in future releases.
You specify a default temporary tablespace when you create a database, using the DEFAULT
TEMPORARY TABLESPACE extension to the CREATE DATABASE statement.
You can drop the default temporary tablespace. If you do, the SYSTEM tablespace will be
used as default temporary tablespace. However, in future releases, this might not be allowed.
Once defined, users who are not explicitly assigned to a temporary tablespace are assigned to
the default temporary tablespace.
The default temporary tablespace can be changed at any time by using the ALTER DATABASE
DEFAULT TEMPORARY TABLESPACE command. When the default temporary tablespace
is changed, all users assigned the default temporary tablespace are reassigned to the new
default.
Slide 101

Creating a Default Temporary Tablespace


During database creation:
create database orcl
logfile group 1 ('/u01/app/oracle/oradata/orcl/redo1.log') size
10M,

datafile '/u01/app/oracle/oradata/orcl/system.dbf'
size 50M
autoextend on
....
sysaux datafile '/u01/app/oracle/oradata/orcl/sysaux.dbf'
size 10M
..
undo tablespace undotbs
.
default temporary tablespace temp
tempfile '/u01/app/oracle/oradata/orcl/temp.dbf'
size 10M;

100

Creating a Default Temporary Tablespace


During database creation:
When creating a database without a default temporary tablespace, the SYSTEM tablespace is
assigned as the temporary tablespace to any user created without a TEMPORARY TABLESPACE
clause. Also, a warning is placed in the alertSID.log stating that the SYSTEM tablespace is the
default temporary tablespace.

Creating a default temporary tablespace during database creation prevents the SYSTEM
tablespace from being used for temporary space. When a default temporary tablespace is
created with the CREATE DATABASE command, it is of locally managed type.
Slide 102

Creating a Default Temporary Tablespace


After database creation:

ALTER DATABASE
DEFAULT TEMPORARY TABLESPACE default_temp2;

To find the default temporary tablespace for the database query


DATABASE_PROPERTIES
SELECT * FROM DATABASE_PROPERTIES;

101

Creating a Default Temporary Tablespace (continued)


After database creation:

A default temporary tablespace can be created and set by:

Using the CREATE TABLESPACE command to create a temporary tablespace


Using the ALTER DATABASE command as shown above.
Slide 103

Restrictions on Default Temporary Tablespace

A default temporary tablespace:


Cannot be dropped until a new default is made available
Cannot be taken offline
Cannot be altered to a permanent tablespace

102

Restrictions on Default Temporary Tablespace


Dropping a Default Temporary Tablespace
The default temporary tablespace cannot be dropped until a new default is made available. The
ALTER DATABASE command must be used to change the default temporary tablespace to a
new default. The old default temporary tablespace is then dropped only after a new default
temporary tablespace is made available. Users assigned to the old default temporary tablespace
are automatically reassigned to the new default temporary tablespace.

Changing the Type of a Default Temporary Tablespace


Because a default temporary tablespace must be either the SYSTEM tablespace or a temporary
tablespace, you cannot change the default temporary tablespace to a permanent type.

Taking Default Temporary Tablespace Offline


Tablespaces are taken offline to make that part of the database unavailable to other users (for
example, an offline backup, maintenance, or making a change to an application that uses the
tablespace). Because none of these situations apply to a temporary tablespace, you cannot take
a default temporary tablespace offline.
Slide 104

Read Only Tablespaces


Use the following command to place a tablespace in read only
mode

Causes a checkpoint
Data available only for read operations
Objects can be dropped from tablespace

ALTER TABLESPACE userdata READ ONLY;

103

Read Only Tablespaces


The ALTER TABLESPACE [tablespace] READ ONLY command places the tablespace in a
transitional read only mode. In this transitional state, no further write operations can occur in
the tablespace except for the rollback of existing transactions that previously modified blocks
in the tablespace. After all of the existing transactions have been either committed or rolled
back, the read only command completes, and the tablespace is placed in read only mode.

You can drop items, such as tables and indexes, from a read only tablespace, because these
commands affect only the data dictionary. This is possible because the DROP command
updates only the data dictionary, but not the physical files that make up the tablespace. For
locally managed tablespaces, the dropped segment is changed to a temporary segment, to
prevent the bitmap from being updated. To make a read only tablespace writable, all of the
datafiles in the tablespace must be online. Making tablespaces read only causes a checkpoint
on the datafiles of the tablespace.
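The behavior described above can be exercised as follows (the tablespace and table names are illustrative):

```sql
-- Place the tablespace in read only mode
-- (waits for existing transactions to commit or roll back)
ALTER TABLESPACE userdata READ ONLY;

-- DML against objects in the tablespace now fails, but DROP
-- still works because it updates only the data dictionary
DROP TABLE userdata_table;

-- Return the tablespace to read/write mode
ALTER TABLESPACE userdata READ WRITE;
```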
Slide 105

Taking a Tablespace Offline


Not available for data access
Tablespaces that cannot be taken offline:
SYSTEM tablespace
Tablespaces with active undo segments
Default temporary tablespace

To take a tablespace offline or bring it online:

ALTER TABLESPACE userdata OFFLINE;

ALTER TABLESPACE userdata ONLINE;

104

Offline Tablespaces
A database administrator can bring any tablespace other than the SYSTEM tablespace online
(accessible) or offline (not accessible) whenever the database is open. The SYSTEM
tablespace is always online when the database is open because the data dictionary must always
be available to Oracle.
A tablespace is usually online so that the data contained within it is available to database users.
However, the database administrator can take a tablespace offline for maintenance or backup
and recovery purposes.

Offline Status of a Tablespace


When a tablespace goes offline, Oracle does not permit any subsequent SQL statements to
reference objects contained in that tablespace. Users trying to access objects in a tablespace
that is offline receive an error.
When a tablespace goes offline or comes back online, the event is recorded in the data
dictionary and in the control file. If a tablespace is offline when you shut down a database, the
tablespace remains offline and is not checked when the database is subsequently mounted and
reopened.
Slide 106

Resizing a Tablespace
A tablespace can be resized by:
Changing the size of a datafile:
Automatically using AUTOEXTEND
Manually using ALTER TABLESPACE
Adding a datafile using ALTER TABLESPACE

105

Resizing a Tablespace
You can enlarge a tablespace in two ways:
Add a datafile to a tablespace
When you add another datafile to an existing tablespace, you increase the amount of
disk space allocated for the corresponding tablespace.
Increase the size of a datafile
You can change a datafile's size, or let datafiles in existing tablespaces grow
dynamically as more space is needed. You accomplish this by altering existing files
or by adding files with dynamic extension properties.
Slide 107

Enabling Automatic Extension


of Datafiles
Can be resized automatically with the following commands:
CREATE DATABASE
CREATE TABLESPACE
ALTER TABLESPACE ADD DATAFILE
Example:
CREATE TABLESPACE user_data
DATAFILE
'/u01/oradata/userdata01.dbf' SIZE 200M
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;

106

Enabling Automatic Extension of Datafiles


Specifying AUTOEXTEND for a New Datafile
The AUTOEXTEND clause enables or disables the automatic extension of datafiles. Files
increase in specified increments up to a specified maximum.
Benefits of using the AUTOEXTEND clause:
Reduces need for immediate intervention when a tablespace runs out of space
Ensures applications will not halt because of failures to allocate extents
When a datafile is created, the following SQL commands can be used to enable automatic
extension of the datafile:
CREATE DATABASE
CREATE TABLESPACE ... DATAFILE
ALTER TABLESPACE ... ADD DATAFILE

Query the DBA_DATA_FILES view to determine whether AUTOEXTEND is enabled.
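A sketch of that check (AUTOEXTENSIBLE shows YES or NO for each file):

```sql
SELECT file_name, tablespace_name, autoextensible,
       increment_by, maxbytes
FROM   dba_data_files;
```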


Slide 108

Manually Resizing a Datafile


Manually increase or decrease a datafile size using ALTER
DATABASE
Resizing a datafile adds more space without adding more
datafiles
Manual resizing of a datafile reclaims unused space in database
Example:

ALTER DATABASE
DATAFILE '/u03/oradata/userdata02.dbf'
RESIZE 200M;

107

Manually Resizing a Datafile


Use the ALTER DATABASE command to manually increase or decrease the size of a datafile:
ALTER DATABASE [database]
DATAFILE filename[, filename]...
RESIZE integer[K|M]
where:
Integer: Is the absolute size, in bytes, of the resulting datafile
If there are database objects stored above the specified size, then the datafile size is decreased
only to the last block of the last objects in the datafile.
Slide 109

Adding Datafiles to a Tablespace


Increases the space allocated to a tablespace by adding
additional datafiles
ADD DATAFILE clause is used to add a datafile
Example:

ALTER TABLESPACE user_data


ADD DATAFILE '/u01/oradata/userdata03.dbf'
SIZE 200M;

108

Adding Datafiles to a Tablespace


You can add datafiles to a tablespace to increase the total amount of disk space allocated for the
tablespace with the ALTER TABLESPACE ADD DATAFILE command:
ALTER TABLESPACE tablespace
ADD DATAFILE filespec [autoextend_clause]

This illustration shows the SQL statement for adding a datafile to a tablespace:

ALTER TABLESPACE system


ADD DATAFILE 'DATA2.ORA' SIZE 100M;

Database size and tablespace size increase with the addition of datafiles.
Slide 110

Methods for Moving Datafiles

For permanent tablespaces


Take the tablespace offline
Copy the datafile to target location
Issue the alter tablespace rename command

ALTER TABLESPACE userdata RENAME


DATAFILE '/u01/oradata/userdata01.dbf'
TO '/u02/oradata/userdata01.dbf';

Take the tablespace online


Remove the older datafile at the OS level (for example, with the rm
command on UNIX).

109

Methods for Moving Datafiles


Depending on the type of tablespace, the database administrator can move datafiles using one
of the following two methods:
The ALTER TABLESPACE Command
The following ALTER TABLESPACE command is applied only to datafiles in a non-SYSTEM
tablespace that does not contain active undo or temporary segments:
ALTER TABLESPACE tablespace
RENAME DATAFILE 'filename'[, 'filename']...
TO 'filename'[, 'filename']...
The source filenames must match the names stored in the control file.
Slide 111

Methods for Moving Datafiles


For system tablespace
Database must be mounted
Copy datafile to target location
Issue the alter database rename command

ALTER DATABASE RENAME


FILE '/u01/oradata/system01.dbf'
TO '/u03/oradata/system01.dbf';

Open the database


Remove the older datafile at OS level

110

Methods for Moving Datafiles (continued)


The ALTER DATABASE Command
The ALTER DATABASE command can be used to move any type of datafile:
ALTER DATABASE [database]
RENAME FILE 'filename'[, 'filename']...
TO 'filename'[, 'filename']...
Because the SYSTEM tablespace cannot be taken offline, you must use this method to move
datafiles in the SYSTEM tablespace.
Use the following process to rename files in tablespaces that cannot be taken offline:
1. Shut down the database.
2. Use an operating system command to move the files.
3. Mount the database.
4. Execute the ALTER DATABASE RENAME FILE command.
5. Open the database.
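The five steps above can be sketched as a single SQL*Plus session (the file names are the illustrative paths used on the slide):

```sql
SHUTDOWN IMMEDIATE
-- Move the file with an operating system command, for example on UNIX:
-- mv /u01/oradata/system01.dbf /u03/oradata/system01.dbf
STARTUP MOUNT
ALTER DATABASE RENAME FILE '/u01/oradata/system01.dbf'
                        TO '/u03/oradata/system01.dbf';
ALTER DATABASE OPEN;
```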
Slide 112

Dropping Tablespaces
Cannot drop a tablespace if it:
Is the SYSTEM tablespace
Example:

DROP TABLESPACE userdata


INCLUDING CONTENTS AND DATAFILES;

111

Dropping Tablespaces
You can remove a tablespace from the database when the tablespace and its contents are no
longer required with the following DROP TABLESPACE SQL command:
DROP TABLESPACE tablespace
[INCLUDING CONTENTS [AND DATAFILES] [CASCADE CONSTRAINTS]]
where:
tablespace: Specifies the name of the tablespace to be dropped
INCLUDING CONTENTS: Drops all the segments in the tablespace
AND DATAFILES: Deletes the associated operating system files
CASCADE CONSTRAINTS: Drops referential integrity constraints from tables outside the
tablespace that refer to primary and unique keys in the tables in the dropped tablespace
Slide 113

10g New Feature : Sysaux Tablespace


The SYSAUX tablespace is installed as an auxiliary tablespace to the
SYSTEM tablespace.
It stores non-SYS-related tables and indexes that traditionally were placed
in the SYSTEM tablespace; for example, tables and indexes that were
previously owned by the SYSTEM user can now be placed in the SYSAUX
tablespace.
If the SYSAUX tablespace becomes unavailable, core database functionality
will remain operational.
It is always created during database creation or database upgrade.

112
Slide 114

10g New Feature : Bigfile Tablespaces(BFT)


It simplifies large database tablespace management by reducing the number
of datafiles needed.
It simplifies datafile management and Automated Storage Management
(ASM) by eliminating the need for adding new datafiles and dealing with
multiple files.
It allows you to create a bigfile tablespace of up to eight exabytes (eight
million terabytes) in size, and significantly increase the storage capacity of
an Oracle database.
For creating Bigfile Tablespace:
CREATE BIGFILE TABLESPACE BIGT
DATAFILE '/U01/BIGT01.DBF' SIZE 10G;
For monitoring Bigfile Tablespace:

SELECT TABLESPACE_NAME,BIGFILE
FROM DBA_TABLESPACES;

113
Slide 115

Obtaining Tablespace Information

Query the following views to obtain information about tablespaces:


Tablespaces:
DBA_TABLESPACES
V$TABLESPACE
Datafile information:
DBA_DATA_FILES
V$DATAFILE
Temp file information:
DBA_TEMP_FILES
V$TEMPFILE
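As a sketch, the dictionary views listed above can be joined to show each tablespace with its datafiles and their sizes:

```sql
SELECT t.tablespace_name,
       d.file_name,
       d.bytes / 1024 / 1024 AS size_mb
FROM   dba_tablespaces t
JOIN   dba_data_files  d
  ON   t.tablespace_name = d.tablespace_name
ORDER  BY t.tablespace_name;
```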

114
Slide 116

Storage and Relationship Structure

Storage and Relationship Structure

115
Slide 117

Storage and Relationship Structure

[Diagram: the database PROD contains the tablespaces SYSTEM, USER_DATA, RBS, and TEMP.
Each tablespace is made up of one or more datafiles spread across disks (DISK1/SYS1.dbf,
DISK2/USER1.dbf, DISK3/USER2.dbf, DISK1/ROLL1.dbf, DISK1/TEMP.dbf). Datafiles hold
segments (data dictionary, table, index, rollback, and temporary segments, such as those
for S_DEPT and S_EMP), segments consist of extents, and extents are made up of Oracle
data blocks.]

116

Database Architecture

The previous lesson discussed the storage structure of a database, its tablespaces, and its data
files. This lesson continues the discussion of database storage by examining segments, extents,
and data blocks.
Slide 118

Types of Segments

Table
Table partition
Cluster
Index

117

Types of Segments
Segments are space-occupying objects in a database. They use space in the data files of a
database. This section describes the different types of segments.

Table:
A table is the most common means of storing data within a database. A table segment stores
the data for a table that is neither clustered nor partitioned. Data within a table segment is
stored in no particular order, and the database administrator (DBA) has very little control over
the location of rows within the blocks in a table. All the data in a table segment must be stored
in one tablespace.

Table partition:
Scalability and availability are major concerns when there is a table in a database with high
concurrent usage. In such cases, data within a table may be stored in several partitions, each of
which resides in a different tablespace. The Oracle server currently supports partitioning by a
range of key values, by a hashing algorithm, and by a list of values. If a table is partitioned,
each partition is a segment, and the storage parameters can be specified to control them
independently. Use of this type of segment requires the partitioning option within the Oracle9i
Enterprise Edition.
Slide 119

Types of Segments

Index-organized table
Index partition
Undo segment
Temporary segment

118

Types of Segments
Index-organized table:
In an index-organized table, data is stored within the index based on the key value. An index-
organized table does not need a table lookup, because all the data can be retrieved directly from
the index tree.
Index partition:
An index can be partitioned and spread across several tablespaces. In this case, each partition
in the index corresponds to a segment and cannot span multiple tablespaces. The primary use
of a partitioned index is to minimize contention by spreading index input/output (I/O). Use of
this type of segment requires the partitioning option within the Oracle9i Enterprise Edition.
Undo segment:
An undo segment is used by a transaction that is making changes to a database. Before
changing the data or index blocks, the old value is stored in the undo segment. This allows a
user to undo changes made.
Temporary segment:
When a user executes commands such as CREATE INDEX, SELECT DISTINCT, and
SELECT GROUP BY, the Oracle server tries to perform sorts in memory. When a sort needs
more space than the space available in memory, intermediate results are written to the disk.
Temporary segments are used to store these intermediate results.
Slide 120

Types of Segments

LOB segment
Nested table
Bootstrap segment

119

Types of Segments
LOB segment:
One or more columns in a table can be used to store large objects (LOBs) such as text
documents, images, or videos. If the column is large, the Oracle server stores these values in
separate segments known as LOB segments. The table contains only a locator or a pointer to
the location of the corresponding LOB data.
Nested table:
A column in a table may be made up of a user-defined table as in the case of items within an
order. In such cases, the inner table, which is known as a nested table, is stored as a separate
segment.
Bootstrap segment:
A bootstrap segment, also known as a cache segment, is created by the sql.bsq script when a
database is created. This segment helps to initialize the data dictionary cache when the
database is opened by an instance.
The bootstrap segment cannot be queried or updated and does not require any maintenance by
the database administrator.
Slide 121

Undo Management

Undo Management

120
Slide 122

Managing Undo Data


There are two methods for managing undo data:
Automatic Undo Management
Manual Undo Management
The term undo was known as rollback in previous versions.

121

Managing Undo Data


Automatic Undo Management
Automatic undo management is undo-tablespace based. You allocate space in the form of a few
undo tablespaces, instead of allocating many rollback segments in different sizes.

Manual Undo Management


In earlier releases, undo space management was performed using rollback segments. This
method is now called manual undo management mode. Manual undo management mode is
supported under any compatibility level. Use it when you need to run Oracle9i to take
advantage of some new features, but are not yet ready to convert to automatic undo
management mode.
Slide 123

Undo Segment

[Diagram: during an update transaction, the old image of the changed data is written to
the undo segment, while the new image is written to the table.]

122

Undo Segment
An undo segment is used to save the old value (undo data) when a process changes data in a
database. It stores the location of the data and the data as it existed before being modified.
The header of an undo segment contains a transaction table where information about the
current transactions using the undo segment is stored.
A serial transaction uses only one undo segment to store all of its undo data.
Many concurrent transactions can write to one undo segment.
Slide 124

Undo Segments: Purpose

Transaction rollback
Transaction recovery
Read consistency

123

Undo Segments: Purpose


1. Transaction Rollback
When a transaction modifies a row in a table, the old image of the modified
columns (undo data) is saved in the undo segment. If the transaction is rolled
back, the Oracle server restores the original values by writing the values in the
undo segment back to the row.
2. Transaction Recovery
If the instance fails while transactions are in progress, the Oracle server needs to
undo any uncommitted changes when the database is opened again. This
rollback is part of transaction recovery. Recovery is possible only because
changes made to the undo segment are also protected by the redo log files.
3. Read Consistency
While transactions are in progress, other users in the database should not see
any uncommitted changes made by these transactions. In addition, a statement
should not see any changes that were committed after the statement begins
execution. The old values (undo data) in the undo segments are also used to
provide the readers a consistent image for a given statement.
Slide 125

Read Consistency

[Diagram: a SELECT * FROM table statement sees the image of the data as it existed at the
start of the statement, even while another transaction writes a new image to the table.]

124

Read Consistency
Read consistency, as supported by Oracle, does the following:
Guarantees that the set of data seen by a statement is consistent with respect to a single point
in time and does not change during statement execution (statement-level read consistency)

Ensures that readers of database data do not wait for writers or other readers of the same data

Ensures that writers of database data do not wait for readers of the same data

Ensures that writers only wait for other writers if they attempt to update identical rows in
concurrent transactions

The simplest way to think of Oracle's implementation of read consistency is to imagine each
user operating a private copy of the database, hence the multiversion consistency model.
Slide 126

Types of Undo Segments

SYSTEM: Used for objects in the SYSTEM tablespace

Non-SYSTEM: Used for objects in other tablespaces:

Auto mode:

Requires an UNDO tablespace


Manual mode:

Private: Acquired by a single instance


Public: Acquired by any instance

Deferred: Used when tablespaces are taken offline (with the IMMEDIATE or TEMPORARY
option, or for recovery)

125

Types of Undo Segments


SYSTEM Undo Segment
The SYSTEM undo segment is created in the SYSTEM tablespace when a database is created.
This undo segment can be used only for changes made to objects in the SYSTEM tablespace.
The SYSTEM undo segment exists and works the same in both manual and auto mode.
Non-SYSTEM Undo Segments
A database that has multiple tablespaces needs at least one non-SYSTEM undo segment for
manual mode or one UNDO tablespace for auto mode.
Manual Mode
In manual undo management mode, undo space is allocated externally as rollback segments.
Auto Mode
In automatic undo management mode, rollback information, referred to as undo, is stored in
an undo tablespace rather than rollback segments and is managed by Oracle. If you want to
create and name a specific tablespace for the undo tablespace, you can include the UNDO
TABLESPACE clause at database creation time. If you omit this clause, and automatic undo
management is specified, Oracle creates a default undo tablespace named SYS_UNDOTBS.
Private
Private undo segments are segments that are brought online by an instance because they are
listed in the parameter file. However, they can be brought online explicitly by issuing an
ALTER ROLLBACK SEGMENT command.
Slide 127

Automatic Undo Management: Concepts

Undo data is managed using an UNDO tablespace.

You allocate one UNDO tablespace per instance with


enough space for the workload of the instance.

The Oracle server automatically maintains undo data


within the UNDO tablespace.

126

Automatic Undo Management: Concepts


Automatic undo management is undo-tablespace based. You allocate space in the form of a
few undo tablespaces, instead of allocating many rollback segments in different sizes.
Automatic undo management lets you explicitly control undo retention. Through the use of
a system parameter (UNDO_RETENTION), you can specify the amount of committed
undo information to retain in the database. You specify the parameter as clock time (for
example, 30 seconds). With retention control, you can configure your system to enable long
queries to run successfully.
Undo tablespaces are special tablespaces used solely for storing undo information. You
cannot create any other segment types (for example, tables or indexes) in undo tablespaces.
Each database contains zero or more undo tablespaces. In automatic undo management
mode, each Oracle instance is assigned one (and only one) undo tablespace. Undo data is
managed within an undo tablespace using undo segments that are automatically created
and maintained by Oracle.
When the first DML operation is run within a transaction, the transaction is bound
(assigned) to an undo segment (and therefore to a transaction table) in the current undo
tablespace. In rare circumstances, if the instance does not have a designated undo
tablespace, the transaction binds to the system undo segment.
Slide 128

Automatic Undo Management:


Configuration
Configure two parameters in the initialization file:
UNDO_MANAGEMENT
UNDO_TABLESPACE
Create at least one UNDO tablespace.

[Diagram: the initialization file identifies the UNDO tablespace, whose datafile is
undo1db01.dbf.]

127

Automatic Undo Management: Configuration


If only one UNDO tablespace exists in the database and UNDO_MANAGEMENT is set to
AUTO, then the UNDO_TABLESPACE parameter is optional; the Oracle server will
automatically choose the UNDO tablespace.

UNDO_MANAGEMENT specifies which undo space management mode the system should
use. When set to AUTO, the instance starts in automatic undo management mode.
Slide 129

Automatic Undo Management:


Initialization Parameters
UNDO_MANAGEMENT: Specifies whether the system should
use AUTO or MANUAL mode

UNDO_TABLESPACE: Specifies a particular UNDO tablespace


to be used

UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=UNDOTBS

128

Automatic Undo Management: Initialization Parameters


UNDO_MANAGEMENT parameter:
You must include the following initialization parameter if you want to operate your database in
automatic undo management mode:
UNDO_MANAGEMENT=AUTO

UNDO_TABLESPACE parameter:
UNDO_TABLESPACE specifies the undo tablespace to be used when an instance starts up. If
this parameter is specified when the instance is in manual undo management mode, an error
will occur and startup will fail.
If the UNDO_TABLESPACE parameter is omitted, the first available undo tablespace in the
database is chosen. If no undo tablespace is available, the instance will start without an undo
tablespace. In such cases, user transactions will be executed using the SYSTEM rollback
segment. You should avoid running in this mode under normal circumstances.
You can replace an undo tablespace with another undo tablespace while the instance is running.
Specifies the UNDO tablespace to be used. This parameter can be set in the initialization files
or altered dynamically using the ALTER SYSTEM command.
SQL> ALTER SYSTEM SET undo_tablespace = UNDOTBS;
Slide 130

Automatic Undo Management:


UNDO Tablespace

Create the UNDO tablespace with the database by adding a clause in the CREATE DATABASE
command:

CREATE DATABASE db01
. . .
UNDO TABLESPACE undo1
DATAFILE '/u01/oradata/undo1db01.dbf' SIZE 20M
AUTOEXTEND ON

Or create it later by using the CREATE UNDO TABLESPACE command:

CREATE UNDO TABLESPACE undo1
DATAFILE '/u01/oradata/undo1db01.dbf'
SIZE 20M;

129

Automatic Undo Management: UNDO Tablespace

There are two methods of creating an undo tablespace.


The first method creates the undo tablespace when the CREATE DATABASE
statement is issued. This occurs when you are creating a new database, and the instance
is started in automatic undo management mode (UNDO_MANAGEMENT = AUTO).
The second method is used with an existing database. It uses the CREATE UNDO
TABLESPACE statement.

You can create a specific undo tablespace using the UNDO TABLESPACE clause of
the CREATE DATABASE statement. But, this clause is not required. If the UNDO
TABLESPACE clause is not specified and the CREATE DATABASE statement is
executed in automatic undo management mode, a default undo tablespace is
created with the name SYS_UNDOTBS. This tablespace is allocated from the
default set of files used by the CREATE DATABASE statement and its attributes are
determined by Oracle. The initial size is 10M, and it is autoextensible. This method
of creating an undo tablespace is only recommended to users who do not have
any specific requirements for allocation of undo space.
Slide 131

Automatic Undo Management:


Altering an UNDO Tablespace

The ALTER TABLESPACE command can make changes to


UNDO tablespaces.

The following example adds another data file to the UNDO


tablespace:

ALTER TABLESPACE undotbs


ADD DATAFILE '/u01/oradata/undotbs2.dbf'
SIZE 30M
AUTOEXTEND ON;

130

Automatic Undo Management: Altering an UNDO Tablespace


Undo tablespaces are altered using the ALTER TABLESPACE statement. However, since most
aspects of undo tablespaces are system managed, you need only be concerned with the
following actions:
Adding a datafile
Renaming a datafile
Bringing a datafile online or taking it offline
Beginning or ending an open backup on a datafile
These are also the only attributes you are permitted to alter.
If an undo tablespace runs out of space, or you want to prevent it from doing so, you can add
more files to it or resize existing datafiles.
The following example adds another datafile to undo tablespace undotbs_01:
ALTER TABLESPACE undotbs_01
ADD DATAFILE '/u01/oracle/rbdb1/undo0102.dbf'
AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;
You can use the ALTER DATABASE ... DATAFILE statement to resize or extend a datafile.
Slide 132

Automatic Undo Management:


Switching UNDO Tablespaces

You may switch from using one UNDO tablespace to another.


Only one UNDO tablespace can be assigned to an instance at a
time.
More than one UNDO tablespace may exist within a database, but
only one can be active.
Use the ALTER SYSTEM command for dynamic switching between
UNDO tablespaces.

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2;

131

You can switch from using one undo tablespace to another. Because the
UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER
SYSTEM SET statement can be used to assign a new undo tablespace.
The following statement effectively switches to a new undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

Assuming undotbs_01 is the current undo tablespace, after this command


successfully executes, the instance uses undotbs_02 in place of undotbs_01 as its
undo tablespace.

If any of the following conditions exist for the tablespace being switched to, an
error is reported and no switching occurs:
The tablespace does not exist
The tablespace is not an undo tablespace
The tablespace is already being used by another instance

The database is online while the switch operation is performed, and user
transactions can be executed while this command is being executed. When the
switch operation completes successfully, all transactions started after the switch
operation began are assigned to transaction tables in the new undo tablespace.
Slide 133

Automatic Undo Management:


Dropping an UNDO Tablespace

The DROP TABLESPACE command drops an UNDO


tablespace.

An UNDO tablespace can only be dropped if it is


currently not in use by any instance.

DROP TABLESPACE UNDOTBS2;

To drop an active UNDO tablespace:


Switch to a new UNDO tablespace
Drop the tablespace after all current transactions are
complete

132

Automatic Undo Management: Dropping an UNDO Tablespace


Use the DROP TABLESPACE statement to drop an undo tablespace. The following example
drops the undo tablespace undotbs_01:
SQL> DROP TABLESPACE undotbs_01;
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo
tablespace contains any outstanding transactions (for example, a transaction died but has not
yet been recovered), the DROP TABLESPACE statement fails. However, since DROP
TABLESPACE drops an undo tablespace even if it contains unexpired undo information
(within retention period), you must be careful not to drop an undo tablespace if undo
information is needed by some existing queries.
DROP TABLESPACE for undo tablespaces behaves like DROP TABLESPACE ...
INCLUDING CONTENTS. All contents of the undo tablespace are removed.

You can drop an UNDO tablespace after all transactions within the tablespace are complete. To
determine whether any active transactions exist, use the following query:
SQL> SELECT a.name, b.status
  2  FROM v$rollname a, v$rollstat b
  3  WHERE a.name IN ( SELECT segment_name
  4                    FROM dba_segments
  5                    WHERE tablespace_name = 'UNDOTBS' )
  6  AND a.usn = b.usn;

NAME STATUS
-----------------------------------------------
_SYSSMU4$ PENDING OFFLINE
Slide 134

Automatic Undo Management:


Other Parameters

UNDO_SUPPRESS_ERRORS parameter:
Set to TRUE, this parameter suppresses errors while
attempting to execute manual operations in AUTO mode.

UNDO_RETENTION parameter:
This parameter controls the amount of undo data to retain
for consistent read.
By default UNDO_RETENTION is 15 minutes in 10g.

133

Automatic Undo Management: Other Parameters


UNDO_SUPPRESS_ERRORS Parameter
UNDO_SUPPRESS_ERRORS enables users to suppress errors while executing manual undo
management mode operations (for example, ALTER ROLLBACK SEGMENT ONLINE) in
automatic undo management mode. Setting this parameter enables users to use the undo
tablespace feature before all application programs and scripts are converted to automatic undo
management mode. For example, if you have a tool that uses SET TRANSACTION USE
ROLLBACK SEGMENT statement, you can add the statement "ALTER SESSION SET
UNDO_SUPPRESS_ERRORS = true" to the tool to suppress the ORA-30019 error.

If you want to run in automatic undo management mode, ensure that your tools or applications
are updated to run in automatic undo management mode.

UNDO_RETENTION Parameter
Retention is specified in units of seconds, for example 500 seconds. It is persistent and can
survive system crashes. That is, undo generated before an instance crash, is retained until its
retention time has expired even across restarting the instance. When the instance is recovered,
undo information will be retained based on the current setting of the UNDO_RETENTION
initialization parameter.
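For example, to keep roughly 15 minutes of committed undo (the 10g default mentioned above), the parameter can be set dynamically:

```sql
ALTER SYSTEM SET UNDO_RETENTION = 900;
```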
Slide 135

Undo Data Statistics

SELECT end_time,begin_time,undoblks
FROM v$undostat;
END_TIME BEGIN_TIME UNDO
------------------ ------------------ -----
22-JAN-01 13:44:18 22-JAN-01 13:43:04 19
22-JAN-01 13:43:04 22-JAN-01 13:33:04 1474
22-JAN-01 13:33:04 22-JAN-01 13:23:04 1347
22-JAN-01 13:23:04 22-JAN-01 13:13:04 1628
22-JAN-01 13:13:04 22-JAN-01 13:03:04 2249
22-JAN-01 13:03:04 22-JAN-01 12:53:04 1698
22-JAN-01 12:53:04 22-JAN-01 12:43:04 1433
22-JAN-01 12:43:04 22-JAN-01 12:33:04 1532
22-JAN-01 12:33:04 22-JAN-01 12:23:04 1075

134

Undo Data Statistics


V$UNDOSTAT view:
V$UNDOSTAT displays a histogram of statistical data to show how well the system is
working. The available statistics include undo space consumption, transaction concurrency, and
length of queries executed in the instance. You can use this view to estimate the amount of
undo space required for the current workload. Oracle uses this view to tune undo usage in the
system. The view returns null values if the system is in manual undo management mode.

Each row in the view keeps statistics collected in the instance for a 10-minute interval. The
rows are in descending order by the BEGIN_TIME column value. Each row belongs to the
time interval marked by (BEGIN_TIME, END_TIME). Each column represents the data
collected for the particular statistic in that time interval. The first row of the view contains
statistics for the (partial) current time period. The view contains a total of 1008 rows, spanning
a 7 day cycle.
Slide 136

Automatic Undo Management:


Sizing an UNDO Tablespace

Determining a size for the UNDO tablespace requires


three pieces of information:
(UR) UNDO_RETENTION in seconds
(UPS) Number of undo data blocks generated per second
(DBS) Overhead varies based on extent and file size
(db_block_size)

UndoSpace = [UR * (UPS * DBS)] + (DBS * 24)

135

Automatic Undo Management: Sizing an UNDO Tablespace


Sizing an UNDO tablespace requires three pieces of data. Two can be obtained from the
initialization file: UNDO_RETENTION and DB_BLOCK_SIZE. The third piece of the
formula requires a query against the database. The number of undo blocks generated per
second can be acquired from V$UNDOSTAT. The following formula calculates the total
number of blocks generated and divides it by the amount of time monitored, in seconds:
SQL> SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400)
  2  FROM v$undostat;
The END_TIME and BEGIN_TIME columns are DATE data types. When DATE data types are
subtracted, the result is in days. To convert days to seconds, multiply by 86400, the
number of seconds in a day.
The query returns the number of undo blocks generated per second. This value must be
multiplied by the size of an undo block, which is the same size as the database block
defined in DB_BLOCK_SIZE.
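Putting the three pieces together, a query along the following lines (a sketch; it reads UNDO_RETENTION and DB_BLOCK_SIZE from V$PARAMETER) estimates the space needed in bytes:

```sql
SELECT (UR * (UPS * DBS)) + (DBS * 24) AS undo_space_bytes
FROM   (SELECT value AS UR
        FROM   v$parameter
        WHERE  name = 'undo_retention'),
       (SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400) AS UPS
        FROM   v$undostat),
       (SELECT value AS DBS
        FROM   v$parameter
        WHERE  name = 'db_block_size');
```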
Slide 137

Automatic Undo Management:


Undo Quota

Long transactions and improperly written transactions can


consume valuable resources.
With undo quota, users can be grouped and a maximum undo
space limit can be assigned to the group.
UNDO_POOL, a Resource Manager directive, defines the
amount of space allowed for a resource group.
When a group exceeds its limit, no new transactions are
possible for the group, until undo space is freed by current
transactions which are either completing or aborting.

136

Automatic Undo Management: Undo Quota


In automatic undo management mode, the system controls exclusively the assignment of
transactions to undo segments, and controls space allocation for undo segments. An ill-behaved
transaction can potentially consume much of the undo space, thus paralyzing the entire system.
In manual undo management mode, you can control such possibilities by limiting the size of
rollback segments with small MAXEXTENTS values. However, you then have to explicitly
assign long running transactions to larger rollback segments, using the SET TRANSACTION
USE ROLLBACK SEGMENT statement. This approach has proven to be cumbersome.
The Resource Manager directive UNDO_POOL is a more explicit way to control large
transactions. This lets database administrators group users into consumer groups, with each
group assigned a maximum undo space limit. When the total undo space consumed by a group
exceeds the limit, its users cannot make further updates until undo space is freed up by other
member transactions ending.
The default value of UNDO_POOL is UNLIMITED, where users are allowed to consume as
much undo space as the undo tablespace has. Database administrators can limit a particular
user by using the UNDO_POOL directive.

ORA-30027: "Undo quota violation - failed to get %s (bytes)"


Cause: The amount of undo assigned to the consumer group of this session
has been exceeded.
Action: Ask DBA to increase undo quota, or wait until other
transactions commit before proceeding.
Slide 138

10g New Feature : Flashback Query


SELECT *
FROM emp
AS OF TIMESTAMP(
TO_TIMESTAMP('03-DEC-08 02:00:00 PM','DD-MON-YY HH:MI:SS PM'));

This simple query displays rows from the emp table as of


the specified Timestamp.

137
Slide 139

10g New Feature : Flashback Versions Query


select employee_id,LAST_NAME,SALARY from hr.employees
versions between timestamp (to_timestamp('26-FEB-2008
12:22:00 PM','DD-MON-YYYY HH:MI:SSXFF PM'))
and (to_timestamp('26-FEB-2008 12:46:00 PM','DD-MON-YYYY
HH:MI:SSXFF PM'))
where last_name='Fox'

EMPLOYEE_ID LAST_NAME SALARY


----------- ---------- ------
101 FOX 12345
101 FOX 1234
101 FOX 123
101 FOX 12
101 FOX 1

138
Slide 140

10g New Feature : Flashback Versions Query


Flashback Versions Query giving Transaction Id and
SCNs
select versions_xid xid,versions_startscn startscn,
versions_endscn endscn,
employee_id,LAST_NAME,SALARY
from hr.employees
versions between timestamp (to_timestamp('25-FEB-2008
07:15:00 PM','DD-MON-YYYY HH:MI:SSXFF PM'))
and (to_timestamp('25-FEB-2008 07:20:00 PM','DD-MON-YYYY
HH:MI:SSXFF PM'))
where last_name='Fox'

139
Slide 141

10g New Feature : Flashback Transactions Query


It is used to recover from logical data corruption caused by a
transaction.

Use the data dictionary view
flashback_transaction_query

select * from flashback_transaction_query where
logon_user='HR';

Flashback Transaction query also displays the


UNDO_SQL column which can be used to undo the
complete transaction.

140
Slide 142

10g New Feature : Recycle bin (Flashback drop)


Restoring the dropped table
sql> drop table test;

Use DBA_RECYCLEBIN data dictionary to view dropped objects.

sql> select owner,original_name,object_name,ts_name,droptime


from dba_recyclebin;

OWNER  ORIGINAL_NAME  OBJECT_NAME                     TS_NAME  DROPTIME
-----  -------------  ------------------------------  -------  -------------------
HR     EMP            BIN$WJFel8jkcPLgRAADuimKdQ==$0  APPS     2008-10-06:12:39:52

Restoring dropped table

sql> FLASHBACK TABLE test to before drop;

141
Slide 143

10g New Feature : Recycle bin (Flashback drop)


sql> FLASHBACK TABLE BIN$WJFel8jkcPLgRAADuimKdQ==$0 TO
BEFORE DROP;

Rename while restoring

sql> FLASHBACK TABLE BIN$WJFel8jkcPLgRAADuimKdQ==$0 TO


BEFORE DROP rename to new_employees;

To permanently delete the table use PURGE clause.

sql> drop table test purge;

RECYCLEBIN=ON/OFF initialization parameter controls flashback drop capability.
( By default RECYCLEBIN=ON)
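One possible way to toggle and empty the recycle bin (a sketch; the exact scope at which the parameter can be set varies by release):

```sql
ALTER SESSION SET recyclebin = OFF;  -- stop keeping dropped tables
ALTER SESSION SET recyclebin = ON;

PURGE RECYCLEBIN;                    -- empty the current user's recycle bin
PURGE DBA_RECYCLEBIN;                -- as SYSDBA: empty every user's recycle bin
PURGE TABLE test;                    -- remove one object from the bin permanently
```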

142
Slide 144

10g New Feature : Guaranteed Undo Retention


When you create the undo tablespace, you can now specify an undo retention guarantee.
CREATE UNDO TABLESPACE UNDO_TS1
DATAFILE '/u01/oradata/proddb01/undo_ts1_01.dbf' SIZE 1024M
RETENTION GUARANTEE;

ALTER TABLESPACE UNDO_TS2 RETENTION GUARANTEE;

If your undo tablespace is too small to accommodate all the active transactions that
are using it, the following will happen:
Oracle will issue an automatic tablespace warning alert when the undo tablespace is
85 percent full (if you haven't disabled the automatic tablespace alert feature
(DBMS_SERVER_ALERT package)).
Oracle will also issue an automatic tablespace critical alert when the undo tablespace
is 97 percent full.
All DML statements will be disallowed and will receive an out-of-space error.
DDL statements will continue to be allowed.
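To check whether a guarantee is in force, the RETENTION column of DBA_TABLESPACES can be queried, for example:

```sql
SELECT tablespace_name, retention      -- GUARANTEE / NOGUARANTEE
FROM   dba_tablespaces
WHERE  contents = 'UNDO';

ALTER TABLESPACE undo_ts2 RETENTION NOGUARANTEE;  -- switch the guarantee back off
```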

143
Slide 145

10g New Feature : Flashback Database


You can use Flashback Database in the following situations

To retrieve a dropped schema


When a user error affects the entire database
When you truncate a table in error
When a batch job performs only partial changes

Flashback Database feature uses Flashback database logs which are stored in
new flash recovery area.

Database should be in archivelog mode.

Only used for logical error recovery and not for media recovery.

144
Slide 146

10g New Feature : Flashback Database


When Flashback Database logs are enabled, a new background
process RVWR ( Recovery Writer) is also enabled.
RVWR writes the before-image of each altered block to the
flashback database logs.
Oracle doesn't guarantee that you can flash back your database as
far as the time set in DB_FLASHBACK_RETENTION_TARGET.
Priority is given to archive log files and backup files.

145
Slide 147

10g New Feature : Flashback Database


Enabling Flashback Database

sql> ARCHIVE LOG LIST;


sql> ALTER SYSTEM SET
DB_FLASHBACK_RETENTION_TARGET=1440;
[ 1440 minutes = 1 Day ]

sql> SHUTDOWN IMMEDIATE;


sql> STARTUP MOUNT;
sql> ALTER DATABASE FLASHBACK ON;
sql> ALTER DATABASE OPEN;

sql> SELECT FLASHBACK_ON FROM V$DATABASE;

FLASHBACK_ON
------------------
YES

146
Slide 148

10g New Feature : Flashback Database


Using Flashback Database
sql> SELECT CURRENT_SCN FROM V$DATABASE;
CURRENT_SCN
---------------
3467898

sql> SELECT
OLDEST_FLASHBACK_SCN,OLDEST_FLASHBACK_TIME
FROM V$FLASHBACK_DATABASE_LOG;
sql> SHUTDOWN IMMEDIATE;
sql> STARTUP MOUNT;
sql> FLASHBACK DATABASE TO SCN 3467898;
sql> ALTER DATABASE OPEN RESETLOGS;

147
Slide 149

Restore Points
create restore point P1;

create restore point P2 guarantee flashback database;

SQL> select * from v$restore_point;

When you want to flashback the database to that restore point, you
could simply issue:

flashback database to restore point P2;

If you examine the alert log, it will show a line similar to:
Media Recovery Applied UNTIL CHANGE 1429814

148
Slide 150

Rollback Monitoring
sql> select * from v$session_longops where sid = 9;
SID : 9
SERIAL# : 68
OPNAME : Transaction Rollback
TARGET :
TARGET_DESC : xid:0x000e.01c.00000067
SOFAR : 10234
TOTALWORK : 20554
UNITS : Blocks
START_TIME : 07-dec-2003 21:20:07
LAST_UPDATE_TIME : 07-dec-2003 21:21:24
TIME_REMAINING : 77
ELAPSED_SECONDS : 77
CONTEXT : 0
MESSAGE : Transaction Rollback: xid:0x000e.01c.00000067 :
10234 out of 20554 Blocks done
USERNAME : SYS
SQL_ADDRESS : 00000003B719ED08
SQL_HASH_VALUE : 1430203031
SQL_ID : 306w9c5amyanr
QCSID : 0

149
Slide 151

Summary

In this lesson, you should have learned how to:


Configure Automatic Undo Management
Create an UNDO tablespace
Properly size an UNDO tablespace

There are two methods for managing undo data: Automatic Undo Management and
Manual Undo Management
The main purpose of undo segments are Transaction Rollback, Transaction
Recovery, Read Consistency
Types of undo segment: System and Non-System
Automatic Undo Management: Undo data is managed using an UNDO tablespace.
UNDO_TABLESPACE and UNDO_MANAGEMENT initialization parameters have to be
set.
At least one UNDO tablespace must be created
Only one UNDO tablespace can be assigned to a database at a time.
More than one UNDO tablespace may exist within an instance, but only one can be
active.
To drop an active UNDO tablespace: 1. Switch to a new UNDO tablespace 2.Drop the
tablespace after all current transactions are complete
Use the ALTER SYSTEM SET command to switch undo tablespaces dynamically

150

Practice Overview

Note: Practice can be accomplished using SQL*Plus or using Oracle Enterprise Manager
and SQL*Plus Worksheet.
Slide 152

Lab
1. Create a new undo tablespace named UNDO02. Make it the default undo tablespace.
Keep the autoextend clause off.
2. Drop the original undo tablespace.
3. Check the current value of undo retention parameter. Increase the undo_retention parameter
to 3 Hours.
4. Login to HR schema. Create a table similar to employees table and name it as test_employees.
5. Note down the current timestamp ( T1) of the database.
6. Insert 3 new rows in the test_employees table. Commit the data.
7. Use the flashback query to view the contents of the table as they were at timestamp T1.
8. Update any row and use the flashback versions query to view versions of that updated row.
9. Use flashback transaction query. Observe the XID,START_SCN,UNDO_SQL column. [ Hint: Use
flashback_transaction_query view.]
10. Log in to the HR schema and create a test table similar to employees. Drop the table. Log in as sysdba
user through other session and query the dba_recyclebin dictionary. Observe the
OBJECT_NAME AND ORIGINAL_NAME columns. Restore the dropped table to HR schema. From
HR sqlplus session verify that the table is restored.
11. Query a data dictionary to check whether the flashback is on or not.[Hint: Use v$database]
12. Ensure that the database is in archive log mode.
13. Enable the flashback database. Verify the flashback database is enabled.
14. Create a new test table similar to employees table in HR schema. Note down the current_scn
from v$database.

151
Slide 153

Lab
15. Insert few new rows in the test table and commit. Note down the current_scn from
v$database.
16. Flashback the database to the SCN noted down in step 14. Open the database with resetlogs
option.
17. Check whether the rows inserted in test table in Step 15 exist or not.
18. Query the V$FLASHBACK_DATABASE_LOG. Observe the OLDEST_FLASHBACK_SCN column.

152
Slide 154

Database Audit Concepts

Database Audit Concepts

153
Slide 155

Auditing Privileges
Use of privileges can be audited, for example

sql> audit create any trigger;

sql> audit select any table by session;

sql> audit select any table by access;

By default, auditing will generate an audit record for every occurrence of the event. This is
equivalent to appending BY ACCESS to the AUDIT command.
Appending the keywords BY SESSION to the AUDIT command will
restrict the audit output to only one record per logon, no matter how
often the audit condition is met.

154

Auditing Options

Statement auditing:
Statement auditing is the selective auditing of related groups of statements that fall into two
categories:
DDL statements, regarding a particular type of database structure or schema object,
but not a specifically named structure or schema object (for example, AUDIT
TABLE audits all CREATE and DROP TABLE statements)
DML statements, regarding a particular type of database structure or schema object,
but not a specifically named structure or schema object (for example, AUDIT
SELECT TABLE audits all SELECT ... FROM TABLE/VIEW statements,
regardless of the table or view)

Statement auditing can be broad or focused, auditing the activities of all database users or the
activities of only a select list of database users.
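For instance, statement auditing could be enabled like this (a sketch; HR is an assumed username):

```sql
AUDIT TABLE;                -- all CREATE TABLE / DROP TABLE / TRUNCATE TABLE statements
AUDIT SELECT TABLE BY hr;   -- SELECT ... FROM any table, but only when issued by user HR
NOAUDIT TABLE;              -- switch statement auditing off again
```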
Slide 156

Auditing Objects
Use of objects can be audited, for example

sql> audit insert on hr.emp whenever successful;

sql> audit all on hr.emp;

The first of these examples will generate audit records whenever a row is inserted in the
named table.
The WHENEVER SUCCESSFUL keywords restrict audit records to
those where the operation succeeded; the alternative syntax is
WHENEVER NOT SUCCESSFUL.
By default, all operations (successful or not) are audited.
The second example will audit all DML statements executed against
the named table.

155

Slide 157

Auditing Sessions
Connections to the database can be audited, for example
sql> audit session whenever not successful;

Session auditing records each connection to the database.
The NOT SUCCESSFUL keywords restrict the output to
only failed attempts.
This can be particularly useful: recording failures will indicate whether attempts are being
made to break into the database.

156

Slide 158

10g New Features


New Columns added to DBA_AUDIT_TRAIL
EXTENDED_TIMESTAMP : This column records the timestamp of the audit
record in the TIMESTAMP format, which records time in Greenwich Mean
Time with seconds up to 9 places after the decimal point and with the Time
Zone information.
2004-3-13 18.10.13.123456000 -5:0
GLOBAL_UID and PROXY_SESSIONID : When an identity management
component such as Oracle Internet Directory is used for authentication.
INSTANCE_NUMBER : In an Oracle Real Application Clusters (RAC)
environment, it might be helpful to know to which specific instance the user
was connected while making the changes.
OS_PROCESS : In Oracle9i and below, only the SID values are recorded in
the audit trail; not the operating system process id. However, the OS
process id of server process may be necessary later in order to cross-
reference a trace file. In 10g, this value is also recorded in this column.
TRANSACTIONID: Transaction id is recorded. Used perform join with
FLASHBACK_TRANSACTION_QUERY

157
Slide 159

10g New Features


Extended DB Auditing
audit_trail = db_extended
This parameter enables recording of the SQL text and the values of bind variables (if used)
in the SQL_TEXT and SQL_BIND columns of the audit trail. This value was not available in
earlier versions.
Uniform Audit Trail
Because FGA and standard auditing capture similar types of
information, they provide a lot of significant information when used
together. Oracle Database 10g combines the trails to a common trail
known as DBA_COMMON_AUDIT_TRAIL, which is a UNION ALL
view of the views
DBA_AUDIT_TRAIL and DBA_FGA_AUDIT_TRAIL.
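A sketch of reading the combined trail (column names as documented for 10g):

```sql
-- Standard and fine-grained audit records in one view.
SELECT audit_type, db_user, object_name, extended_timestamp
FROM   dba_common_audit_trail
WHERE  db_user = 'HR'
ORDER  BY extended_timestamp;
```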

158
Slide 160

Fine-Grained Auditing


Database auditing can capture all accesses to a table,
whether SELECT or DML operations.
But it cannot distinguish between rows, even though it
might well be that only some rows contain sensitive
information.
FGA is configured with the package DBMS_FGA.
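As a minimal sketch (the schema, table, and policy names are assumptions), a policy that records only reads of sensitive rows might look like:

```sql
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'SAL_ACCESS',       -- hypothetical policy name
    audit_condition => 'salary > 10000',   -- only rows meeting this fire the audit
    audit_column    => 'SALARY',           -- and only if this column is read
    statement_types => 'SELECT');
END;
/
```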

159
Slide 161

Obtaining Audit Records Information


Information about auditing records can be obtained
by querying the following views:
DBA_AUDIT_TRAIL
DBA_AUDIT_EXISTS
DBA_AUDIT_OBJECT
DBA_AUDIT_SESSION
DBA_AUDIT_STATEMENT

160

Obtaining Audit Records Information

Listing audit records:


The database audit trail (SYS.AUD$) is a single table in each Oracle database's data dictionary.
Several predefined views are available. Some of the views are listed in the slide. These views
are created by the DBA.

Data Dictionary View Description
-------------------- -------------------------------
DBA_AUDIT_TRAIL All audit trail entries
DBA_AUDIT_EXISTS Records for AUDIT EXISTS/NOT EXISTS
DBA_AUDIT_OBJECT Records concerning schema objects
DBA_AUDIT_SESSION All connect and disconnect entries
DBA_AUDIT_STATEMENT Statement auditing records
Slide 162

Summary

In this lesson, you should have learned how to:


Outline auditing needs
Enable and disable auditing
Identify and use the various auditing options

Auditing is the monitoring and recording of selected user database actions, and is used to
investigate suspicious database activity and gather information about specific database
activities.
Auditing can be by session or access.
Auditing Categories are Audited by default, Database auditing and
Value-based or application auditing
Auditing Options are Statement auditing, Privilege auditing, Schema
object auditing and Fine-grained auditing

161
Slide 163

Lab
Connect to your database as SYSDBA using SQL*Plus.
Set the AUDIT_TRAIL instance parameter to enable auditing to the data dictionary. As this is a
static parameter, you must use the SCOPE clause and restart the instance.
sql> conn sys/password@sid as sysdba
sql> alter system set audit_trail=db scope=spfile;
sql> startup force;
Connect to your database as user SYSTEM using SQL*Plus.
Create a table and insert some rows as follows:
sql> create table audit_test(name varchar2(10),salary number);
sql> insert into audit_test values('McGraw',100);
sql> insert into audit_test values('Hill',200);
Enable database auditing of access to the table.
sql> audit select, update on system.audit_test;
Execute some statements against the table.
sql> select * from audit_test;
sql> update audit_test set salary=50 where name='McGraw';
Query the DBA_AUDIT_TRAIL view to see the results of the auditing.
sql> select username, userhost, os_username, ses_actions, obj_name from dba_audit_trail;

162
Slide 164

Lab
Create an FGA policy to capture all SELECTs against the AUDIT_TEST table that read the
SALARY column, if the salary retrieved is greater than 100, with this procedure call:
sql> exec dbms_fga.add_policy(-
> object_schema=>'system',-
> object_name=>'audit_test',-
> policy_name=>'high_sal',-
> audit_condition=>'salary > 100',-
> audit_column=>'salary',-
> statement_types=>'select');
Run some queries against the table.
ocp10g> select * from audit_test;
ocp10g> select salary from audit_test where name='Hill';
ocp10g> select salary from audit_test where name='McGraw';
ocp10g> select name from audit_test;

163
Slide 165

Lab
Query the fine-grained audit trail.
sql> select os_user,db_user,sql_text from dba_fga_audit_trail;
OS_USER DB_USER SQL_TEXT
----------------------- -------------- -------------------------------------
ORA10G\Guest SYSTEM select * from audit_test
ORA10G\Guest SYSTEM select salary from audit_test where name='Hill'
Note that only the first and second queries from previous step generated audit records, and that
the actual statement used can be retrieved.
Tidy up by canceling the database auditing, dropping the FGA policy, and dropping the table.
sql> noaudit select,update on system.audit_test;
sql> exec dbms_fga.drop_policy(object_schema=>'system', object_name=>'audit_test', policy_name=>'high_sal');
sql> drop table audit_test;

164
Slide 166

Managing Password Security


Slide 167

Objectives
After completing this lesson, you should be able to do
the following:
Manage passwords using profiles
Administer profiles
Control use of resources using profiles
Obtain information about profiles, password
management, and resources

166
Slide 168

Profiles
A profile is a named set of password and resource
limits.
Profiles are assigned to users by the CREATE USER
or ALTER USER command.
Profiles can be enabled or disabled.
Profiles can relate to the DEFAULT profile.

167

Profiles
A profile is a named set of resource limits. A user's profile limits database usage and instance
resources as defined in the profile. You can assign a profile to each user, and a default profile to
all users who do not have specific profiles. For profiles to take effect, resource limits must be
turned on for the database as a whole.

After a profile has been created, the database administrator can assign it to each user. If
resource limits are enabled, the Oracle server limits the database usage and resources to those
defined in the user's profile.

The Oracle server automatically creates a DEFAULT profile when the database is created.

The users who have not been explicitly assigned a specific profile conform to all the limits of
the DEFAULT profile. All limits of the DEFAULT profile are initially unlimited. However, the
database administrator can change the values so that limits are applied to all users by default.
Slide 169

Password Management

Setting up user profiles controls:
Account locking
Password history
Password expiration and aging
Password verification
168

Password Management
For greater control over database security, Oracle password management is controlled by
database administrators with profiles.

This lesson describes the available password management features:


Account locking: Enables automatic locking of an account when a user fails to log in to the
system in the specified number of attempts
Password aging and expiration: Enables the password to have a lifetime, after which it
expires and must be changed
Password history: Checks the new password to ensure that the password is not reused for a
specified amount of time or a specified number of password changes
Password complexity verification: Performs a complexity check on the password to verify
that it is complex enough to provide protection against intruders who might try to break
into the system by guessing the password
Slide 170

Enabling Password Management

Set up password management by using profiles and assigning them to users.
Lock, unlock, and expire accounts using the CREATE
USER or ALTER USER command.
Password limits are always enforced.
To enable password management, run the
utlpwdmg.sql script as the user SYS.

169

Enabling Password Management

Create the profile to limit password settings, and assign the profile to the user by using the
CREATE USER or ALTER USER command.

Password limit settings in profiles are always enforced.

When password management is enabled, the user account can be locked or unlocked by using
the CREATE USER or ALTER USER command.
Slide 171

Password Account Locking

Parameter Description
FAILED_LOGIN_ATTEMPTS Number of failed login attempts
before lockout of the account

PASSWORD_LOCK_TIME Number of days the account is locked after the
specified number of failed login attempts

170

Password Account Locking


Oracle can lock a user's account if the user fails to login to the system within a specified
number of attempts. Depending on how the account is configured, it can be unlocked
automatically after a specified time interval or it must be unlocked by the database
administrator.

The CREATE PROFILE statement configures the number of failed logins a user can attempt
and the amount of time the account remains locked before automatic unlock.

The database administrator can also lock accounts manually. When this occurs, the account
cannot be unlocked automatically but must be unlocked explicitly by the database
administrator.
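Manual locking and unlocking is done with ALTER USER, for example (jeff is an assumed username):

```sql
ALTER USER jeff ACCOUNT LOCK;     -- cannot be unlocked automatically
ALTER USER jeff ACCOUNT UNLOCK;   -- the DBA releases the lock explicitly
```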
Slide 172

Password Expiration and Aging

Parameter Description
PASSWORD_LIFE_TIME Lifetime of the password in days
after which the password expires

PASSWORD_GRACE_TIME Grace period in days for changing the password
after the first successful login after the password
has expired

171

Password Expiration and Aging


Password lifetime and expiration options allow the database administrator to specify a lifetime
for passwords, after which time they expire and must be changed before a login to the account
can be completed. On first attempt to login to the database account after the password expires,
the user's account enters the grace period, and a warning message is issued to the user every
time the user tries to login until the grace period is over.

The user is expected to change the password within the grace period. If the password is not
changed within the grace period, the account is locked and no further logins to that account are
allowed without assistance by the database administrator.

The database administrator can also set the password state to expired. When this happens, the
user's account status is changed to expired, and the user or the database administrator must
change the password before the user can log in to the database.
Slide 173

Password History

Parameter Description
PASSWORD_REUSE_TIME Number of days before a
password can be reused

PASSWORD_REUSE_MAX Number of password changes required before a
password can be reused

172

Password History
The password history option checks each newly specified password to ensure that a
password is not reused for the specified amount of time or for the specified number of
password changes. The database administrator can configure the rules for password reuse
with CREATE PROFILE statements.
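For example (the values are illustrative), reuse could be blocked for 90 days and 5 intervening changes. Note that if either parameter is left UNLIMITED while the other has a value, the password can never be reused:

```sql
ALTER PROFILE default LIMIT
  PASSWORD_REUSE_TIME 90   -- days that must pass before a password can be reused
  PASSWORD_REUSE_MAX  5;   -- password changes required before reuse
```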
Slide 174

Password Verification

Parameter Description
PASSWORD_VERIFY_FUNCTION PL/SQL function that performs a
password complexity check
before a password is assigned

173

Password Verification
Complexity verification checks that each password is complex enough to provide
reasonable protection against intruders who try to break into the system by guessing
passwords.
The Oracle default password complexity verification routine requires that each password:
Be a minimum of four characters in length
Not equal the userid
Include at least one alphabetic character, one numeric character, and one punctuation mark
Not match any word on an internal list of simple words like welcome, account, database,
user, and so on
Differ from the previous password by at least three characters
Slide 175

User-Provided Password Function

This function must be created in the SYS schema and must have the following specification:

function_name(
userid_parameter IN VARCHAR2(30),
password_parameter IN VARCHAR2(30),
old_password_parameter IN VARCHAR2(30))
RETURN BOOLEAN
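A minimal sketch of a function matching this specification, created in the SYS schema (the checks shown are illustrative, not the full utlpwdmg.sql logic):

```sql
CREATE OR REPLACE FUNCTION my_verify_function (
  userid_parameter       IN VARCHAR2,
  password_parameter     IN VARCHAR2,
  old_password_parameter IN VARCHAR2)
RETURN BOOLEAN IS
BEGIN
  IF LENGTH(password_parameter) < 4 THEN
    raise_application_error(-20001, 'Password too short');
  END IF;
  IF UPPER(password_parameter) = UPPER(userid_parameter) THEN
    raise_application_error(-20002, 'Password may not equal username');
  END IF;
  RETURN TRUE;   -- all checks passed
END;
/
-- Attach it to a profile:
ALTER PROFILE default LIMIT PASSWORD_VERIFY_FUNCTION my_verify_function;
```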

174

User-Provided Password Function


When a new password verification function is added, the database administrator must consider
the following restrictions:

The procedure must use the specification indicated in the slide.


The procedure returns the value TRUE for success and FALSE for failure.
If the password function raises an exception, then an error is returned and the ALTER USER
or CREATE USER command is terminated.
The password function is owned by SYS.
If the password function becomes invalid, then an error message is returned and the ALTER
USER or CREATE USER command is terminated.
Slide 176

Password Verification Function


VERIFY_FUNCTION
Minimum length is four characters.
Password should not be equal to username.
Password should have at least one alphabetic, one
numeric, and one special character.
Password should differ from the previous password
by at least three letters.

175

Password Verification Function


The Oracle server provides a complexity verification function, in the form of a default PL/SQL
function called VERIFY_FUNCTION of the utlpwdmg.sql script, which must be run in the
SYS schema.
During the execution of the utlpwdmg.sql script, the Oracle server creates
VERIFY_FUNCTION and changes the DEFAULT profile with the following ALTER
PROFILE command:
SQL> ALTER PROFILE DEFAULT LIMIT
2 PASSWORD_LIFE_TIME 60
3 PASSWORD_GRACE_TIME 10
4 PASSWORD_REUSE_TIME 1800
5 PASSWORD_REUSE_MAX UNLIMITED
6 FAILED_LOGIN_ATTEMPTS 3
7 PASSWORD_LOCK_TIME 1/1440
8 PASSWORD_VERIFY_FUNCTION verify_function;
Slide 177

Creating a Profile:

Password Settings

CREATE PROFILE grace_5 LIMIT
FAILED_LOGIN_ATTEMPTS 3
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME UNLIMITED
PASSWORD_LIFE_TIME 30
PASSWORD_REUSE_TIME 30
PASSWORD_VERIFY_FUNCTION verify_function
PASSWORD_GRACE_TIME 5;

176
Slide 178

Altering a Profile: Password Setting

Use ALTER PROFILE to change password limits

ALTER PROFILE default LIMIT
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LIFE_TIME 60
PASSWORD_GRACE_TIME 10;

177

Altering a Profile
Use the ALTER PROFILE command to change the password limits assigned to a profile:
ALTER PROFILE profile LIMIT
[FAILED_LOGIN_ATTEMPTS max_value]
[PASSWORD_LIFE_TIME max_value]
[ {PASSWORD_REUSE_TIME
|PASSWORD_REUSE_MAX} max_value]
[PASSWORD_LOCK_TIME max_value]
[PASSWORD_GRACE_TIME max_value]
[PASSWORD_VERIFY_FUNCTION
{function|NULL|DEFAULT} ]
If you want to set the password parameters to less than a day:
1 hour: PASSWORD_LOCK_TIME = 1/24
10 minutes: PASSWORD_LOCK_TIME = 10/1440
5 minutes: PASSWORD_LOCK_TIME = 5/1440
Slide 179

Dropping a Profile: Password Setting

Drop the profile using the DROP PROFILE command.
DEFAULT profile cannot be dropped.
CASCADE revokes the profile from users to whom it is assigned.

DROP PROFILE developer_prof;

DROP PROFILE developer_prof CASCADE;

178

Dropping a Profile: Password Setting


Drop a profile using the DROP PROFILE command:
DROP PROFILE profile [CASCADE]
where:
profile: Is the name of the profile to be dropped
CASCADE: Revokes the profile from users to whom it is assigned (The Oracle server
automatically assigns the DEFAULT profile to such users. Specify this option to drop a profile
that is currently assigned to users.)

Guidelines:
The DEFAULT profile cannot be dropped.
When a profile is dropped, this change applies only to sessions that are created subsequently
and not to the current sessions.
Slide 180

Summary
In this lesson, you should have learned how to:
Administer passwords
Administer profiles
Key points:
A profile is a named set of password and resource limits.
Profiles are assigned to users by the CREATE USER or ALTER USER
command.
Password management features are Password Account Locking, Password
Expiration and Aging, Password History and Password Verification
Limit usage of resources with profiles
Enable resource limits with the:
RESOURCE_LIMIT initialization parameter
ALTER SYSTEM command
Resource Management: Manage resources using Database Resource
Manager
It provides the Oracle server with more control over resource management
decisions
The Resource Plan Directives provides several means of allocating
resources like CPU method, Active session pool and queuing, Degree of
parallelism limit, Automatic consumer group switching, Maximum estimated
execution time, Undo quota

179

Practice Overview
Note: Practice can be accomplished using SQL*Plus or using Oracle Enterprise Manager
and SQL*Plus Worksheet.
Slide 181

Lab
1. Create user Jeff with password pass and default tablespace users. Enable password management
by running script @$ORACLE_HOME/rdbms/admin/utlpwdmg.sql.
2. Try to change the password of user Jeff to Jeff. What happens? Try changing the password for
Jeff to follow the password management format. [Hint: Password should contain at least one digit,
one character, and one punctuation. ]
3. Alter the DEFAULT profile to ensure the following applies to users assigned the DEFAULT profile:
After two login attempts, the account should be locked. The password should expire after 30 days.
The same password should not be reused for at least one minute. The account should have a
grace period of five days to change an expired password. Ensure that the requirements given
have been implemented. [Hints: Use the ALTER PROFILE command to change the default profile
limits. Query the data dictionary view DBA_PROFILES to verify the result.]
4. Log in to user Jeff supplying an invalid password. Try this twice, then log in again, this time
supplying the correct password. What happens? Why?
5. Using data dictionary view DBA_USERS verify user Jeff is locked. Unlock the account for the user
Jeff. After unlocking user Jeff connect as Jeff. [Hint: Execute the ALTER USER command to
unlock the account.]
6. Disable password checks for the DEFAULT profile. [Hint: Execute the ALTER PROFILE
command to disable the password checks.]
7. Log in to user Jeff supplying an invalid password. Try this twice, then log in again, this time
supplying the correct password. What happens? Why?

180
Slide 182

Managing Users
Slide 183

Objectives
After completing this lesson, you should be able to do
the following:
Create new database users
Alter and drop existing database users
Monitor information about existing users
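For instance, existing accounts can be inspected through the DBA_USERS dictionary view:

```sql
SELECT username, account_status, default_tablespace,
       temporary_tablespace, created
FROM   dba_users
ORDER  BY username;
```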

182
Slide 184

Users and Security

A user's security domain includes:
Account locking
Authentication mechanism
Default tablespace
Temporary tablespace
Tablespace quotas
Role privileges
Direct privileges
Resource limits

183

Users and Security


Security domain:
The database administrator defines the names of the users who are allowed to access a
database. A security domain defines the settings that apply to the user.

Authentication mechanism:
A user who requires access to the database can be authenticated by one of the following:
Data dictionary
Operating system
Network

The means of authentication is specified at the time the user is defined in the database and can
be altered later. This lesson covers authentication by database and by operating system only.
Slide 185

Database Schema

A schema is a named collection of objects.
A user is created, and a corresponding schema is created.
A user can be associated only with one schema.
Username and schema are often used interchangeably.

Schema Objects:
Tables
Triggers
Constraints
Indexes
Views
Sequences
Stored program units
Synonyms
User-defined data types
Database links

184

Database Schema
A schema is a collection of logical structures of data, or schema objects. A schema is owned
by a database user and has the same name as that user. Each user owns a single schema.
Schema objects can be created and manipulated with SQL and include the following types
of objects:
Clusters
Database links
Database triggers
Dimensions
External procedure libraries
Indexes and index types
Java classes, Java resources, and Java sources
Materialized views and materialized view logs
Object tables, object types, and object views
Operators
Sequences
Stored functions, procedures, and packages
Synonyms
Tables and index-organized tables
Views
Slide 186

Checklist for Creating Users

Identify tablespaces in which the user needs to store objects.
Decide on quotas for each tablespace.
Assign a default tablespace and temporary
tablespace.
Create a user.
Grant privileges and roles to the user.

185
Slide 187

Creating a New User: Database Authentication
Set the initial password:

CREATE USER aaron
IDENTIFIED BY soccer
DEFAULT TABLESPACE data
TEMPORARY TABLESPACE temp
QUOTA 15M ON data
QUOTA 10M ON users
PASSWORD EXPIRE;

186

Creating a New User: Database Authentication


If you choose database authentication for a user, administration of the user account
including authentication of that user is performed entirely by Oracle. To have Oracle
authenticate a user, specify a password for the user when you create or alter the user. Users
can change their password at any time. Passwords are stored in an encrypted format. Each
password must be made up of single-byte characters, even if your database uses a
multibyte character set.

To enhance security when using database authentication, Oracle recommends the use of
password management, including account locking, password aging and expiration,
password history, and password complexity verification.

Use the CREATE USER statement to create and configure a database user, or an account
through which you can log in to the database and establish the means by which Oracle
permits access by the user.
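
The password-management features mentioned above are applied with ALTER USER (or, more systematically, through profiles, which are covered in the password security chapter). A brief sketch, reusing the user aaron from the slide example:

```sql
ALTER USER aaron ACCOUNT LOCK;          -- lock the account
ALTER USER aaron ACCOUNT UNLOCK;        -- unlock it again
ALTER USER aaron PASSWORD EXPIRE;       -- force a password change at next logon
ALTER USER aaron IDENTIFIED BY newpwd;  -- administratively reset the password
```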
Slide 188

Creating a New User: Operating System Authentication

The OS_AUTHENT_PREFIX initialization parameter specifies the format of the usernames.
Defaults to OPS$.

CREATE USER aaron
IDENTIFIED EXTERNALLY
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp
QUOTA 15M ON data;


Creating a New User: Operating System Authentication


Operating system authentication:
By default, Oracle only allows operating system authenticated logins over secure connections.
Therefore, if you want the operating system to authenticate a user, by default that user cannot
connect to the database over Oracle Net. This means the user cannot connect using a shared
server configuration, since this connection uses Oracle Net. This default restriction prevents a
remote user from impersonating another operating system user over a network connection.
If you are not concerned about remote users impersonating another operating system user over
a network connection, and you want to use operating system user authentication with network
clients, set the initialization parameter REMOTE_OS_AUTHENT (default is FALSE) to
TRUE in the database's initialization parameter file. Setting the initialization parameter
REMOTE_OS_AUTHENT to TRUE allows the RDBMS to accept the client operating system
user name received over a nonsecure connection and use it for account access. The change takes
effect the next time you start the instance and mount the database.
Generally, user authentication through the host operating system offers the following benefits:
Users can connect to Oracle faster and more conveniently without specifying a separate
database user name or password.
User entries in the database and operating system audit trails correspond.
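
Putting the pieces together, a minimal sketch, assuming the default OPS$ prefix and an operating system account named aaron:

```sql
-- In the initialization parameter file (defaults shown):
--   os_authent_prefix = OPS$
--   remote_os_authent = FALSE

-- The database username is the prefix plus the OS username.
CREATE USER ops$aaron
  IDENTIFIED EXTERNALLY
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;

GRANT CREATE SESSION TO ops$aaron;

-- The OS user aaron can now connect without supplying a password:
--   $ sqlplus /
```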
Slide 189

Changing User Quota on Tablespaces

A user's tablespace quotas may be modified for any of the following situations:
Tables owned by a user exhibit unanticipated growth.
An application is enhanced and requires additional tables or indexes.
Objects are reorganized and placed in different tablespaces.

To modify a user's tablespace quota:

ALTER USER aaron
QUOTA 0 ON USERS;


Changing User Quota on Tablespaces


You can assign each user a tablespace quota for any tablespace (except a temporary
tablespace). Assigning a quota does two things:
Users with privileges to create certain types of objects can create those objects in the
specified tablespace.
Oracle limits the amount of space that can be allocated for storage of a user's objects within
the specified tablespace to the amount of the quota.

By default, a user has no quota on any tablespace in the database. If the user has the
privilege to create a schema object, you must assign a quota to allow the user to create
objects. Minimally, assign users a quota for the default tablespace, and additional quotas
for other tablespaces in which they can create objects.

Use the following command to modify tablespace quotas or to reassign tablespaces:


ALTER USER user
[ DEFAULT TABLESPACE tablespace]
[ TEMPORARY TABLESPACE tablespace]
[ QUOTA {integer [K | M] | UNLIMITED } ON tablespace
[ QUOTA {integer [K | M] | UNLIMITED } ON tablespace ] ...]
Slide 190

Dropping a User

Use the CASCADE clause to drop all objects in the schema if the schema contains objects.
Users who are currently connected to the Oracle server cannot be dropped.

DROP USER aaron;

DROP USER aaron CASCADE;


Dropping a User
DROP USER user [CASCADE]

Guidelines:
The CASCADE option drops all objects in the schema before dropping the user. This must
be specified if the schema contains any objects.
A user who is currently connected to the Oracle server cannot be dropped.
Slide 191

Obtaining User Information

Information about users can be obtained by querying the following views:
DBA_USERS
DBA_TS_QUOTAS


Obtaining User Information


Use the following query to find the default_tablespace for all users.
SQL> SELECT username, default_tablespace FROM dba_users;

USERNAME DEFAULT_TABLESPACE
--------- ------------------------------
SYS SYSTEM
SYSTEM SYSTEM
OUTLN SYSTEM
DBSNMP SYSTEM
HR SAMPLE
OE SAMPLE
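
Similarly, DBA_TS_QUOTAS shows the space each user has consumed and is allowed per tablespace; a MAX_BYTES value of -1 means the quota is UNLIMITED:

```sql
SQL> SELECT username, tablespace_name, bytes, max_bytes
     FROM dba_ts_quotas;
```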
Slide 192

Summary

In this lesson, you should have learned how to:
Create users by specifying the appropriate password mechanism
Control usage of space by users

A schema is a named collection of objects.
Create users identified by password, by operating system, or identified globally.
Limit users on space in tablespaces by using quotas.
Assign profiles to users.
Grant necessary privileges and roles to the user.
Drop a user and all objects within the schema using CASCADE.
Users who are currently connected to the Oracle server cannot be dropped.

Slide 193

Lab
1. Create user Bob with a password of CRUSADER. Make sure that any objects and
temporary segments created by Bob are not created in the SYSTEM tablespace. Also,
ensure that Bob can log in and create objects up to one megabyte in size in the USERS
and INDX tablespaces. [Hint: Ensure that the temporary tablespace temp is
assigned. Grant Bob the ability to create sessions.]
2. Create a user Emi with a password of MARY. Make sure that any objects and sort
segments created by Emi are not created in the SYSTEM tablespace.
3. Display the information on Bob and Emi from the data dictionary. [Hint: This can be
obtained by querying DBA_USERS.]
4. From the data dictionary, display the information on the amount of space that Bob can
use in tablespaces. [Hint: This can be obtained by querying DBA_TS_QUOTAS.]
5. As user Bob, change his temporary tablespace. What happens? Why? As Bob,
change his password to SAM. As SYSTEM, remove Bob's quota on his default
tablespace. Remove Emi's account from the database. [Hint: Because Emi owns
tables, you need to use the CASCADE option.]
6. Bob has forgotten his password. Assign him a password OLINK and require that Bob
change his password the next time he logs on.

Slide 194

Managing Privileges
Slide 195

Objectives
After completing this lesson, you should be able to do
the following:
Identify system and object privileges
Grant and revoke privileges

Slide 196

Managing Privileges
Two types of Oracle user privileges:
System: Enables users to perform particular actions
in the database
Object: Enables users to access and manipulate a
specific object


Privileges
A user privilege is a right to execute a particular type of SQL statement, or a right to access
another user's object. Oracle also provides shortcuts for grouping privileges that are commonly
granted or revoked together.
This section describes Oracle user privileges, and contains the following topics:
System Privileges
Object Privileges

System privileges:
There are over 100 distinct system privileges. Each system privilege allows a user to perform a
particular database operation or class of database operations. System privileges can be very
powerful, and should be granted only when necessary to roles and trusted users of the database.
Slide 197

System Privileges
There are more than 100 distinct system privileges.
The ANY keyword in privileges signifies that users
have the privilege in any schema.
The GRANT command adds a privilege to a user or a
group of users.
The REVOKE command deletes the privileges.


System Privileges

A system privilege is the right to perform a particular action, or to perform an action on


any schema objects of a particular type. For example, the privileges to create tablespaces
and to delete the rows of any table in a database are system privileges. There are over 100
distinct system privileges.

Grant and Revoke System Privileges


You can grant or revoke system privileges to users and roles. If you grant system privileges
to roles, then you can use the roles to manage system privileges. For example, roles permit
privileges to be made selectively available.

Note: In general, you grant system privileges only to administrative personnel and
application developers. End users normally do not require the associated capabilities

Use either of the following to grant or revoke system privileges to users and roles:
The Grant System Privileges/Roles dialog box and Revoke System Privileges/Roles dialog
box of Oracle Enterprise Manager
The SQL statements GRANT and REVOKE
Slide 198

System Privileges: Examples

Category     Examples
INDEX        CREATE ANY INDEX
             ALTER ANY INDEX
             DROP ANY INDEX
TABLE        CREATE TABLE
             CREATE ANY TABLE
             ALTER ANY TABLE
             DROP ANY TABLE
             SELECT ANY TABLE
             UPDATE ANY TABLE
             DELETE ANY TABLE
SESSION      CREATE SESSION
             ALTER SESSION
             RESTRICTED SESSION
TABLESPACE   CREATE TABLESPACE
             ALTER TABLESPACE
             DROP TABLESPACE
             UNLIMITED TABLESPACE


System Privileges: Examples

There is no CREATE INDEX privilege.


CREATE TABLE includes the CREATE INDEX and the ANALYZE commands. The user
must have a quota for the tablespace or must have been granted UNLIMITED
TABLESPACE.
Privileges such as CREATE TABLE, CREATE PROCEDURE, or CREATE CLUSTER
include dropping these objects.
UNLIMITED TABLESPACE cannot be granted to a role.
The DROP ANY TABLE privilege is necessary to truncate a table in another schema.
Slide 199

Granting System Privileges

Use the GRANT command to grant system privileges.
The grantee can further grant the system privilege with the ADMIN option.

GRANT CREATE SESSION TO emi;

GRANT CREATE SESSION TO emi WITH ADMIN OPTION;


Granting System Privileges


Who Can Grant or Revoke System Privileges?
Only users who have been granted a specific system privilege with the ADMIN OPTION or
users with the GRANT ANY PRIVILEGE system privilege can grant or revoke system
privileges to other users.

GRANT {system_privilege|role}
[, {system_privilege|role} ]...
TO {user|role|PUBLIC}
[, {user|role|PUBLIC} ]...
[WITH ADMIN OPTION]

where:
system_privilege: Specifies the system privilege to be granted
role: Specifies the role name to be granted
PUBLIC: Grants system privilege to all users
WITH ADMIN OPTION: Enables the grantee to further grant the privilege or role to other
users or roles
Slide 200

SYSDBA and SYSOPER Privileges

Category  Examples
SYSOPER   STARTUP
          SHUTDOWN
          ALTER DATABASE OPEN | MOUNT
          ALTER DATABASE BACKUP CONTROLFILE TO
          ALTER TABLESPACE BEGIN/END BACKUP
          RECOVER DATABASE
          ALTER DATABASE ARCHIVELOG
          RESTRICTED SESSION
SYSDBA    SYSOPER privileges WITH ADMIN OPTION
          CREATE DATABASE
          RECOVER DATABASE UNTIL
          RESTRICTED SESSION


SYSDBA and SYSOPER Privileges

Database startup and shutdown are powerful administrative options and are restricted to
users who connect to Oracle with administrator privileges.
Depending on the operating system, one of the following conditions establishes
administrator privileges for a user:

The user's operating system privileges allow him or her to connect using administrator
privileges.
The user is granted the SYSDBA or SYSOPER privileges, and the database uses password
files to authenticate database administrators.

When you connect with SYSDBA privileges, you are placed in the schema owned by SYS.
When you connect as SYSOPER, you are placed in the public schema. SYSOPER
privileges are a subset of SYSDBA privileges.
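
Connecting with each privilege places you in a different schema, which you can confirm from SQL*Plus. A sketch, assuming a password file is configured and using a placeholder SYS password:

```sql
-- Password-file authentication assumed; the password is a placeholder.
CONNECT sys/change_on_install AS SYSDBA
SHOW USER      -- reports "SYS"

CONNECT sys/change_on_install AS SYSOPER
SHOW USER      -- reports "PUBLIC"
```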
Slide 201

System Privilege Restrictions

The O7_DICTIONARY_ACCESSIBILITY parameter controls restrictions on system privileges:
If set to TRUE, it allows access to objects in the SYS schema.
The default is FALSE: this ensures that system privileges that allow access to any
schema do not allow access to the SYS schema.


System Privilege Restrictions


System Privilege Restrictions
Because system privileges are so powerful, Oracle recommends that you configure your
database to prevent regular (non-DBA) users from exercising ANY system privileges (such as
UPDATE ANY TABLE) on the data dictionary. In order to secure the data dictionary, ensure
that the O7_DICTIONARY_ACCESSIBILITY initialization parameter is set to FALSE. This
feature is called the dictionary protection mechanism.

If you enable dictionary protection (O7_DICTIONARY_ACCESSIBILITY is FALSE), access


to objects in the SYS schema (dictionary objects) is restricted to users with the SYS schema.
These users are SYS and those who connect as SYSDBA. System privileges providing access
to objects in other schemas do not give other users access to objects in the SYS schema. For
example, the SELECT ANY TABLE privilege allows users to access views and tables in other
schemas, but does not enable them to select dictionary objects (base tables of dynamic
performance views, views, packages, and synonyms). These users can, however, be granted
explicit object privileges to access objects in the SYS schema.
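
You can check the current setting from SQL*Plus; because this is a static parameter, changing it requires an instance restart. The sketch below assumes the instance uses an spfile:

```sql
SQL> SHOW PARAMETER o7_dictionary_accessibility

-- Keep the default (dictionary protection enabled):
ALTER SYSTEM SET o7_dictionary_accessibility = FALSE SCOPE = SPFILE;
-- Restart the instance for the change to take effect.
```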
Slide 202

Revoking System Privileges

Use the REVOKE command to remove a system privilege from a user.
Users with the ADMIN OPTION for a system privilege can revoke system privileges.
Only privileges granted with a GRANT command can be revoked.

REVOKE CREATE TABLE FROM emi;


Revoking System Privileges

Prerequisites
To revoke a system privilege or role, you must have been granted the privilege with the
ADMIN OPTION.
To revoke a role, you must have been granted the role with the ADMIN OPTION. You can
revoke any role if you have the GRANT ANY ROLE system privilege.
To revoke an object privilege, you must have previously granted the object privileges to each
user and role.
The REVOKE statement can revoke only privileges and roles that were previously granted
directly with a GRANT statement. You cannot use this statement to revoke:
Privileges or roles not granted to the revokee
Roles or object privileges granted through the operating system
Privileges or roles granted to the revokee through roles
Slide 203

Revoking System Privileges with the ADMIN OPTION

(Diagram: the DBA grants a system privilege to Jeff WITH ADMIN OPTION, and Jeff grants
it to Emi. When the DBA then revokes the privilege from Jeff, Emi retains it: revoking a
system privilege does not cascade.)


Revoking System Privileges


Cascading Effects of Revoking Privileges

There are no cascading effects when revoking a system privilege related to DDL
operations, regardless of whether the privilege was granted with or without the
ADMIN OPTION. For example, assume the following:

The security administrator grants the CREATE TABLE system privilege to jfee with the ADMIN
OPTION.
jfee creates a table.
jfee grants the CREATE TABLE system privilege to tsmith.
tsmith creates a table.
The security administrator revokes the CREATE TABLE system privilege from jfee.
jfee's table continues to exist. tsmith still has the table and the CREATE TABLE system privilege.
Cascading effects can be observed when revoking a system privilege related to a
DML operation. If SELECT ANY TABLE is revoked from a user, then all procedures
contained in the user's schema relying on this privilege will fail until the privilege
is reauthorized.
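
The jfee/tsmith scenario above can be replayed as a short script (usernames are illustrative; the connection changes are shown as comments):

```sql
-- As the security administrator:
GRANT CREATE TABLE TO jfee WITH ADMIN OPTION;

-- Connected as jfee:
CREATE TABLE t1 (id NUMBER);
GRANT CREATE TABLE TO tsmith;

-- Connected as tsmith:
CREATE TABLE t2 (id NUMBER);

-- As the security administrator again:
REVOKE CREATE TABLE FROM jfee;
-- t1 still exists, and tsmith keeps both t2 and the CREATE TABLE
-- privilege: there is no cascading revoke for DDL system privileges.
```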
Slide 204

Object Privileges

Object priv.   Table   View   Sequence   Procedure
ALTER            X               X
DELETE           X       X
EXECUTE                                     X
INDEX            X
INSERT           X       X
REFERENCES       X
SELECT           X       X       X
UPDATE           X       X


Object Privileges
A schema object privilege is a privilege or right to perform a particular action on a specific
schema object:
Table
View
Sequence
Procedure
Function
Package
Different object privileges are available for different types of schema objects. For example, the
privilege to delete rows from the DEPT table is an object privilege.
Some schema objects, such as clusters, indexes, triggers, and database links, do not have
associated object privileges. Their use is controlled with system privileges. For example, to
alter a cluster, a user must own the cluster or have the ALTER ANY CLUSTER system
privilege.
A schema object and its synonym are equivalent with respect to privileges. That is, the object
privileges granted for a table, view, sequence, procedure, function, or package apply whether
referencing the base object by name or using a synonym.
Slide 205

Granting Object Privileges

Use the GRANT command to grant object privileges.
The object must be in the grantor's schema, or the grantor must hold the privilege
WITH GRANT OPTION.

GRANT EXECUTE ON dbms_output TO jeff;

GRANT UPDATE ON emi.customers TO jeff WITH GRANT OPTION;


Granting Object Privileges

Schema object privileges can be granted to and revoked from users and roles. If you grant
object privileges to roles, you can make the privileges selectively available. Object
privileges for users and roles can be granted or revoked using the following:
The SQL statements GRANT and REVOKE, respectively
The Add Privilege to Role/User dialog box and the Revoke Privilege from Role/User dialog
box of Oracle Enterprise Manager.

GRANT { object_privilege [(column_list)]


[, object_privilege [(column_list)] ]...
|ALL [PRIVILEGES]}
ON [schema.]object
TO {user|role|PUBLIC}[, {user|role|PUBLIC} ]...
[WITH GRANT OPTION]
Slide 206

Revoking Object Privileges

Use the REVOKE command to revoke object privileges.
The user revoking the privilege must be the original grantor of the object privilege
being revoked.

REVOKE SELECT ON emi.orders FROM jeff;


Revoking Object Privileges

Specify the object on which the object privileges are to be revoked. This object can be:
A table, view, sequence, procedure, stored function, or package, materialized view
A synonym for a table, view, sequence, procedure, stored function, package, or materialized
view
A library, indextype, or user-defined operator

If you do not qualify object with schema, Oracle assumes the object is in your own schema.

If you revoke the SELECT object privilege (with or without the GRANT OPTION) on the
containing table or materialized view of a materialized view, Oracle invalidates the
materialized view.

If you revoke the SELECT object privilege (with or without the GRANT OPTION) on any of
the master tables of a materialized view, Oracle invalidates both the materialized view and its
containing table or materialized view.
Slide 207

Revoking Object Privileges WITH GRANT OPTION

(Diagram: Bob grants an object privilege to Jeff WITH GRANT OPTION, and Jeff grants it
to Emi. When Bob then revokes the privilege from Jeff, it is also revoked from Emi:
revoking an object privilege cascades.)


Revoking Object Privileges

Revoking an object privilege can have cascading effects that should be investigated before
issuing a REVOKE statement.

Object definitions that depend on a DML object privilege can be affected if the DML object
privilege is revoked. For example, assume the procedure body of the test procedure
includes a SQL statement that queries data from the emp table. If the SELECT privilege
on the emp table is revoked from the owner of the test procedure, the procedure can no
longer be executed successfully.

When a REFERENCES privilege for a table is revoked from a user, any foreign key
integrity constraints defined by the user that require the dropped REFERENCES
privilege are automatically dropped. For example, assume that the user jward is granted
the REFERENCES privilege for the deptno column of the dept table and creates a foreign
key on the deptno column in the emp table that references the deptno column. If the
references privilege on the deptno column of the dept table is revoked, the foreign key
constraint on the deptno column of the emp table is dropped in the same operation.
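
The jward example can be sketched as follows; note that once a foreign key depends on the privilege, the revoke must name CASCADE CONSTRAINTS (schema names are illustrative, using the classic SCOTT.DEPT table):

```sql
-- As the owner of the dept table (SCOTT here):
GRANT REFERENCES (deptno) ON dept TO jward;

-- Connected as jward: create a dependent foreign key.
CREATE TABLE emp (
  empno  NUMBER PRIMARY KEY,
  deptno NUMBER REFERENCES scott.dept (deptno)
);

-- Back as SCOTT: the revoke drops jward's foreign key
-- constraint in the same operation.
REVOKE REFERENCES ON dept FROM jward CASCADE CONSTRAINTS;
```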
Slide 208

Obtaining Privileges Information

Information about privileges can be obtained by querying the following views:
DBA_SYS_PRIVS
SESSION_PRIVS
DBA_TAB_PRIVS
DBA_COL_PRIVS


Obtaining Privileges Information


DBA_SYS_PRIVS: Lists system privileges granted to users and roles
SESSION_PRIVS: Lists the privileges that are currently available to the user
DBA_TAB_PRIVS: Lists all grants on all objects in the database
DBA_COL_PRIVS: Describes all object-column grants in the database
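
For example, to see who holds a given system privilege, what grants exist on a particular table, and which privileges are active in your own session (object and schema names are illustrative):

```sql
-- Who holds ALTER SYSTEM, and can they pass it on?
SELECT grantee, admin_option
FROM   dba_sys_privs
WHERE  privilege = 'ALTER SYSTEM';

-- All object grants on EMI's CUSTOMERS table:
SELECT grantee, privilege, grantable
FROM   dba_tab_privs
WHERE  owner = 'EMI' AND table_name = 'CUSTOMERS';

-- Privileges currently available to this session:
SELECT * FROM session_privs;
```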
Slide 209

Summary

In this lesson, you should have learned how to:
Identify system and object privileges
Grant and revoke privileges

A system privilege is the right to perform a particular action, or to perform an action
on any schema objects of a particular type.
The grantee can further grant a system privilege with the ADMIN option.
An object privilege is a privilege or right to perform a particular action on a specific
schema object.
The grantee can further grant an object privilege with the GRANT option.
Privileges are granted to users and roles using the GRANT command.
A user is granted the SYSDBA or SYSOPER privileges, and the database uses password
files to authenticate database administrators.
When you connect with SYSDBA privileges, you are placed in the schema owned by SYS.
When you connect as SYSOPER, you are placed in the public schema. SYSOPER privileges
are a subset of SYSDBA privileges.
Privileges are revoked from users and roles using the REVOKE command.


Practice Overview
Note: Practice can be accomplished using SQL*Plus or using Oracle Enterprise Manager
and SQL*Plus Worksheet.
Slide 210

Lab

1. As SYSTEM, create user Emi and give her the capability to log on to the database and create
objects in her schema. Connect as Emi, and create tables CUSTOMERS and ORDERS. Connect
as SYSTEM and copy the data from OE.CUSTOMERS to Emi's CUSTOMERS table. Verify that
records have been inserted. As SYSTEM, give Bob the ability to select from Emi's CUSTOMERS
table.
2. Reconnect as Emi and give Bob the ability to select from Emi's CUSTOMERS table. Also, enable
Bob to give the select capability to other users. Examine the data dictionary views that record
these actions.
3. Create user Trevor with the capability to log on to the database. As Bob, enable Trevor to access
Emi's CUSTOMERS table. Give Bob the new password sam. As Emi, remove Bob's privilege to
read Emi's CUSTOMERS table. As Trevor, query Emi's CUSTOMERS table. What happens and
why?
4. Enable Emi to create tables in any schema. As Emi, create the table ORDERS in Bob's schema as
a copy of EMI.ORDERS. What happened and why?
5. As SYSTEM, examine the data dictionary view DBA_TABLES.
6. Enable Emi to start up and shut down the database without the ability to create a new database.

Slide 211

Managing Roles
Slide 212

Objectives
After completing this lesson, you should be able to
do the following:
Create and modify roles
Control availability of roles
Remove roles
Use predefined roles
Display role information from the data dictionary

Slide 213

Roles

(Diagram: users A, B, and C are granted the roles HR_MGR and HR_CLERK; the roles in
turn are granted privileges such as SELECT ON JOBS, INSERT ON JOBS, UPDATE ON JOBS,
CREATE TABLE, and CREATE SESSION.)


Introduction to Roles
What is a Role?
Oracle provides for easy and controlled privilege management through roles. Roles are
named groups of related privileges that you grant to users or other roles. Roles are
designed to ease the administration of end-user system and schema object privileges.
However, roles are not meant to be used by application developers, because the privileges
to access schema objects within stored programmatic constructs need to be granted
directly.
These properties of roles allow for easier privilege management within a database:

Reduced privilege administration


Rather than granting the same set of privileges explicitly to several users, you can grant the
privileges for a group of related users to a role, and then only the role needs to be granted to
each member of the group.

Dynamic privilege management


If the privileges of a group must change, only the privileges of the role need to be modified.
The security domains of all users granted the group's role automatically reflect the changes
made to the role.
Slide 214

Benefits of Roles
Easier privilege management
Dynamic privilege management
Selective availability of privileges
Can be granted through the operating system


Benefits of Roles
Easier privilege management:
Use roles to simplify privilege management. Rather than granting the same set of privileges
to several users, you can grant the privileges to a role, and then grant that role to each
user.

Dynamic privilege management:


If the privileges associated with a role are modified, all the users who are granted the role
acquire the modified privileges automatically and immediately.

Selective availability of privileges:


Roles can be enabled and disabled to turn privileges on and off temporarily. Enabling a role
can also be used to verify that a user has been granted that role.

Can be granted through the operating system:


Operating system commands or utilities can be used to assign roles to users
in the database.
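
The first two benefits can be seen in a short sketch: privileges are granted once to the role, and a later change to the role is picked up by every grantee (role and usernames are illustrative):

```sql
CREATE ROLE hr_clerk;

-- Grant the privilege set once, to the role:
GRANT CREATE SESSION TO hr_clerk;
GRANT SELECT, INSERT ON hr.jobs TO hr_clerk;

-- Grant the role to each member of the group:
GRANT hr_clerk TO alice;
GRANT hr_clerk TO bob;

-- Later, one change to the role reaches every grantee
-- automatically:
GRANT UPDATE ON hr.jobs TO hr_clerk;
```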
Slide 215

Creating Roles

Roles with the ADMIN option:

Not identified:
CREATE ROLE oe_clerk;

Identified by password:
CREATE ROLE hr_clerk
IDENTIFIED BY bonus;

Identified externally:
CREATE ROLE hr_manager
IDENTIFIED EXTERNALLY;


Creating Roles

Purpose
Use the CREATE ROLE statement to create a role, which is a set of privileges that can be
granted to users or to other roles. You can use roles to administer database privileges. You
can add privileges to a role and then grant the role to a user. The user can then enable the
role and exercise the privileges granted by the role.

A role contains all privileges granted to the role and all privileges of other roles granted to
it. A new role is initially empty. You add privileges to a role with the GRANT statement.

When you create a role that is NOT IDENTIFIED or is IDENTIFIED EXTERNALLY or


BY password, Oracle grants you the role with ADMIN OPTION. However, when you create
a role IDENTIFIED GLOBALLY, Oracle does not grant you the role.
Slide 216

Predefined Roles

Role Name Description


CONNECT, These roles are provided
RESOURCE, DBA for backward compatibility
EXP_FULL_DATABASE Privileges to export the
database
IMP_FULL_DATABASE Privileges to import the
database
DELETE_CATALOG_ROLE DELETE privileges on
data dictionary tables
EXECUTE_CATALOG_ROLE EXECUTE privilege on
data dictionary packages
SELECT_CATALOG_ROLE SELECT privilege on data
dictionary tables


Predefined Roles

The following roles are defined automatically for Oracle databases:


CONNECT
RESOURCE
DBA
EXP_FULL_DATABASE
IMP_FULL_DATABASE

These roles are provided for backward compatibility to earlier versions of


Oracle and can be modified in the same manner as any other role in an Oracle
database.
Slide 217

Modifying Roles

Use ALTER ROLE to modify the authentication method.
Requires the ADMIN option or the ALTER ANY ROLE privilege.

ALTER ROLE oe_clerk
IDENTIFIED BY order;

ALTER ROLE hr_clerk
IDENTIFIED EXTERNALLY;

ALTER ROLE hr_manager
NOT IDENTIFIED;


Modifying Roles

Purpose
Use the ALTER ROLE statement to change the authorization needed to enable a role.

Prerequisites
You must either have been granted the role with the ADMIN OPTION or have ALTER ANY
ROLE system privilege.
Before you alter a role to IDENTIFIED GLOBALLY, you must:
Revoke all grants of roles identified externally to the role and
Revoke the grant of the role from all users, roles, and PUBLIC.
The one exception to this rule is that you should not revoke the role from the user who is
currently altering the role.
Slide 218

Assigning Roles

Use GRANT command to assign a role

GRANT oe_clerk TO scott;

GRANT hr_clerk TO hr_manager;

GRANT hr_manager TO scott WITH ADMIN OPTION;


Assigning Roles
Grant Roles
You grant roles to users or other roles using the following options:
The Grant System Privileges/Roles dialog box and Revoke System Privileges/Roles dialog
box of Oracle Enterprise Manager
The SQL statement GRANT
Privileges are granted to roles using the same options. Roles can also be granted to
users using the operating system that executes Oracle, or through network services.

Who Can Grant or Revoke Roles?


Any user with the GRANT ANY ROLE system privilege can grant or revoke any role
except a global role to or from other users or roles of the database. You should grant this
system privilege conservatively because it is very powerful.

Any user granted a role with the ADMIN OPTION can grant or revoke that role to or from
other users or roles of the database. This option allows administrative powers for roles on a
selective basis.
Slide 219

Establishing Default Roles

A user can be assigned many roles.
A user can be assigned a default role.
Limit the number of default roles for a user.

ALTER USER scott
DEFAULT ROLE hr_clerk, oe_clerk;

ALTER USER scott DEFAULT ROLE ALL;

ALTER USER scott DEFAULT ROLE ALL EXCEPT hr_clerk;

ALTER USER scott DEFAULT ROLE NONE;


Default Roles
When a user logs on, Oracle enables all privileges granted explicitly to the user and all
privileges in the user's default roles.

A user's list of default roles can be set and altered using the ALTER USER statement. The
ALTER USER statement allows you to specify roles that are to be enabled when a user
connects to the database, without requiring the user to specify the roles' passwords. The
user must have already been directly granted the roles with a GRANT statement. You
cannot specify as a default role any role managed by an external service including a
directory service (external roles or global roles).

The following example establishes default roles for user jane:


ALTER USER jane DEFAULT ROLE payclerk, pettycash;
You cannot set a user's default roles in the CREATE USER statement. When you first
create a user, the user's default role setting is ALL, which causes all roles subsequently
granted to the user to be default roles. Use the ALTER USER statement to limit the user's
default roles.
Slide 220

Application Roles

Application roles can be enabled only by authorized PL/SQL packages.
The USING package clause creates an application role.

CREATE ROLE admin_role
IDENTIFIED USING hr.employee;


Application Roles
You grant a secure application role all privileges necessary to run a given database application.
Then, you grant the secure application role to other roles or to specific users. An application
can have several different roles, with each role assigned a different set of privileges that allow
for more or less data access while using the application.
SQL> CREATE ROLE admin_role IDENTIFIED USING hr.employee;

In this example, admin_role is an application role and the role can be enabled only by modules
that are defined inside the hr.employee PL/SQL package.
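
A secure application role is typically enabled from inside the authorized package with DBMS_SESSION.SET_ROLE, after the package has verified whatever conditions the application requires. The sketch below uses the package name from the slide; the procedure name and the check are placeholders, and the package must be created with invoker's rights:

```sql
-- Sketch only: enable_admin is an illustrative name, and the
-- body of the check is a placeholder.
CREATE OR REPLACE PACKAGE hr.employee AUTHID CURRENT_USER AS
  PROCEDURE enable_admin;
END employee;
/

CREATE OR REPLACE PACKAGE BODY hr.employee AS
  PROCEDURE enable_admin IS
  BEGIN
    -- An application-specific check would normally go here,
    -- e.g. validating SYS_CONTEXT('USERENV', ...) attributes.
    DBMS_SESSION.SET_ROLE('admin_role');
  END enable_admin;
END employee;
/
```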
Slide 221

Enabling and Disabling Roles


Disable a role to revoke the role from a user
temporarily.
Enable a role to grant it temporarily.
The SET ROLE command enables and disables
roles.
Default roles are enabled for a user at login.
A password may be required to enable a role.


Enabling and Disabling Roles


When Do Grants and Revokes Take Effect?
Depending on what is granted or revoked, a grant or revoke takes effect at different times:
All grants/revokes of system and object privileges to anything (users, roles, and PUBLIC) are
immediately observed.
All grants/revokes of roles to anything (users, other roles, PUBLIC) are only observed when a
current user session issues a SET ROLE statement to re-enable the role after the grant/revoke,
or when a new user session is created after the grant/revoke.
You can see which roles are currently enabled by examining the SESSION_ROLES data
dictionary view.

The SET ROLE Statement


During the session, the user or an application can use the SET ROLE statement any number of
times to change the roles currently enabled for the session. You must already have been granted
the roles that you name in the SET ROLE statement.
Slide 222

Enabling and Disabling Roles


SET ROLE hr_clerk;

SET ROLE oe_clerk IDENTIFIED BY order;

SET ROLE ALL EXCEPT oe_clerk;


Enabling and Disabling Roles


Syntax:
The SET ROLE command turns off any other roles granted to the user.
SET ROLE {role [ IDENTIFIED BY password ]
[, role [ IDENTIFIED BY password ]]...
| ALL [ EXCEPT role [, role ] ...]
| NONE }
where:
role: Is the name of the role
IDENTIFIED BY password: Provides the password required when enabling the role
ALL: Enables all roles that are granted to the current user, except those listed in the EXCEPT
clause (You cannot use this option to enable roles with passwords.)
EXCEPT role: Does not enable these roles
NONE: Disables all roles for the current session (Only privileges granted directly to the user
are active.)
The ALL option without the EXCEPT clause works only when every role that is enabled does
not have a password.
Slide 223

Revoking Roles from Users

Revoking roles from users requires the ADMIN
OPTION on the role or the GRANT ANY ROLE privilege.
To revoke a role:

REVOKE oe_clerk FROM scott;

REVOKE hr_manager FROM PUBLIC;

222

Removing Roles from Users


Specify the role to be revoked.

If you revoke a role from a user, Oracle makes the role unavailable to the user. If the role is
currently enabled for the user, the user can continue to exercise the privileges in the role's
privilege domain as long as it remains enabled. However, the user cannot subsequently
enable the role.

If you revoke a role from another role, Oracle removes the revoked role's privilege domain
from the revokee role's privilege domain. Users who have been granted and have enabled
the revokee role can continue to exercise the privileges in the revoked role's privilege
domain as long as the revokee role remains enabled. However, other users who have been
granted the revokee role and subsequently enable it cannot exercise the privileges in the
privilege domain of the revoked role.

If you revoke a role from PUBLIC, Oracle makes the role unavailable to all users who have
been granted the role through PUBLIC. Any user who has enabled the role can continue
to exercise the privileges in its privilege domain as long as it remains enabled. However,
users cannot subsequently enable the role. The role is not revoked from users who have
been granted the role directly or through other roles.
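A minimal sketch of these rules (the role and user names are illustrative):

```sql
GRANT hr_clerk TO PUBLIC;          -- everyone can enable hr_clerk
GRANT hr_clerk TO scott;           -- scott also holds a direct grant

REVOKE hr_clerk FROM PUBLIC;       -- removes only the PUBLIC grant
-- scott can still enable hr_clerk through his direct grant;
-- users who held it only through PUBLIC can no longer enable it.
```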
Slide 224

Removing Roles

Dropping a role:
Removes it from all users and roles it was
granted
Removes it from the database
Requires the ADMIN OPTION or DROP ANY
ROLE privilege
To drop a role:

DROP ROLE hr_manager;

223

Dropping Roles
In some cases, it may be appropriate to drop a role from the database. The security
domains of all users and roles granted a dropped role are immediately changed to reflect
the absence of the dropped role's privileges. All indirectly granted roles of the dropped role
are also removed from affected security domains. Dropping a role automatically removes
the role from all users' default role lists.

Because the creation of objects is not dependent on the privileges received through a role,
tables and other objects are not dropped when a role is dropped.

You can drop a role using the SQL statement DROP ROLE. To drop a role, you must have
the DROP ANY ROLE system privilege or have been granted the role with the ADMIN
OPTION.
The following statement drops the role CLERK:
DROP ROLE clerk;
Slide 225

Guidelines for Creating Roles

[Diagram: privileges flow upward through roles to users.]
Users
User roles: HR_CLERK, HR_MANAGER, PAY_CLERK
Application roles: BENEFITS, PAYROLL
Application privileges: Benefits privileges, Payroll privileges

224

Guidelines for Creating Roles


Because a role includes the privileges that are necessary to perform a task, the role name is
usually an application task or a job title. The example in the slide uses both application tasks
and job titles for role names. Use the following steps to create, assign, and grant users roles:

Create a role for each application task. The name of the application role corresponds to a task
in the application, such as PAYROLL.
Assign the privileges necessary to perform the task to the application role.
Create a role for each type of user. The name of the user role corresponds to a job title, such
as PAY_CLERK.
Grant application roles to user roles.
Grant user roles to users.

If a modification to the application requires that new privileges are needed to perform the
payroll task, then the DBA only needs to assign the new privileges to the application role,
PAYROLL. All of the users that are currently performing this task will receive the new
privileges.
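The steps above can be sketched as follows (the role names are taken from the slide; the object and user names are illustrative):

```sql
-- 1-2. Create an application role and assign its task privileges.
CREATE ROLE payroll;
GRANT SELECT, INSERT, UPDATE, DELETE ON hr.pay_checks TO payroll;

-- 3. Create a user role named after the job title.
CREATE ROLE pay_clerk;

-- 4. Grant the application role to the user role.
GRANT payroll TO pay_clerk;

-- 5. Grant the user role to the users who perform the job.
GRANT pay_clerk TO scott;
```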
Slide 226

Guidelines for Using Passwords


and Default Roles

PAY_CLERK: password protected, not a default role; INSERT, UPDATE, DELETE,
and SELECT privileges
PAY_CLERK_RO: default role; SELECT privileges only

225

Guidelines for Using Passwords and Default Roles


Passwords provide an additional level of security when enabling a role. For example, the
application might require a user to enter a password when enabling the PAY_CLERK role,
because this role can be used to issue checks.

Passwords allow a role to be enabled only through an application. This technique is shown in
the example in the slide.

The DBA has granted the user two roles, PAY_CLERK and PAY_CLERK_RO.
The PAY_CLERK has been granted all of the privileges that are necessary to perform the
payroll clerk function.
The PAY_CLERK_RO (RO for read only) has been granted only SELECT privileges on the
tables required to perform the payroll clerk function.
The user can log in to SQL*Plus to perform queries, but cannot modify any of the data,
because the PAY_CLERK is not a default role, and the user does not know the password
for PAY_CLERK.
When the user logs in to the payroll application, it enables the PAY_CLERK by providing
the password. It is coded in the program; the user is not prompted for it.
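The setup described above can be sketched as follows (the password, table, and user names are illustrative):

```sql
CREATE ROLE pay_clerk IDENTIFIED BY pay_app_secret;  -- password protected
CREATE ROLE pay_clerk_ro;
GRANT SELECT ON hr.pay_checks TO pay_clerk_ro;

GRANT pay_clerk, pay_clerk_ro TO scott;
ALTER USER scott DEFAULT ROLE pay_clerk_ro;          -- pay_clerk is not default

-- The payroll application enables the protected role with its password:
SET ROLE pay_clerk IDENTIFIED BY pay_app_secret;
```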
Slide 227

Obtaining Role Information

Information about roles can be obtained by querying the


following views:
DBA_ROLES: All roles that exist in the database
DBA_ROLE_PRIVS: Roles granted to users and roles
ROLE_ROLE_PRIVS: Roles that are granted to roles
DBA_SYS_PRIVS: System privileges granted to users
and roles
ROLE_SYS_PRIVS: System privileges granted to roles
ROLE_TAB_PRIVS: Object privileges granted to roles
SESSION_ROLES: Roles that the user currently has
enabled

226

Query Role Information


Many of the data dictionary views that contain information on privileges that are granted to
users also contain information about whether the role requires a password.
SQL> SELECT role, password_required
2 FROM dba_roles;
ROLE PASSWORD
------------------------------ -----------
CONNECT NO
RESOURCE NO
DBA NO
SELECT_CATALOG_ROLE NO
EXECUTE_CATALOG_ROLE NO
DELETE_CATALOG_ROLE NO
IMP_FULL_DATABASE NO
EXP_FULL_DATABASE NO
SALES_CLERK YES
HR_CLERK EXTERNAL
Slide 228

Summary

In this lesson, you should have learned how to:


Create roles
Assign privileges to roles
Assign roles to users or roles
Establish default roles

Create roles using CREATE ROLE command


Assign privileges or roles to roles with the GRANT command
Enable and disable default roles using the SET ROLE command
The following roles are defined automatically for Oracle databases:
CONNECT, RESOURCE, DBA, EXP_FULL_DATABASE,
IMP_FULL_DATABASE
When a user logs on, Oracle enables all privileges granted explicitly
to the user and all privileges in the user's default roles.
Revoke a role using the REVOKE command
Drop roles with the DROP ROLE command
Dropping a role: Removes it from all users and roles it was granted,
removes it from the database, and requires the ADMIN OPTION or
DROP ANY ROLE privilege

227

Practice Overview
Note: Practice can be accomplished using SQL*Plus or using Oracle Enterprise Manager
and SQL*Plus Worksheet.
Slide 229

Lab
1. Examine the data dictionary view and list the system privileges of the RESOURCE
role.
2. Create a role called DEV, which enables a user to create a table, create a view, and
select from OE's CUSTOMERS table.
3. Create user Bob. Assign the RESOURCE and DEV roles to Bob, but make only the
RESOURCE role automatically enabled when he logs on.
4. Give Bob the ability to read all the data dictionary information.
5. Bob needs to check the undo segments that are currently used by the instance.
6. Connect as Bob and list the undo segments used.
7. As SYSTEM, try to create a view CUST_VIEW on OE's CUSTOMERS table. What
happens and why?
8. As user OE, grant select on customers to SYSTEM. As SYSTEM, try to create view
CUST_VIEW on OE's CUSTOMERS table. What happens and why?

228
Slide 230

Managing database objects

Managing database objects

229
Slide 231

Overview of Managing Objects

Before developing an application, the following tasks need to be done:
Create tables
Create indexes
Create other database objects in a schema as per requirement
A schema is a collection of database objects. A schema is owned
by a database user and has the same name as that user, such as
the HR schema.
Schema objects are logical structures created by users.
Objects can define areas of the database to hold data, such as
tables, or can consist of just a definition, such as views.

230
Slide 232

Managing database objects


Managing Tables
Managing Indexes
Managing Views
Managing Sequences
Managing Synonyms

231
Slide 233

Managing Tables

Creating a Table
Adding a Column To a Table
Modifying a Column In a Table
Dropping a Column From a Table
Adding a Check Constraint
Adding a Unique Constraint
Adding a Primary Key Constraint
Adding a Foreign Key Constraint
Viewing Existing Constraints
Disabling and Enabling a Constraint
Dropping a Constraint
Adding Data to a Table
Modifying Data in a Table
Removing a Row in a Table
Dropping a Table

232
Slide 234

Managing Tables

Creating a Table

Syntax
CREATE TABLE "table_name"
("column 1" "data_type_for_column_1",
"column 2" "data_type_for_column_2",
... )

Example
CREATE TABLE emp
(empno NUMBER,
empname VARCHAR2(100));

233
Slide 235

Managing Tables
Altering a Table
Once a table is created in the database, there are many occasions
where one may wish to change the structure of the table. Typical
cases include the following:
Add a column
Drop a column
Change a column name
Change the data type for a column
Syntax
ALTER TABLE "table_name"
[alter specification]

234
Slide 236

Managing Tables
The ALTER TABLE statement allows you to rename an
existing table. It can also be used to add, modify, or drop
a column from an existing table.
Renaming a table
Syntax
ALTER TABLE table_name
RENAME TO new_table_name;
Example:
ALTER TABLE emp
RENAME TO employee;
This will rename the emp table to employee.

235
Slide 237

Managing Tables
Adding column(s) to a table

Syntax
ALTER TABLE table_name
ADD column_name column-definition;

Example:
ALTER TABLE emp
ADD dept_name varchar2(50);

This will add a column called dept_name to the emp table.

236
Slide 238

Managing Tables
Modifying column(s) in a table

Syntax
ALTER TABLE table_name
MODIFY column_name column_type;

Example
ALTER TABLE emp
MODIFY dept_name varchar2(100) not null;

This will modify the column called dept_name to be a data type of
varchar2(100) and force the column to not allow null values.

237
Slide 239

Drop column(s) in a table

Syntax
ALTER TABLE table_name
DROP COLUMN column_name;

Example

ALTER TABLE emp
DROP COLUMN dept_name;

This will drop the column called dept_name from the table called emp.

238
Slide 240

Managing Tables
Rename column(s) in a table

Syntax
ALTER TABLE table_name
RENAME COLUMN old_name to new_name;

Example

ALTER TABLE emp
RENAME COLUMN dept_name TO dname;

This will rename the column called dept_name to dname.

239
Slide 241

Managing Indexes

Create an Index
Display the index on a table
Dropping an index

240
Slide 242

Creating an Index
Create Index

SYNTAX

CREATE INDEX <Index Name> on <table name>(column1,column2)

Example

Create Index idx_empno on emp(empno);

This will create index idx_empno on the column empno of table emp.

241
Slide 243

Displaying the Indexes on a Table


The following query can be used to display the indexes on a table:

SELECT INDEX_NAME, INDEX_TYPE FROM DBA_INDEXES
WHERE TABLE_NAME = '<table_name>';

Example

SELECT INDEX_NAME, INDEX_TYPE FROM DBA_INDEXES
WHERE TABLE_NAME = 'EMP';

This will show the indexes on the table emp.

242
Slide 244

Dropping an Index
SYNTAX

DROP INDEX <index name>

Example

DROP INDEX idx_empno;

This drops the index idx_empno.

243
Slide 245

Managing Views

Views are customized presentations of data in one or more tables or


other views. You can think of them as stored queries. Views do not
actually contain data, but instead derive their data from the tables
upon which they are based. These tables are referred to as the base
tables of the view.

The next slides show how to do the following:


Creating a View
Displaying a View
Dropping a View

244
Slide 246

Creating View

CREATE VIEW

SYNTAX

CREATE VIEW <view name> AS
SELECT column1, column2 FROM table_name
[WHERE clause]

EXAMPLE

CREATE VIEW EMP_VIEW AS


SELECT EMPNO,EMPNAME FROM EMP WHERE DEPT=10;

245
Slide 247

Displaying records from a View


A view can be used like any other table. To display records from a
view, we can use the same SQL we use to select records from a
table.

EXAMPLE

SELECT * FROM EMP_VIEW;

246
Slide 248

Dropping a view
DROP VIEW

SYNTAX

DROP VIEW <view name>

247
Slide 249

Managing Synonyms

A synonym is an alias for any schema object such as a table or view


Synonyms provide an alternative name for a database object and can be
used to simplify SQL statements for database users.
For example, you can create a synonym named emps as an alias for the
employees table in the HR schema.

If a table in an application has changed, such as the personnel table
replacing the employees table, you can redefine the emps synonym to refer
to the personnel table so that the change is transparent to the application
code and the database users.
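As a sketch of this technique (the table names follow the example above):

```sql
CREATE SYNONYM emps FOR hr.employees;

-- Later, when personnel replaces employees, repoint the synonym;
-- application code that references emps is unchanged.
CREATE OR REPLACE SYNONYM emps FOR hr.personnel;
```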

248
Slide 250

Managing Synonyms
CREATE SYNONYM

SYNTAX

CREATE SYNONYM [Synonym_schema.] synonym FOR object

EXAMPLE

CREATE SYNONYM emp for HR.EMPLOYEE

Creates synonym emp for HR.EMPLOYEE table

249
Slide 251

Managing Synonyms
Display information of Synonyms

The DBA_SYNONYMS data dictionary view can be used to display
information on synonyms.

Dropping a synonym

DROP SYNONYM [schema.]synonym

EXAMPLE

DROP SYNONYM EMP;


This will drop the synonym emp on HR.EMPLOYEE

250
Slide 252

Summary

In this lesson you understood how to manage the following database objects
Tables
Indexes
Views
Synonyms
LAB
1) Create table emp in HR schema with two columns empno and empname
2) Now add two columns salary and deptname to the table emp
3) Drop column deptname from the table emp
4) Create index emp_idx on empno column of emp table
5) Check the index emp_idx in the dba_indexes
6) Create synonym of emp table in the SCOTT schema
7) Check the synonym name in the dba_synonyms
8) Create a view on the table emp that does not select the salary column.

251
Slide 253

Networking Overview
Slide 254

Objectives

After completing this lesson, you should be able


to do the following:
Identify how the listener responds to incoming
connections
Describe Dynamic Service Registration
Configure the listener by using Oracle Net
Manager
Control the listener by using the Listener Control
Utility (lsnrctl)
Configure the listener for IIOP and HTTP
connections

253
Slide 255

The Listener Process

[Diagram: a client, configured with tnsnames.ora and sqlnet.ora, connects across the
network to the listener on the server, which is configured with listener.ora and
sqlnet.ora.]

254

Characteristics of the Listener Process


The database server receives an initial connection from a client application through the listener.
The listener is a process running on a node that listens for incoming connections on behalf of a
database or a number of databases. The following are the characteristics of a listener:
A listener process can listen for more than one database
Multiple listeners can listen on behalf of a single database to perform load balancing
The listener can listen for multiple protocols
The default name of the listener in Oracle Net is LISTENER
The name of the listener must be unique per listener.ora file
Note: Oracle9i databases require an Oracle9i listener. Previous versions of the listener are not
supported. However, it is possible to use an Oracle9i listener with databases created with an
earlier version of Oracle.
Using Oracle Enterprise Manager
The Listeners page under the Nodes folder in the Navigator displays listener properties for a
node selected in the Navigator. Information includes a list of all databases monitored by the
listener as well as the TNS address of the listener. You can determine the operational status of
the listener by clicking Listener Status.
Slide 256

Connection Methods

When a connection request is made by a


client to a server, the listener performs one of
the following:
Spawns a server process and bequeaths
(passes) the connection to it
Hands off the connection to a dispatcher in an
Oracle Shared Server configuration
Redirects the connection to a dispatcher or
server process

255

Connection Methods
Spawn and Bequeath
The listener passes or bequeaths the connection to a spawned process. This method is used in a
dedicated server configuration only.
Direct Hand-Off
The listener will hand off a connection to a dispatcher when an Oracle Shared Server is used.
This method is not possible with dedicated server processes.
Redirected (Message)
A connection may be redirected by the listener to a dispatcher if a Shared Server is used.
Note: Each of the connection types is covered in more detail later in the lesson.
Transparency of Direct Hand Off and Redirect
Whether a connection session is bequeathed, handed off, or redirected to an existing process,
the session is transparent to the user. It can be detected only by turning on tracing and
analyzing the resulting trace file.
Slide 257

Spawn and Bequeath and Direct Hand-Off


Connections

[Diagram: (1) the client connects to the listener; (2) the listener spawns a dedicated
server process; (3) the server process sends a RESEND packet to the client; (4) the
client sends a new CONNECT packet to the spawned server process; (5) the server
process accepts the connection.]

256

Spawn and Bequeath and Direct Hand-Off Connections


The listener may spawn dedicated server processes as connection requests are received and
bequeath (or pass) the connections to the server processes. The use of this method is dependent
on the ability of the underlying operating system to support inheritance of network endpoints.
When the listener forks a dedicated server process and bequeaths the connection to the server
process, it is called a bequeath session. The following sequence of events occurs:
1. The client establishes a connection to the listener using the configured protocol and
sends the listener a CONNECT packet.
2. The listener checks that the SID is defined. If it is, the listener will fork or spawn a
new process to deal with the connection. A bequeath connection is then established
between the listener and the new server process to pass process initialisation
information. The bequeath connection is then closed. Please note that the TCP socket
is inherited by the new server process.
3. The server process sends a RESEND packet back to the client.
4. A new CONNECT packet is then sent to the newly forked dedicated server process
5. The dedicated server process accepts the incoming connection and forwards an
ACCEPT message back to the client.
Slide 258

Redirected Session

[Diagram: (1) the client connects to the listener; (2) the listener spawns a server
process or dispatcher; (3) the new process selects a free port and passes it back to
the listener; (4) the listener redirects the client to that port; (5, 6) the client
establishes a new connection to the server process on the redirected port.]

257

The Redirected Session


When conditions do not support the establishment of a bequeath or direct hand-off connection,
a redirect session will be established. The steps below outline the mechanics of this type of
connection:
1. The client establishes a connection to the listener using the configured protocol and
sends the listener a CONNECT packet.
2. The listener checks that the SID is defined. If it is, the listener will spawn a new thread
or process to service the new connection. An IPC connection is then established
between the listener and the new process/thread.
3. The new process/thread selects a new TCP/IP port from the list of free user defined
ports and passes this information back to the listener.
4. The listener inserts this new port into a REDIRECT packet and sends it back to the
client and the original TCP socket between the client and the listener is then reset.
5. A new TCP connection is established to the redirect address specified in the
REDIRECT packet and a CONNECT packet is then forwarded to the dedicated server
process.
Slide 259

Service Configuration and Registration

The listener can be configured in two ways:


Dynamic service registration
Does not require configuration in listener.ora
file
The listener relies on the PMON process
Static service configuration
Used for Oracle8 and earlier releases
Requires listener.ora configuration
Required for Oracle Enterprise Manager and other
services

258

Configuring the Listener


Dynamic Service Registration
An Oracle9i instance uses service registration to inform the listener about its database services.
Service registration relies on the PMON process to register instance information with the
listener. PMON also informs the listener about the current state and load of the instance and
Shared Server dispatchers.
If Oracle9i JVM is installed, HTTP and IIOP listening endpoints can be registered dynamically
with the listener.
When an instance is started, initialization parameters about the listener are read from the
initialization parameter file by which PMON registers information with the listener. If a
listener is not up when the instance starts, PMON will not register information with the listener.
PMON will continue attempting to contact the listener. The listener will reject any connections
made to an unregistered service.
Static Service Registration
In order for a listener to accept client requests from an Oracle8 or earlier release database, the
listener.ora file must be configured.
Note: The static configuration is also required for Oracle Enterprise Manager (OEM) and other
services such as external procedures and Heterogeneous Services.
Slide 260

Static Service Registration:


The listener.ora File
When the Oracle software is installed, the listener.ora file is created for the
starter database with the following default settings:
Listener name LISTENER
Port 1521
Protocols TCP/IP and IPC
SID name Default instance
Host name Default host name

259

The listener.ora File


The listener.ora file is used to configure the listener for static service registration. The
listener.ora file must reside on the machine or node on which the listener is to reside.
The listener.ora file contains configuration information for the following:
The listener name
The listener address
Databases that use the listener
Listener parameters
Slide 261

Static Service Registration:


The listener.ora File

1. LISTENER =
2. (ADDRESS_LIST =
3. (ADDRESS= (PROTOCOL= TCP)(Host= stc-sun02)(Port= 1521)))
4. SID_LIST_LISTENER =
5. (SID_LIST =
6. (SID_DESC =
7. (ORACLE_HOME= /home/oracle)
8. (GLOBAL_DBNAME = ORCL.us.oracle.com)
9. (SID_NAME = ORCL)))

260

listener.ora File Contents


The default listener.ora file contains the following parameters:
1. The name of the listener. The default name is LISTENER.
2. The ADDRESS_LIST parameter contains a block of addresses at which the listener
listens for incoming connections. Each of the addresses defined in this block represents a
different way by which a listener receives a connection.
3. The TCP address identifies incoming TCP connections from clients on the network
attempting to connect to port 1521. The clients use the port defined in their tnsnames.ora
file to connect to this listener. Based on the SID_LIST defined for this listener, the
listener specifies the database to which to connect. Please note that it is possible to
configure multiple listeners here as long as they have unique names and unique ports on
the node where they are configured. Each listener configured will have its own
SID_LIST but a single database can be serviced by multiple listeners.
4. A listener can listen for more than one database on a machine. The
SID_LIST_listener_name block or parameter is where these SIDs are defined.
5. The SID_LIST parameter is defined if more than one SID is defined.
Slide 262

Dynamic Service Registration:


Configure Registration
To ensure that service registration is functional, the
following initialization parameters must be configured:
SERVICE_NAMES
INSTANCE_NAME

261

Configuring Service Registration


Dynamic service registration is used by Oracle9i or Oracle8i instances. The registration is
performed by the PMON process of each database instance that has the necessary configuration
in the database initialization parameter file. Dynamic service registration does not require any
configuration in the listener.ora file.
Dynamic service registration is configured in the database initialization file. Listener
configuration must be synchronized with the information in the database initialization file.
The following initialization parameters must be configured for service registration to work:
SERVICE_NAMES: Specifies one or more names for the database service to which this
instance connects. You can specify multiple service names in order to distinguish among
different uses of the same database.
INSTANCE_NAME: Specifies the instance name. In a single-instance database system, the
instance name is usually the same as the database name.
Examples
SERVICE_NAMES=sales.us.oracle.com
INSTANCE_NAME=salesdb
Slide 263

Dynamic Service Registration:


Registering Information with the Listener

By default, PMON registers with a local listener


on the server on the default local address of
TCP/IP, port 1521.
PMON can register with a non default listener if:
LOCAL_LISTENER initialization parameter is
defined
LISTENERS attribute of the DISPATCHERS
initialization parameter is defined for Oracle
Shared Server

262

Service Registration
By default, the PMON process registers service information with its local listener on the
default local address of TCP/IP, port 1521.
Using a Nondefault Listener
You can force PMON to register with a local listener on the server that does not use TCP/IP or
use port 1521 by configuring the LOCAL_LISTENER parameter in the initialization parameter
file as follows:
LOCAL_LISTENER=listener_alias
listener_alias must be resolved to the listener protocol address through a naming
method such as tnsnames.ora. An example entry in the tnsnames.ora follows:
listener_name=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)(PORT=1421)))
Slide 264

Listener Control Utility (LSNRCTL)

Listener Control Utility commands can be


issued from the command-line or from the
LSNRCTL prompt.
UNIX command-line syntax:

$ lsnrctl <command name>

Prompt syntax:
LSNRCTL> <command name>

Control a non-default listener


LSNRCTL> set current_listener listener02

263

Listener Control Utility


When the lsnrctl command is issued, the command works against the default listener (named
LISTENER) unless the SET CURRENT_LISTENER command is executed. Another way to control
different listeners is to use the listener name as a command modifier:
$ lsnrctl start listener02
Windows NT Platform Command Line Syntax
On the Windows NT operating system, use the following command to start the Listener
Control utility:
C:\> lsnrctl <command name>
Slide 265

LSNRCTL Commands

Use the following commands to control the listener:


START [listener_name]
STOP [listener_name]

264

LSNRCTL Commands
Starting the Listener
You can use the START command to start the listener from the Listener Control utility. Any
manual changes to the listener.ora file must be made when the listener is shut down. The
argument for the START command is the name of the listener, and if no argument is specified,
the current listener is started. If a current listener is not defined, LISTENER is started.
LSNRCTL> START [listener_name] or
$ lsnrctl start [listener_name]
On Windows NT, the listener can also be started through the Control Panel:
1. Double-click the Services icon in the Control Panel window.
2. Select the Oraclehome_nameTNSListener service (the service name if you are using
the default listener name LISTENER) or Oraclehome_nameTNSListenerlsnr, where
lsnr is the nondefault listener name.
3. Click Start to start the service.
4. In the Services window, click Close.
Slide 266

LSNRCTL SET and SHOW Modifiers

Change listener parameters with SET:

LSNRCTL> SET trc_level ADMIN

Display the values of parameters with SHOW:

LSNRCTL> SHOW trc_directory

265

SET and SHOW Modifiers


The SET modifier is used to change listener parameters in the Listener Control utility
environment.
The SHOW modifier is used to display the values of the parameters set for the listener.
Slide 267

Summary

In this lesson, you should have learned how to:


Control the listener by using the Listener Control
Utility (lsnrctl)

266
Slide 268

Naming Method Configuration


Slide 269

Objectives
After completing this lesson, you should be able to do the
following:
Describe the difference between host naming and local
service name resolution
Use Oracle Net Configuration Assistant to configure:
Host Naming method
Local naming method
Net service names
Perform simple connection troubleshooting

268
Slide 270

Overview of Naming Methods


Naming methods are used by a client application to
resolve a connect identifier to a connect descriptor
when attempting to connect to a database service.
Oracle Net provides five naming methods:
Host naming
Local naming
Directory naming
Oracle Names
External naming

269

Naming Methods Overview


Oracle Net provides five naming methods:
Host naming: Enables users in a TCP/IP environment to resolve names through their existing
name resolution service
Local naming: Locates network addresses by using information configured and stored on
each individual client's tnsnames.ora file
Directory naming: Resolves a database service or net service name to a connect descriptor,
stored in a central directory server
Oracle Names: Oracle directory service made up of a system of Oracle Names servers that
provide name-to-address resolution for each service on the network
External naming: Uses a supported third-party naming service
For a small organization with only a few databases, use host naming to store names in an
existing names resolution service, or local naming to store names in tnsnames.ora file on
the clients.
For large organizations with several databases, use directory naming to store names in a
centralized LDAP-compliant directory server.
In this lesson you will learn more about host naming and local naming.
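For local naming, each client's tnsnames.ora maps a net service name to a connect descriptor. A minimal sketch of one entry (the net service name, host, port, and service name are illustrative):

```text
SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sales-server)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = sales.us.oracle.com)))
```

A client could then connect with a command such as sqlplus scott/tiger@SALES.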
Slide 271

Host Naming
Clients can connect to a server using a host name
under the following conditions:
Connecting to an Oracle database service using
Oracle Net Services Client software
Client and server are connecting using TCP/IP
protocol
Host names are resolved through an IP address
translation mechanism such as DNS or a local
/etc/hosts file
No advanced features such as Connection Manager or
security options are used

270

Host Naming Method


The host naming method provides the following advantages:
Requires minimal user configuration. The user need only provide the name of the host to
establish a connection.
Eliminates the need to create and maintain a local names configuration file
(tnsnames.ora).
Eliminates the need to understand Oracle Names or Oracle Internet Directory administration
procedures.
Host naming can only be used to identify one SID per node although other SIDs can be
identified using other naming methods.
Multiple global names can be aliased to the same IP address in the hosts file and host naming
can be used to connect to any of these databases even if they are on the same node.
Slide 272

Host Naming: Client Side

[Diagram: the client connects to the server over TCP/IP. The client's sqlnet.ora
contains names.directory_path = (HOSTNAME); the server side is configured through
listener.ora.]

271

Client-Side Requirements
If you are using the host naming method, you must have TCP/IP installed on your client
machine. In addition you must install Oracle Net Services and the TCP/IP protocol adaptor.
The host name is resolved through an IP address translation mechanism such as Domain Name
Services (DNS), Network Information Services (NIS), or a centrally maintained TCP/IP host
file. This should be configured from the client side before attempting to use the host naming
method.
Slide 273

Host Naming: Server Side

[Diagram: the client connects to the server over TCP/IP on port 1521. The server's
listener.ora contains:]

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = stc-sun02.us.oracle.com)
      (ORACLE_HOME = /u03/ora9i/rel12)
      (SID_NAME = TEST)

272

Server-Side Requirements
If you are using the host naming method, you must have TCP/IP installed on your server as
well as your client. You also need to install Oracle Net Services and the TCP/IP protocol
adaptor on the server side.
In Oracle8i and Oracle9i, information about the database, including the global database name, is
automatically registered with the listener when one of the following is true:
The default listener named LISTENER is running on TCP/IP port 1521
The LOCAL_LISTENER parameter is set in the initialization file
In earlier versions, database information is registered with the listener through the
listener.ora file. You must statically configure the SID_LIST_listener_name
section to include the GLOBAL_DBNAME parameter. The global database name is composed of
the database name and database domain name. You can obtain the GLOBAL_DBNAME value
from the SERVICE_NAMES parameter, or from the DB_NAME and DB_DOMAIN parameters in
the initialization parameter file.
The host name must match the connect string you specify from your client. The additional
information included is the database to which you want to connect.
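
The GLOBAL_DBNAME value described above can be checked against the initialization parameters it is built from. The parameter names below are real; the output depends on your database:

```
-- global database name = DB_NAME.DB_DOMAIN
SHOW PARAMETER db_name
SHOW PARAMETER db_domain

-- or read the default service name directly
SELECT value FROM v$parameter WHERE name = 'service_names';
```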
Slide 274

Host Naming Example


listener.ora file

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = stc-sun02.us.oracle.com)
(ORACLE_HOME = /u03/ora9i/rel12)
(SID_NAME = TEST)
    )
  )

Connecting from the client

sqlplus system/manager@stc-sun02.us.oracle.com

273

Host Naming Example


Example
If all of the requirements are met on the client and server side, you can issue the connection
request from the client, and this connects you to the instance TEST as follows:
sqlplus system/manager@stc-sun02.us.oracle.com
SQL*Plus:Release 9.0.1.0.0-Production on Thu Nov 15 13:46:24 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 Production
SQL>
Slide 275

Naming Methods Configuration

274

Naming Methods Configuration


You can use Oracle Net Configuration Assistant or Oracle Net Manager to perform naming
methods configuration. Oracle Net Configuration Assistant is used in these examples.
Because Oracle Net Configuration Assistant is implemented in Java and is packaged with the
Java Runtime Environment, you can run it on any platform where Oracle Net Services is
installed.
Using Oracle Net Configuration Assistant
To start Oracle Net Configuration Assistant:
On UNIX, run netca from $ORACLE_HOME/bin.
On Windows NT, select Start > Programs > Oracle - HOME_NAME > Network Administration
> Oracle Net Configuration Assistant.
Select the Naming Methods Configuration option button and click Next.
Slide 276

Selecting the Host Naming Method

275

Selecting the Host Naming Method


Make sure that Host Name is listed in the Selected Naming Methods window. If other methods
are also chosen, make sure Host Name appears first. Click Next to finish. Your specifications
will be written to the sqlnet.ora file:
# SQLNET.ORA Network Configuration File:
/u03/ora9i/rel12/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.
NAMES.DEFAULT_DOMAIN = us.oracle.com
NAMES.DIRECTORY_PATH= (HOSTNAME)
Slide 277

Local Naming

Client Server

sqlnet.ora

tnsnames.ora listener.ora

276

Local Naming Method


Advantages of local naming:
Provides a relatively straightforward method for resolving service name addresses.
Resolves service names across networks running different protocols.
Can easily be configured using a graphical configuration tool
The local naming method requires that net service names be stored in the tnsnames.ora file.
Editing this file by hand is not recommended; instead, add net service names with the
Oracle Net Configuration Assistant.
Slide 278

Selecting the Local Naming Method

277

Selecting the Local Naming Method


Available naming methods appear in the left-hand window and selected naming methods
appear in the right-hand window. By default, Local, Host Name and Oracle Names are
preselected. If for some reason Local is not already selected then select it from the left-hand
window and click the right arrow button to promote it to the Selected Naming Methods
window. Click Next to continue. Your specifications will be written to the sqlnet.ora file:
# SQLNET.ORA Network Configuration File:
/u03/ora9i/rel12/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.
NAMES.DEFAULT_DOMAIN = us.oracle.com
NAMES.DIRECTORY_PATH = (LOCAL , HOSTNAME)
Slide 279

Configuring Local Net Service Names

278

Net Service Name Configuration


After selecting Local as the Naming Method, net service names can now be configured by
selecting the Local Net Service Name Configuration option button from the Oracle Net
Services Configuration Assistant.
A net service name is a short, convenient name that is mapped to a network address contained
in a connect descriptor stored in the tnsnames.ora file. Users need to know only the
appropriate net service name to make a connection, not the full connect descriptor.
Slide 280

Working with Net Service Names

279

Add a Net Service Name


You use the next window to create, reconfigure, delete, rename, or test a net service name. In
this example, the Add option button is chosen.
Slide 281

Specify the Oracle Database Version

280

Specifying the Database Version


Specify whether the database or service is Oracle8i or later. Earlier Oracle versions require
extra configuration on the listener side while Oracle8i or 9i databases and services do not.
Slide 282

Database Service Name

281

Specify the Service Name


For your Oracle8i or Oracle9i database, you must next enter the database service name that
identifies the database service. The name can be no longer than nine characters.
The service name is typically the global database name, which is a combination of the database
name (DB_NAME) and domain (DB_DOMAIN). The global database name is the default
service name of the database, as specified by the SERVICE_NAMES parameter in the
initialization file.
Slide 283

Network Protocol

282

Select the Network Protocol


The network protocol to be used by the connection must now be specified. The protocols
available in the configuration assistant reflect only those protocols that have been previously
installed. Uninstalled protocols are not present in the protocol list presented by the Oracle Net
Service Configuration Assistant.
Note: With the introduction of Oracle9i, SPX is no longer a supported protocol.
Slide 284

Host Name and Listener Port

283

Configuring the Host Name and Port Number


Enter the host name and the port number, and click Next.
Host Name
Enter the fully qualified name of the machine that hosts the database you want to
connect to.
Port Number
Enter the number of the port on which the Oracle Net listener monitors connection requests to
the server (host). By default, the Configuration Assistant sets the listener port to 1521. If
required, an alternative port number can be specified.
Slide 285

Testing the Connection

284

Test the Service Information


The connection information can now be tested. Select the Yes, perform a test option button
then click Finish to proceed.
Slide 286

Connection Test Result

285

Test Result
If the data entered is correct, the connection should be made successfully. If not, the Details
window should provide useful diagnostic information to troubleshoot the connection. Please
note that the default username used for the connection is scott. If you have no such user you
should click Change Login and enter a valid username and password combination then retry
the connection.
If the connection is successful, click Next to continue. Do not click Cancel because the service
information is not yet saved.
Note: After the tnsnames.ora file has been saved, the net service name can also be tested
from the command line by using the tnsping utility. For example:
$ tnsping U01
TNS Ping Utility for Solaris: Version 9-Production
Used parameter files:
/u01/user01/NETWORK/ADMIN/sqlnet.ora
/u01/user01/NETWORK/ADMIN/tnsnames.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (ADDRESS=(PROTOCOL=TCP)(HOST=stc-sun02)(PORT=1701))
OK (0 msec)
Slide 287

Net Service Name

286

Choosing the Net Service Name


Enter a name for the net service name next. The Configuration Assistant defaults the name to
the database service name that was entered initially. A more meaningful or descriptive name
can be entered if you want. Click Next to continue.
Slide 288

Save the Net Service Name

287

Saving the Net Service Name


When you select the No option button and click Next, the net service name is saved by default
to the tnsnames.ora file located in the $ORACLE_HOME/network/admin directory.
Slide 289

Generated Files: tnsnames.ora

# TNSNAMES.ORA Network Configuration


# File:/u03/ora9i/rel12/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
MY_SERVICE.US.ORACLE.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = stc-sun02.us.oracle.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = TEST.us.oracle.com)
    )
  )

288

The tnsnames.ora File


The tnsnames.ora file is used to store net service names. The default location is
$ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin
on Windows NT.
Slide 290

Generated Files: sqlnet.ora

# SQLNET.ORA Network Configuration File:


/u03/ora9i/rel12/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.
NAMES.DEFAULT_DOMAIN = us.oracle.com
NAMES.DIRECTORY_PATH= (TNSNAMES, HOSTNAME)
SQLNET.EXPIRE_TIME=0

sqlplus system/manager@MY_SERVICE
SQL*Plus:Release 9.0.1.0.0-Production on Thu Nov 15 13:46:24 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
JServer Release 9.0.1.0.0 - Production
SQL>

289

The sqlnet.ora File


The sqlnet.ora file controls the behavior of Oracle Net Services.
The default location is $ORACLE_HOME/network/admin on UNIX and
%ORACLE_HOME%\network\admin on Windows NT. The default location can be
overridden by defining the TNS_ADMIN environment variable.
The NAMES.DIRECTORY_PATH parameter controls how Oracle Net Services resolves net
service names into connect descriptors. Multiple methods can be represented as a comma-
separated list enclosed by parentheses. Net services attempts to resolve net service names using
each method listed working from left to right.
Once the naming methods and net service names have been configured and tested successfully,
you can connect to the server from the client by using any Oracle client tool.
Slide 291

Troubleshooting the Client Side

The following error codes are related to problems on the


client side:

ORA-12154 TNS:could not resolve service name


ORA-12198 TNS:could not find path to destination
ORA-12203 TNS:unable to connect to destination
ORA-12533 TNS:illegal ADDRESS parameters
ORA-12541 TNS:no listener

290

Troubleshooting
The following describes common errors and how they can be resolved.
ORA-12154: TNS:could not resolve service name
Cause: Oracle Net Services cannot locate the connect descriptor specified in the
tnsnames.ora configuration file.
Action
1. Verify that a tnsnames.ora file exists and that it is accessible.
2. Verify that the tnsnames.ora file is in the location specified by the TNS_ADMIN
environment variable.
3. In your tnsnames.ora file, verify that the service name specified in your connection
string is mapped to a connect descriptor in the tnsnames.ora file. Also, verify that
there are no syntax errors in the file.
4. Verify that there are no duplicate copies of the sqlnet.ora file.
5. If you are connecting from a login dialog box, verify that you are not placing an at
symbol (@) before your connection service name.
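
The first three checks above can be partly automated. The following portable shell sketch is a hypothetical helper, not an Oracle-supplied tool: it locates tnsnames.ora via TNS_ADMIN (falling back to $ORACLE_HOME/network/admin) and looks for the alias used in the connect string:

```shell
# check_tns_alias NAME [DIR] - report whether NAME appears as a net service
# name in DIR/tnsnames.ora; DIR defaults to $TNS_ADMIN, then
# $ORACLE_HOME/network/admin.
check_tns_alias() {
    name=$1
    dir=${2:-${TNS_ADMIN:-$ORACLE_HOME/network/admin}}
    if [ ! -f "$dir/tnsnames.ora" ]; then
        echo "tnsnames.ora not found in $dir"
        return 1
    fi
    # Net service names start in column 1; match case-insensitively and
    # allow a domain suffix such as .US.ORACLE.COM after the alias.
    if grep -qi "^[[:space:]]*$name" "$dir/tnsnames.ora"; then
        echo "alias $name found in $dir/tnsnames.ora"
    else
        echo "alias $name missing from $dir/tnsnames.ora"
        return 2
    fi
}
```

If the alias is present but the connection still fails, move on to checks 4 and 5 (duplicate sqlnet.ora files, and a stray at symbol in the login dialog).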
Slide 292

Summary
In this lesson, you should have learned how to:
Describe the difference between host naming and local
service name resolution
Use Oracle Net Configuration Assistant to configure:
Host naming method
Local naming method
Net service names
Perform simple connection troubleshooting

291
Slide 293

Understanding Oracle DB Utilities

Understanding Oracle DB
Utilities

292
Slide 294

Oracle DATA PUMP : Objectives

Provide an overview of Oracle Data Pump


Give a quick-start primer for users of original Export/Import
Highlight useful features that differ from those offered in original
Export/Import

293
Slide 295

Data Pump Overview: Usage Scenarios

Typical uses for Data Pump Export/Import


Logical backup of schema/table
Refresh test system from production
Upgrade (either cross-platform, or with storage reorg)
Move data from production to offline usage (e.g. data warehouse,
ad-hoc query)

294
Slide 296

Mechanics of Data Pump

Master Process (DMnn) :


Creates and deletes the master table at the time of export and import.
Master table contains the job state and object information.
Creates the Worker Process.
Worker Process (DWnn) :
It performs the actual heavy duty work of loading and unloading of data.
It maintains the information in master table.
Shadow Process : When a client logs in to an Oracle server, the database
creates an Oracle process to service the Data Pump API.
Client Process : The client process calls the Data pump API.

295
Slide 297

How to create directory objects


The directory must exist at the OS level.
Any user with the CREATE ANY DIRECTORY privilege can create a directory
object.
CREATE DIRECTORY dppump_dir1 AS '/u01/app/dump_dir';

To grant privileges to specific user e.g. scott


GRANT READ,WRITE ON DIRECTORY dppump_dir1 TO scott;

A user must have the EXP_FULL_DATABASE or IMP_FULL_DATABASE role to


export or import objects outside his or her own schema.
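
Putting the steps on this slide together, a minimal setup session run as a privileged user might look like the following sketch (the path and grantee are the slide's own examples):

```
-- as a DBA user; the OS directory /u01/app/dump_dir must already exist
CREATE DIRECTORY dppump_dir1 AS '/u01/app/dump_dir';
GRANT READ, WRITE ON DIRECTORY dppump_dir1 TO scott;
```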

296
Slide 298

Export Example

$ expdp ananda/abc123 tables=CASES directory=DPDATA1


dumpfile=expCASES.dmp job_name=CASES_EXPORT

Tables : Identifies a list of tables to export - one schema only.


Directory : Directory object to be used for dumpfiles and logfiles.
Dumpfile : List of destination dump files (expdat.dmp)
Job_name : Name of export job to create.

297
Slide 299

Data Pump Quick Start: Syntax Changes

New command line clients


expdp/impdp instead of exp/imp
Parameter changes: a few examples
Data Pump Parameter Original Exp/Imp Parameter
SCHEMAS OWNER
REMAP_SCHEMA TOUSER
CONTENT=METADATA_ONLY ROWS=N
EXCLUDE=TRIGGER TRIGGERS=N
Note: A full mapping of parameters from original
Export/Import to Data Pump Export/Import can be found
in the Oracle Database Utilities manual.
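
As a hedged illustration of the mapping, here is the same metadata-only schema export in both tools (user name, file names, and the directory object are placeholders):

```
# original Export
exp system/manager owner=hr file=hr.dmp rows=n

# Data Pump equivalent
expdp system/manager schemas=hr directory=dppump_dir1 \
      dumpfile=hr.dmp content=metadata_only
```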

298
Slide 300

New Features of Oracle Data Pump

Network Mode
Restartability
Parallelization
Include/Exclude
SQLFILE

299
Slide 301

New Features: Network Mode Export

$ expdp scott/tiger network_link=db1 tables=emp


dumpfile=scott.dmp directory=dppump_dir1

Produces a local dump file set using the contents of a remote


database
Only way to export from a write locked database (e.g., a
standby database or a read-only database)
Requires a local, writeable database to act as an agent
May be parallelized
Will generally be significantly slower than exporting to a file on
a local device

300
Slide 302

New Features: Network Mode Import

$ impdp hr/hr tables=scott.employees directory=dpump


network_link=finance
It connects directly to the source database and transfers data to the
target database.
No dump file is created.
May be parallelized
Primarily a convenience: will generally be slower than exporting to a
file, copying the file over the network, and importing to the target

301
Slide 303

New Features: Restartability

Data Pump jobs may be restarted without loss of data and


with only minimal loss of time
Restarts may follow:
System failure (e.g., loss of power)
Database shutdown
Database failure
User stop of Data Pump job
Internal failure of Data Pump job
Exceeding dumpfile space on export

302

Great when trying to fit a database migration within a window.


Slide 304

New Features: Restartability - Export

$ expdp system/manager attach=myjob


Export> start_job

Export writes out objects based upon object type


On restart, any uncompleted object types are removed from
the dump file and the queries to regenerate them are repeated
For data, incompletely written data segments (i.e., partitions or
unpartitioned tables) are removed and the data segments are
totally rewritten when the job continues.

303
Slide 305

New Features: Restartability Import

$ impdp system/manager attach=myjob


Import> start_job

Restart is based upon the state of the individual objects


recorded in the master table
If object was completed, it is ignored on restart
If object was not completed, it is reprocessed on restart
If object was in progress and its creation time is consistent
with the previous run, it is reprocessed, but duplicate object
errors are ignored

304
Slide 306

New Features: Parallelization

Multiple threads of execution may be used within a Data Pump job


Jobs complete faster, but use more database and system resources
Only available with Enterprise Edition
Speedup will not be realized if there are bottlenecks in I/O bandwidth,
memory, or CPU
Speedup will not be realized if bulk of job involves work that is not
parallelizable

305
Slide 307

New Features: Parallel Export

$ expdp hr/hr directory=dppump_dir1


dumpfile=a%u.dmp parallel=2

There should be at least one file available per degree of


parallelism. Using the %u wildcard in file names is helpful.
All metadata is exported in a single thread of execution

306
Slide 308

New Features: Parallel Import

$ impdp hr/hr directory=dppump_dir1


dumpfile=a%u.dmp parallel=6

Degree of parallelization in import does not have to match degree


of parallelization used for export
Processing of user data is split among the workers as is done for
export
Creation of package bodies is parallelized by splitting the
definitions of packages across multiple parallel workers
Index building is parallelized by temporarily specifying a degree
clause when an index is created

307
Slide 309

New Features: Include/Exclude

$ impdp hr/hr directory=dppump_dir1 dumpfile=mydump


exclude=index

Fine grained object selection is allowed for both expdp and impdp
Objects may be either excluded or included
List of object types and a short description of them may be found in the
following views:
DATABASE_EXPORT_OBJECTS
SCHEMA_EXPORT_OBJECTS
TABLE_EXPORT_OBJECTS

308

Old Export/Import had limited object selection through the INDEXES, CONSTRAINTS, STATISTICS, and
TRIGGERS parameters.
A Data Pump job works upon object classes.
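The object-type views listed above can be queried directly. This is a sketch of the kind of query the Utilities manual describes; the filter value is illustrative:

```
-- list object paths that mention INDEX at schema-export level
SELECT object_path, comments
FROM   schema_export_objects
WHERE  object_path LIKE '%INDEX%';
```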
Slide 310

New Features: Exclude

$ expdp hr/hr directory=dppump_dir1 dumpfile=mydump


exclude=index exclude=trigger

Objects described by the Exclude parameter are omitted from


the job
Objects that are dependent upon an excluded object are also
excluded. (e.g., grants and statistics upon an index are excluded
if an index is excluded)
Multiple object types may be excluded in a single job

309
Slide 311

New Features: Include

$ impdp hr/hr directory=dppump_dir1 dumpfile=mydump


include=procedure include=function

Objects described by the Include parameter are the only objects


included in the job
Objects that are dependent upon an included object are also
included. (e.g., grants upon a function are included if the
function is included)
Multiple object types may be included in a single job

Note: Include and Exclude parameters may not be mixed on the


same command

310
Slide 312

New Features: SQLFILE


Specifies a file into which the DDL that would have been executed in the
import job will be written
Actual import is not performed, only the DDL file is created
Can be combined with EXCLUDE/INCLUDE to tailor the contents of the
SQLFILE
Example: to get a SQL script that will create just the tables and indexes
contained in a dump file:

$ impdp hr/hr directory=dppump_dir1


dumpfile=mydump.dmp INCLUDE=TABLE INCLUDE=INDEX
SQLFILE=create_tables.sql

Output of SQLFILE is executable, but will not include passwords

311

In original Import, the INDEXFILE parameter generated a text file which contained the
SQL commands necessary to recreate tables and indexes that you could then edit to
get a workable DDL script.
Slide 313

FLASHBACK_SCN

The FLASHBACK_SCN parameter specifies the system change number (SCN)


that Data Pump Export will use to enable the Flashback utility.
If you specify this parameter, the export will be consistent as of this SCN.
The following example shows how you can export the user HR's schema up to
the SCN 150222.
$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exp.dmp
FLASHBACK_SCN=150222

312
Slide 314

FLASHBACK_TIME
The FLASHBACK_TIME parameter is similar to the FLASHBACK_SCN
parameter. The only difference is that here you use a time, instead of an SCN, to
limit the export.
Oracle finds the SCN that most closely matches the time you specify, and uses
this SCN to enable the Flashback utility.
The Data Pump Export operation will be consistent as of this SCN. Here's an
example:
$ expdp system/sammyy1 DIRECTORY=dpump_dir1
DUMPFILE=hr_time.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-05-
2005 17:22:00', 'DD-MM-YYYY HH24:MI:SS')"

313
Slide 315

Frequently Asked Questions

Can original Export dump files be used with Data Pump?


No. The dump file formats for original exp/imp and Data Pump Export/Import
are not compatible.
Can Data Pump work with 9i databases?
No. Data Pump works with Oracle Database 10g and later.
Can Data Pump work over a network link to an earlier version database?
No, you cannot use network links to use Data Pump Export on a database
earlier than version 10g
Can I use Enterprise Manager with Data Pump?
Yes, there is an EM interface for Data Pump
What will happen to original Export/Import?
Original Export will no longer be supported for general use after Oracle
Database 10g Release 2
Original Import will be supported indefinitely, to handle existing legacy Export
files

314
Slide 316

Frequently Asked Questions

How do I pipe a Data Pump job through gzip?
This compression technique cannot be used with Data Pump, because
Data Pump cannot support named pipes
In Oracle Database 10g Release 2, Data Pump compresses metadata by
default
Stay tuned for a data compression solution in a future release

315

How can I monitor my Data Pump jobs to see what is going on?

DBA_DATAPUMP_JOBS - All active Data Pump jobs and the


state of each job.

USER_DATAPUMP_JOBS - Summary of the user's active


Data Pump jobs.

DBA_DATAPUMP_SESSIONS - All active user sessions that


are attached to a Data Pump job

V$SESSION_LONGOPS - Shows the progress of each active


Data Pump job
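
The monitoring views above translate into queries like the following sketch (the filter predicates are illustrative, not required):

```
-- active jobs and their state
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;

-- progress of long-running Data Pump work
SELECT username, opname, sofar, totalwork
FROM   v$session_longops
WHERE  opname LIKE '%EXPORT%' AND sofar <> totalwork;
```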
Slide 317

Transportable Tablespace Feature :

Enhancement in 10g
V$transportable_platform
Endian Formats and RMAN convert command

SQL> COLUMN platform_name FORMAT A32


SQL> SELECT * FROM v$transportable_platform
ORDER BY endian_format, platform_name
The v$database data dictionary view also adds two columns

SQL> select name, platform_id, platform_name


from v$database;

316
Slide 318

Is my tablespace transportable ?
Enterprise Edition is required to generate a transportable tablespace set.
The tablespace set must be self-contained.
The source and target databases must use the same character set and national character set.
To import a transportable tablespace set into an Oracle Database on a
different platform, both databases must have compatibility set to at least
10.0.
SQL> select * from v$transportable_platform order by platform_id;
PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ----------------------------------- --------------
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
3 HP-UX (64-bit) Big
4 HP-UX IA (64-bit) Big
5 HP Tru64 UNIX Little
6 AIX-Based Systems (64-bit) Big
7 Microsoft Windows IA (32-bit) Little
8 Microsoft Windows IA (64-bit) Little
9 IBM zSeries Based Linux Big
10 Linux IA (32-bit) Little
11 Linux IA (64-bit) Little
12 Microsoft Windows 64-bit for AMD Little
13 Linux 64-bit for AMD Little
15 HP Open VMS Little
16 Apple Mac OS Big

317

Source Database
ALTER TABLESPACE READ ONLY
Exporting TTS Metadata
Convert Datafile(s) within RMAN, if needed.
ALTER TABLESPACE READ WRITE
Copy to Target Database
TTS Metadata
Datafile(s)

Target Database
Move Datafile(s) to database directory
Import TTS Metadata
Test the plugged tablespace
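
The source and target steps above can be sketched end to end. This is a hedged outline only: the tablespace name, file names, and target platform are examples, and the RMAN CONVERT step applies only when the endian formats differ:

```
-- on the source database
ALTER TABLESPACE users READ ONLY;
-- from the OS: export the transportable tablespace metadata
--   exp "'/ as sysdba'" TRANSPORT_TABLESPACE=Y TABLESPACES=users FILE=tts_users.dmp
-- if endian formats differ, convert the datafile(s) with RMAN:
--   RMAN> CONVERT TABLESPACE users TO PLATFORM 'Linux IA (32-bit)' ...
ALTER TABLESPACE users READ WRITE;

-- copy tts_users.dmp and the datafile(s) to the target host, then from the OS:
--   imp "'/ as sysdba'" TRANSPORT_TABLESPACE=Y FILE=tts_users.dmp DATAFILES='users_01.dbf'
```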
Slide 319

Endian Format
Some operating systems, including Windows, store multi-byte binary data with
the least significant byte in the lowest memory address; such a platform is
called Little Endian.
Other operating systems, including Solaris, store the most significant byte in
the lowest memory address, hence the term Big Endian.
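
As a small illustration, the platform-to-endianness mapping from the v$transportable_platform listing can be captured in a shell helper. This is a teaching aid, not an Oracle utility, and its platform coverage is deliberately partial; it answers the practical question of whether an RMAN CONVERT will be needed between two platforms:

```shell
# endian_of PLATFORM_NAME - return the endian format for some of the
# platforms listed in v$transportable_platform (coverage is illustrative).
endian_of() {
    case "$1" in
        Solaris*|HP-UX*|AIX*|"IBM zSeries"*|"Apple Mac OS") echo Big ;;
        "Microsoft Windows"*|Linux*|"HP Tru64"*|"HP Open VMS") echo Little ;;
        *) echo Unknown ;;
    esac
}

# convert_needed SRC DST - RMAN CONVERT is required only when the endian
# formats of the two platforms differ.
convert_needed() {
    [ "$(endian_of "$1")" != "$(endian_of "$2")" ] && echo yes || echo no
}
```

For example, AIX to Sun needs no conversion (both are big endian), while Sun to Linux does, matching the RMAN notes later in this chapter.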

318
Slide 320

Transporting Tablespace : Same Platform

ALTER TABLESPACE READ ONLY


alter tablespace users read only;

Export Metadata of TTS

exp tablespaces=users transport_tablespace=y


file=exp_ts_users.dmp
The file exp_ts_users.dmp contains only metadata, not the contents of the
tablespace USERS, so it will be very small.

Copy the files exp_ts_users.dmp and users_01.dbf to the host TGT1.

Plug the tablespace into the database.


imp tablespaces=users transport_tablespace=y
file=exp_ts_users.dmp datafiles='users_01.dbf'

319

Exporting TTS Metadata On Source Database Server

SQL> ALTER TABLESPACE zipcodes READ ONLY;


Tablespace altered.
SQL> host
Microsoft Windows 2000 [Version 5.00.2195]
(C) Copyright 1985-2000 Microsoft Corp.

C:\Documents and Settings\cyo\Local Settings\Temp>set


NLS_LANG=AMERICAN_AMERICA.AL32UTF8

C:\Documents and Settings\cyo\Local Settings\Temp>exp "'/ as sysdba'"


TRANSPORT_TABLESPACE=Y TABLESPACES=zipcodes TTS_FULL_CHECK=Y
file=zipcodes.dump log=zipcodes.log

Export: Release 10.2.0.2.0 - Production on Mon Feb 12 01:51:15 2007


Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 -
Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring
Engine options
Export done in KO16MSWIN949 character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace ZIPCODES ...
. exporting cluster definitions
. exporting table definitions
. . exporting table ZC_COREA_ENGLISH
. . exporting table ZC_COREA_COREAN
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

C:\Documents and Settings\cyo\Local Settings\Temp>

Endian Format Conversion using RMAN On Source Database Server :

C:\Documents and Settings\cyo\Local Settings\Temp>rman target=/

Recovery Manager: Release 10.2.0.2.0 - Production on Mon Feb 12 00:16:10 2007


Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: TA20U3 (DBID=1455682162)

RMAN> convert tablespace 'ZIPCODES'


2> to platform 'AIX-Based Systems (64-bit)'
3> db_file_name_convert='D:\ORACLE\DATA\TA20U3\zipcodes_01.DBF','C:\Documents
and Settings\cyo\Local
Settings\Temp\zipcodes_01.dbf';

Starting backup at 12-FEB-07


using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=148 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=D:\ORACLE\DATA\TA20U3\ZIPCODES_01.DBF
converted datafile=C:\DOCUMENTS AND SETTINGS\CYO\LOCAL
SETTINGS\TEMP\ZIPCODES_01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished backup at 12-FEB-07

RMAN>
Slide 321

Transporting Tablespace: Differing Endianness


of Platforms

Give the following command on the source database before


copying the datafiles.
RMAN> convert tablespace users
to platform 'HP-UX (64-bit)'
db_file_name_convert =
'/usr/oradata/dw10/dw10','/home/oracle/rman_bkups';

320

Convert Datafiles using RMAN


You do not need to convert the datafile to transport a tablespace from an AIX-based platform to a
Sun platform, since both platforms use the big endian format.
However, to transport a tablespace from a Sun platform (big endian) to a Linux platform
(little endian), you need to use the CONVERT command in the RMAN utility to convert the byte
ordering. This can be done on either the source platform or the target platform.

RMAN> CONVERT TABLESPACE USERS TO PLATFORM = 'Linux IA (32-bit)'


DB_FILE_NAME_CONVERT = '/u02/oradata/grid/users01.dbf',
'/dba/recovery_area/transport_linux';
Slide 322

Lab

1. Create a Data Pump directory object. Grant read and write access to user HR.


2. Export the HR schema objects using Data Pump.
3. Export only the employees table from the HR schema.
4. Create a new user. Assume an appropriate name and tablespace.
5. Use Data Pump to import the data exported in step 2 into the new schema.
6. Drop the departments table from the new schema.
7. Use Data Pump to import only the employees table exported in step 2 into the new schema.

321
Slide 323

Overview of DBVERIFY

322
Slide 324

DBVERIFY Command-Line Interface

External command-line utility


Used to ensure that a backup database or datafile is valid
before a restore
Can be a helpful diagnostic aid when data corruption
problems are encountered

%dbv file=/ORADATA/u03/users01.dbf logfile=dbv.log

323

DBVERIFY Command Line Interface


You invoke the DBVERIFY utility using a command-line interface. You use this utility
primarily to ensure that a backup database (or datafile) is valid before it is restored or as a
diagnostic aid when you have encountered data-corruption problems.
Example
To verify the integrity of the users01.dbf datafile, starting with block 1 and ending with block
500, you execute the following command:
$ dbv file=/ORADATA/u03/users01.dbf start=1 end=500
Note: The name and location of DBVERIFY is dependent on your operating system. See the
operating system-specific Oracle documentation for the location of DBVERIFY for your
system.
Slide 325

Loading Data into a Database Using


SQL*Loader

324
Slide 326

Objectives
After completing this lesson, you should be able to
do the following:
Describe the usage of SQL*Loader
Perform basic SQL*Loader operations
List guidelines for using SQL*Loader and Direct load

325
Slide 327

SQL*Loader
[Diagram] The loader control file and input datafiles (plus an optional
parameter file) feed SQL*Loader. Field processing writes rejected records to
the bad file; record selection writes discarded records to the optional
discard file. Selected records are passed to the Oracle server for insertion
into the database datafiles, and rows rejected at insert time also go to the
bad file. A log file records the whole load.

326

SQL*Loader Features
SQL*Loader loads data from external files into tables in an Oracle database. SQL*Loader has
the following features:
SQL*Loader can use one or more input files.
Several input records can be combined into one logical record for loading.
Input fields can be of fixed or variable lengths.
Input data can be in any format: character, binary, packed decimal, date, and zoned decimal.
Data can be loaded from different types of media such as disk, tape, or named pipes.
Data can be loaded into several tables in one run.
Options are available to replace or to append to existing data in the tables.
SQL functions can be applied on the input data before the row is stored in the database.
Column values can be auto generated based on rules. For example, a sequential key value can
be generated and stored in a column.
Data can be loaded directly into the table, bypassing the database buffer cache.
Slide 328

Using SQL*Loader

$sqlldr hr/hr \
> control=case1.ctl \
> log=case1.log direct=Y

case1.ctl

SQL*Loader

EMPLOYEES table

case1.log

327

Using SQL*Loader
Command line:
When you invoke SQL*Loader, you can specify parameters that establish session
characteristics. Parameters can be entered in any order, optionally separated by commas. You
can specify values for parameters, or in some cases, you can accept the default without entering
a value.
If you invoke SQL*Loader without specifying any parameters, SQL*Loader displays a Help
screen that lists the available parameters and their default values.

Files Used by SQL*Loader


SQL*Loader uses the following files:
Loader control file: Specifies the input format, output tables, and optional conditions that can
be used to load only part of the records found in the input datafiles
Input datafiles: Contain the data in the format defined in the control file
Parameter file: Is an optional file that can be used to define the command line parameters for
the load
Log file: Is created by SQL*Loader and contains a record of the load
Bad file: Is used by the utility to write the records that are rejected during the load (This can
occur during input record validation by the utility or during record insertion by the Oracle
server.)
Discard file: Is a file that can be created, if necessary, to store all records that did not satisfy
the selection criteria
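The optional parameter file simply collects command-line parameters, which keeps long loader invocations repeatable. A minimal sketch follows; the file name case1.par and its contents are illustrative, matching the command shown on the previous slide:

```
# case1.par -- illustrative SQL*Loader parameter file, one PARAMETER=value per line
userid=hr/hr
control=case1.ctl
log=case1.log
direct=true
```

It is then referenced on the command line as: sqlldr parfile=case1.par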
Slide 329

Example

Text file text.txt

1,amar,consultant
2,akbar,se
3,anthony,analyst

Expected data in Employees Table


Emp_id name Designation
1 amar consultant
2 akbar se
3 anthony analyst

328

Slide 330

Example
Prepare a control file named control.ctl as follows:

load data
infile 'c:\Text.txt'
into table employees
fields terminated by ','
(emp_id,name,designation)

Use the following Command to invoke the SQL Loader

C:\>sqlldr hr/hr@k control="c:\control.ctl" log="c:\log.log" discard="c:\discard.txt"

329

Slide 331

Conventional Load and Direct Path Load


Conventional Load                          Direct Path Load
Default load method                        Not the default; requested with direct=Y
SQL engine is used                         SQL processing mechanism is bypassed
Rows are inserted into free space          Data is loaded directly above the
available below the high-water mark        high-water mark
NOLOGGING mode cannot be used;             NOLOGGING mode can be used, so that
redo is always generated                   no redo is generated
Indexes are maintained and need            All indexes on the table become
not be rebuilt                             unusable and must be rebuilt
Insert triggers fire normally              All insert triggers are disabled

330
Slide 332

Summary

In this lesson, you should have learned how to:


Describe the usage of SQL*Loader
Perform basic SQL*Loader operations
Create a control file
Use SQL*Loader to load data from external files

The control file includes the specification for loading data into the database from an
external file.

331

Practice Overview

Note: Practice can be accomplished using SQL*Plus or using Oracle Enterprise Manager
and SQL*Plus Worksheet.
Slide 333

Lab
Using a text editor (such as Windows Notepad), create a SQL*Loader controlfile, called
STREAMIN.CTL, as follows:

load data
infile 'STREAMIN.DAT' "str \n"
append
into table dp_test
fields terminated by ','
(username,user_id)

Note that the append keyword allows SQL*Loader to insert into a table that already contains rows.
Using a text editor as before, create the input datafile, to be called STREAMIN.DAT, as follows:
John,100
Damir,200
McGraw,9999
From an operating system prompt, issue this command:
sqlldr userid=system/oracle control=STREAMIN.CTL
Log in to your instance with SQL*Plus, and confirm that the three new rows have been inserted into
the table:
select * from dp_test;

332
Slide 334

Using the Database Resource Manager


Slide 335

Unit Objectives
After this session, you should be able to:
Create Resource Consumer Groups
Create Resource plans
Assign resource allocation methods to Resource Plans
Use Database Resource Manager

334
Slide 336

Database Resource Manager

Gives the Oracle Database server more control over


resource management decisions
Addresses the following problems when the resource
allocation decisions are left to Operating system
Excessive overhead
Inefficient scheduling
Inappropriate allocation of resources
Inability to manage database-specific resources,
such as parallel execution servers and active
sessions

335

The main goal of the Database Resource Manager is to give the Oracle Database server more
control over resource management decisions, thus circumventing problems resulting from
inefficient operating system management.

When database resource allocation decisions are left to the operating system, you may encounter
the following problems:
Excessive overhead
Excessive overhead results from operating system context switching between
Oracle Database server processes when the number of server processes is high.
Inefficient scheduling
The operating system deschedules database servers while they hold latches, which
is inefficient.
Inappropriate allocation of resources
The operating system distributes resources equally among all active processes and
is unable to prioritize one task over another.

Inability to manage database-specific resources, such as parallel execution servers


and active sessions
Slide 337

Database Resource Manager


Using the Database Resource Manager, you can:
Guarantee certain users a minimum amount of
processing resources regardless of the load on the
system and the number of users
Distribute available processing resources by allocating
percentages of CPU time to different users and
applications. In a data warehouse, a higher percentage
may be given to ROLAP (relational on-line analytical
processing) applications than to batch jobs.
Limit the degree of parallelism of any operation
performed by members of a group of users

336
Slide 338

Database Resource Manager


Create an active session pool.
Allow automatic switching of users from one group to
another group based on administrator-defined criteria.
Prevent the execution of operations that the optimizer
estimates will run for a longer time than a specified limit.
Create an undo pool. This pool consists of the amount
of undo space that can be consumed by a group of
users.
Limit the amount of time that a session can be idle. This
can be further defined to mean only sessions that are
blocking other sessions.

337
Slide 339

Database Resource Manager


Configure an instance to use a particular method of
allocating resources. You can dynamically change the
method, for example, from a daytime setup to a nighttime
setup, without having to shut down and restart the
instance.
Allow the cancellation of long-running SQL statements
and the termination of long-running sessions.

338
Slide 340

Elements of the Database Resource Manager

Element Description
Resource consumer group User sessions grouped together based on resource processing
requirements.

Resource plan Contains directives that specify how resources are allocated to resource
consumer groups.

Resource allocation method The method/policy used by the Database Resource Manager when
allocating a particular resource; used by resource consumer groups
and resource plans. The database provides the resource allocation
methods that are available, but you determine which method to use.

Resource plan directive Used by administrators to associate resource consumer groups with
particular plans and allocate resources among resource consumer groups.

339
Slide 341

Resource Consumer Groups

Resource consumer groups are groups of users, or


sessions, that are grouped together based on their
processing needs.

340
Slide 342

Understanding Resource Plans

Resource plans specify the resource consumer groups


belonging to the plan and contain directives for how
resources are to be allocated among these groups

341
Slide 343

Resource Plan Directives

Specify how resources are allocated to resource


consumer groups. The Database Resource Manager
provides several means of allocating resources.
CPU Method
Active Session Pool with Queuing
Degree of Parallelism Limit
Canceling SQL and Terminating Sessions
Execution Time Limit
Undo Pool
Idle Time Limit

342

CPU Method
This method enables you to specify how CPU resources are to be allocated among consumer
groups or subplans. Multiple levels of CPU resource allocation (up to eight levels) provide a
means of prioritizing CPU usage within a plan schema.
When there is only one level, unused allocation by any consumer group or subplan can be used
by other consumer groups or subplans in the level.
Active Session Pool with Queuing
You can control the maximum number of concurrently active sessions allowed within a
consumer group. This maximum designates the active session pool. When a session cannot be
initiated because the pool is full, the session is placed into a queue. When an active session
completes, the first session in the queue can then be scheduled for execution. You can also
specify a timeout period after which a job in the execution queue (waiting for execution) will
timeout, causing it to terminate with an error.
An entire parallel execution session is counted as one active session.
Degree of Parallelism Limit
Specifying a parallel degree limit enables you to control the maximum degree of parallelism for
any operation within a consumer group.
Automatic Consumer Group Switching
This method enables you to control resources by specifying criteria that, if met, causes the
automatic switching of sessions to another consumer group. The criteria used to determine
switching are:
Switch group: specifies the consumer group to which this session is switched if the other
(following) criteria are met
Switch time: specifies the length of time that a session can execute before it is switched to
another consumer group
Switch time in call: specifies the length of time that a session can execute before it is switched to
another consumer group. Once the top call finishes, the session is restored to its original
consumer group.
Use estimate: specifies whether the database is to use its own estimate of how long an operation
will execute
The Database Resource Manager switches a running session to switch group if the session is
active for more than switch time seconds. Active means that the session is running and
consuming resources, not waiting idly for user input or waiting for CPU cycles. The session is
allowed to continue running, even if the active session pool for the new group is full. Under
these conditions a consumer group can have more sessions running than specified by its active
session pool. Once the session finishes its operation and becomes idle, it is switched back to its
original group.
If use estimate is set to TRUE, the Database Resource Manager uses a predicted estimate of how
long the operation will take to complete. If the database estimate is longer than the value
specified as the switch time, then the database switches the session before execution starts. If this
parameter is not set, the operation starts normally and only switches groups when other switch
criteria are met.
Switch time in call is useful for three-tier applications where the middle tier server is using
session pooling. At the end of every top call, a session is switched back to its original consumer
group--that is, the group it would be in had it just logged in. A top call in PL/SQL is an entire
PL/SQL block treated as one call. A top call in SQL is an individual SQL statement issued
separately by the client, treated as one call.
You cannot specify both switch time in call and switch time.
Canceling SQL and Terminating Sessions
You can also specify directives to cancel long-running SQL queries or to terminate long-running
sessions. You specify this by setting CANCEL_SQL or KILL_SESSION as the switch group.
Execution Time Limit
You can specify a maximum execution time allowed for an operation. If the database estimates
that an operation will run longer than the specified maximum execution time, the operation is
terminated with an error. This error can be trapped and the operation rescheduled.
Undo Pool
You can specify an undo pool for each consumer group. An undo pool controls the amount of
total undo that can be generated by a consumer group. When the total undo generated by a
consumer group exceeds its undo limit, the current DML statement generating the undo is
terminated. No other members of the consumer group can perform further data manipulation
until undo space is freed from the pool.
Idle Time Limit
You can specify an amount of time that a session can be idle, after which it will be terminated.
You can further restrict such termination to only sessions that are blocking other sessions.
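Several of these allocation methods map directly to parameters of DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE. The sketch below combines a few of them in one directive; the plan and group names are illustrative, and the call must be made inside a pending area:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                     => 'PROD_PLAN',   -- illustrative plan name
    group_or_subplan         => 'DSS',         -- illustrative consumer group
    comment                  => 'Limits for DSS sessions',
    cpu_p1                   => 30,            -- CPU method: level-1 allocation
    active_sess_pool_p1      => 10,            -- active session pool size
    queueing_p1              => 60,            -- queue timeout in seconds
    parallel_degree_limit_p1 => 4,             -- degree of parallelism limit
    max_idle_time            => 600);          -- idle time limit in seconds
END;
/
```

Parameters that are omitted simply take their defaults, so a directive can use as few or as many of the methods as needed.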
Slide 344

Creating a Simple Resource Plan

Can be created using the CREATE_SIMPLE_PLAN


procedure
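A sketch of the call follows, based on the documented CREATE_SIMPLE_PLAN signature; the plan and group names are illustrative. The procedure creates the plan, the named consumer groups, and the CPU directives in a single step, without requiring an explicit pending area:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(
    simple_plan     => 'SIMPLE_PLAN1',                 -- illustrative names
    consumer_group1 => 'MYGROUP1', group1_percent => 80,
    consumer_group2 => 'MYGROUP2', group2_percent => 20);
END;
/
```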

343
Slide 345

Administering the Database Resource Manager

Must have the system privilege


ADMINISTER_RESOURCE_MANAGER to administer
the Database Resource Manager.

EXEC DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE -
(GRANTEE_NAME => 'scott', PRIVILEGE_NAME =>
'ADMINISTER_RESOURCE_MANAGER', - ADMIN_OPTION => FALSE);

344
Slide 346

DBMS_RESOURCE_MANAGER_PRIVS
Procedure                      Description

GRANT_SYSTEM_PRIVILEGE         Grants ADMINISTER_RESOURCE_MANAGER system privilege to
                               a user or role.

REVOKE_SYSTEM_PRIVILEGE        Revokes ADMINISTER_RESOURCE_MANAGER system privilege
                               from a user or role.

GRANT_SWITCH_CONSUMER_GROUP    Grants permission to a user, role, or PUBLIC to switch to a
                               specified resource consumer group.

REVOKE_SWITCH_CONSUMER_GROUP   Revokes permission for a user, role, or PUBLIC to switch to a
                               specified resource consumer group.

345
Slide 347

Database Resource Manager


Steps for using Database Resource Manager

1. Create a pending area. This is the work area where
you create and validate resource consumer groups,
resource plans, and plan directives.
2. Create resource consumer groups. These are
groups of users having similar resourcing needs.
3. Create resource plans. Resource plans are a
means of allocating resources by name.
4. Create resource plan directives. These associate
consumer groups and resources with plans.

346
Slide 348

Database Resource Manager


5. Validate the pending area. This process validates
the resource consumer groups, the resource plan, and
the plan directives.
6. Submit the pending area. This creates the resource
consumer groups, the resource plan, and the plan
directives and makes them active.
7. Assign users to consumer groups.
8. Specify the plan to be used by the instance.

347
Slide 349

Database Resource Manager


STEP1:- Creating Pending area
Before using the Database Resource Manager to allocate
resources, to modify an old plan, or to create a new plan, you
need to create what is called a pending area to validate changes
before their implementation.
Here's how you create the pending area:
SQL> EXECUTE
dbms_resource_manager.create_pending_area
You can clear the pending area by using the following
procedure:
SQL> EXECUTE
dbms_resource_manager.clear_pending_area

348

Before using the Database Resource Manager to allocate resources, to modify an old plan, or to create a new plan, you need to create what is called a
pending area to validate changes before their implementation. The pending area serves as a work area for your changes. All the resource plans
you'll create are stored in the data dictionary, and the pending area is the staging area where you work on them before they are implemented.
Slide 350

Database Resource Manager


STEP2:- Creating Resource Consumer Groups
SQL> EXECUTE
dbms_resource_manager.create_consumer_group -
(consumer_group => 'OLTP', comment => 'Group for
OLTP users');
SQL> EXECUTE
dbms_resource_manager.create_consumer_group -
(consumer_group => 'DSS', comment => 'Group for DSS
users');
DBA_RSRC_CONSUMER_GROUPS can be queried to find out which
consumer groups exist in the database:

SQL> SELECT consumer_group, status FROM
dba_rsrc_consumer_groups;

349
Slide 351

Database Resource Manager


STEP3:- Creating Resource Plan
SQL> EXECUTE DBMS_RESOURCE_MANAGER.CREATE_PLAN -
(PLAN => 'PROD_PLAN',
CPU_MTH => 'RATIO',
COMMENT => 'Plan for production hours');

350
Slide 352

Database Resource Manager


STEP4:- CREATING PLAN DIRECTIVES
SQL> EXECUTE
dbms_resource_manager.create_plan_directive -
(plan => 'PROD_PLAN',
group_or_subplan => 'OLTP',
comment => 'OLTP group',
cpu_p1 => 70);
SQL> EXECUTE
dbms_resource_manager.create_plan_directive -
(plan => 'PROD_PLAN',
group_or_subplan => 'DSS',
comment => 'DSS group',
cpu_p1 => 30);

351
Slide 353

Database Resource Manager

STEP5:- VALIDATING THE PENDING AREA


SQL> EXEC
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

352
Slide 354

Database Resource Manager


STEP6:- SUBMITTING THE PENDING AREA
By submitting the pending area, you actually create all the
necessary entities, such as resource consumer groups,
the resource plan and the plan directives and make them
active.

SQL> EXEC
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

Determining status of resource plans


SQL> SELECT plan, group_or_subplan,cpu_p1,cpu_p2,
status from dba_rsrc_plan_directives;

353
Slide 355

Database Resource Manager


STEP7:- Assigning Users to Consumer Groups
SQL> EXEC
dbms_resource_manager.set_initial_consumer_group -
('MIS_USER1','DSS');
SQL> EXEC
dbms_resource_manager.set_initial_consumer_group -
('OLTP_USER1','OLTP');

Verify resource consumer group Membership of users


SQL> SELECT username,
initial_rsrc_consumer_group from dba_users;

354
Slide 356

Database Resource Manager


STEP8:- Enabling the Database Resource Manager

SQL> ALTER SYSTEM SET
resource_manager_plan = 'PROD_PLAN';

SQL> SELECT * FROM v$rsrc_plan;

355
Slide 357

Database Resource Manager


Summary
Database Resource Manager gives the Oracle Database server more control
over the resource management decisions.
Resource Consumer Group, Resource Plan, Resource Allocation Method and
Resource Plan Directives are the elements of Database Resource Manager.
Resource consumer groups are groups of users, or sessions, that are grouped
together based on their processing needs.
Resource plans specify the resource consumer groups belonging to the plan
and contain directives for how resources are to be allocated among these
groups
Resource plan directives specify how resources are allocated to resource
consumer groups.
One must have the system privilege ADMINISTER_RESOURCE_MANAGER to
administer the Database Resource Manager.
The pending area is used to validate changes before their implementation.

356
Slide 358

Database Resource Manager


LAB

1) Create two resource consumer groups, OLTP_GRP and
DSS_GRP, as given in the slides
2) Assign users to both groups
3) Create a plan, PEAK_PLAN, which assigns OLTP_GRP
users 80% of CPU and DSS_GRP users 20% of
CPU using the CPU method of allocation

357
Slide 359

Detecting and Recovering From Datafile Block Corruption

358
Slide 360

Detecting and Recovering From Datafile Block Corruption

Block Corruption
A datafile block can be accessed, but the contents within the
block are invalid or inconsistent.
The database remains available.

Causes
Faulty hardware or software in the I/O stack including
The file system
Volume Manager
Device Manager
Host Bus Adapter
Storage Controller
Disk Drive

359
Slide 361

Detecting and Recovering From Datafile Block Corruption

Detecting Datafile Block Corruption

A data fault is detected when it is recognized


by the user
administrator
RMAN backup
or application because it has affected the availability of the
application.

360
Slide 362

Detecting and Recovering From Datafile Block Corruption

Examples of Detecting Block Corruption

A single corrupt data block in a user table that cannot be read by the
application because of a bad spot on the physical disk
A database that automatically shuts down because of the invalid
blocks of a datafile in the SYSTEM tablespace caused by a failing
disk controller
Regular monitoring of application logs, the alert log, and Oracle
trace files for errors such as ORA-1578 and ORA-1110
ORA-01578: ORACLE data block corrupted (file # 4, block # 26)
ORA-01110: data file 4: '/u01/oradata/objrs/obj_corr.dbf'

361
Slide 363

Detecting and Recovering From Datafile Block Corruption

Oracle Tools for Detecting Datafile Block Corruption

RMAN BACKUP or RESTORE command with VALIDATE option


DBVERIFY utility (dbv)
ANALYZE SQL statement with VALIDATE STRUCTURE option
DBMS_REPAIR PL/SQL package
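For example, the DBVERIFY utility can be pointed at the datafile named in an ORA-01110 message; the path and block size below are illustrative, and the blocksize value must match the tablespace block size:

```shell
# Verify a datafile for block corruption; path, blocksize, and logfile are illustrative
dbv file=/u01/oradata/objrs/obj_corr.dbf blocksize=8192 logfile=dbv_obj.log
```

The log file reports the total pages examined and the number of pages marked corrupt.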

362
Slide 364

Detecting and Recovering From Datafile Block Corruption

Steps to Recovering From Datafile Block Corruption

1. Determine the Extent of the Corruption Problem


2. Replace or Move Away From Faulty Hardware
3. Determine Which Objects Are Affected
4. Decide Which Recovery Method to Use

363
Slide 365

Detecting and Recovering From Datafile Block Corruption

Methods Determine the Extent of the Corruption Problem


Gather details from error messages
Gather the file number, file name, and block number from the error
messages. For example:
ORA-01578: ORACLE data block corrupted (file # 22, block # 12698)
ORA-01110: data file 22: '/oradata/SALES/users01.dbf'
The file number is 22, the block number is 12698, and the file name is
/oradata/SALES/users01.dbf.
Check log files for additional error messages
Record additional error messages that appear in the alert log, Oracle trace
files, or application logs. Note that log files may be distributed across the
data server, middle tier, and client machines.
Use Oracle utilities to check for additional corruption

364
Slide 366

Detecting and Recovering From Datafile Block Corruption

Replace or Move Away From Faulty Hardware

Repair or replace the faulty hardware, or make space available on a


separate disk sub-system before proceeding with a recovery.

In a Data Guard environment, a switchover can be performed to bring up
the application and restore services quickly while the corruption problem
is handled offline.

365
Slide 367

Detecting and Recovering From Datafile Block Corruption

Determine Which Objects Are Affected

The following query can be used to find out which objects are affected after
getting the file id and block id

SELECT tablespace_name, partition_name, segment_type,


owner, segment_name
FROM dba_extents
WHERE file_id = fid
AND bid BETWEEN block_id AND block_id + blocks - 1;
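Here fid and bid are placeholders for the file and block numbers taken from the error messages. Substituting the values from the earlier ORA-01578 example (file 22, block 12698) gives:

```sql
SELECT tablespace_name, partition_name, segment_type,
       owner, segment_name
  FROM dba_extents
 WHERE file_id = 22
   AND 12698 BETWEEN block_id AND block_id + blocks - 1;
```

The row returned identifies the segment (and, for a partitioned object, the partition) that owns the corrupt block.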

366
Slide 368

Detecting and Recovering From Datafile Block Corruption

Decide Which Recovery Method to Use


The proper recovery method to use depends on the following criteria

Object affected
Data dictionary or UNDO segment:
Temporary segment
Application segment
Impact to application
Cost of local recovery
Extent of corruption

367

Object affected: The recovery actions available depend on which objects are affected by the
corruption. Possible values are:
Data dictionary or UNDO segment: The object affected is owned by SYS or is part of
UNDO tablespace
Temporary segment: Corruptions within a temporary segment or object within a
temporary tablespace do not affect permanent objects
Application segment: Table, index, or cluster used by the application
Impact to application
An object may be critical for the application to function. This includes objects that are
critical for the performance and usability of the application. It could also be a history,
reporting, or logging table, which may not be as critical. It could also be an object that is
no longer in use or a temporary segment.
This criterion only applies to a Data Guard environment and should be used to decide
between recovering the affected object locally and using Data Guard failover. Possible
values are:
High: Object is critical to the application such that availability or performance
suffers significantly
Low: Object is not critical and has little or no impact to the application
Cost of local recovery
This criterion only applies to a Data Guard environment and should be used to decide
between recovering the affected object locally and using Data Guard failover. This is not
a business cost which is assumed to be implicit in deciding how critical an object is to the
application but cost in terms of feasibility of recovery, resources required and their
impact on performance and total time taken.
Cost of local recovery should include the time to restore and recover the object from a valid
source; the time to recover other dependent objects like indexes, constraints, and related tables
and its indexes and constraints; availability of resources like disk space, data or index tablespace,
temporary tablespace; and impact on performance and functionality of current normal
application functions due to absence of the corrupt object.
Extent of corruption
Corruption may be localized so that it affects a known number of blocks within one or a
few objects, or it may be widespread so that it affects a large portion of an object.
Slide 369

Detecting and Recovering From Datafile Block Corruption

Object Affected                    Extent of Problem       Action

Data dictionary or UNDO            N/A                     Use RMAN datafile media
segment                                                    recovery

Application segment (user          Widespread or unknown   Use RMAN datafile media
table, index, cluster)                                     recovery, or re-create the
                                                           objects manually

Application segment (user          Localized               Use RMAN block media
table, index, cluster)                                     recovery, or re-create the
                                                           objects manually

TEMPORARY segment or               N/A                     No impact to permanent
temporary table                                            objects. Re-create the
                                                           temporary tablespace if
                                                           required.

368
Slide 370

Detecting and Recovering From Datafile Block Corruption

Limitations and Restrictions

DBMS_REPAIR procedures have the following limitations:


Tables with LOB datatypes, nested tables, and varrays are supported,
but the out-of-line columns (column data stored outside the main row, such as LOB segments) are ignored.
Clusters are supported in the SKIP_CORRUPT_BLOCKS and
REBUILD_FREELISTS procedures, but not in the CHECK_OBJECT
procedure.
Index-organized tables and LOB indexes are not supported.
The DUMP_ORPHAN_KEYS procedure does not operate on bitmap
indexes or function-based indexes.
The DUMP_ORPHAN_KEYS procedure processes keys that are no
more than 3,950 bytes long.

369
Slide 371

Detecting and Recovering From Datafile Block Corruption

About the DBMS_REPAIR Package


Procedure Name Description
ADMIN_TABLES Provides administrative functions (create, drop, purge) for repair or orphan
key tables. Note: These tables are always created in the SYS schema.

CHECK_OBJECT Detects and reports corruptions in a table or index

DUMP_ORPHAN_KEYS Reports on index entries that point to rows in corrupt data blocks

FIX_CORRUPT_BLOCKS Marks blocks as software corrupt that have been previously identified as
corrupt by the CHECK_OBJECT procedure

REBUILD_FREELISTS Rebuilds the free lists of the object

SEGMENT_FIX_STATUS Provides the capability to fix the corrupted state of a bitmap entry when
segment space management is AUTO

SKIP_CORRUPT_BLOCKS When used, ignores blocks marked corrupt during table and index scans. If
not used, you get error ORA-1578 when encountering blocks marked
corrupt.

370
Slide 372

Detecting and Recovering From Datafile Block Corruption

Using the DBMS_REPAIR Package


The following approach is recommended when
considering DBMS_REPAIR for addressing data block
corruption:
Task 1: Detect and Report Corruptions
Task 2: Evaluate the Costs and Benefits of Using
DBMS_REPAIR
Task 3: Make Objects Usable
Task 4: Repair Corruptions and Rebuild Lost Data
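Task 1 can be sketched as follows; the schema and table names are illustrative, and as noted earlier the repair table is always created in the SYS schema:

```sql
-- Create the repair table that CHECK_OBJECT populates
BEGIN
  DBMS_REPAIR.ADMIN_TABLES(
    table_name => 'REPAIR_TABLE',
    table_type => DBMS_REPAIR.REPAIR_TABLE,
    action     => DBMS_REPAIR.CREATE_ACTION);
END;
/
-- Check one table and report the number of corrupt blocks found
SET SERVEROUTPUT ON
DECLARE
  num_corrupt INT;
BEGIN
  DBMS_REPAIR.CHECK_OBJECT(
    schema_name       => 'HR',         -- illustrative schema
    object_name       => 'EMPLOYEES',  -- illustrative table
    repair_table_name => 'REPAIR_TABLE',
    corrupt_count     => num_corrupt);
  DBMS_OUTPUT.PUT_LINE('Corrupt blocks found: ' || num_corrupt);
END;
/
```

The rows written to REPAIR_TABLE describe each corruption and can then be used to weigh the costs of Task 2 before running FIX_CORRUPT_BLOCKS.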

371

Instructor Note

Demonstrate all the four tasks mentioned in the slide using DBMS_REPAIR package
Slide 373

Detecting and Recovering From Datafile Block Corruption

Summary

Understand what is meant by Block Corruption


Causes of Block Corruption
Detecting Datafile Block Corruption
Oracle Tools for Detecting Datafile Block Corruption
Steps to Recovering From Datafile Block Corruption
Methods to determine the extent of the corruption problem
Decide Which Recovery Method to Use
Use DBMS_REPAIR package to repair corrupt blocks

372
Slide 374

Detecting and Recovering From Datafile Block Corruption

LAB
1) Use dbv command to verify a datafile and study the
output it gives.
2) Use DBMS_REPAIR package to Detect Block Corruption
3) Use DBMS_REPAIR package to Fix Corrupt Blocks
4) Use DBMS_REPAIR package to find index entries
pointing to corrupt data blocks
5) Use DBMS_REPAIR package to skip corrupt blocks

373
Slide 375

Oracle Backup and Recovery Concepts


374
Slide 376

Oracle Backup and Recovery Concepts

A backup is a copy of data. This copy can include important parts of


the database such as the control file and datafiles. A backup is a
safeguard against unexpected data loss and application errors. If
you lose the original data, then you can reconstruct it by using a
backup.
Backups are divided into
Physical Backups
Logical Backups

375
Slide 377

Oracle Backup and Recovery Concepts

Physical backups, which are the primary concern in a


backup and recovery strategy, are copies of physical
database files. You can make physical backups with
either the Recovery Manager (RMAN) utility or operating
system utilities.

Examples
Copy of physical datafiles, controlfiles and logfiles,
when the database is shutdown (Also called cold
backup)
RMAN backup

376
Slide 378

Oracle Backup and Recovery Concepts

Logical backups contain logical data (for example,


tables and stored procedures) extracted with the Oracle
Export utility and stored in a binary file. You can use
logical backups to supplement physical backups

Examples
Backup of a particular schema or an object using
tools like export utility, datapump

377
Slide 379

Oracle Recovery: Basic Concepts


To restore a physical backup of a datafile or control file
is to reconstruct it and make it available to the Oracle
database server. To recover a restored datafile is to
update it by applying archived redo logs and online
redo logs, that is, records of changes made to the
database after the backup was taken.

After the necessary files are restored, media recovery


must be initiated by the user. Media recovery can use
both archived redo logs and online redo logs to recover
the datafiles

378
Slide 380

Recovery concepts
Recovery can be performed using SQL*Plus or RMAN. With
SQL*Plus, the RECOVER command is used. With RMAN, the
RECOVER command is run from the RMAN prompt.
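For example, assuming the required backups and archived logs are already in place:

```
SQL> RECOVER DATABASE;

RMAN> RECOVER DATABASE;
```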

379
Slide 381

Cold Backup and Hot Backup


A cold backup is a backup of the database taken when the
database is not running; no users are logged on, so no
activity is going on. This is also known as an offline
backup.


A hot backup is taken when the database is up and
running. A hot backup is used when the database is
mission critical and cannot be shut down for taking a
backup. This type of backup is also called an online backup.

380
Slide 382

Overview of RMAN Backups


Slide 383

Objectives
After completing this lesson, you should be able to
do the following:
Identify types of RMAN specific backups
Use the RMAN BACKUP command to create backup
sets
Back up the control file
Back up the archived redo log files
Use the RMAN COPY command to create image
copies

382
Slide 384

RMAN Backup Concepts


Recovery Manager backup is a server-managed backup
Recovery Manager uses Oracle server sessions for backup operations
Can back up entire database, all datafiles in a tablespace, selected datafiles, control
files, archived redo log files
Closed database backup
Target database must be mounted (not open)
Includes datafiles, control files, archived redo log files
Open database backup
Tablespaces should not be put in backup mode
Includes datafiles, control files, archived redo log files

383

Types of Recovery Manager Backups


Recovery Manager provides functionality to back up:
The entire database, every datafile in a tablespace, or a single datafile
The control file
All or selected archived logs
Note: The online redo log files are not backed up when using Recovery Manager.
Closed Database Backups
A closed database backup is defined as a backup of the database while it is closed (offline).
This is the same as the consistent database backup. If you are performing a closed backup, the
target database must not be open. If you are using a recovery catalog, the recovery catalog
database must be open.
Open Database Backups
An open database backup is defined as a backup of any portion of the database while it is open
(online). Recovery Manager uses server processes to make copies of datafiles, control files, or
archive logs. When using Recovery Manager, do not put tablespaces in backup mode using the
ALTER TABLESPACE ... BEGIN BACKUP command. RMAN reads a block until a
consistent read is obtained.
Slide 385

Recovery Manager Backups


Image copy
[Diagram: datafile 3, the control file, and an archived log file each copied one-to-one as an image copy]

Backup set
[Diagram: datafiles 1-4 and the control file grouped into backup sets 1, 2, and 3]

384

Recovery Manager Backups


You can make the following types of backups with Recovery Manager:
Image copies are copies of a datafile, control file, or archived redo log file. A copy can be
made using Recovery Manager or an operating system utility. The image copy of a datafile
consists of all the blocks of the datafile, including the unused blocks. An image copy can
include only one file, and a single copy operation cannot be multiplexed.
Backup sets can include one or more datafiles, the control file or archived redo log files. The
backup set can contain one or more files. You can make a backup set in two distinct ways:
Full backup: In a full backup, you back up one or more files. In a full backup, all blocks
containing data for the files specified are backed up.
Incremental backup: An incremental backup is a backup of datafiles that include only the
blocks that have changed since the last incremental backup. Incremental backups
require a base-level (or incremental level 0) backup, which backs up all blocks
containing data for the files specified. Incremental level 0 and full backups copy all
blocks in datafiles, but full backups cannot be used in an incremental backup strategy.
Note: You can configure automatic control file backup so that the control file is backed up
when you issue a BACKUP or COPY command.
Slide 386

Backup Sets

[Diagram: datafiles 1-4 and the control file distributed across backup sets 1, 2, and 3]

385

Backup Sets
A backup set consists of one or more physical files stored in an RMAN-specific format, on
either disk or tape. You can make a backup set containing datafiles, control files, and archived
redo log files. You can also back up a backup set. Backup sets can be of two types:
Datafile: Can contain datafiles and control files, but not archived logs
Archived log: Contains archived logs, not datafiles or control files
Note: Backup sets may need to be restored by Recovery Manager before recovery can be
performed, unlike image copies which are on disks.
Control Files in Datafile Backup Sets
Each file in a backup set must have the same Oracle block size. When a control file is included,
it is written in the last datafile backup set. A control file can be included in a backup set either:
Explicitly by using the INCLUDE CONTROL FILE syntax
Implicitly by backing up file 1 (the system datafile)
The RMAN BACKUP command is used to back up datafiles, archived redo log files, and
control files. The BACKUP command backs up the files into one or more backup sets on disk or
tape. You can make the backups when the database is open or closed. Backups can be full or
incremental backups.
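For example, the control file can be added to a backup set explicitly:

```
RMAN> BACKUP TABLESPACE users
2> INCLUDE CURRENT CONTROLFILE;
```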
Slide 387

Characteristics of Backup Sets


The BACKUP command creates backup sets.
Backup sets usually contain more than one file.
Backup sets can be written to a disk or tape.
A restore operation is required to extract files from a
backup set.
Datafile backup sets can be incremental or full.
Backup sets do not include never-used blocks.

386

Characteristics of Backup Sets


A backup set is a logical structure that has the following characteristics:
A backup set contains one or more physical files called backup pieces.
A backup set is created by the BACKUP command. The FILESPERSET parameter controls
the number of datafiles contained in a backup set.
A backup set can be written to disk or tape. Oracle provides one tape output by default for
most platforms, known as SBT (System Backup to Tape), which writes to a tape device
when you are using a media manager.
A restore operation must extract files from a backup set before recovery.
Archived redo log file backup sets cannot be incremental (they are full by default).
A backup set does not include data blocks that have never been used.
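As an illustration, the FILESPERSET parameter caps how many datafiles are placed in each backup set (the value 4 here is arbitrary):

```
RMAN> BACKUP DATABASE FILESPERSET 4;
```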
Slide 388

Backup Piece
A backup piece is a file in a backup set.
A backup piece can contain blocks from more than
one datafile.

[Diagram: backup set 1 (logical) holds datafiles 1, 4, and 5 across two backup pieces; backup set 2 (logical) holds datafiles 2, 3, and 9 in one piece; each set is written by a server process (channel) through the media management layer (MML)]

387

Backup Piece
A logical backup set usually only has one backup piece. A backup piece is a single physical file
that can contain one or more Oracle datafiles or archived logs.
For a large database, a backup set might exceed the maximum size for a single tape reel,
physical disk, or operating system file. The size of each backup set piece can therefore be
limited by using MAXPIECESIZE with the CONFIGURE CHANNEL or ALLOCATE CHANNEL
commands.
Slide 389

Backup Piece Size


Backup piece size can be limited as follows:

RMAN> RUN {
2> ALLOCATE CHANNEL t1 TYPE 'SBT' MAXPIECESIZE = 4G;
3> BACKUP DATABASE;
4> }

388

Backup Piece Size


You can use the following commands to restrict the size of a backup piece and generate more
than one piece per set when required:
ALLOCATE CHANNEL MAXPIECESIZE = integer
CONFIGURE CHANNEL MAXPIECESIZE = integer
Specify the size in bytes, kilobytes (K), megabytes (M), or gigabytes (G).
Example (from slide)
Scenario: The USER_DATA tablespace needs to be backed up to one tape drive. The
maximum file size for the tape drive is 4 GB.
Result: If the output file is less than 4 GB, only one backup piece will be written for the
backup set. If the output size is greater than 4 GB, more than one backup piece will be
written for the backup set. Each backup piece will have blocks from three files
interspersed.
Note: In Oracle8i, the following command would be used:
SET LIMIT CHANNEL t1 KBYTES 4194304;
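A persistent equivalent using CONFIGURE might look like this sketch:

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt MAXPIECESIZE = 4G;
```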
Slide 390

The BACKUP Command

RMAN> BACKUP TABLESPACE users;

[Diagram: datafiles 1-4 and the control file grouped into backup sets 1, 2, and 3]

389

The BACKUP Command


You can control the number of backup sets that Oracle produces as well as the number of input
files that Recovery Manager places into a single backup set. If any I/O errors are received
when reading files or writing backup pieces, the job is aborted.
When using the BACKUP command, you must do the following:
Mount or open the target database. Recovery Manager allows you to make an inconsistent
backup if the database is in ARCHIVELOG mode, but you must apply redo logs to make
the backups consistent for use in recovery operations.
Manually allocate a channel for execution of the BACKUP command if you are not using
automatic channel allocation.
Optionally, you can do the following:
Specify naming convention for backup pieces. If you do not specify the FORMAT parameter,
RMAN stores the backup pieces in a port-specific directory ($ORACLE_HOME/dbs on
UNIX). If you do not specify a file name format, RMAN uses %U by default.
Include the control file in the backup set by using the INCLUDE CURRENT CONTROLFILE
option.
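Putting these options together, a backup with an explicit naming convention might look like the following (the FORMAT string is illustrative):

```
RMAN> BACKUP DATABASE
2> FORMAT '/disk1/backup/db_%U';
```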
Slide 391

Multiplexed Backup Sets


Multiplex two or more datafiles into a backup set for tape
streaming.

[Diagram: with FILESPERSET = 3, blocks from datafiles 1, 2, and 3 are interleaved (1,2,3,1,2,3,...) into one backup set by a server process (channel) and streamed through the MML to tape]

390

RMAN Multiplexed Backup Sets


The technique of RMAN multiplexing is to simultaneously read files on disks and then
write them into the same backup piece. When more than one file is written to the same backup
file or piece, Recovery Manager automatically performs the allocation of files to channels,
multiplexes the files, and skips any unused blocks. With a sufficient number of files to back up
concurrently, high-performance sequential output devices (for example, fast tape drives) can be
streamed. This is important for backups that must compete with other online system resources.
It is the responsibility of the operator or storage subsystem to change the tape on the target
database where the tape drive is located. This process was designed for writing to tape but it
can also be used to write to disk.
Multiplexing is controlled by the following:
The FILESPERSET parameter on the BACKUP command
The MAXOPENFILES parameter of the ALLOCATE CHANNEL and CONFIGURE CHANNEL
commands
Example
The database contains three datafiles that will be multiplexed together into one physical file
(set) and stored on tape. The datafiles are multiplexed by writing n number of blocks from
datafile 1, then datafile 2, then datafile 3, then datafile 1, and so on until all files are backed up.
Slide 392

Parallelization of Backup Sets


Allocate multiple channels, optionally specify
filesperset, and include many files.

[Diagram: three server processes (channels) write three backup sets in parallel through the MML - set 1: datafiles 1, 4, 5; set 2: datafiles 2, 3, 9; set 3: datafiles 6, 7, 8]

391

Parallelization of Backup Sets


You can configure parallel backups by setting the PARALLELISM option of the CONFIGURE
command to greater than 1 or by manually allocating multiple channels. RMAN then
parallelizes its operation and writes multiple backup sets in parallel. The server sessions
divide the work of backing up the specified files.
Example
RMAN> run {
2> allocate channel c1 type sbt;
3> allocate channel c2 type sbt;
4> allocate channel c3 type sbt;
5> backup
6> incremental level = 0
7> format '/disk1/backup/df_%d_%s_%p.bak'
8> (datafile 1,4,5 channel c1 tag=DF1)
9> (datafile 2,3,9 channel c2 tag=DF2)
10> (datafile 6,7,8 channel c3 tag=DF3);
11> alter system archive log current;
12> }
Slide 393

Duplexed Backup Sets

[Diagram: datafiles 1 and 2 backed up into one backup set, written as two identical copies, BACKUP1 and BACKUP2]

392

Duplexed Backup Sets


You can create up to four identical copies of each backup piece by duplexing the backup set.
You can use the following commands to produce a duplexed backup set:
BACKUP COPIES
SET BACKUP COPIES
CONFIGURE BACKUP COPIES
RMAN does not produce multiple backup sets, but produces identical copies of each backup
piece in the set.
Example
This example shows how you can create two copies of the backup of datafiles 1 and 2:
RMAN> BACKUP COPIES 2 DATAFILE 1, DATAFILE 2
2> FORMAT '/BACKUP1/%U','/BACKUP2/%U';
RMAN places the first copy of each backup piece in /BACKUP1 and the second in
/BACKUP2. RMAN produces one backup set with a unique key and generates two identical
copies of each backup piece in the set.
Slide 394

Backups of Backup Sets

[Diagram: a backup set containing datafiles 1 and 2 backed up into another backup set]

393

Backing Up Backup Sets


You can back up a backup set as an additional way to manage your backups. You can use the
RMAN BACKUP BACKUPSET command for disk-to-disk and disk-to-tape backups. This
allows you to make an additional backup on tape or to move your backup from disk to tape.
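For example, to copy all existing backup sets to tape (assuming an sbt channel is configured):

```
RMAN> BACKUP DEVICE TYPE sbt BACKUPSET ALL;
```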
Slide 395

Archived Redo Log File Backups


Online redo log file switch is automatic.
Archived log failover is performed.

394

Archived Redo Log File Backups


At the beginning of every BACKUP ... ARCHIVELOG command that does not include an
UNTIL clause or SEQUENCE parameter, RMAN attempts to automatically switch out of and
archive the current online redo log.
In Oracle9i, RMAN performs archived log failover. If any corrupt blocks are detected in an
archived redo log file, RMAN searches other archiving destinations for a file without corrupt
blocks.
Slide 396

Archived Redo Log Backup Sets


Include only archived redo log files
Are always full backups

RMAN> BACKUP
2> FORMAT '/disk1/backup/ar_%t_%s_%p'
3> ARCHIVELOG ALL DELETE ALL INPUT;

395

Archived Redo Log File Backup Sets


A common problem experienced by DBAs is not knowing whether an archived log has been
completely copied out to the archive log destination before attempting to back it up. Recovery
Manager has access to control file or recovery catalog information, so it knows which logs
have been archived and can be restored during recovery.
You can back up archived redo log files with the BACKUP ARCHIVELOG command or
include them when backing up datafiles and control files with the BACKUP PLUS
ARCHIVELOG command.
Characteristics of Archived Log Backup Sets
Can include only archived logs, not datafiles or control files.
Are always full backups. (There is no logic in performing incremental backups because you
can specify the range of archived logs to back up.)
Example (from slide)
This example backs up all archived redo logs to a backup set, where each backup piece
contains three archived logs. After the archived logs are copied, they are deleted from disk and
marked as deleted in the V$ARCHIVED_LOG view.
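For example, to back up the database together with all archived redo logs in one command:

```
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```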
Slide 397

Backup Constraints
The database must be mounted or open.
Online redo log backups are not supported.
Only clean backups are usable in NOARCHIVELOG
mode.
Only current datafile backups are usable in
ARCHIVELOG mode.

396

Backup Constraints
When performing a backup using Recovery Manager, you must be aware of the following:
The target database must be mounted for Recovery Manager to connect.
Backups of online redo logs are not supported.
If the target database is in NOARCHIVELOG mode, only clean tablespace and datafile
backups can be taken (that is, backups of offline normal or read only tablespaces).
Database backups can be taken only if the database has first been shut down cleanly and
restarted in Mount mode.
If the target database is in ARCHIVELOG mode, only current datafiles can be backed up
(restored datafiles are made current by recovery).
If a recovery catalog is used, the recovery catalog database must be open.
Slide 398

Image Copies

[Diagram: datafile 3 copied to an image copy of datafile 3; an archived log file copied to an image copy of the archived log]

397

Image Copies
An image copy contains a single datafile, archived redo log file, or control file. An image copy
can be created with the RMAN COPY command or an operating system command.
When you create the image copy with the RMAN COPY command, the server session validates
the blocks in the file and records the copy in the control file.
Slide 399

Characteristics of an Image Copy


Can be written only to a disk
Can be used for recovery immediately; does not need
to be restored
Is a physical copy of a single datafile, archived log, or
control file
Is most like an operating system backup (contains all
blocks)
Can be part of an incremental strategy

398

Characteristics of an Image Copy


An image copy has the following characteristics:
An image copy can be written only to disk. Hence additional disk space may be required to
retain the copy on the disk. When large files are being considered, copying may take a long
time, but restoration time is reduced considerably because the copy is available on the disk.
If files are stored on disk, they can be used immediately (that is, they do not need to be
restored from other media). This provides a fast method for recovery using the SWITCH
command in Recovery Manager, which is equivalent to the ALTER DATABASE RENAME
FILE SQL statement.
In an image copy all blocks are copied, whether they contain data or not, because an Oracle
server process copies the file and performs additional actions such as checking for corrupt
blocks and registering the copy in the control file. To speed up the process of copying, you
can use the NOCHECKSUM parameter.
Image copy can be part of a full or incremental level 0 backup, because a file copy always
includes all blocks. Use the level 0 option if the copy will be used in conjunction with an
incremental backup set.
Image copy can be designated as a level 0 backup in incremental backup strategy, but no
other levels are possible with image copy.
Slide 400

Image Copy Example

[Diagram: datafile 3 copied to an image copy of datafile 3; an archived log file copied to an image copy of the archived log]

RMAN> BACKUP AS COPY DATABASE;

RMAN> BACKUP AS COPY TABLESPACE USERS;

RMAN> BACKUP AS COPY DATAFILE
2> '/u01/app/oracle/oradata/finance/users01.dbf';

399

Image Copies
The RMAN COPY command creates an image copy of a file. The output file is always written
to disk. You can copy datafiles, archived redo log files, or control files. In many cases, copying
datafiles is more beneficial than backing them up, because the output is suitable for use without
any additional processing.
If you want to make a whole database backup with the COPY command, you must copy each
datafile with a separate COPY statement. You can also make a copy of the control file and
archived redo log files.
The example in the slide assumes that you are using automatic channel allocation. If you are
manually allocating channels, include the COPY command within the RUN statement as
follows:
RMAN > RUN {
2> ALLOCATE CHANNEL c1 type disk;
3> COPY
4> DATAFILE '/ORADATA/users_01_db01.dbf' to
5> '/BACKUP/users01.dbf' tag=DF3,
6> ARCHIVELOG 'arch_1060.arc' to
7> 'arch_1060.bak';}
Slide 401

The COPY Command

RMAN> COPY
2> DATAFILE 3 TO '/BACKUP/file3.dbf',
3> DATAFILE 1 TO '/BACKUP/file1.dbf';

[Diagram: from a database containing datafiles 1-3, control files, and redo log files, image copies of datafile 1 and datafile 3 are written to disk]

400

The COPY Command


During the copy operation, an Oracle server process computes a checksum for each block to
detect corruption. RMAN verifies the checksum when restoring the copy. This is referred to as
physical corruption detection. You can use the NOCHECKSUM option to suppress the checksum
operation and speed up the copy process. If the database is already maintaining block
checksums, then this option has no effect.
You can use the CHECK LOGICAL option to test data and index blocks that pass physical
corruption checks for logical corruption, for example, corruption of a row piece or index entry.
If logical corruption is detected, the block is logged in the alert log and trace file of the server
process.
You can set a threshold for logical and physical corruption with the MAXCORRUPT parameter.
As long as the sum of physical and logical corruptions that is detected for a file remain below
this value, the RMAN command completes and Oracle populates the view
V$COPY_CORRUPTION with corrupt block ranges. If MAXCORRUPT is exceeded, then the
command terminates without populating the views.
Slide 402

Image Copy Parallelization


One COPY command with many channels

RMAN> CONFIGURE DEVICE TYPE disk PARALLELISM 4;

RMAN> COPY # 3 files copied in parallel
2> datafile 1 TO '/BACKUP/df1.dbf',
3> datafile 2 TO '/BACKUP/df2.dbf',
4> datafile 3 TO '/BACKUP/df3.dbf';
RMAN> COPY # Second copy command
2> datafile 4 TO '/BACKUP/df4.dbf';

401

Image Copy Parallelization


By default, Recovery Manager executes each COPY command serially. However, you can
parallelize the copy operation by:
Using the CONFIGURE DEVICE TYPE command with the PARALLELISM option
Allocating multiple channels (required in Oracle8i)
Specifying one COPY command for multiple files
You can allocate the channels manually as shown in the slide or by automatic channel
configuration.
In the example, four channels are configured, but only three will be used. This is how the
command is executed:
1. Four channels are configured for writing to disk.
2. The first COPY command uses three channels (server processes), one for writing each
datafile to disk.
3. The second COPY command does not execute until the previous COPY command has
finished execution. It will use only one channel.
Note: When you use a high degree of parallelism, more machine resources are used, but the
backup operation can be completed faster.
Slide 403

Copying the Whole Database


Mount the database for a whole consistent backup.
Use the REPORT SCHEMA command to list the files.
Use the COPY command or make an image copy of
each datafile.
Use the LIST COPY command to verify the copies.

402

How to Make an Image Copy of the Whole Database


To make an image copy of all the datafiles using Recovery Manager, follow this procedure:
1. Connect to RMAN and start up in mount mode:
RMAN> STARTUP MOUNT
2. Obtain a list of datafiles of the target database:
RMAN> REPORT SCHEMA;
3. Use the COPY command or script to create the copy of all datafiles listed above:
RMAN> COPY datafile 1 TO '/BACKUP/df1.cpy',
datafile 2 TO '/BACKUP/df2.cpy', ...;
4. Use the LIST COPY command to verify the copy:
RMAN> LIST COPY;
You can include the control file in the copy with the CURRENT CONTROLFILE command. In
addition, if CONFIGURE CONTROLFILE AUTOBACKUP is ON, RMAN automatically backs
up the control file after the COPY command is issued.
Slide 404

Making Incremental Backups


Full backups contain all datafile blocks.
Differential incremental backups contain only modified
blocks from level n or lower.
Cumulative incremental backups contain only modified
blocks from level n-1 or lower.

403

RMAN Backup Types


Full Backups
A full backup differs from a whole database backup. A whole backup is composed of all of the
datafiles and control file of the target database, whereas a full backup may contain one or more
of the datafiles, the control file, or archived redo log files.
When performing a full backup, an Oracle server process reads the entire file and copies all
blocks into the backup set, skipping only datafile blocks that have never been used. The server
session does not skip blocks when backing up archived redo logs or control files.
A full backup is not a part of the incremental backup strategy. You can create and restore full
backups of datafiles, datafile copies, tablespaces, database, control files, archive logs, and
archive log copies. Note that backup sets containing archived redo logs are always full
backups.
Slide 405

Differential Incremental Backup Example


A level n backup contains all blocks that have changed since
the most recent backup at level n or lower.

Day:   Sun Mon Tue Wed Thu Fri Sat Sun
Level:  0   2   2   1   2   2   2   0

404

Differential Incremental Backup Example


You are maintaining a 100 GB database, which is continuously growing. Based on existing
hardware, you determine that open backups of the entire database take 4 hours. The database is
online 24 hours a day, 7 days a week and the backups are consuming too much of the system
resources during this period of time. Level 0 backups cannot be performed more than once a
week, but fast recovery in case of failure is required. You therefore decide on the following
backup and recovery strategy:
A level 0 backup will be performed each week on the day with the least activity. You
determine this day to be Sunday.
RMAN> BACKUP INCREMENTAL level 0 database;
Incremental level 2 backups will be performed every day except Wednesday, when a level 1
backup is taken. In this way, backups will be fast because only changed blocks from the
previous day will be copied:
RMAN> BACKUP INCREMENTAL level 2 database;
Slide 406

Cumulative Incremental Backup Example


A level n backup contains all blocks changed since the
previous backup at level n-1 or lower.

Day:   Sun Mon Tue Wed Thu Fri Sat Sun
Level:  0   2   2C  1   2   2C  2C  0   (C = cumulative)

405

Cumulative Incremental Backups Example


Cumulative incremental backups have the following characteristics:
A cumulative incremental level n backup (where n > 0) copies all changed blocks since the
previous backup at level n-1 or lower.
A cumulative incremental backup backs up blocks previously backed up at the same level.
Therefore, they may take longer, write out more blocks, and produce larger backup files
than noncumulative backups.
Cumulative incremental backups are provided for recovery speed, because fewer backups
must be applied at each level when recovering.
Example
Cumulative incremental backups duplicate changes already copied by the previous incremental
backup at the same level. Therefore, if an incremental level 2 backup is taken, then the
following cumulative level 2 backs up all newly modified blocks plus those backed up by the
incremental level 2. This means that only one incremental backup of the same level is needed
to completely recover.
RMAN> BACKUP INCREMENTAL level 2 cumulative database;
Slide 407

Backup in NOARCHIVELOG Mode


1. Ensure sufficient space for the backup.
2. Shut down using the NORMAL or IMMEDIATE clause.
3. Mount the database.
4. Allocate multiple channels if not using automatic.
5. Run the BACKUP command.
6. Verify that the backup is finished and cataloged.
7. Open the database for normal use.

RMAN> BACKUP DATABASE FILESPERSET 3;

406

How to Perform a Multiplexed Backup in NOARCHIVELOG Mode


1. Ensure that the destination directory where you want to store the backup is available
and has sufficient space.
2. Shut down the database cleanly using the NORMAL, IMMEDIATE, or
TRANSACTIONAL clause.
3. Mount the database.
4. If you are not using automatic channel allocation, allocate multiple channels and use a
format string to multiplex channels to different disks.
5. Run the BACKUP command. Because the database is in NOARCHIVELOG mode, the
incremental backups are not applicable, so use the full backup option.
6. Verify that the backup is finished and cataloged.
7. Open the database for normal use.
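The steps above can be sketched as a single RMAN session:

```
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> BACKUP DATABASE FILESPERSET 3;
RMAN> ALTER DATABASE OPEN;
```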
Slide 408

RMAN Control File Autobackups


Use the CONFIGURE CONTROLFILE AUTOBACKUP
command to enable
When enabled, RMAN automatically performs a
control file autobackup after BACKUP or COPY
commands
Backup is given a default name

407

Control File Autobackups


If CONFIGURE CONTROLFILE AUTOBACKUP is ON, RMAN automatically performs a
control file autobackup in these situations:
After every BACKUP or COPY command issued at the RMAN prompt
Whenever a BACKUP or COPY command within a RUN block is followed by a command that
is neither BACKUP nor COPY
At the end of every RUN block if the last command in the block was either BACKUP or COPY
The control file autobackup occurs in addition to any backup or copy of the current control file
that has been performed during these commands.
By default, CONFIGURE CONTROLFILE AUTOBACKUP is set to OFF.
RMAN automatically backs up the current control file using the default format of %F. You can
change this format using the CONFIGURE CONTROLFILE AUTOBACKUP FORMAT and SET
CONTROLFILE AUTOBACKUP FORMAT commands.
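For example (the disk format string here is illustrative):

```
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP
2> FORMAT FOR DEVICE TYPE DISK TO '/disk1/backup/cf_%F';
```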
Slide 409

Tags for Backups and Image Copies


Logical name assigned to a backup set or image copy

[Diagram: backup set tagged month_full_backup contains datafiles 1-4; week_full_backup contains datafiles 3 and 4; Wednesday_1_backup contains datafile 1]

408

Tags for Backups and Image Copies


A tag is a meaningful name that you can assign to a backup set or image copy. The advantages
of user tags are as follows:
Tags provide a useful reference to a collection of file copies or a backup set.
Tags can be used in the LIST command to locate backed up files easily.
Tags can be used in the RESTORE and SWITCH commands.
The same tag can be used for multiple backup sets or file copies.
If a nonunique tag references more than one datafile, then Recovery Manager chooses the most
current available file.
Example
Each month, a full backup of datafiles 1, 2, 3, and 4 is performed. The tag in the control file
for this backup is month_full_backup, even though the physical filename generated is
df_DB00_863_1.dbf.
Each week, a full backup of datafiles 3 and 4 is performed. The tag name for this backup is
week_full_backup.
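For example, the weekly backup in this scenario could be tagged as follows:

```
RMAN> BACKUP TAG 'week_full_backup' DATAFILE 3, 4;
```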
Slide 410

RMAN Dynamic Views


V$ARCHIVED_LOG
V$BACKUP_CORRUPTION
V$COPY_CORRUPTION
V$BACKUP_DATAFILE
V$BACKUP_REDOLOG
V$BACKUP_SET
V$BACKUP_PIECE

409

RMAN Dynamic Views


You can use the following views to obtain RMAN information stored in the control file:
V$ARCHIVED_LOG shows which archives have been created, backed up, and cleared in the
database.
V$BACKUP_CORRUPTION shows which blocks have been found corrupt during a backup of
a backup set.
V$COPY_CORRUPTION shows which blocks have been found corrupt during an image
copy.
V$BACKUP_DATAFILE is useful for creating equal-sized backup sets by determining the
number of blocks in each datafile. It can also find the number of corrupt blocks for the
datafile.
V$BACKUP_REDOLOG shows archived logs stored in backup sets.
V$BACKUP_SET shows backup sets that have been created.
V$BACKUP_PIECE shows backup pieces created for backup sets.
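For example, a quick look at the backup sets recorded in the control file (column names assumed from the standard view definition):

```
SQL> SELECT recid, backup_type, incremental_level, pieces
  2  FROM v$backup_set;
```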
Slide 411

Monitoring RMAN Backups


Correlate server sessions with channels with the SET
COMMAND ID command.
Query V$PROCESS and V$SESSION to determine
which sessions correspond to which RMAN channels.
Query V$SESSION_LONGOPS to monitor the progress
of backups and copies.
Use an operating system utility to monitor the process
or threads.

410

How to Monitor the Copy Process


To correlate a process with a channel during a backup:
1. Start Recovery Manager and connect to the target database and, optionally, the
recovery catalog.
rman target / catalog rman/rman@rcat
2. Set the COMMAND ID parameter after allocating the channels and then copy the desired
object.
run {
allocate channel t1 type disk;
set command id to 'rman';
copy datafile 1 to '/u01/backup/df1.cpy';
release channel t1;}
3. Query the V$SESSION_LONGOPS view to get the status of the copy.
SELECT sid, serial#, context, sofar, totalwork,
round(sofar/totalwork*100,2) "% Complete"
FROM v$session_longops
WHERE opname LIKE 'RMAN:%'
AND opname NOT LIKE 'RMAN: aggregate%';
Slide 412

Miscellaneous RMAN Issues


Abnormal termination of a Recovery Manager job
Detecting physical and logical block corruption
Detecting a fractured block during open backups


Miscellaneous RMAN Issues


Abnormal Termination of Recovery Manager
Recovery Manager records only backup sets that have finished successfully in the control file.
If a Recovery Manager job terminates abnormally, incomplete files might exist in the operating
system. Recovery Manager will not use them, but you will need to remove them.
Detecting Corruption
Recovery Manager detects and can prohibit any attempt to perform operations that would result
in unusable backup files or corrupt restored datafiles.
By default, error checking for physical corruption is enabled. Information about corrupt
datafile blocks encountered during a backup is recorded in the control file and the alert log.
The server identifies corrupt datafile blocks, but they are still included in the backup. The
Oracle server records the address of the corrupt block and the type of corruption in the control
file. To view corrupt blocks from the control file, view either V$BACKUP_CORRUPTION for
backup sets or V$COPY_CORRUPTION for image copies.
RMAN tests data and index blocks for logical corruption and logs any errors in the alert.log
and server session trace file. By default, error checking for logical corruption is disabled.
Slide 413

Summary
In this lesson, you should have learned how to:
Determine what type of RMAN backups should be
taken
Make backups with the RMAN COPY and BACKUP
commands
Back up the control file
Back up the archived redo log files

Slide 414

Oracle DB Performance Monitoring and
analysis

Slide 415

Performance Monitoring Overview

Why Monitor Database Performance


- to ensure that an issue does not become a problem

How
- To diagnose performance problems correctly,
statistics must be available.
- Oracle generates different types of cumulative
statistics for the system, sessions, and individual SQL
statements.
- Oracle provides different performance monitoring
tools to effectively diagnose a problem

Slide 416

Performance Monitoring Overview

DB Statistics
- gives information on the type of load, internal and
external resources used by the database etc

Wait Event
- Any event that prevents an Oracle process from
proceeding to complete its task
E.g.,
Idle: wait events that signify the session is inactive, such
as SQL*Net message from client

Slide 417

Performance Monitoring Overview


- User I/O: waits for blocks to be read off a disk
- Commit: waits for redo log write confirmation after a
commit
- Application: lock waits caused by row-level locking or
explicit lock commands
- Network: waits for data to be sent over the network

Time Model Statistics


- Oracle advisories and reports describe statistics in
terms of time

Slide 418

Time Model Statistics


- The V$SESS_TIME_MODEL and V$SYS_TIME_MODEL
views have information on time model statistics.
- The DB time statistic provides the total time spent in
database calls and reflects the total instance workload.
- Cumulative
E.g., an instance that has been running for 60
minutes could have 3 active user sessions whose
cumulative DB time is approximately 180 minutes.

- Reducing the time users spend to perform an action
on the database, that is, reducing DB time, would be the
goal of tuning

Slide 419

Time Model Statistics


- Set the initialization parameter STATISTICS_LEVEL to
TYPICAL or ALL
- If STATISTICS_LEVEL is set to BASIC, then set
TIMED_STATISTICS to TRUE to enable timed statistics

System and Session Statistics

- Available in V$SYSSTAT and V$SESSTAT views.
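The time model and system statistics above can be queried directly. A minimal sketch (the time model views report times in microseconds, a detail assumed from the Oracle data dictionary):

```sql
-- Total DB time and its main components, converted to seconds
SELECT stat_name, value / 1000000 AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU', 'sql execute elapsed time')
ORDER  BY value DESC;
```

A falling "DB time" after a tuning change is the simplest sign the change helped.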

Slide 420

Active Session History (ASH)


- V$ACTIVE_SESSION_HISTORY view has sampled
session activity in the instance.
- Sampled every second and stored in a circular buffer in
SGA
- Flushed to disk for Automatic Workload Repository
(AWR) snapshots
- Enables you to examine and perform detailed analysis on
current data in the V$ACTIVE_SESSION_HISTORY
view and historical data in the
DBA_HIST_ACTIVE_SESS_HISTORY view, which often
avoids the need to replay the workload to gather
additional performance tracing information
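As a sketch, ASH can be mined for the busiest wait events over a recent window (column names per the V$ACTIVE_SESSION_HISTORY view; the 30-minute window is an arbitrary example):

```sql
-- Top wait events sampled in the last 30 minutes
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 30/1440
AND    session_state = 'WAITING'
GROUP  BY event
ORDER  BY samples DESC;
```

Because ASH samples once per second, the sample count is a rough proxy for time spent in each event.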

Slide 421

Operating System Statistics


- Details on the usage and performance of the hardware
components of the system & of the operating system.
- Operating system statistics include the following:
CPU Statistics, Available UNIX tools are sar, vmstat,
mpstat, iostat
Virtual Memory Statistics, Available UNIX tools are sar,
vmstat
Disk Statistics, Available UNIX tools are sar, iostat
Network Statistics, Available UNIX tools are netstat

Slide 422

Automatic Workload Repository(AWR)


- Gathers, processes, and maintains performance
statistics.

- For problem detection and self-tuning purposes.

- Data is available in both memory and in the database.

- Statistics can be displayed in "Workload Repository


Views" and "Workload Repository Reports".

Slide 423

Statistics collected and processed by AWR


- Object statistics to determine both access and usage
statistics of database segments
- Time model statistics, under V$SYS_TIME_MODEL and
V$SESS_TIME_MODEL views

- System and session statistics collected in the


V$SYSSTAT and V$SESSTAT views
- SQL statements producing the highest load on the
system, based on criteria such as elapsed time and
CPU time
- ASH statistics, represents the history of recent
sessions activity

Slide 424

AWR
- Automatically generates snapshots once every hour &
stores them in the workload repository.
- Possible to create manual snapshots
- Automatic Database Diagnostic Monitor (ADDM) uses
snapshot details
- Compares the difference between snapshots to
determine which SQL statements to capture
- Snapshots are captured once every hour and are
retained in the database for 7 days, by default
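Manual snapshots and the default retention/interval settings can be changed through the DBMS_WORKLOAD_REPOSITORY package; a sketch (both settings are given in minutes):

```sql
-- Take a manual AWR snapshot
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Keep snapshots for 30 days, taken every 30 minutes
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
       retention => 43200, interval => 30);
```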

Slide 425

AWR Views
DBA_HIST_ACTIVE_SESS_HISTORY displays the
history of the contents of the in-memory active
session history for recent system activity.

DBA_HIST_BASELINE displays information about the


baselines captured on the system

DBA_HIST_DATABASE_INSTANCE displays
information about the database environment

Slide 426

AWR Views
DBA_HIST_SNAPSHOT displays information on
snapshots in the system

DBA_HIST_SQL_PLAN displays the SQL execution


plans

DBA_HIST_WR_CONTROL displays the settings for


controlling AWR
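For example, the workload repository reports described next need a begin and an end snapshot ID, which can be picked from DBA_HIST_SNAPSHOT:

```sql
-- List recent snapshots to choose begin/end IDs for a report
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id DESC;
```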

Slide 427

Workload Repository Reports

The awrrpt.sql SQL script generates an HTML or text
report that displays statistics for a range of snapshot
IDs

The awrrpti.sql SQL script generates an HTML or text
report that displays statistics for a range of snapshot
IDs for a specified database and instance

Slide 428

Automatic Database Diagnostic Monitor (ADDM)


analyzes AWR data on a regular basis

Finds the root causes of performance problems

Provides recommendations to correct any problems

Analysis is performed every time an AWR snapshot is


taken and the results are saved in the database

Minimal overhead to the system during the diagnostic


process

Slide 429

Types of problems that ADDM considers


CPU bottlenecks
Undersized memory structures, i.e. analyzes the size of
Oracle memory structures, such as the SGA, PGA,
and buffer cache
I/O capacity issues
High load SQL statements
High load PL/SQL execution and compilation
Database configuration issues - like size of log file,
excessive check point etc
Concurrency issues e.g. buffer busy problems
Hot objects and top SQLs

Slide 430

ADDM

To enable set STATISTICS_LEVEL parameter to


TYPICAL or ALL

The addmrpt.sql script, located in /rdbms/admin/,
generates the ADDM report

DBMS_ADVISOR package also used for specific


analysis

Slide 431

ADDM Views
DBA_ADVISOR_TASKS - provides basic information
about existing tasks, such as the task Id, task name

DBA_ADVISOR_LOG - contains the current task


information, like status, progress, error messages,
and execution times

DBA_ADVISOR_RECOMMENDATIONS - displays
the results of completed diagnostic tasks with
recommendations
DBA_ADVISOR_FINDINGS - findings and symptoms
encountered along with the specific recommendation
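A sketch of pulling recent ADDM findings straight from these views, joining tasks to findings to filter on the ADDM advisor:

```sql
-- Latest ADDM findings, most recent task first
SELECT f.task_id, f.type, f.message
FROM   dba_advisor_findings f
JOIN   dba_advisor_tasks t ON t.task_id = f.task_id
WHERE  t.advisor_name = 'ADDM'
ORDER  BY f.task_id DESC;
```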

Slide 432

TKPROF
Converts Oracle trace files into a more readable form

The utility works better if timed statistics are enabled

Syntax to execute TKPROF

TKPROF <trace-file> <output-file> explain=user/password@service
table=sys.plan_table

Slide 433

STATSPACK
Set of performance monitoring and reporting utilities
provided by Oracle

AWR/ADDM provides better statistics than


STATSPACK

STATSPACK doesn't need an EM license, but to use
AWR/ADDM a license is a must

Run spcreate.sql to install STATSPACK

Slide 434

STATSPACK
PERFSTAT schema & related objects will be created

Run statspack.snap OR perfstat.statspack.snap to


take a DB performance snapshot

Execute spreport.sql to generate report

Execute spdrop.sql to uninstall STATSPACK

Slide 435

Oracle Enterprise Manager(OEM)


Utilizes ADDM/AWR statistics & provides a visual
display of important Oracle performance metrics

Has an interface to the ASH component; uses system
event statistics stored in the DBA_HIST views & provides
a visual display

Has an interface to ADDM, which allows OEM to make
intelligent performance recommendations

Slide 436

SQL Trace File


Used to diagnose a performance problem for further
analysis
Need to set the parameter SQL_TRACE = TRUE
Execute DBMS_SUPPORT.start_trace; --- Starts the
trace
Run the SQL that needs to be diagnosed
Execute DBMS_SUPPORT.stop_trace; --- Stops the
trace
Get the trace file from the
USER_DUMP_DEST/BACKGROUND_DUMP_DEST
location and analyze it
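The steps above can be sketched for the current session; as an assumption, plain ALTER SESSION commands are shown here because the DBMS_SUPPORT package is not installed by default:

```sql
-- Enable timed statistics and SQL trace for this session
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- ... run the SQL that needs to be diagnosed ...

-- Stop tracing; the trace file appears under USER_DUMP_DEST
ALTER SESSION SET sql_trace = FALSE;
```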

Slide 437

Oracle Wait Interface (OWI)

Service time : Time taken to complete a request



Wait time : Time spent waiting while completing the request

Performance depends on Service time & Wait time

Slide 438

Oracle Wait Interface (OWI)

Oracle's performance-tracking methodology

Focuses on process bottlenecks (wait events)
E.g.,
Waits for I/O operations
Locks
Network latencies & so on

Represents all the bottlenecks a process encounters
from start to finish

Slide 439

Oracle Wait Interface (OWI)

Tracks the number of times & the amount of time a process
spent on each bottleneck

Consists of dynamic performance views & the extended
SQL trace file

Provides statistics of wait events through Dynamic


performance views

V$EVENT_NAME contains all different wait events


defined for Oracle Instance

Slide 440

Performance related Dynamic views


V$SYSTEM_EVENT : Aggregated Statistics of all wait
events encountered by Oracle sessions since
instance startup
V$SESSION_EVENT : Aggregated Statistics of all
wait events at session level
V$SESSION_WAIT : detailed information about the
events/resources each session is waiting for
V$SESSION_WAIT_HISTORY : History of the
session wait events
V$SESSION_WAIT_CLASS : Instance level total
waits & time waited by wait class since startup
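A sketch of ranking instance-wide waits from the first of these views, filtering out the idle class (TIME_WAITED is reported in centiseconds):

```sql
-- Top non-idle wait events since instance startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;
```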

Slide 441

Common Wait events


Buffer busy waits : A session wants to access a data
block in buffer cache that is currently in use by some
other session

DB file scattered read : Posted by a session when it


issues an I/O request to read multiple data blocks

DB file sequential read : Occurs when the process


waits for an I/O completion for a sequential read

DB file single write : Posted by DBWR when Oracle is
updating data file headers, esp. during checkpoints

Slide 442

Common Wait events

Free buffer wait : Occurs when a session could not


find free buffers from buffer cache to read data blocks

Latch Free wait : Occurs when a process waits to get


a latch currently held by another process

Log buffer wait : Occurs when a session waits to get


space from log buffer to write new information

Slide 443

Common Wait events


Log file switch : Indicates a process is waiting for the
log file switch to complete, but the log file switch is not
happening at this time

Log file sync : a wait event that occurs while LGWR


writes the contents of a user session transaction into
redo logs

SQL*Net message from client : Occurs when a
session waits to get a message from the client
SQL*Net message to client : Occurs when a session
waits to post a message to the client

Slide 444

Locks
Mechanism to prevent more than one transaction from
accessing the same resource, which would disrupt
the transaction

Oracle obtains & manages necessary locks on the


resource while executing SQL statements

A deadlock can occur when two or more users are


waiting for data locked by each other

Slide 445

Locks
Oracle automatically detects deadlocks and resolves
by rolling back any one of the statements involved in
the deadlock

The smallest unit that can be locked in Oracle is the


row

For each SQL statement oracle acquires the


appropriate lock automatically

User cannot acquire locks at the row level manually,


but can acquire locks at table level

Slide 446

Locks
Locks are released when transaction commits or rolls
back

The utllockt.sql script gives the sessions waiting for locks
and the locks they are waiting for, in a tree format

catblock.sql creates necessary views for utllockt.sql

Slide 447

Common Lock Views


V$LOCK - Displays Locks currently held & outstanding
requests for a lock

DBA_BLOCKERS - Displays a session if another
session is waiting for an object that has been locked
by the session

DBA_WAITERS - Displays a session if it is waiting for a


locked object

DBA_DDL_LOCKS - Displays all DDL locks, and all


outstanding requests for a DDL lock

Slide 448

Common Lock Views


DBA_DML_LOCKS - Displays all DML locks, and all
outstanding requests for a DML lock

DBA_LOCK - Displays all locks and all outstanding


requests for a lock

DBA_LOCK_INTERNAL - Displays a row for each lock


and one row for each outstanding request for a lock
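A minimal sketch of using these views to spot a blocking chain (note that DBA_BLOCKERS and DBA_WAITERS require catblock.sql to have been run, as mentioned earlier):

```sql
-- Sessions holding locks that someone else is waiting on
SELECT holding_session FROM dba_blockers;

-- Who is waiting, on whom, and for what
SELECT waiting_session, holding_session,
       lock_type, mode_held, mode_requested
FROM   dba_waiters;
```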

Slide 449

Performance-impacting Oracle memory structures

Shared pool

Buffer cache

Redo Log buffer

Slide 450

Monitoring shared pool

This Internal Memory Structure holds textual/executable


forms of PL/SQL blocks and SQL statements, dictionary
cache data, result cache data, and other data

Reduces resource consumption by avoiding parsing and
reducing I/O & latch contention on resources

Reuse of shared SQL for multiple users running the same


application, avoids parsing

Slide 451

Monitoring shared pool


V$LIBRARYCACHE used to monitor Library Cache activity

Library Cache Hit Ratio calculated as SUM(pinhits) /
SUM(pins)

V$SGASTAT used to get the free shared pool memory size

The V$SHARED_POOL_ADVICE and
V$LIBRARY_CACHE_MEMORY advisories give
suggestions based on the statistics collected
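The two checks above translate into a short sketch (PINS and PINHITS are the column names in V$LIBRARYCACHE; a ratio close to 1 is good):

```sql
-- Library cache hit ratio across all namespaces
SELECT SUM(pinhits) / SUM(pins) AS hit_ratio
FROM   v$librarycache;

-- Free memory currently left in the shared pool
SELECT bytes
FROM   v$sgastat
WHERE  pool = 'shared pool' AND name = 'free memory';
```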

Slide 452

Monitoring Buffer cache


This Internal Memory Structure, stores blocks read from
disk

Hit ratio determines how frequently a requested block has


been found in the cache without requiring disk access

Use V$DB_CACHE_ADVICE & Buffer cache hit ratio to get


buffer cache activity

Set DB_CACHE_ADVICE to ON, to populate


V$DB_CACHE_ADVICE advisory

DB_CACHE_ADVICE parameter is dynamic

Slide 453

Monitoring Buffer cache


Enabling DB_CACHE_ADVICE slightly increases CPU
usage

V$SYSSTAT is used to get Hit Ratio

Hit Ratio calculated as

1 - ('physical reads cache' / ('consistent gets from
cache' + 'db block gets from cache'))

To increase the memory allocated to the buffer cache,


increase DB_CACHE_SIZE parameter value.
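The hit ratio formula can be computed from V$SYSSTAT with a self-join on the three statistic names; a sketch:

```sql
-- Buffer cache hit ratio since instance startup
SELECT 1 - (phys.value / (cons.value + db.value)) AS hit_ratio
FROM   v$sysstat phys, v$sysstat cons, v$sysstat db
WHERE  phys.name = 'physical reads cache'
AND    cons.name = 'consistent gets from cache'
AND    db.name   = 'db block gets from cache';
```

As a design note, cumulative ratios can hide recent problems, so comparing two samples taken some minutes apart is more telling than a single reading.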

Slide 454

Monitoring Redo Log Buffer


This internal memory structure stores redo data when
changes are made to data blocks in the buffer cache by
server processes

The LGWR process writes data from the redo log buffer
into the online redo log file when:

the log buffer becomes one third full

after a COMMIT or ROLLBACK

DBWR (the process that writes data blocks from the
buffer cache to data files) forces LGWR

Slide 455

Monitoring Redo Log Buffer


LGWR frees the buffer after writing the content to online
redo log file

DBWR waits until LGWR finishes.

Redo Buffer uses Round Robin method

Should be of sufficient size, to avoid bottleneck in the


LGWR writes & allocation of space for new entries

LOG_BUFFER parameter determines size of Redo Log


buffer

Slide 456

Monitoring Redo Log Buffer


A small redo buffer size has a direct impact on performance

A large redo buffer has no effect on performance; it neither
improves nor degrades it

The default size is the maximum of these:

0.5M or
(128K * number of CPUs)

The 'redo buffer allocation retries' entry of V$SYSSTAT
holds redo buffer activity
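A sketch of checking that statistic; retries should stay close to zero relative to the total number of redo entries, otherwise the log buffer or LGWR throughput is a bottleneck:

```sql
-- Redo buffer contention indicators
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo buffer allocation retries', 'redo entries');
```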

Slide 457

Monitoring Redo Log Buffer

Good usage can be:

In batch jobs, batch the commits, so LGWR gets enough
time

Use NOLOGGING operations for large-volume loading

