
Oracle Database Architecture overview

There are two terms that are used with Oracle:

Database - a collection of physical operating system files
Instance - a set of Oracle processes and an SGA (an allocation of memory)
These two are very closely related, but a database can be mounted and opened by
many instances. An instance may mount and open only a single database at any one
point in time.
The File Structure
There are a number of different file types that make up a database
Parameter File - tells Oracle where to find the control files; it also
details how big the memory areas will be, etc
Data Files - hold the tables, indexes and all other segments
Temp Files - used for disk-based sorting and temporary storage
Redo Log Files - the transaction logs
Undo Files - allow a user to rollback a transaction and provide read consistency
Archive Log Files - redo log files which have been archived
Control File - details the location of data and log files and other relevant
information about their state
Password File - used to authenticate users logging into the database
Log Files - alert.log contains database changes and events, including startup information
Trace Files - debugging files

Parameter Files
In order for Oracle to start it needs some basic information; this information is
supplied by using a parameter file. The parameter file can be either a pfile or an spfile:
pfile - a very simple plain text file which can be manually edited via vi or any text editor
spfile - a binary file which cannot be manually edited (Oracle 9i or higher required)
The parameter file for Oracle is the commonly known file init.ora or init<oracle
sid>.ora. The file contains key/value pairs of information that Oracle uses when
starting the database, such as the database name, cache sizes, location of the
control files, etc.
By default the location of the parameter file is
windows - $ORACLE_HOME\database
unix - $ORACLE_HOME/dbs
The main difference between the spfile and pfile is that instance parameters can be
changed dynamically using an spfile, whereas an instance restart is required to load
pfile parameters.
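As an illustration, a minimal pfile might look like the following (the SID, paths and sizes here are hypothetical examples, not recommended values):

```
# initD10.ora - example pfile (plain text, key/value pairs)
db_name = D10
db_block_size = 8192
sga_target = 600M
pga_aggregate_target = 200M
control_files = ('c:\oracle\oradata\D10\control01.ctl',
                 'c:\oracle\oradata\D10\control02.ctl')
undo_management = AUTO
undo_tablespace = UNDOTBS1
```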
To convert the file from one format to the other you can perform the following
create pfile using a spfile

create pfile='c:\oracle\pfile\initD10.ora' from spfile;

startup db using pfile

startup pfile='c:\oracle\pfile\initD10.ora';

create spfile using a pfile

create spfile from pfile;

Display spfile location

show parameter spfile

Data Files
By Default Oracle will create at least two data files, the system data file which holds
the data dictionary and sysaux data file which non-dictionary objects are stored,
however there will be many more which will hold various types of data, a data file
will belong to one tablespace only (see tablespaces for further details).
Data files can be stored on a number of different filesystem types

Cooked - normal filesystems whose contents can be listed using "ls"
commands in unix
Raw - raw disk partitions which cannot be viewed through the filesystem, normally used to
avoid filesystem buffering.
ASM - Automatic Storage Management, Oracle's own database filesystem
(see asm for further details).
Clustered FS - a special filesystem used in Oracle RAC environments.
Data files contain the following
Segments - database objects: a table, an index, rollback segments. Every
object that consumes space is a segment. Segments themselves consist of one
or more extents.
Extents - a contiguous allocation of space in a file. Extents, in turn, consist
of data blocks.
Blocks - the smallest unit of space allocation in Oracle. Blocks normally are
2KB, 4KB, 8KB, 16KB or 32KB in size but can be larger.
The relationship between segments, extents and blocks is hierarchical: a segment is
made up of extents, and each extent is made up of contiguous blocks.
The parameter DB_BLOCK_SIZE determines the default block size of the database.
Determining the block size depends on what you are going to do with the database: if
you are using small rows then use a small block size (Oracle recommends 8KB); if you
are using LOBs then the block size should be larger.

2KB or 4KB - OLTP (online transaction processing) databases would benefit from a small block size
8KB (default) - most databases would be OK to use the default size
16KB or 32KB - DW (data warehouse) and media databases would benefit from a larger block size

You can have different block sizes within the database, each tablespace having a
different block size depending on what is stored in the tablespace. For example, the
system tablespace could use the default 8KB and an OLTP tablespace could use a block size of 4KB.

There are a few parameters that cannot be changed after installing Oracle and
DB_BLOCK_SIZE is one of them, so make sure you make the correct choice when
installing Oracle.
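For instance, a sketch of adding a 4KB-block tablespace to a database with an 8KB default block size (the tablespace name, file path and sizes are made up for illustration); a matching DB_4K_CACHE_SIZE must be set first:

```sql
-- a cache for 4k blocks must exist before a 4k tablespace can be created
alter system set db_4k_cache_size = 64M;

create tablespace oltp_data
  datafile '/u01/oradata/D10/oltp_data01.dbf' size 100M
  blocksize 4k;
```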

A data block is made up of the following; the two main areas are the free space
and the data area.

Header - contains information regarding the type of block (a table block, index block, etc),
transaction information regarding active and past transactions on the block, and the
address (location) of the block on the disk

Table Directory - contains information about the tables that store rows in this block

Row Directory - contains information describing the rows that are to be found on the block.
This is an array of pointers to where the rows are to be found in the data portion of the block.

Block Overhead - the three above pieces are known as the block overhead and are used by
Oracle to manage the block itself.

Free Space - available space within the block

Row Data - the data within the block

A tablespace is a container which holds segments. Each and every segment belongs to
exactly one tablespace. Segments never cross tablespace boundaries. A tablespace
itself has one or more files associated with it. An extent will be contained entirely
within one data file.
So in summary the Oracle hierarchy is as follows:

A database is made up of one or more tablespaces

A tablespace is made up of one or more data files; a tablespace contains one or more segments.
A segment (table, index, etc) is made up of one or more extents. A segment
exists in a tablespace but may have data in many data files within that tablespace.
An extent is a contiguous set of blocks on disk. An extent is in a single
tablespace and is always in a single file within that tablespace.
A block is the smallest unit of allocation in the database and the smallest
unit of i/o used by the database.
The minimum tablespaces required are the system and sysaux tablespaces. The
following are reasons why tablespaces are used:
Tablespaces make it easier to allocate space quotas to users in the database
Tablespaces enable you to perform partial backups and recoveries based on the
tablespace as a unit
Tablespaces can be allocated to different disks and controllers to improve performance
You can take tablespaces offline without affecting the entire database
You can import and export specific application data by using the import and
export utilities at the tablespace level
There are a number of types that a tablespace can be
Bigfile tablespaces have only one file, which can range from 8TB to 128TB
in size depending on the block size.
Smallfile tablespaces (default) can have multiple files, but the files are smaller
than a bigfile tablespace's.
Temporary tablespaces contain data that only persists for the duration of a user's
session; used for sorting.

Permanent tablespaces - any tablespace that is not a temporary one.

Undo tablespaces - Oracle uses these to rollback or undo changes to the database.
Read-only tablespaces - no write operations are allowed.
See tablespaces for detailed information regarding creating, resizing, etc
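Hedged sketches of creating some of the tablespace types above (file names and sizes are illustrative only):

```sql
-- smallfile permanent tablespace (the default type)
create tablespace app_data
  datafile '/u01/oradata/D10/app_data01.dbf' size 500M;

-- bigfile tablespace: a single, very large file
create bigfile tablespace big_data
  datafile '/u01/oradata/D10/big_data01.dbf' size 10G;

-- temporary tablespace, used for sorting
create temporary tablespace temp2
  tempfile '/u01/oradata/D10/temp201.dbf' size 1G;

-- make an existing tablespace read-only
alter tablespace app_data read only;
```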
Temp Files
Oracle will use temporary files to store the results of large sort operations when there is
insufficient memory to hold all of the data in RAM. Temporary files never have redo
information (see below) generated for them, although they do have undo information
generated, which in turn creates a small amount of redo. Temporary data
files never need to be backed up, as there is no permanent data in them to restore.
Redo log files
All changes made to the database are recorded in the redo log files; these files,
along with any archived redo logs, enable a dba to recover the database to any point in
the past. Oracle will write all committed changes to the redo logs first before applying
them to the data files; the redo logs guarantee that no committed changes are ever
lost. Redo log files consist of redo records, which are groups of change vectors, each
referring to a specific change made to a data block in the database. The changes are first kept
in the redo buffer but are quickly written to the redo log files.
There are two types of redo log files: online and archived. Oracle uses the concept of
groups and a minimum of 2 groups is required, each group having at least one file.
They are used in a circular fashion: when one group fills up, Oracle will switch to the
next log group.
See redo on how to configure and maintain the log files.
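The current log groups and their status can be checked with a query such as:

```sql
select group#, sequence#, members, bytes/1024/1024 as mb, status
from v$log;

alter system switch logfile;  -- force a switch to the next group
```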
Archive Redo log
When a redo log file fills up, and before it is reused, the file is archived for safe
keeping; this archive file together with the other redo log files can recover a database
to any point in time. It is best practice to turn on ARCHIVELOG mode, which performs the
archiving automatically.
See redo on how to enable archiving and maintain the archive log files.

Undo File
When you change data you should be able to either rollback that change or provide
a read-consistent view of the original data. Oracle uses undo data to
store the original data, which allows a user to rollback a transaction to its original
state if required. This undo data is stored in the undo tablespace. See undo for further details.
Control file
The control file is one of the most important files within Oracle; the file contains the
data and redo log file locations, the current log sequence number, RMAN backup set
details and the SCN (system change number - see below for more details). This file
should have multiple copies due to its importance. The file is used in recovery, as the
control file notes all checkpoint information, which allows Oracle to recover data from
the redo logs. It is the first file that Oracle consults when starting up.
The view V$CONTROLFILE can be used to list the control files; you can also use
V$CONTROLFILE_RECORD_SECTION to view the control file's record structure.
You can also log all checkpoints to the alert log while the system is running by setting
the LOG_CHECKPOINTS_TO_ALERT parameter.
See recovering critical files for more information.
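For example, to list the control file copies and inspect the record structure:

```sql
select name from v$controlfile;

select type, record_size, records_total, records_used
from v$controlfile_record_section;
```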
Password file
This file is optional and contains the names of the database users who have been granted
the special SYSDBA and SYSOPER admin privileges.
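Users granted these privileges are recorded in the password file and can be listed from the database; granting the privilege adds the user (a sketch, scott being an example user):

```sql
grant sysdba to scott;   -- adds scott to the password file

select username, sysdba, sysoper from v$pwfile_users;
```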
Log files
The alert.log file contains important startup information, major database changes and
system events; this will probably be the first file to look at when you have
database issues. The file contains log switches, database errors, warnings and other
messages. If this file is removed, Oracle creates another one automatically.
Trace Files
Trace files are debugging files which can contain background process information
(LGWR, DBWn, etc), core dump information (ORA-600 errors, etc) and user
process information (SQL tracing).

Oracle Managed Files

The OMF feature aims to set a standard way of laying out Oracle files; there is no
need to worry about file names and the physical location of the files themselves. The
method is suited to small to medium environments; OMF simplifies the initial database
creation as well as ongoing file management.
System Change (Commit) Number (SCN)
The SCN is an important quantifier that Oracle uses to keep track of its state at any
given point in time. The SCN is used to keep track of all changes within the database;
it is a logical timestamp that Oracle uses to order events that have occurred within
the database. SCNs are increasing sequence numbers and are used in the redo logs to
confirm that transactions have been committed; all SCNs are unique. SCNs are used
in crash recovery, as the control file maintains an SCN for each data file; if the data files are
out of sync after a crash, Oracle can reapply the redo log information to bring the
database back up to the point of the crash. You can even take the database back in time
to a specific SCN (or point in time).
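The current SCN can be displayed with either of the following (the v$database column is available from 10g):

```sql
select current_scn from v$database;

select dbms_flashback.get_system_change_number from dual;
```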
Checkpoints are important events that synchronize the database buffer cache and the
data files; they are used in recovery. A checkpoint is used as a starting point for
recovery: it is a framework that enables the writing of dirty blocks to disk based on a
System Change (or Commit) Number (SCN, see above) and a Redo Byte Address
(RBA) validation algorithm, and it limits the number of blocks to recover.
The checkpoint collects all the dirty buffers and writes them to disk; the SCN is
associated with a specific RBA in the log, which is used to determine when all the
buffers have been written.

Oracle Processes
Oracle server processes perform all the database operations such as inserting and
deleting data. The Oracle processes, working with the SGA (Oracle's memory structure),
manage the database.
There are two types of Oracle process

User process - responsible for running the application that connects to the Oracle instance
Oracle server process - performs the Oracle tasks that manage the database
There are a number of server processes that could be running; Windows will only
have one process called Oracle, and this process will have one thread for each of the
below processes.

Process Monitor (PMON)

Responsible for cleaning up after abnormally terminated connections
Responsible for monitoring other server processes and restarting them if necessary
Registers the instance with the listener dynamically (dynamic service registration)
Restarts failed server processes and dispatcher processes

System Monitor (SMON)

Temporary space cleanup
Crash recovery upon restart
Coalescing free space
Recovering transactions active against unavailable files
Instance recovery of a failed node in OPS (Oracle Parallel Server)
Cleans up OBJ$ (low level data dictionary)
Shrinks rollback segments
Offlines rollback segments
Other processes call the SMON process when required.

Recoverer (RECO)

Used in distributed databases; recovers transactions that are left in a prepared state
because of a crash or loss of connection during a two-phase commit.

Checkpoint process (CKPT)

The checkpoint process is charged with instructing the database block buffer writers to
write the database buffer cache to disk; it then updates the data file headers and control
file to indicate when the checkpoint was performed. There is a trade-off between
checkpoints and recovery time: the more checkpointing, the less recovery time is needed
when a crash occurs. The CKPT process does not perform the checkpoint itself but assists
with the checkpointing process by updating the file headers.
A checkpoint involves the following:

Flushing the redo log buffer to the redo log files
Writing a checkpoint record to the redo log file
Flushing the database buffer cache to the data files
Updating the data file headers and control files after the checkpoint completes


Database block writer (DBWn)

Responsible for writing dirty blocks to disk when free space within the database buffer
cache is low, flushing modified buffers from the buffer cache out to disk. It uses the
LRU (Least Recently Used) algorithm, which retains data in memory based on how
long it has been since someone asked for that data. The database buffer cache is flushed to disk:

when the database issues a checkpoint
when a server process can't find a clean reusable buffer after checking a threshold number of buffers
every 3 seconds
when a user process has searched too long for a free buffer while reading a buffer into the buffer cache
when the instance is shut down
when a tablespace is put in backup mode or taken offline
when a segment is dropped

If you have multiple CPUs then it is advisable to run multiple database writers. Use the
DB_WRITER_PROCESSES parameter to increase the number of database writers; the
instance has to be restarted for the change to take effect.

Log writer (LGWR)

Responsible for flushing to disk the contents of the redo log buffer located in the SGA.
Both committed and uncommitted changes are written to the redo log buffer, and the redo
log buffer is flushed to disk before the data blocks are written. The redo log buffer is
flushed to disk:

every 3 seconds
whenever a user commits a transaction
when the redo log buffer is a third full or contains 1MB of buffered data
before the DBWn process writes when a checkpoint occurs

Archive process (ARCn)

Used when the database is in ARCHIVELOG mode; it copies an online redo log file to
another location once it has been filled, and these copies would be used to perform media
recovery. There can be a maximum of ten archive processes running; the
LOG_ARCHIVE_MAX_PROCESSES parameter determines how many archive processes will be started.


Manageability Monitor (MMON)

Collects statistics to help the database manage itself. The MMON process collects the
AWR (automatic workload repository) snapshot information, which is used by the ADDM
(automatic database diagnostic monitor); MMON also issues alerts when metric
thresholds are exceeded.

Manageability Monitor Light (MMNL)

Flushes ASH (active session history) information to disk when the buffer is full; it also
captures session history and computes metrics.

Memory Manager (MMAN)

Uses the metrics collected to determine the ideal distribution of memory within Oracle.
It constantly monitors the database and adjusts the memory allocations according to workloads.

Job Queue Coordinator (CJQ0)

Used to schedule and run user jobs. It spawns job queue slave processes (J000-J999)
which actually run the jobs.

Job Queue Process (Jnnn)

These processes are what actually run the scheduled jobs requested by CJQ0.

File Mapping Monitor (FMON)

Maps files to intermediate storage layers and physical devices. Results are normally kept
in the DBMS_STORAGE_MAP views; the 3rd party LVM (logical volume manager)
supplier will supply a driver to map to.

Recovery Writer (RVWR)

This process is started when you implement flashback logging; it logs the before image
of an Oracle block before it is changed, writing it to the flashback log files.

Change Tracking (CTWR)

This process tracks any data blocks that have changed, which RMAN can then use to
speed up backups as it no longer has to read the entire data file to see what has changed.

Queue Monitor (QMNC)

Spawns and coordinates queue slave processes.

Block server process (BSPn)

Used in OPS and keeps each server's SGA in the cluster consistent with the others.

Lock monitor process (LMON)

Used in OPS and monitors all instances in a cluster to detect the failure of an instance.

Lock manager daemon (LMD)

Used in OPS and controls the global locks and global resources for the block buffer
cache in a clustered environment.

Lock process (LCKn)

Used in OPS and is the same as the LMD daemon but handles requests for all global
resources other than the buffer cache.

Dispatcher process (Dnnn)

Dispatcher processes that are used in a shared server environment.

Shared Server process (Snnn)

Shared server processes that are used in a shared server environment.

Process spawner (PSP0)

Has the job of creating and managing other Oracle processes.

Oracle shadow process

Oracle's shadow process; could not find much on this process.

Streams Advanced Queuing process (Qnnn)

q000 - I believe this is something to do with Oracle Streams Advanced Queuing.

There are a number of useful commands and views you can use to get information
regarding the running processes.
Useful SQL
Display all processes

select name, description from v$bgprocess;

Display process memory


select program, pid, spid, username, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process;

Display user processes

select substr(s.username,1,18) username,
       substr(s.program,1,20) program,
       decode(s.command,
         0,'No Command',
         1,'Create Table',
         9,'Create Index',
         15,'Alter Table',
         21,'Create View',
         23,'Validate Index',
         35,'Alter Database',
         39,'Create Tablespace',
         40,'Alter Tablespace',
         41,'Drop Tablespace',
         53,'Drop User',
         62,'Analyze Table',
         63,'Analyze Index',
         s.command||': Other') command
from v$session s,
     v$process p,
     v$transaction t,
     v$rollstat r,
     v$rollname n
where s.paddr = p.addr
and s.taddr = t.addr (+)
and t.xidusn = r.usn (+)
and r.usn = n.usn (+)
order by 1;

Useful Views

v$bgprocess - displays information about the background processes

v$process - contains information about the currently active processes

Oracle Memory Architecture

Oracle uses three kinds of memory structures:

SGA (System Global Area) - a large area of memory that all the Oracle background processes access

PGA (Process Global Area) - memory that is private to a single process or thread and is
not accessible by any other process or thread

UGA (User Global Area) - memory that is associated with your session; it can be found in
the PGA or SGA depending on whether you are connected to the database via shared server
Shared Server - the UGA will be in the SGA
Dedicated Server - the UGA will be in the PGA

There are five memory structures that make up the System Global Area (SGA). The
SGA stores many internal data structures that all processes need access to, caches
data from disk, caches redo data before writing it to disk, holds parsed SQL plans and
so on.
Shared Pool
The shared pool consists of the following areas:

Library cache - includes the shared SQL area, private SQL areas, PL/SQL procedures and
packages, and control structures such as locks and library cache handles. Oracle code is
first parsed, then executed, and this parsed code is stored in the library cache; Oracle
first checks the library cache to see if there is an already parsed and ready to execute
version of the statement in there. If there is, this will reduce CPU time considerably;
this is called a soft parse. If Oracle has to parse the statement, this is called a hard
parse. If there is not enough room in the cache Oracle will remove older parsed code;
obviously it is better to keep as much parsed code in the library cache as possible. Keep
an eye on the library cache hit ratio; a low value is an indication that a lot of hard
parsing is going on.

Dictionary cache - a collection of database tables and views containing information about
the database, its structures, privileges and users. When statements are issued Oracle will
check permissions, access rights, etc and retrieve this information from its dictionary
cache; if the information is not in the cache then it has to be read in from disk and
placed into the cache. The more information held in the cache, the less Oracle has to
access the slower disk.

The parameter SHARED_POOL_SIZE is used to determine the size of the shared pool;
there is no way to adjust the caches independently, you can only adjust the shared pool
size as a whole.

The shared pool uses an LRU (least recently used) list to maintain what is held in it;
see buffer cache below for more details on the LRU.
You can clear down the shared pool area by using the following command
alter system flush shared_pool;

Buffer cache

This area holds copies of data blocks read from the data files. The buffers in the cache
are organised into two lists: the write list and the least recently used (LRU) list. The
write list holds dirty buffers, which contain modified data not yet written to disk.
The LRU list has the following:

free buffers - hold no useful data and can be reused

pinned buffers - actively being used by user sessions

dirty buffers - contain data that has been read from disk and modified but hasn't yet
been written to disk

It's the database writer's job to make sure that there are enough free buffers available
to user sessions; if not, it will write out dirty buffers to disk to free up the cache.
There are 3 buffer caches:

Default buffer cache - everything not assigned to the keep or recycle buffer pools, DB_CACHE_SIZE

Keep buffer cache - keeps the data in memory (the goal is to keep warm/hot blocks in the
pool for as long as possible), DB_KEEP_CACHE_SIZE

Recycle buffer cache - removes data immediately from the cache after use (the goal here
is to age out blocks as soon as they are no longer needed), DB_RECYCLE_CACHE_SIZE

The standard block size cache is determined by DB_CACHE_SIZE; if tablespaces are created
with a different block size then you must also create a cache entry to match that block size.

DB_2K_CACHE_SIZE (used with tablespace block size of 2k)
DB_4K_CACHE_SIZE (used with tablespace block size of 4k)
DB_8K_CACHE_SIZE (used with tablespace block size of 8k)
DB_16K_CACHE_SIZE (used with tablespace block size of 16k)
DB_32K_CACHE_SIZE (used with tablespace block size of 32k)
The buffer cache hit ratio is used to determine if the buffer cache is sized correctly;
the higher the value, the more blocks are being read from the cache rather than from disk.
hit rate = (1 - (physical reads / logical reads)) * 100
You can clear down the buffer pool area by using the following command
alter system flush buffer_cache;
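The hit rate formula above can be computed directly from v$sysstat, where logical reads are the sum of 'db block gets' and 'consistent gets':

```sql
select round((1 - (phy.value / (cur.value + con.value))) * 100, 2) as "Hit Ratio %"
from   v$sysstat phy, v$sysstat cur, v$sysstat con
where  phy.name = 'physical reads'
and    cur.name = 'db block gets'
and    con.name = 'consistent gets';
```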

Redo buffer

The redo buffer is where data that needs to be written to the online redo logs is cached
temporarily before being written to disk; this area is normally less than a couple of
megabytes in size. The entries contain the information necessary to reconstruct/redo
changes made by the INSERT, UPDATE, DELETE, CREATE, ALTER and DROP commands.
The contents of this buffer are flushed:

every three seconds

whenever someone commits a transaction

when it gets one third full or contains 1MB of cached redo log data

when LGWR is asked to switch logs

Use the LOG_BUFFER parameter to adjust the size, but be careful about increasing it too
much: it will reduce the number of redo writes, but commits will take longer.
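One rough way to judge whether LOG_BUFFER is too small is the 'redo log space requests' statistic, which counts the times a session had to wait for space in the buffer:

```sql
show parameter log_buffer

select name, value
from v$sysstat
where name in ('redo log space requests', 'redo entries');
```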

Large Pool

This is an optional memory area that provides large areas of memory for:

Shared Server - to allocate the UGA region in the SGA

Parallel execution of statements - to allow for the allocation of inter-process message
buffers, used to coordinate the parallel query servers

Backup - for RMAN disk I/O buffers

The large pool is basically a non-cached version of the shared pool.

Use the LARGE_POOL_SIZE parameter to adjust it.
Java Pool

Used to execute Java code within the database.

Use the JAVA_POOL_SIZE parameter to adjust it (default is 20MB).

Streams Pool

Streams are used for enabling data sharing between databases or application environments.

Use the STREAMS_POOL_SIZE parameter to adjust it.

The fixed SGA contains a set of variables that point to the other components of the
SGA, and variables that hold the values of various parameters. The area is a kind of
bootstrap section of the SGA, something that Oracle uses to find the other bits and
pieces of the SGA.
See setting up the SGA for more information.
The PGA (Process Global Area) is a specific piece of memory that is associated with a
single process or thread; it is not accessible by any other process or thread. Note that
each of Oracle's background processes has a PGA area. The UGA (User Global Area)
is your session's state information; this area of memory will be accessed by your current
session. Depending on the connection type the UGA can be located in the SGA, which is
accessible by any one of the shared server processes; because a dedicated connection
does not use shared servers, the memory will be located in the PGA.
Shared server - the UGA will be part of the SGA
Dedicated server - the UGA will be part of the PGA
Memory Area - Dedicated Server / Shared Server

Nature of session memory - private / shared

Location of the persistent area - PGA / SGA

Location of part of the runtime area for SELECT statements - PGA / PGA

Location of the runtime area for DML/DDL statements - PGA / PGA

Oracle creates a PGA area for each user's session; this area holds data and control
information and is used exclusively by that session. User cursors, sort
operations, etc are all stored in the PGA. The PGA is split into two areas:

Session information (runtime area) - a PGA in an instance running with a shared server
requires additional memory for the user's session, such as private SQL areas and other
information.

Stack space (private SQL area) - the memory allocated to hold a session's variables,
arrays, etc and other information relating to the session.

Automatic PGA Management

To reduce response times sorts should be performed in the PGA cache area (optimal
mode operation); otherwise the sort will spill onto disk (single-pass / multiple-pass
operation), which reduces performance, so there is a direct relationship between
the size of the PGA and query performance. You can manually tune the following to
increase performance:
sort_area_size - total memory that will be used to sort information before
swapping to disk
sort_area_retained_size - memory that is used to retain data after a sort
hash_area_size - memory that will be used to store hash tables
bitmap_merge_area_size - memory Oracle uses to merge bitmaps retrieved
from a range scan of the index
Starting with Oracle 9i there is a new way to manage the above settings: let Oracle
manage the PGA area automatically by setting the following parameters; Oracle will
then automatically adjust the PGA area based on user demand.
workarea_size_policy - you can set this option to manual or auto (default)
pga_aggregate_target - controls how much memory to allocate to the PGA in total
Oracle will try to keep the PGA under the target value, but if you exceed this value
Oracle will perform multi-pass operations (disk operations).
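Enabling automatic PGA management is then just a matter of (the 512M target being an arbitrary example value):

```sql
alter system set workarea_size_policy = auto;
alter system set pga_aggregate_target = 512M;
```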

System Parameters

workarea_size_policy - manual or auto (default)

pga_aggregate_target - total amount of memory allocated to the PGA

PGA/UGA amount used

select a.name, to_char(b.value, '999,999,999') value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name like '%ga memory%';

Display if using memory or disk sorts

set autotrace traceonly statistics;
-- run the sort query here, then check the sorts (memory) and sorts (disk) statistics
set autotrace off;

Display background process PGA memory


select program, pga_used_mem, pga_alloc_mem, pga_max_mem from v$process;

Oracle Transactions
A transaction is a logical piece of work consisting of one or more SQL statements. A
transaction is started whenever data is read or written, and it is ended by a
COMMIT or ROLLBACK. DDL statements always perform a commit first; this is
called an implicit commit because the user did not issue the commit.
Oracle uses transaction locking and multiversion concurrency control using undo
records to ensure serializability of transactions; this prevents user conflicts while
ensuring database consistency.
Transaction Properties
Database transactions should exhibit attributes described by the ACID properties:
Atomicity - a transaction either happens completely, or none of it happens
Consistency - a transaction takes the database from one consistent state to the next
Isolation - the effects of a transaction may not be visible to other transactions
until the transaction has committed
Durability - once the transaction is committed it is permanent; all changes are
written to the redo log first and then the data files
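A minimal sketch of transaction boundaries (emp being a hypothetical table); note how the DDL at the end implicitly commits the preceding work:

```sql
update emp set sal = sal * 1.1 where deptno = 10;

savepoint before_delete;
delete from emp where deptno = 20;
rollback to before_delete;   -- undoes only the delete

commit;                      -- makes the update permanent

create index emp_dept_idx on emp(deptno);  -- DDL: implicit commit first
```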
Transaction Concurrent Control

Oracle uses locking to ensure data consistency, but the locking is done in the least
restrictive fashion possible, with the goal of maintaining the maximum amount of
concurrency. Concurrency problems can be any of the following:
Dirty Reads

Occur when a transaction reads data that has been updated by an ongoing transaction but
has not been committed permanently to the database; it is possible that the transaction
may be rolled back.

Phantom Reads

Are caused by the appearance of new data in between two database operations in a
transaction.

Lost Updates

Occur when two transactions update the same data and one transaction's update is
overwritten by the other's.

Non-Repeatable Reads

When a transaction finds that data it has read previously has been modified by some other
transaction, you have a non-repeatable (or fuzzy) read. Basically, you read data at
one time and it is different when you read it again.

To overcome the above problems you could serialize all transactions, making sure
that the data is consistent; however this does not scale well. Oracle instead serializes
transactions via isolation levels and the management of undo data.
Isolation Levels
The main isolation levels are the following:

Serializable

The transaction will lock all the tables it is accessing to prevent other transactions
updating data until it either rolls back or commits.

Repeatable Read

A transaction that reads the data twice from a table at different points in time will find
the same values each time. Both dirty reads and non-repeatable reads are avoided with this
level of isolation.

Read Uncommitted - allows a transaction to read another transaction's uncommitted (intermediate) values before it commits
Read Committed

Guarantees that the row data won't change while you're accessing a particular row in a
table.

Oracle uses locks and a multiversion concurrency control system. It uses row-level
locking (it never uses lock escalation); it will automatically place the lock for you and
store the lock information in the data block, and locks are held until the transaction is
either committed or rolled back. Multiversion concurrency is a timestamp approach: to
preserve the original data, Oracle writes it to an undo record in
the undo tablespace; queries then have a consistent view of the data, which provides
read consistency - they only see data from a single point in time. For more information
see Oracle locking.
Oracle Locks
There are a number of different locks in Oracle and tables that can obtain information
regarding locks.

DML Locks

Oracle uses row-level locks to protect a row while it is being changed; the lock
will never block a reader of the same row. A table lock is also placed, but this just
ensures that no DDL is used on the table.

DDL Locks

When changing table attributes Oracle places an exclusive lock on the table to prevent
any modifications to the rows. A share DDL lock is also taken during DML transactions to
make sure the table structure is not changed while data is being changed or inserted.


Latches protect the memory structure with the SGA, they control the processes that
access the memory area's.

Internal Locks

Are used by Oracle to protect access to structures such as data files, tablespaces and
rollback segments.

Distributed Locks

Are specialized locking mechanisms used in distributed systems.

Blocking Locks

Occurs when a lock is placed on an object by a user to prevent other users accessing the
same object.


Deadlocks

Occur when two sessions block each other while each waits for a resource that the
other session is holding. Oracle always steps in to resolve the issue by rolling back
one of the sessions' statements (ORA-00060); check the alert.log and trace files for
deadlocks.
Useful Views

DBA_LOCK

lists all locks or latches held in the database, and all outstanding requests for a lock or
latch

DBA_WAITERS

shows all the sessions that are waiting for a lock

DBA_BLOCKERS

displays a session if it is not waiting for a locked object but is holding a lock on an
object for which another session is waiting

V$LOCK

lists the locks currently held by the Oracle Database and outstanding requests
for a lock or latch

V$SESSION

lists session information for each current session

See Oracle Locking for more information.

Integrity Constraints and Transactions
See constraints for more information about deferrable and immediate constraints.
Autonomous Transactions
You can create a transaction within a transaction that will commit or roll back
independently of its parent transaction. They can be used in the following:
top-level anonymous blocks
local (a procedure in a procedure), stand-alone or packaged functions and procedures
methods of object types
database triggers


create or replace procedure autonomous_example
as
  pragma autonomous_transaction;
begin
  insert into test_table values ('Autonomous insert');
  commit;
end;
/

The pragma directive tells Oracle that this is a new autonomous transaction and that it
is independent from its parent.
A trigger cannot contain a commit or rollback statement, however by using
autonomous transactions you can overcome this limitation, it is considered bad
practice but it is possible.
create table tab1 (col1 number);
create table log (timestamp date, operation varchar2(2000));

Trigger with a commit

create trigger test_trig
after insert on tab1
begin
  insert into log values (SYSDATE, 'Insert and commit via trigger');
  commit;
end;
/

SQL> insert into tab1 values (1);
insert into tab1 values (1)
ERROR at line 1:
ORA-04092: cannot COMMIT in a trigger
ORA-06512: at "TEST01.TEST_TRIG", line 3
ORA-04088: error during execution of trigger 'TEST01.TEST_TRIG'

Overcome the commit problem with an autonomous transaction

create or replace trigger test_trig
after insert on tab1
declare
  pragma autonomous_transaction;
begin
  insert into log values (SYSDATE, 'Insert and commit using autonomous transaction');
  commit; -- only allowed in autonomous triggers
end;
/

Undo Data

Undo data provides read consistency; there are two ways to control undo, manual
or automatic. See the Undo Data section for more details.
Oracle Transaction
Simple Oracle transaction
1. The user requests a connection to Oracle
2. A new dedicated server process is started for the user
3. The user executes a statement to insert data into a table
4. Oracle checks the user's privileges; it first checks the library cache (cache hit)
for the information and if not found retrieves it from disk
5. Oracle checks to see if the SQL statement has been parsed before (library cache); if it
has then this is called a soft parse, otherwise the code has to be compiled, a hard parse
6. Oracle creates a private SQL area in the user session's PGA
7. Oracle checks to see if the data is in the buffer cache, otherwise it performs a read
from the data file
8. Oracle will then apply row-level locks where needed to prevent others
changing the row (select statements are still allowed on the row)
9. Oracle then writes the change vectors to the redo log buffer
10. Oracle then modifies the row in the data buffer cache
11. The user commits the transaction, making it permanent; the row-level locks are released
12. The log writer process immediately writes out the changed data in the redo log
buffers to the online redo log files; in other words the change is now permanently recorded
13. Oracle informs the user process that the transaction completed successfully
14. It may be some time before the database writer writes the change from the data
buffer cache out to the data files

Note: if the user's transaction was an update then the before-update image of the row
would have been written to the undo segments; this would be used if the user rolls back
the change or if another user runs a select on that data before the new update was
committed.
Oracle Locking and Concurrency

In Oracle you will learn that:
Transactions are what databases are all about; they are good.
You should defer committing for as long as you have to. You should not commit
quickly to avoid stressing the system; it does not stress the system to have long
or large transactions. The rule is commit when you must, and not before. Transactions
should only be as small or as large as your business logic dictates.
You should hold locks on data for as long as you need to. They are tools for you
to use, not things to be avoided. Locks are not a scarce resource.
There is no overhead involved with row-level locking in Oracle, none.
You should never escalate a lock (for example, use a table lock instead of a row
lock) because it would be 'better on the system'. In Oracle it won't be better for
the system - it will save no resources.
Concurrency and consistency can be achieved. You can get it fast and correct,
every time.
Oracle uses a different locking method than most other databases; Oracle's locking
policy consists of the following:
Oracle locks data at the row level on modification only. There is no lock
escalation to a block or table level, ever.
Oracle never locks data just to read it. There are no locks placed on rows of
data by simple reads.
A writer of data does not block a reader of data.
A writer of data is blocked only when another writer of data has already locked
the row it was going after.

Pessimistic Locking
When a user queries some data and picks a row to change, the statement below is used:

Pessimistic Locking

select empno, ename, sal

from emp
where empno = :empno
and ename = :ename
and sal = :sal
for update nowait

What the "for update nowait" clause does is lock the row against updates by
other sessions. This is why this approach is called pessimistic locking: we lock the
row before we attempt to update it because we doubt that the row will remain
unchanged otherwise. We will get one of three outcomes from this statement:
if the underlying data has not changed, we get our row back and this row will
be locked from updates by others (but not reads)
if another user is in the process of modifying that row, we will get an ORA-00054
"resource busy" error; we are blocked and must wait for the other user to
finish with it
if, in the time between selecting the data and indicating our intention to update,
someone has already changed the row, then we will get zero rows back; the
data is stale, and the application needs to re-query the data and lock it before
allowing the end user to modify it, in order to avoid a lost update
Optimistic Locking
With optimistic locking you keep the old and new values in the application, and upon
updating the data use an update like the one below:

Optimistic Locking

update table
set column1 = :new_column1, column2 = :new_column2, ....
where column1 = :old_column1
and column2 = :old_column2

We are optimistically hoping that the data has not changed. If we are lucky the row is
updated; if not we update zero rows and now have two options: get the user to re-key
the data back in, or try to merge the two versions of the data (which takes a lot of code).
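A common refinement of optimistic locking, sketched below, avoids comparing every column by adding a version (or timestamp) column that is bumped on every update; the version_no column is an assumption, not part of the example above:

```sql
-- assumes emp has a version_no column maintained by every update
update emp
   set sal        = :new_sal,
       version_no = version_no + 1
 where empno      = :empno
   and version_no = :old_version_no;
-- zero rows updated means another session changed the row since we read it
```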

So, the best method in Oracle would be to use pessimistic locking, as the user can have
confidence that the data they are modifying on the screen is currently owned by them
- in other words the row is checked out and nobody can modify it. You may be
thinking, what if the user walks away while the row is locked? In this scenario it would
be better to get the application to release the lock, or use resource profiles in the
database to time out idle sessions. Remember that even if a row is locked you can
still read that row; it is never blocked for reading in Oracle.
Blocked Inserts
The only time an INSERT will block is when you have a table with a primary key or
unique constraint placed on it and two sessions simultaneously attempt to insert a row
with the same value. This is most easily avoided by using Oracle sequences to generate
primary keys, as they are a highly concurrent method of generating unique
keys in a multi-user environment.
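A sketch of the sequence approach (object names are assumptions for illustration):

```sql
create sequence emp_seq;

-- each session receives its own unique value, so two simultaneous
-- inserts can never collide on the primary key
insert into emp (empno, ename) values (emp_seq.nextval, 'SMITH');
```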
Blocked Updates and Deletes
To avoid update and delete blocking use either one of the two locking
methods Pessimistic or Optimistic.
Deadlocks
Deadlocks occur when two sessions each hold a resource that the other wants. Oracle
records all deadlocks in a trace file. The number one cause of deadlocks is un-indexed
foreign keys:
if you update the parent table's primary key, the child table will be locked in the
absence of an index
if you delete from the parent table, the entire child table will be locked, again in the
absence of an index.
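A rough way to find candidate problem constraints - a sketch that only checks the first column of each foreign key, so treat the output as a starting point rather than a definitive list:

```sql
-- foreign-key constraints whose leading column has no index on it
select c.table_name, c.constraint_name
  from user_constraints c, user_cons_columns cc
 where c.constraint_type  = 'R'
   and cc.constraint_name = c.constraint_name
   and cc.position        = 1
   and not exists (select 1
                     from user_ind_columns ic
                    where ic.table_name      = cc.table_name
                      and ic.column_name     = cc.column_name
                      and ic.column_position = 1);
```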
Lock Escalation
In other RDBMSs, when a user locks some number of rows (100, say - this may vary) the
lock is escalated to a table lock. Oracle never escalates a lock, NEVER. Oracle does,
however, practice lock conversion (also called lock promotion - the two terms are
synonymous).
If a user selects a row using FOR UPDATE, two locks are placed: an exclusive lock on
the row and a ROW SHARE lock on the table itself. This will prevent
other users placing an exclusive lock on the table, thus preventing them from altering
the table structure.
Type of Locks
There are a number of different types of locks, as listed below:
DML Locks - DML (data manipulation language), in general SELECT,
INSERT, UPDATE and DELETE. DML locks will be locks on a specific row
of data, or a lock at the table level, which locks every row in the table.
DDL locks - DDL (data definition language), in general CREATE, ALTER and
so on. DDL locks protect the definition of the structure of objects.
Internal locks and latches - These are locks that Oracle uses to protect its
internal data structure.
Distributed Locks - These are used by OPS to ensure that different nodes are
consistent with each other.
Deadlocks - Occurs when two sessions block each other while each waits for a
resource that the other session is holding.
PCM - PCM (Parallel Cache Management) locks protect one or
more cached data blocks in the buffer cache across multiple instances; also used
in OPS.
DML Locks
There are two main types of DML locks: TX (Transaction) and TM (DML Enqueue). A
TX lock is acquired when a transaction initiates its first change and is held until the
transaction performs a COMMIT or ROLLBACK. It is used as a queuing mechanism
so that other sessions can wait for the transaction to complete. A TM lock is used to
ensure that the structure of the table is not altered while you are modifying its contents.
The complete set of DML table lock modes are:
Row Share

permits concurrent access but prohibits others from locking the table for exclusive access

Row Exclusive

same as row share but also prohibits locking in share mode

Share

permits concurrent queries but prohibits updates to the table

Share Row Exclusive

prevents others from locking in share mode or updating rows in the whole table

Exclusive

permits queries but no DML against the table (selects are still OK)

The types of lock taken when issuing DML are:

RS (table) and RX (row)

select ... for update;
lock table ... in row share mode;

RX (table) and RX (row)

any insert, update or delete

Below are some queries and views that can be used to identify locks, transaction IDs,
etc.
Useful SQL

Identify locks and transaction IDs

select username,
       trunc(id1/power(2,16)) rbs,
       bitand(id1, to_number('ffff', 'xxxx'))+0 slot,
       id2 seq
from v$lock, v$session
where v$lock.type = 'TX'
and v$lock.sid = v$session.sid
and v$session.username = USER;

Identify who is blocking whom

select (select username from v$session where sid=a.sid) blocker,
       a.sid,
       ' is blocking ',
       (select username from v$session where sid=b.sid) blockee,
       b.sid
from v$lock a, v$lock b
where a.block = 1
and b.request > 0
and a.id1 = b.id1
and a.id2 = b.id2;


Lock a row without waiting

select * from employee for update nowait;

Lock a row, waiting up to 10 seconds

select * from employee for update wait 10;

Note: the above commands will abort if the lock is not released in the specified time

Useful Views

V$TRANSACTION

lists the active transactions in the system

V$SESSION

lists session information for each current session

V$LOCK

lists the locks currently held by the Oracle Database and outstanding requests for a lock or latch

V$LOCKED_OBJECT

lists all locks acquired by every transaction on the system. It shows which sessions are holding
DML locks (TM type enqueues) on what objects and in what mode

DBA_LOCK

lists all locks or latches held in the database, and all outstanding requests for a lock or latch

DBA_BLOCKERS

displays a session if it is not waiting for a locked object but is holding a lock on an object
for which another session is waiting

DBA_DDL_LOCKS

lists all DDL locks held in the database and all outstanding requests for a DDL lock

DBA_DML_LOCKS

lists all DML locks held in the database and all outstanding requests for a DML lock

DDL Locks
DDL locks are automatically placed against objects during a DDL operation to protect
them from changes by other sessions.
There are three types of DDL locks
Exclusive DDL Locks - these prevent other sessions from gaining a DDL lock
or TM lock themselves. You can query a table but not modify it. Exclusive
locks will normally lock the object until the statement has finished. However in
some instances you can use the ONLINE option which only uses a low-level
lock this still locks DDL operations but allows DML to occur normally.
Share DDL Locks - These protect the structure of the referenced object against
modification by other sessions, but allow modification of the data. Shared
DDL locks allow you to modify the contents of a table but not its structure.
Breakable Parse Locks - These allow an object, such as a query plan cached in
the shared pool, to register its reliance on some other object. If you perform DDL
against that object, Oracle will review the list of objects that have registered
their dependence, and invalidate them. Hence these locks are breakable; they do
not prevent the DDL from occurring. Breakable parse locks are used when a
session parses a statement: a parse lock is taken against every object referenced
by that statement. These locks are taken in order to allow the parsed, cached
statement to be invalidated (flushed) from the shared pool if a referenced object is
dropped or altered in some way. Use the below SQL to identify any parse locks on
views, procedures, grants, etc.
Identify Locks

select * from dba_ddl_locks;

Latches and Internal Locks (Enqueues)

Latches are locks that are held for a short period of time, for example the time it takes to
modify an in-memory data structure. They are used to protect certain memory
structures such as the database block buffer cache or the library cache in the shared
pool. They are lightweight, low-level serialization mechanisms that protect the
in-memory data structures of the SGA. They do not support queuing and do not protect
database objects such as tables or data files.
Enqueues are another, more sophisticated, serialization device, used for example when
updating rows in a database table. The requestor will queue up and wait for the resource
to become available, hence these are not as fast as a latch.
It is possible to use manual locking via the FOR UPDATE clause or the LOCK
TABLE statement, or you can create your own locks by using the DBMS_LOCK package.
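A sketch of a user-defined lock via the DBMS_LOCK package (the lock name and timeout are arbitrary illustration values):

```sql
declare
  l_handle varchar2(128);
  l_status integer;
begin
  -- map an application-level name to a lock handle
  dbms_lock.allocate_unique('MY_APP_LOCK', l_handle);
  -- request the lock in exclusive mode, waiting up to 10 seconds
  l_status := dbms_lock.request(l_handle, dbms_lock.x_mode, timeout => 10);
  if l_status = 0 then
    -- ... serialized work goes here ...
    l_status := dbms_lock.release(l_handle);
  end if;
end;
/
```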
Deadlocks
Occur when two sessions block each other while each waits for a resource that the
other session is holding.
Oracle operates a multi-version read-consistent concurrency model. Oracle provides:
Read-consistent queries: Queries that produce consistent results with respect
to a point in time by using rollback segments.
Non-Blocking queries: Queries are never blocked by writers of data, as they
would be in other databases.
When Oracle reads a table it uses the rollback segments to reconstruct any rows whose
data has changed since the read started. This allows a point-in-time read of a table, and
it also allows Oracle not to lock a table while reading large tables.
Transaction and Row Locks
I am now going to describe in detail how a lock works; you don't need this detail but
it is good to understand what is going on under the covers. Row-level locks protect
selected rows in a data block during a transaction; a transaction acquires a transaction
enqueue and an exclusive lock for each individual row modified by one of the following
statements:

insert
update
delete
select with for update

These locks are stored within the data block, and each lock refers to the transaction
enqueue; as they are stored in the block they have a database-wide view. The lock
is held until either a commit or a rollback is executed; SMON also acquires it in
exclusive mode when recovering (undo-ing) a transaction. Transaction locks are used
as a queuing mechanism for processes awaiting release of an object locked by a
transaction in process.
Every data block (except for temp and rollback segments) will have a number of
predefined transaction slots. Undo segments have a different type of transaction slot
called transaction tables. The transaction slots are otherwise known as interested
transaction lists (ITLs) and are controlled by the INITRANS parameter (default is 2
for tables and 3 for indexes). A transaction slot uses 24 bytes of free space in the
block, and the maximum number of transaction slots is controlled by the
MAXTRANS parameter; however you can only use up to 50% of the block for
transaction slots.
An ITL slot is required for every transaction; it contains the transaction ID (XID), which
is a pointer to an entry in the transaction table of a rollback segment. You can still read
the data, but other processes wanting to change the data must wait until the lock is
released (commit or rollback). The ITL entry contains the XID, undo byte address
(UBA) information, flags indicating the transaction status (Flag), a lock count (Lck)
showing the number of rows locked by this transaction within the block, and the SCN at
which the transaction was last updated. Basically the XID identifies the undo information
about that transaction.

You can use the view X$KTUXE to obtain information on the number of rows that are
affected; the view V$TRANSACTION can also be used to get more details on a
transaction.
When the transaction completes, Oracle performs the bare minimum to commit the
transaction: it involves updating the flag in the transaction table, and the block itself is not
revisited. This is known as a fast commit; during this time the ITL in the data block is
still pointing to the transaction table of the corresponding rollback segment. If another
transaction wants to use the block (presuming the change has not yet been committed), it
sees that the ITL points to a rollback segment, makes a copy of the block in memory, gets
the UBA from the ITL, reads the data from the undo and uses it to roll back the change
described by the undo. If the transaction is committed, the rows are no longer locked but
the lock byte in the row header is not cleared until the next time a DML action is
performed on the block. The block cleanout is delayed by some discrete time interval
because of the fast commit; this is called delayed block cleanout. The cleanout
operation closes the open ITLs and generates redo information, as a block cleanout
may involve updating the block with a new SCN - this is why you see redo generation
for some select statements.

Redo Log Files
All changes made to the database are recorded in the redo log files; these files,
along with any archived redo logs, enable a DBA to recover the database to any point in
the past. Oracle will write all committed changes to the redo logs first, before applying
them to the data files. The redo logs guarantee that no committed changes are ever
lost. Redo log files consist of redo records, which are groups of change vectors, each
referring to a specific change made to a data block in the database. The changes are
first kept in the redo buffer but are quickly written to the redo log files.
There are two types of redo log files: online and archived. Oracle uses the concept of
log groups, and a minimum of 2 groups is required, each group having at least one file.
They are used in a circular fashion: when one group fills up Oracle will switch to the
next log group.
The LGWR process writes redo information from the redo buffer to the online redo
logs when
a user commits a transaction
the redo log buffer becomes 1/3 full
the redo buffer contains 1MB of changed records
a log file switch occurs
The log group can be in one of four states:

Current

the log group that is actively being written to

Active

the files in the log group are required for instance recovery

Inactive

the files in the log group are not required for instance recovery and can be overwritten

Unused

the log group has never been written to; a new group

A log file can be in one of four states:

Invalid

the file is corrupt or missing

Stale

the log file is new and has never been used

Deleted

the log file is no longer being used

Blank (null)

the log file is currently being used

Log group and log files commands

Creating new log group

alter database add logfile group 4 ('c:\oracle\redo3a.log','c:\oracle\redo3b.l

Adding new log file to existing


alter database add logfile member 'c:\oracle\redo3c.log' to group 3;

Renaming a log file in an existing group:
1. shutdown database
2. rename the file at the operating system level
3. startup database in mount mode
4. alter database rename file 'old name' to 'new name';
5. open database
6. backup controlfile

Drop log group

alter database drop logfile group 3;

Drop log file from existing group

alter database drop logfile member 'c:\oracle\redoc.log';


Clearing Log groups

alter database clear logfile group 3;

alter database clear unarchived logfile group 3;
Note: use the unarchived option when a log group has not yet been archived

Logswitch and Checkpointing

alter system checkpoint;
alter system switch logfile;
alter system archive log current;

alter system archive log all;

# Differences between them:

switch logfile - will switch the logfile and return the prompt immediately; archiving
of the full log continues in the background
log current - will switch the logfile and return the prompt only when the logfile has
been archived
log all - will archive all full log files
Note: checkpoints are discussed in their own section

Display the redo usage

select le.leseq "Current log sequence No",
       100*cp.cpodr_bno/le.lesiz "Percent Full",
       cp.cpodr_bno "Current Block No",
       le.lesiz "Size of Log in Blocks"
from x$kcccp cp, x$kccle le
where le.leseq = cp.cpodr_seq
and bitand(le.leflg,24) = 8;

Useful Views

V$LOG

displays log file information from the control file

V$LOGFILE

contains information about redo log files

Archived Logs
When a redo log file fills up, and before it can be used again, the file is archived for safe
keeping; this archive file together with the other redo log files can recover a database to
any point in time. It is best practice to turn on ARCHIVELOG mode, which performs the
archiving automatically.
The log files can be written to a number of destinations (up to 10 locations), even to a
standby database; using the
parameters log_archive_dest_n and log_archive_min_succeed_dest you can control
how Oracle writes its log files.
alter system set log_archive_dest_1 = 'location=c:\oracle\archive' scope=spfile;
alter system set log_archive_format = 'arch_%d_%t_%r_%s.log' scope=spfile;


Turning on archive mode:

shutdown database
startup database in mount mode
alter database archivelog;
startup database in open mode
Archive format options
%r - resetlogs ID (required parameter)
%s - log sequence number (required parameter)
%t - thread number (required parameter)
%d - database ID (not required)


Turn off archive mode:

alter database noarchivelog;


Display archive mode information:

archive log list;

select name, log_mode from v$database;
select archiver from v$instance;

Display system parameters

show parameter log_archive_dest

show parameter log_archive_format
show parameter log_archive_min_succeed_dest


Useful Views

V$ARCHIVED_LOG

displays the archived log files

V$DATABASE

displays whether the database is in archivelog mode (log_mode column)

V$INSTANCE

displays whether the archiver process is running (archiver column)

I have a more detailed section on redo in my Data Guard section called Redo

Undo Data
Undo data provides read consistency. Oracle provides two ways to allocate and
manage undo (rollback) space among transactions: if you use the manual approach you
will be using traditional rollback segments, but it is easier to let Oracle control
the rollback segments automatically, which is called AUM (automatic undo management).
The only part left on the DBA side is to size the undo tablespace; Oracle will then
automatically create the undo segments within the tablespace.
Using AUM you can take advantage of flashback recovery, flashback query, flashback
versions query, flashback transaction query and flashback table - see flashback for
further details.
There are three parameters associated with AUM:

undo_management (default MANUAL)

This is the only mandatory parameter and can be set to either auto or manual.

undo_tablespace

This specifies the tablespace to be used; of course the tablespace needs to be an undo
tablespace. If you do not supply a value Oracle will automatically pick the one available.
If no undo tablespace exists then Oracle will use the system tablespace, which is not a
good idea (always create one).

undo_retention (default 900 seconds)

Once a transaction commits, the undo data for that transaction stays in the undo tablespace
until space is required, in which case it will be overwritten.

When a transaction commits, its undo data is not required any more; the undo data
however will stay in the undo tablespace until space is required, when newer
transactions will overwrite it. During a long-running query that needs to retain older
undo data for consistency purposes, there is a possibility that some data it
needs has been overwritten by other, newer transactions; this produces the
"snapshot too old" error message, which indicates that the before image has been
overwritten. To prevent this Oracle uses the undo_retention system parameter, which
tries to keep the data in the undo tablespace for as long as possible, meeting
the undo_retention target; however this is not guaranteed.
Undo data can be in 3 states:

uncommitted undo

undo data that supports an active transaction and is required in the event of a rollback;
it is never overwritten

committed (unexpired) undo

undo data for committed transactions that is still within the undo_retention interval;
it is overwritten only after the undo_retention period has passed or under undo
tablespace space pressure, unless the guarantee option is set (see below)

expired undo

undo information that is no longer needed and can be overwritten at any time

There are times when you want to guarantee the undo retention at any cost, even if it
means transactions fail; the retention guarantee option will guarantee that the data
stays in the undo tablespace until the interval has expired, even if there is space
pressure in the undo tablespace. The default is not to guarantee the
retention period.
I have noticed on my travels that once undo has expired it is no longer available,
even if the undo tablespace is not under any space pressure; the only way to keep it
longer is to increase the undo_retention parameter. You can prove this by checking the
oldest undo data available via the dba_hist_undostat view: the oldest data will match the
undo_retention period you set via the undo_retention parameter.
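You can check this yourself - a sketch, assuming the AWR retention covers the window you are interested in:

```sql
-- oldest undo statistics still recorded; compare with your undo_retention
select min(begin_time) as oldest_undo_stat
  from dba_hist_undostat;
```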
Undo Sizing
Depending on how much undo data you want to keep, you will need to size the undo
tablespace accordingly; a simple formula is used when calculating the undo tablespace size:

undo tablespace size = UR * UPS * DB_BLOCK_SIZE

UR = undo retention (system parameter undo_retention, in seconds)
UPS = maximum undo blocks used per second (obtained from v$undostat)
DB_BLOCK_SIZE = the default block size (obtained from dba_tablespaces)

The Oracle Enterprise Manager uses the desired time period for undo retention and
analyses the impact of the desired undo retention setting.
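The formula can be evaluated straight from the data dictionary - a sketch, assuming AUM is enabled and v$undostat holds a representative workload history:

```sql
-- UR * UPS * DB_BLOCK_SIZE, expressed in bytes
select (select value from v$parameter where name = 'undo_retention') *
       (select max(undoblks / ((end_time - begin_time) * 86400))
          from v$undostat) *
       (select value from v$parameter where name = 'db_block_size')
         as undo_tablespace_bytes
  from dual;
```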
Undo Commands
Undo System Management

alter system set undo_management=auto;


alter system set undo_tablespace = 'undotbs02';


alter system set undo_retention = 43200; (it's in seconds)

Undo Control

Create an undo tablespace

create undo tablespace undotbs2 datafile 'c:\oracle\undo02.dbf' size 2G;

Drop an undo tablespace

drop undo tablespace undotbs02;

Guarantee undo retention

alter tablespace undotbs02 retention guarantee;

alter tablespace undotbs02 retention noguarantee;

See current undo blocks

select begin_time, undotsn, undoblks, activeblks, unexpiredblks, expiredblks from v$undostat;

select begin_time, undotsn, undoblks, activeblks, unexpiredblks, expiredblks from dba_hist_undostat;

Note: dba_hist_undostat contains snapshots of v$undostat and can be used to obtain the
oldest undo data available. If your current undo_retention period is 6 days, then the oldest
undo data in dba_hist_undostat should be 6 days old.

Useful Views

DBA_ROLLBACK_SEGS

describes rollback segments

DBA_TABLESPACES

describes all tablespaces in the database

DBA_UNDO_EXTENTS

describes the extents comprising the segments in all undo tablespaces in the database

DBA_HIST_UNDOSTAT

displays the history of histograms of statistical data to show how well the system is working. The available
statistics include undo space consumption, transaction concurrency, and length of queries executed in the
instance. This view contains snapshots of V$UNDOSTAT.

V$UNDOSTAT

displays a histogram of statistical data to show how well the system is working. The available statistics include
undo space consumption, transaction concurrency, and length of queries executed in the instance. You can use
this view to estimate the amount of undo space required for the current workload. Oracle uses this view to
tune undo usage in the system. The view returns null values if the system is in manual undo management mode.

V$ROLLNAME

lists the names of all online rollback segments. It can only be accessed when the database is open.

V$ROLLSTAT

contains rollback segment statistics

V$TRANSACTION

lists the active transactions in the system

Flashback recovery, flashback query, flashback versions query, flashback transaction

query and flashback table all use undo data for more details see flashback.

FlashBack Architecture
There are a number of flashback levels
row level

flashback query, flashback versions query, flashback transaction query

table level

flashback table, flashback drop

database level

flashback database

Oracle 10g has several error-correction techniques that use undo data; however they
are only available if you use automatic undo management (AUM):
Flashback query - retrieves data from a past point in time
Flashback versions query - shows you different versions of data rows, plus start
and end times of a particular transaction that created that row
Flashback transaction query - lets you retrieve historical data for a given
transaction and the SQL code to undo the transaction.
Flashback table - recovers a table to its state at a past point in time, without
having to perform a point in time recovery.
There are two other flashback technologies that do not use undo data; they use
flashback logs and the recycle bin instead:
flashback database - restores the whole database back to a point in time
flashback drop - allows you to reverse the effects of a drop table statement,
without resorting to a point-in-time recovery
DBMS_FLASHBACK, flashback table, flashback transaction query, flashback
version query and select .. as of .. statements all use the undo segments. Flashback
database uses the flashback logs and flashback drop uses the recycle bin.
When using flashback, if any operation violates a constraint the flashback operation
will be rolled back; you can disable constraints but it is probably not a good idea. If
you have tables related by a foreign key it is a good idea to flash back both tables.

Flashback table requires a lock on the whole table; if the lock cannot be obtained, the operation will fail.
RMAN can only perform flashback database, not the other flashback technologies.
Flashback Query
Using flashback query involves using a select statement with an AS OF clause, which
lets you select data from a past point in time. If you get an ORA-08180 it means that the
data is no longer available in the undo segments.

Grant flashback privileges

grant flashback on table_test to pvalle;
grant flashback on any table to pvalle;

Flashback query (time)

select * from employees as of timestamp
to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')
where last_name = 'valle';

select employee_id, name from hr.employee as of timestamp (systimestamp - interval '15' minute);

Flashback query (SCN)

select * from employees as of scn 4542;

Note: a timestamp is mapped to an SCN that is only accurate to within about 3 seconds; if you need to be exact, use an SCN.
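To be exact you can record the current SCN before making a change and query against it later. A minimal sketch using the DBMS_FLASHBACK package (the table name and SCN value below are illustrative, and the session needs execute on dbms_flashback):

```sql
-- capture the current SCN before a risky change
select dbms_flashback.get_system_change_number as current_scn from dual;

-- later, query the table as it was at that recorded SCN
select * from employees as of scn 4542;
```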

grant execute on dbms_flashback to test01;

Take the whole session back in time

execute dbms_flashback.enable_at_time(to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS'));
execute dbms_flashback.disable;


insert into employees
select * from employees as of timestamp
to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')
where last_name = 'valle';

insert into employees select * from employees as of scn 4542;

List the flashback entries

select * from flashback_transaction_query;

Obtain a time from an SCN

select scn_to_timestamp(1408640) as ts from dual;

Obtain SCN from time point

select timestamp_to_scn(to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')) as scn from dual;

What is in the undo tablespace

select begin_time, end_time, tuned_undoretention from v$undostat;

Note: the time is in seconds

Flashback Version Query

Flashback version query provides you with all the versions of a row between two
points in time or SCN, this is useful if you want to audit a table finding out what
happened to a row. However there are some points to remember:
You can only retrieve committed rows
The query will retrieve all deleted rows as well as current rows
The query will retrieve any rows that were deleted and reinserted later on
Query result is table format and contains a row for each version of a row during
the time or SCN interval you specify
The limitations of flashback version query are:
You can only query actual tables not views
You cannot apply the versions clause across DDL operations
The query will ignore physical row changes for example during a segment
shrink operation
You cannot use against external or temporary tables.
The most useful columns to obtain are below:
VERSIONS_STARTTIME - start timestamp of version
VERSIONS_STARTSCN - start SCN of version
VERSIONS_ENDTIME - end timestamp of version
VERSIONS_ENDSCN - end SCN of version
VERSIONS_XID - transaction ID of version
VERSIONS_OPERATION - DML operation of version

Flashback version query (time)

select versions_xid as xid, versions_startscn as start_scn, versions_endscn as end_scn,
versions_operation as operation, empname
from employees
versions between timestamp minvalue and maxvalue
order by versions_startscn;

Flashback version query (SCN)

select versions_xid as xid, versions_startscn as start_scn, versions_endscn as end_scn,
versions_operation as operation, empname
from employees
versions between scn minvalue and maxvalue
where emp_id = 863;

Obtain a time from an SCN

select scn_to_timestamp(1408640) as ts from dual;

Obtain SCN from time

select timestamp_to_scn(to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')) as scn from dual;

Flashback Transaction Query

Identifies which transaction or transactions were responsible for a certain change in a
table's data during a specified time period. Basically it queries
the flashback_transaction_query view. It provides the SQL code that will undo the
change; flashback transaction query can use an index path to retrieve data instead of
reading the entire redo log file.
Flashback transaction considerations:
Turn on minimal supplemental logging if your operations involve chained rows
and special storage structures, such as clustered tables
When querying IOT, an update is shown as a delete/insert operation.
If the query involves a dropped table or a dropped user, it returns object
numbers and user ID's instead of the object names and usernames.
Consider setting the retention guarantee option for the undo tablespace, this will
ensure that the unexpired data in the undo segments is preserved.
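The retention guarantee option mentioned above can be set with an alter tablespace statement. A sketch, assuming an undo tablespace named undotbs1 (check your actual undo tablespace name first); note that a guarantee can cause DML to fail if the undo tablespace fills up:

```sql
-- never overwrite unexpired undo, even under space pressure
alter tablespace undotbs1 retention guarantee;

-- revert to the default behaviour
alter tablespace undotbs1 retention noguarantee;
```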
Flashback transaction query will contain the following columns:
start_scn and start_timestamp - identify when a certain row was created
commit_scn and commit_timestamp - tell you when a certain row was committed
xid, row_id and undo_change# - identify the transaction, the row and the change number
operation - tells you what sort of operation occurred: insert, delete or update
logon_user, table_name and table_owner - username, table name and schema
undo_sql - the exact SQL code to undo the change
If you have chained rows or use clustered tables then Oracle recommends that you
turn on supplemental logging at the database level.

grant select any transaction to pvalle;

Supplemental logging

alter database add supplemental log data;

Display undo segments

select operation, undo_sql, table_name from flashback_transaction_query;

Flashback transaction

select operation, undo_sql, table_name

from flashback_transaction_query
where start_timestamp >= to_timestamp ('04-12-2007 05:00:00', 'DD-MM-YYYY HH:MI:SS')
and commit_timestamp <= to_timestamp ('04-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')
and table_owner='VALLEP';

Note: this will give you the SQL to reverse the change that was applied to the data.

Obtain a time from an SCN

select scn_to_timestamp(1408640) as ts from dual;

Obtain SCN from time

select timestamp_to_scn(to_timestamp('03-12-2007 08:00:00', 'DD-MM-YYYY HH:MI:SS')) as scn from dual;

You use flashback version query to obtain the xid and then use the xid in the flashback
transaction query statement

Flashback version and flashback transaction query

You use flashback version query to obtain the xid:

select versions_xid as xid, versions_startscn as start_scn, versions_endscn as end_scn,
versions_operation as operation, empname
from employees
versions between scn minvalue and maxvalue
as of scn 7920
where emp_id = 222;

Now use the XID in the flashback transaction query to obtain the SQL to undo the change:

select xid, start_scn, commit_scn, operation, logon_user, undo_sql
from flashback_transaction_query
where xid = hextoraw('0003002F00038BA9');

The undo_sql column contains the reversing statement (for example, an insert that restores a deleted row).

Flashback Table
There are two distinct table related flashback table features in oracle, flashback table
which relies on undo segments and flashback drop which lies on the recyclebin not the
undo segments.
Flashback table lets you recover a table to a previous point in time; you don't have to
take the tablespace offline during a recovery. However, Oracle acquires exclusive DML
locks on the table or tables that you are recovering, but the table remains online.
When using flashback table Oracle does not preserve the ROWIDs when it restores
the rows in the changed data blocks of the tables, since it uses DML operations to
perform its work. You must have enabled row movement in the tables that you are
going to flashback; only flashback table requires you to enable row movement.
If the data is not in the undo segments then you cannot recover the table by using
flashback table, however you can use other means to recover the table.
Restriction on flashback table recovery
You cannot use flashback table on SYS objects
You cannot flashback a table that has had preceding DDL operations on the
table, like table structure changes, dropping columns, etc
The flashback operation must succeed entirely or it will fail; if flashing back multiple
tables, all tables must be flashed back or none
Any constraint violations will abort the flashback operation
You cannot flashback a table that has had any shrink or storage changes to the
table (pctfree, initrans and maxtrans)


grant flashback on table_test to pvalle;

grant flashback on any table to pvalle;

Enable row movement

alter table test enable row movement;

Display row movement

select table_name, row_movement from user_tables;

Flashback table (SCN)

flashback table test to scn 4587309871;

Flashback table (time)

flashback table employees to timestamp to_timestamp
('04-12-2007 05:00:00', 'DD-MM-YYYY HH:MI:SS');

flashback table employees to timestamp (systimestamp - interval '15' minute);

flashback table employees to timestamp (sysdate - 1);
flashback table employees, depts to timestamp (sysdate - 1);

Enable triggers

flashback table employees to timestamp (sysdate - 1) enable triggers;

Note: Oracle disables triggers by default when flashing back a table

Flashback Drop
Flashback drop lets you reinstate previously dropped tables exactly as they were before
the drop. Below is a summary of what is kept where when a table is dropped:
Recyclebin: tables and indexes
Data dictionary: unique keys, primary keys, not-null constraints, triggers and grants
Not recovered: foreign key constraints
If two tables exist in the recyclebin with the same name the newest one will be
restored unless you state which one you want to restore. If you restore a table it is
removed from the recyclebin.
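To pick a specific copy when several dropped tables share a name, list the recyclebin entries with their drop times, then restore by recyclebin name. A sketch (the original table name, the BIN$ name and the rename target are illustrative):

```sql
-- list all dropped copies of the table, newest first
select object_name, original_name, droptime
from user_recyclebin
where original_name = 'EMPLOYEES'
order by droptime desc;

-- restore a specific copy by its recyclebin name (double quotes required)
flashback table "BIN$NwM/FEjuSUORrgxHUPR3WA==$0" to before drop
rename to employees_old;
```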

Recover dropped table

flashback table <table_name> to before drop;

Recover and rename

flashback table <table_name> to before drop rename to <new_name>;

Note: use if table name already exists

Recover same name table

flashback table "BIN$hfkjdshfkhs" to before drop;

List recycle bin

select object_name, original_name, type from user_recyclebin;

select object_name, original_name, type from dba_recyclebin;

select * from recyclebin; (shows only current user info)

show recyclebin;

drop table completely

drop table <table_name> purge;

Note: table will not be in recyclebin

Drop table from recyclebin

purge table <table_name>;

Note: table will be gone from recyclebin

Truncate table

truncate table <table_name>;

Note: table will not be in the recyclebin

Drop user

drop user test cascade;

Note: will not store anything in the recyclebin

purge recyclebins

purge recyclebin; (purge user recyclebin)

purge dba_recyclebin;(purges all recyclebins)
purge tablespace test user test03; (purge test03 from tablespace test)
List dropped table contents

select * from "BIN$NwM/FEjuSUORrgxHUPR3WA==$0";

Note: note the double quotes

Naming Convention

Dropped objects are renamed to BIN$<unique_id>$<version>, which is why double quotes are needed when referring to them.
Space pressure on a tablespace will cause Oracle to purge the recyclebins of the users
within that tablespace, on a FIFO basis (oldest objects first). When a tablespace has the
autoextend feature turned on, Oracle will clear down the recyclebin first and only then autoextend.
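To see how much reclaimable space the recyclebin is holding per tablespace before space pressure purges it, a sketch against DBA_RECYCLEBIN (the space column is in database blocks; the 8 KB block size assumed here should be replaced with your db_block_size):

```sql
-- recyclebin space per tablespace; space is in blocks, so multiply by block size
select ts_name, count(*) as objects,
       sum(space) * 8192 as approx_bytes
from dba_recyclebin
group by ts_name;
```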
Limitations on flashback drop:
Recyclebin is only available for non-system, locally managed tablespaces
There is no guaranteed timeframe for how long an object will be stored in the recyclebin
DML and DDL cannot be used on objects in the recyclebin
You must use the recyclebin name to query the table

All dependent objects are retrieved when you perform a flashback drop.
Virtual private database (VPD) and FGA policies defined on tables are not
protected for security reasons
Partitioned index-organised tables are not protected by the recycle bin.
Referential constraints are not protected by the recycle bin; they must be recreated after the table has been rebuilt.
Flashback Database
The database can be taken back in time by reversing all work done sequentially. The
database must be opened with resetlogs as if an incomplete recovery has happened.
This is ideal if you have a database corruption (wrong transaction, etc) and require the
database to be rewound before the corruption occurred. If you have media or a
physical problem a normal recovery is required.
Flashback database is not enabled by default. When flashback database is enabled, a
process (RVWR, Recovery Writer) copies modified blocks to the flashback buffer.
This buffer is then flushed to disk (flashback logs). Remember the flashback logging
is not a log of changes but a log of complete block images. Not every changed
block is logged, as this would be too much for the database to cope with; only as
many blocks are copied as can be handled without impacting performance. Flashback database
constructs a version of the data files that is just before the time you want. The data
files probably will be in an inconsistent state, as different blocks will be at
different SCNs; to complete the flashback process, Oracle then uses the redo logs to
recover all the blocks to the exact time requested, thus synchronizing all the data files
to the same SCN. Archivelog mode must be enabled to use flashback database. An
important note to remember is that flashback can only take the database back in time;
rolling forward is always done with redo.
The advantage in using flashback database is speed and convenience with which you
can take the database back in time.
You can use RMAN, SQL and Enterprise Manager to flashback a database. If the flash
recovery area does not have enough room the database will continue to function, but
flashback operations may fail. It is not possible to flashback one tablespace; you must
flashback the whole database. If performance is being affected by flashback data
collection, turn flashback off for some tablespaces.

You cannot undo a resized data file to a smaller size. When using backup recovery
area and backup recovery files, controlfiles, redo logs, permanent files and
flashback logs will not be backed up.


select log_mode from v$database; (must use archivelog mode)

alter system set db_recovery_file_dest='c:/flash_recovery_area';
alter system set db_recovery_file_dest_size=8G;
alter system set db_flashback_retention_target=4320; (3 days, time in minutes)
shutdown immediate;
startup mount;
alter database flashback on; (RVWR process will start)
alter database open;


alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';

select flashback_on from v$database; (check it's enabled)
select retention_target, estimated_flashback_size, flashback_size from v$flashback_database_log;
select oldest_flashback_scn, oldest_flashback_time from v$flashback_database_log;
select end_time, flashback_data, db_data, redo_data from v$flashback_database_stat;

Flashback Buffer

select * from v$sgastat where name like 'flashback%';

Flashback database example

startup mount;
flashback database to timestamp to_timestamp('15-02-07 10:00:00', 'dd-mm-yy hh24:mi:ss');
alter database open read only; (check the data; if ok open the database, if not continue)
shutdown abort;
startup mount;
flashback database to timestamp to_timestamp('15-02-07 10:02:00', 'dd-mm-yy hh24:mi:ss');
alter database open read only; (check the data; if ok open the database, if not continue)
When happy:
alter database open resetlogs;

Flashback using RMAN

flashback database to time = to_date('15-02-07 10:00:00', 'dd-mm-yy hh24:mi:ss');

flashback database to scn = 2765665;
flashback database to sequence=2123 thread=1;

Tablespace Flashback Configuration

Turn flashback on (tablespace)

alter tablespace <tablespace_name> flashback on; (must be in mount mode)

Turn flashback off (tablespace)

alter tablespace <tablespace_name> flashback off; (this can be done in open or mount mode)

Display tablespaces with flashback on

select name, flashback_on from v$tablespace;

Flashback space usage

select * from v$flash_recovery_area_usage;

Note: if one or more tablespaces are not generating flashback data, then before
carrying out a flashback operation the files making up the tablespace must be taken
offline. Offline files are ignored by recover and flashback. Remember that you must
recover these files to the same point as the flashback, otherwise the database will not open.
Flashback Recovery Area
The alert log and DBA_OUTSTANDING_ALERTS will hold status information
regarding the flash recovery area. You can back it up with the RMAN commands
backup recovery area and backup recovery files. Controlfiles and redo logs are
permanently stored in the flash recovery area.
select * from v$recovery_file_dest;

Note: this details space_used, space_limit, space_reclaimable, # of files

What is using the flashback recovery area

select * from v$flash_recovery_area_usage;

Backing up flashback area

rman> backup recovery area;

Note: includes backup sets, datafile copies, archivelogs and the controlfile; excludes: flashback logs,
current controlfile, online redo logs
rman> backup recovery files;
Note: includes all files whether or not they are in the flash recovery area

Using Restore Points

There is a new enhancement to recovery techniques that allows you to create restore
points. A restore point is an alias for an SCN, which eliminates the need to research and
record SCNs or timestamps, which you need for flashback database and flashback
table operations; all you need to do is refer to the restore point when recovering.
A normal restore point does not guarantee that the data will remain in the flashback logs
needed for a flashback database operation to succeed under all circumstances. By
creating a guaranteed restore point, all data will be preserved and you can be
assured that the restore will be successful; remember that you will need enough space
in the flash recovery area to hold the necessary data. A guaranteed restore point does
not depend on flashback logs, thus you can create a guaranteed restore point with
flashback logging turned off. They use a logging mechanism that is similar to flashback
logs but separate from them, so if you are using guaranteed restore points it is
better to turn off flashback logging, otherwise you may end up filling up the flash
recovery area.

One note is that you can only restore back to the restore point, you cannot restore back
to a point in time using restore points, you must then use the backups and archived
logs to do a point in time recovery.
Create restore point (not guaranteed)

create restore point large_update;

Drop restore point

drop restore point large_update;

Create guaranteed restore point

create restore point test_guarantee guarantee flashback database;

Remove guaranteed restore point

drop restore point test_guarantee;

Using a restore point

flashback database to restore point large_update;

flashback table test to restore point table_update;

Displaying restore points

select name, scn, storage_size, time, guarantee_flashback_database from v$restore_point;

Other Operations

Is flashback database running

select flashback_on from v$database;

Note: you should get a "restore point only" reply if using guaranteed restore points without flashback logging

Resumable Space
If you were running a long batch program and the tablespace ran out of space, this
would cause an error; you would increase the amount of space in the tablespace and
rerun your job, which could take quite a bit of time.
Oracle's resumable space will suspend the running job that has run into problems due
to lack of space and will automatically continue when the space issue has been fixed.
You can make all operations run in resumable space allocation mode by using an alter
session command. The following database operations are resumable:
Queries - they can always be resumed after the temporary tablespace has run
out of space.
DML Operations - insert, delete and update can all be resumed
DDL Operations - index operations involving creating, rebuilding and altering
are resumable as are create table as select operations

Import and export operations - SQL*Loader jobs are resumable, but you must use
the resumable parameter in the SQL*Loader job.
You can resume operations that are of the following types of errors
out of space - typical error message is ORA-01653
maximum extents errors - typical error message is ORA-01628
user space quota errors - typical error message is ORA-01536
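Suspended statements can be handled automatically with an AFTER SUSPEND trigger, which fires in the session whose statement has just suspended. A minimal sketch using the DBMS_RESUMABLE package (the trigger name and timeout value are illustrative assumptions):

```sql
create or replace trigger resumable_default_timeout
after suspend on database
begin
  -- give the DBA two hours to fix the space problem before the statement aborts
  dbms_resumable.set_timeout(7200);
end;
/
```

In practice the trigger body would also notify the DBA (for example via an alert table or email job) that a statement is waiting on space.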
grant resumable to vallep;

grant execute on dbms_resumable to vallep;

Who has privilege

select grantee, privilege from dba_sys_privs where privilege='RESUMABLE';

Who has set resumable mode

select user_id, session_id, status, timeout, name from dba_resumable;

select user_id, session_id, name, timeout, start_time, suspend_time from dba_resumable;

Who is in resumable mode waiting for space

select username, event from v$session where event like '%sus%';


Resumable space across entire database

alter system set resumable_timeout=7200;

Note: default is 0, time is in seconds

Resumable space in session

alter session enable resumable;

Resumable space in session with timeout

alter session enable resumable timeout 18000;

execute dbms_resumable.set_timeout(18000);

Note: time is in seconds

Resumable space in session and adding a name

alter session enable resumable name 'pauls_resumable';

Display resumable space mode options

select dbms_resumable.get_timeout() from dual;

Disable resumable space mode

alter session disable resumable;

Useful Views

DBA_SYS_PRIVS - describes system privileges granted to users and roles


DBA_RESUMABLE - lists all resumable statements executed in the system


V$SESSION - lists session information for each current session