

When two or more transactions execute concurrently, their database operations are
executed in an interleaved fashion in order to make efficient use of the CPU. The
operations from one transaction may execute in between two operations from another
transaction. This interleaving may cause incorrect results leading to
an inconsistent database.

Concurrency control coordinates the simultaneous execution of transactions in a multiprocessing database. The objective of concurrency control is to ensure the serializability of transactions in a multi-user database environment; that is, the result of executing concurrent transactions in an interleaved fashion should be the same as if the transactions were executed serially.

Simultaneous execution of transactions over a shared database can create several data
integrity and consistency problems when the transactions operate on the same
attributes. Among these problems are:

1. Lost updates

2. Uncommitted Data

3. Inconsistent Retrievals

Lost Updates

This problem may occur when two concurrent transactions are updating the same data
values and one transaction overwrites the update made by the other transaction.

For Example:

Two concurrent transactions update PROD_QOH of the same product:

Transaction                Computation
T1: Purchase 100 units     PROD_QOH = PROD_QOH + 100
T2: Sell 30 units          PROD_QOH = PROD_QOH - 30

Serial Execution of the Two Transactions:

Time  TX  Step                  Value
1     T1  Read PROD_QOH         35
2     T1  PROD_QOH = 35 + 100
3     T1  Write PROD_QOH        135
4     T2  Read PROD_QOH         135
5     T2  PROD_QOH = 135 - 30
6     T2  Write PROD_QOH        105

The Lost Update:

Time  TX  Step                          Value
1     T1  Read PROD_QOH                 35
2     T2  Read PROD_QOH                 35
3     T1  PROD_QOH = 35 + 100
4     T2  PROD_QOH = 35 - 30
5     T1  Write PROD_QOH (lost update)  135
6     T2  Write PROD_QOH                5
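The two schedules above can be replayed with a minimal sketch (plain illustrative Python; names such as `run` and `local` are invented for this example, and no real DBMS is involved): each transaction reads PROD_QOH into a private workspace, computes there, and writes the result back.

```python
def run(schedule, initial=35):
    """Apply (transaction, operation, delta) steps; each transaction keeps a
    private copy of the value it read, exactly as in the schedules above."""
    db = initial            # PROD_QOH as stored in the "database"
    local = {}              # each transaction's private workspace
    for tx, op, delta in schedule:
        if op == "read":
            local[tx] = db          # copy the current stored value
        elif op == "compute":
            local[tx] += delta      # arithmetic on the private copy
        elif op == "write":
            db = local[tx]          # store the private copy back
    return db

serial = [("T1", "read", 0), ("T1", "compute", +100), ("T1", "write", 0),
          ("T2", "read", 0), ("T2", "compute", -30),  ("T2", "write", 0)]

lost_update = [("T1", "read", 0), ("T2", "read", 0),
               ("T1", "compute", +100), ("T2", "compute", -30),
               ("T1", "write", 0), ("T2", "write", 0)]

print(run(serial))       # 105 -- the correct result
print(run(lost_update))  # 5   -- T1's update of +100 is lost
```

Because T2 read PROD_QOH before T1 wrote it, T2's write at step 6 is based on a stale value and silently discards T1's update.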

Uncommitted Data

The uncommitted data problem may occur when two transactions T1 and T2 are executed concurrently and the first transaction is rolled back after the second transaction has already accessed the uncommitted data, thus violating the isolation property of transactions.

Transaction                Computation
T1: Purchase 100 units     PROD_QOH = PROD_QOH + 100 (rolled back)
T2: Sell 30 units          PROD_QOH = PROD_QOH - 30

Serial Execution of the Two Transactions:

Time  TX  Step                  Value
1     T1  Read PROD_QOH         35
2     T1  PROD_QOH = 35 + 100
3     T1  Write PROD_QOH        135
4     T1  ** ROLLBACK **        35
5     T2  Read PROD_QOH         35
6     T2  PROD_QOH = 35 - 30
7     T2  Write PROD_QOH        5

The Uncommitted Data Problem:

Time  TX  Step                                    Value
1     T1  Read PROD_QOH                           35
2     T1  PROD_QOH = 35 + 100
3     T1  Write PROD_QOH                          135
4     T2  Read PROD_QOH (reads uncommitted data)  135
5     T2  PROD_QOH = 135 - 30
6     T1  ** ROLLBACK **                          35
7     T2  Write PROD_QOH                          105
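The same style of sketch (again illustrative Python, not DBMS code) reproduces the dirty read: T1's write is applied to the stored value, T2 reads it, and T1's rollback restores the old value too late to help T2.

```python
def run(schedule, initial=35):
    """Replay a schedule that may include a rollback. A simplified
    before-image is kept per transaction so ROLLBACK can restore it."""
    db = initial
    local = {}             # each transaction's private workspace
    before = {}            # value to restore if the transaction rolls back
    for tx, op, delta in schedule:
        if op == "read":
            local[tx] = db
        elif op == "compute":
            local[tx] += delta
        elif op == "write":
            before.setdefault(tx, db)   # remember the pre-write value
            db = local[tx]
        elif op == "rollback":
            db = before.pop(tx, db)     # undo this transaction's write
    return db

dirty_read = [("T1", "read", 0), ("T1", "compute", +100), ("T1", "write", 0),
              ("T2", "read", 0), ("T2", "compute", -30),
              ("T1", "rollback", 0), ("T2", "write", 0)]

print(run(dirty_read))  # 105 -- but selling 30 from a true stock of 35 should give 5
```

T2's result of 105 is built on the rolled-back value 135, so the final database state is wrong even though both transactions followed their own logic correctly.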

Inconsistent Retrievals

Inconsistent retrievals occur when a transaction calculates a summary (aggregate) function over a set of data while other transactions are updating that data.

Transaction T1 computes the sum of PROD_QOH over all products, while transaction T2 performs a data-entry correction, moving 30 units from product 345TYX to product 125TYZ. The correct sum is the same before and after T2's correction:

Transaction Results:

PROD_CODE   Before correction   After correction
104XCV      100                 100
110YGH      120                 120
125TYZ      70                  (70 + 30) -> 100
345TYX      35                  (35 - 30) -> 5
350TYX      100                 100
355TYX      30                  30
T1: SUM     455                 455

Inconsistent Retrievals:

Time  TX  Step                                     Value  T1's Total
1     T1  Read PROD_QOH for PROD_CODE = '104XCV'   100    100
2     T1  Read PROD_QOH for PROD_CODE = '110YGH'   120    220
3     T2  Read PROD_QOH for PROD_CODE = '125TYZ'   70
4     T2  PROD_QOH = 70 + 30
5     T2  Write PROD_QOH for PROD_CODE = '125TYZ'  100
6     T1  Read PROD_QOH for PROD_CODE = '125TYZ'   100    320 (After)
7     T1  Read PROD_QOH for PROD_CODE = '345TYX'   35     355 (Before)
8     T2  Read PROD_QOH for PROD_CODE = '345TYX'   35
9     T2  PROD_QOH = 35 - 30
10    T2  Write PROD_QOH for PROD_CODE = '345TYX'  5
11    T2  ** COMMIT **
12    T1  Read PROD_QOH for PROD_CODE = '350TYX'   100    455
13    T1  Read PROD_QOH for PROD_CODE = '355TYX'   30     485

T1 reads the after value for 125TYZ but the before value for 345TYX, so its total of 485 matches neither the correct before sum nor the correct after sum of 455.
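The thirteen steps above can be replayed directly (a hypothetical Python sketch; the `stock` dictionary stands in for the PRODUCT table):

```python
# T1 sums PROD_QOH row by row while T2 moves 30 units
# from product '345TYX' to product '125TYZ'.
stock = {"104XCV": 100, "110YGH": 120, "125TYZ": 70,
         "345TYX": 35, "350TYX": 100, "355TYX": 30}

total = 0
total += stock["104XCV"]          # 1: T1 reads 100
total += stock["110YGH"]          # 2: T1 reads 120
stock["125TYZ"] += 30             # 3-5: T2 corrects 70 -> 100
total += stock["125TYZ"]          # 6: T1 reads the *after* value, 100
total += stock["345TYX"]          # 7: T1 reads the *before* value, 35
stock["345TYX"] -= 30             # 8-11: T2 corrects 35 -> 5, then commits
total += stock["350TYX"]          # 12: T1 reads 100
total += stock["355TYX"]          # 13: T1 reads 30

print(total)                      # 485 -- not the consistent answer, 455
print(sum(stock.values()))        # 455 -- the database itself is consistent
```

The database ends in a correct state; only T1's aggregate is wrong, which is what makes this problem hard to detect without concurrency control.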

The Scheduler

- The scheduler establishes the order in which the operations within concurrent transactions are executed.
- The scheduler interleaves the execution of database operations so as to ensure serializability and isolation of transactions.
- To determine the appropriate order, the scheduler bases its actions on concurrency control algorithms, such as locking or time-stamping methods.
- The scheduler also makes sure that the computer's CPU is used efficiently.

Locking Methods

- Concurrency can be controlled using locks.
- A lock guarantees exclusive use of a data item to the current transaction.
- A transaction acquires a lock prior to data access; the lock is released (unlocked) when the transaction is completed.
- All lock information is managed by a lock manager.

Lock Granularity

Lock granularity indicates the level at which locks are used:

- Database level
  Prevents the use of any table in the database by transaction T2 while transaction T1 is being executed.

- Table level
  Prevents access to any row of the table by transaction T2 while transaction T1 is using the table.

- Page level
  A disk page, or page, is the equivalent of a disk block, a (referenced) section of a disk. A table can span several pages, and a page can contain several rows of one or more tables. Page-level locks are currently the most frequently used locking method in multi-user DBMSs.

- Row level
  Much less restrictive than the locks discussed above. The DBMS allows concurrent transactions to access different rows of the same table, even if the rows are located on the same page. Although the row-level locking approach improves the availability of data, its management incurs a high overhead cost.

- Field level
  Allows concurrent transactions to access the same row, as long as they require different fields (attributes) within that row. Although field-level locking clearly yields the most flexible multi-user data access, it requires a high level of computer overhead.

Types of Locks

Shared/Exclusive Locks

The labels shared and exclusive indicate the nature of the lock.

Exclusive Locks
An exclusive lock exists when access is specifically reserved for the transaction that locked the object. An exclusive lock must be used when the potential for conflict exists. It is issued when a transaction wants to write (update) a data item and no locks are currently held on that data item.

Shared Locks
A shared lock exists when concurrent transactions are granted READ access on the basis of a common lock. A shared lock produces no conflict as long as the concurrent transactions are read only. It is issued when a transaction wants to read data from the database and no exclusive lock is held on that data item.

Shared/Exclusive Lock Example:

TIME T1 T2 Value
1 Request lock on PROD_QOH (granted) -- --
2 Read PROD_QOH (granted) -- 35
3 PROD_QOH = 35 + 100 Request lock on PROD_QOH (wait) 35
4 Write PROD_QOH Request lock on PROD_QOH (wait) 135
5 Release Lock on PROD_QOH Request lock on PROD_QOH (wait) 135
6 -- Lock on PROD_QOH granted 135
7 -- Read PROD_QOH 135
8 -- PROD_QOH = 135 - 30 135
9 -- Write PROD_QOH 105
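The grant/wait decision in the schedule above can be sketched as follows (a simplified, hypothetical `LockManager`; real DBMS lock managers also handle wait queues, lock upgrades, and deadlock detection):

```python
class LockManager:
    """Toy lock table: shared (S) locks are compatible only with other
    S locks; an exclusive (X) lock conflicts with everything."""

    def __init__(self):
        self.locks = {}   # data item -> (mode, set of holder transaction ids)

    def request(self, tx, item, mode):
        """Grant the lock and return True, or return False (caller must wait)."""
        held = self.locks.get(item)
        if held is None:
            self.locks[item] = (mode, {tx})   # item is free: grant
            return True
        held_mode, holders = held
        if mode == "S" and held_mode == "S":
            holders.add(tx)                   # shared locks coexist
            return True
        return False                          # any X involvement conflicts

    def release(self, tx, item):
        mode, holders = self.locks[item]
        holders.discard(tx)
        if not holders:                       # last holder gone: item is free
            del self.locks[item]

lm = LockManager()
print(lm.request("T1", "PROD_QOH", "X"))   # True  -- time 1: granted
print(lm.request("T2", "PROD_QOH", "X"))   # False -- times 3-5: T2 waits
lm.release("T1", "PROD_QOH")               # time 5: T1 releases
print(lm.request("T2", "PROD_QOH", "X"))   # True  -- time 6: granted
```

Because T2 cannot obtain the lock until T1 releases it, the two updates are forced into a serial order and the lost update of the earlier example cannot occur.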

The DBMS uses a transaction log to keep track of all transactions that update the
database. The data in the log is used by the Recovery Manager to restore the database
to a consistent state after failure. DBMSs use one of two protocols in relation to when
data are saved to the physical database:

- Deferred-write
  Transaction operations do not immediately update the database. Instead, all changes are written to the transaction log. The database is updated only after the transaction reaches its commit point.

- Immediate-write
  The database is immediately updated by transaction operations during the transaction's execution, even before the transaction reaches its commit point. The transaction log is also updated. If a transaction fails, the DBMS uses the log information to roll back the database.
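A toy illustration of the deferred-write protocol (hypothetical names; a real DBMS logs before- and after-images rather than plain values): updates accumulate in the log and touch the database only at the commit point, so aborting a transaction costs nothing.

```python
db = {"PROD_QOH": 35}
log = []                       # (item, new_value) entries for one transaction

def write(item, value):
    log.append((item, value))  # deferred: the database itself is untouched

def commit():
    for item, value in log:    # apply the logged changes at the commit point
        db[item] = value
    log.clear()

def abort():
    log.clear()                # nothing to undo: the database was never written

write("PROD_QOH", 135)
abort()
print(db["PROD_QOH"])          # 35  -- the rollback was free

write("PROD_QOH", 135)
commit()
print(db["PROD_QOH"])          # 135 -- changes reach the database only now
```

Under immediate-write, by contrast, `write` would update `db` at once and `abort` would have to consult the log to restore the old values.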

Types of Failures

1. Transaction Failures
2. System Failures
3. Media Failures

Transaction Failures

A transaction failure occurs when a transaction aborts due to:

- Some unplanned operation in the transaction that may cause it to fail, such as:
  - An illegal operation (e.g., integer overflow, division by zero)
  - An interrupt (e.g., an unintended keyboard interrupt: the user presses the escape key)
- An exception condition detected by the transaction (a planned failure), such as:
  - Withdrawal of funds when there are insufficient funds in an account
  - Data for the transaction not found
- The system detects a deadlock; that is, the concurrency control mechanism aborts one transaction and reschedules it.

The system aborts a failed (that is, uncommitted) transaction by rolling back, or undoing, all the changes made by the aborted transaction. No backup copy of the database is required.

System Failures

A system failure refers to the loss or corruption of the contents of volatile storage (that
is, main memory). For example:

- Power failures (most common)
- Software-induced failures traceable to the operating system, DBMS software, viruses, etc.
- Network failures

The critical point regarding a system failure is that the contents of main memory are lost. The precise state of any transaction that was in progress at the time of the failure is therefore no longer known; such a transaction can never be successfully completed and must be undone (rolled back) when the system restarts (in an immediate-write environment). It may also be necessary at restart time to redo transactions that did complete successfully prior to the crash but did not manage to get their updates transferred to the physical database.

System failures can be handled by the following procedure. At certain prescribed intervals, for example when a certain number of entries have been written to the log, the system automatically takes a checkpoint. This involves:

- Physically writing the contents of the database buffers out to the physical database, and
- Physically writing a special checkpoint record out to the physical log, listing all transactions that were in progress at the time the checkpoint was taken.

Five Transaction Categories

With respect to the most recent checkpoint and the failure, five categories of transaction can be distinguished: T1 completed before the checkpoint; T2 started before the checkpoint and committed before the failure; T3 started before the checkpoint but did not commit; T4 started after the checkpoint and committed before the failure; T5 started after the checkpoint and did not commit.

Assuming an immediate-write protocol, when the system is restarted, transactions of types T3 and T5 must be undone, and transactions of types T2 and T4 must be redone. Transactions of type T1 do not enter the restart process at all. At restart time, the system goes through the following steps to identify the transactions of types T2 through T5:

UNDO/REDO Algorithm

1. Start with two lists of transactions: UNDO and REDO.
   - Set the UNDO list equal to the list of all transactions given in the most recent checkpoint record (T2, T3).
   - Set the REDO list to empty.
2. Search forward through the log, starting from the checkpoint record.
3. If a BEGIN TRANSACTION log entry is found for transaction T, add T to the UNDO list (T4, T5).
4. If a COMMIT log entry is found for transaction T, move T from the UNDO list to the REDO list (T2, T4).
5. When the end of the log is reached, the UNDO and REDO lists identify, respectively, transactions of types T3 and T5, and transactions of types T2 and T4.

The system now works backward through the log, undoing the transactions in the UNDO list by writing the 'before' values recorded in the transaction log to the physical database. It then works forward again, redoing the transactions in the REDO list by writing the 'after' values recorded in the transaction log to the physical database. When all such recovery is complete, the system is ready to accept new work.
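The five steps of the algorithm can be sketched directly (illustrative Python; log records are simplified here to (kind, transaction) pairs, with the before/after values omitted):

```python
def build_undo_redo(checkpoint_active, log_after_checkpoint):
    """Return the (UNDO, REDO) sets produced by the restart algorithm."""
    undo = set(checkpoint_active)          # step 1: start from the checkpoint list
    redo = set()                           # step 1: REDO starts empty
    for kind, tx in log_after_checkpoint:  # step 2: scan forward from the checkpoint
        if kind == "BEGIN":                # step 3: a new transaction joins UNDO
            undo.add(tx)
        elif kind == "COMMIT":             # step 4: a committed one moves to REDO
            undo.discard(tx)
            redo.add(tx)
    return undo, redo                      # step 5: the lists at end of log

# The five categories: T1 finished before the checkpoint (absent from the log
# tail), T2/T3 were active at the checkpoint, T4/T5 began after it,
# and of these only T2 and T4 committed before the crash.
undo, redo = build_undo_redo(
    checkpoint_active=["T2", "T3"],
    log_after_checkpoint=[("BEGIN", "T4"), ("COMMIT", "T2"),
                          ("BEGIN", "T5"), ("COMMIT", "T4")])
print(sorted(undo))   # ['T3', 'T5'] -- undone using 'before' values
print(sorted(redo))   # ['T2', 'T4'] -- redone using 'after' values
```

A real recovery manager would then replay the before- and after-images from the log; this sketch only classifies the transactions.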

Media Failures

A media failure occurs when any part of the stable storage is destroyed. For instance:

- Hardware-induced failures, including disk crashes, bad disk sectors, disk-full errors, etc.
- Major catastrophes, such as fire, earthquake, or flood

Recovering from these failures is a major task and requires the use of a backup copy of the database. In order to recover from a failure, it is important to maintain one or more redundant copies of the database. These copies should be kept on devices with independent failure modes, meaning that no single failure event can destroy more than one copy. In practice, keeping two copies of each data item on two different devices is deemed sufficient protection for most applications.

One approach to dealing with media failures is archiving. Periodically, the value of each data item is written, or dumped, to an archive database. The log contains (at least) all of the updates applied to the database since the last dump. It is important to start a new transaction log file immediately upon making an archive copy of the database. If a media failure destroys the contents of the stable database, a restart algorithm is executed using the log and the archive database to bring the archive to the committed state with respect to the log.