
HomeWork 1

Homework Title/No.: 4          Course Code: CSE 301

Course Instructor: Miss Amandeep Kaur

D.O.A:                         D.O.S: 18 Nov, 2010

Student's Roll No.: 46         Section No.: D2801

Declaration:

I declare that this assignment is my individual work. I have not copied from any other student's work or from any other source except where due acknowledgment is made explicitly in the text, nor has any part been written for me by another person.

Student’s Signature : ASHWANI KAUSHAL

Evaluator’s comments:

_______________________________________________

Marks obtained : ___________ out of ______________________


Question 1: What variations are possible on the two-phase locking protocol?
Ans: The following variations on the two-phase locking protocol are possible:
Conservative 2PL: It is possible to construct a 2PL scheduler that never aborts transactions. This technique is known as Conservative 2PL or Static 2PL. As we have seen, 2PL causes abortions because of deadlocks. Conservative 2PL avoids deadlocks by requiring each transaction to obtain all of its locks before any of its operations are submitted to the DM. This is done by having each transaction predeclare its readset and writeset. Specifically, each transaction Ti first tells the scheduler all the data items it will want to Read or Write, for example as part of its Start operation. The scheduler tries to set all of the locks needed by Ti. It can do this provided that none of these locks conflicts with a lock held by any other transaction.

If the scheduler succeeds in setting all of Ti's locks, then it submits Ti's operations to the DM as soon as it receives them. After the DM acknowledges the processing of Ti's last database operation, the scheduler may release all of Ti's locks.

If, on the other hand, any of the locks requested in Ti's Start conflicts with locks presently held by other transactions, then the scheduler does not grant any of Ti's locks. Instead, it inserts Ti along with its lock requests into a waiting queue. Every time the scheduler releases the locks of a completed transaction, it examines the waiting queue to see if it can grant all of the lock requests of any waiting transactions. If so, it then sets all of the locks for each such transaction and continues processing as just described.
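This predeclare-then-queue behaviour can be sketched as follows. This is only a minimal illustration of the idea, not how a real DBMS lock manager works; it uses exclusive locks only, and the transaction and item names are hypothetical:

```python
class Conservative2PL:
    """Toy scheduler: a transaction runs only if ALL of its predeclared
    locks are free; otherwise it waits in a queue holding no locks at all,
    so it can never take part in a deadlock."""

    def __init__(self):
        self.locked = {}      # data item -> transaction holding its lock
        self.waiting = []     # queue of (txn, items) that could not start
        self.running = set()

    def start(self, txn, items):
        """Transaction predeclares its read/write set in `items`."""
        if all(x not in self.locked for x in items):
            for x in items:
                self.locked[x] = txn
            self.running.add(txn)
        else:
            self.waiting.append((txn, items))   # grant nothing, just wait

    def finish(self, txn):
        """Release ALL of txn's locks, then retry the waiting queue."""
        self.locked = {x: t for x, t in self.locked.items() if t != txn}
        self.running.discard(txn)
        still_waiting, self.waiting = self.waiting, []
        for t, items in still_waiting:
            self.start(t, items)

s = Conservative2PL()
s.start("T1", ["x", "y"])
s.start("T2", ["y", "z"])     # conflicts on y: queued, holds nothing
assert "T2" not in s.running
s.finish("T1")                # releases x and y; T2 now gets all its locks
assert "T2" in s.running
```

Note that T2 is either granted everything or granted nothing, which is exactly the property the deadlock argument below relies on.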

In Conservative 2PL, if a transaction Ti is waiting for a lock held by Tj, then Ti is holding no locks. Therefore, no other transaction Tk can be waiting for Ti, so there can be no WFG edges of the form Tk → Ti. Since there can be no such edges, Ti cannot be in a WFG cycle, and hence cannot become part of a deadlock. Since deadlock is the only reason that a 2PL scheduler ever rejects an operation and thereby causes the corresponding transaction to abort, Conservative 2PL never aborts a transaction. (Of course, a transaction may abort for other reasons.) This is a classic case of a conservative scheduler. By delaying operations sooner than it has to, namely, when the transaction begins executing, the scheduler avoids abortions that might otherwise be needed for concurrency control reasons.
Strict 2PL
Almost all implementations of 2PL use a variant called Strict 2PL. This differs from the Basic 2PL scheduler described in Section 3.2 in that it requires the scheduler to release all of a transaction's locks together, when the transaction terminates. More specifically, Ti's locks are released after the DM acknowledges the processing of ci or ai, depending on whether Ti commits or aborts (respectively). There are two reasons for adopting this policy. First, consider when a 2PL scheduler can release some lock oli[x]. To do so the scheduler must know that: (1) Ti has set all of the locks it will ever need, and (2) Ti will not subsequently issue operations that refer to x. One point in time at which the scheduler can be sure of (1) and (2) is when Ti terminates, that is, when the scheduler receives the ci or ai operation. In fact, in the absence of any information from the TM aside from the operations submitted, this is the earliest time at which the scheduler can be assured that (1) and (2) hold.

A second reason for the scheduler to keep a transaction's locks until it ends, and specifically until after the DM processes the transaction's Commit or Abort, is to guarantee a strict execution. To see this, let history H represent an execution produced by a Strict 2PL scheduler and suppose wi[x] < oj[x]. Here wli[x] and wui[x] denote Ti's setting and releasing of its write lock on x, and olj[x] and ouj[x] denote Tj's setting and releasing of the lock for oj. By rule (1) of 2PL (Proposition 3.1) we must have
1. wli[x] < wi[x] < wui[x], and
2. olj[x] < oj[x] < ouj[x].
Because wli[x] and olj[x] conflict (whether o is r or w), we must have either wui[x] < olj[x] or ouj[x] < wli[x] (by Proposition 3.2). The latter, together with (1) and (2), would contradict wi[x] < oj[x]; therefore,
3. wui[x] < olj[x].
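The "hold everything until termination" policy can be sketched as follows. This is a toy model with exclusive locks only and hypothetical transaction names; a real scheduler would also queue blocked requests rather than simply refusing them:

```python
class Strict2PL:
    """Toy strict 2PL: locks are acquired on demand but released only when
    the transaction terminates (after the DM processes c_i or a_i)."""

    def __init__(self):
        self.locks = {}   # data item -> owning transaction

    def acquire(self, txn, item):
        """Return True if the lock was granted; False means the caller
        must wait (and is where deadlock handling would kick in)."""
        owner = self.locks.get(item)
        if owner is not None and owner != txn:
            return False
        self.locks[item] = txn
        return True

    def terminate(self, txn):
        """Called after commit or abort is processed: release ALL locks."""
        self.locks = {x: t for x, t in self.locks.items() if t != txn}

s = Strict2PL()
assert s.acquire("T1", "x")
assert not s.acquire("T2", "x")   # T2 blocked: T1 still holds x
s.terminate("T1")                 # T1's commit processed -> locks released
assert s.acquire("T2", "x")       # T2 now reads only committed data
```

Because T2 can touch x only after T1 has terminated, the resulting executions are strict, which is the point of the argument above.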

Question 2: "Thomas' write rule modifies the timestamp-ordering protocol". Do you agree? Justify your answer.

Ans: The Thomas Write Rule is a rule in timestamp-based concurrency control.

It states that if a more recent transaction has already written the value of an object, then a less recent transaction's write of that object can be ignored, since its value would have been overwritten anyway.

For example:

Assuming that the timestamp of T1 is less than that of T2, T1's write is discarded

Thomas' Write Rule modifies the timestamp-ordering protocol:
It is a modified version of the timestamp-ordering protocol in which obsolete write operations may be ignored under certain circumstances. When Ti attempts to write data item Q, if TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Hence, rather than rolling back Ti as the timestamp-ordering protocol would have done, this write operation can be ignored. Otherwise this protocol is the same as the timestamp-ordering protocol.
Thomas' Write Rule allows greater potential concurrency. Unlike previous protocols, it allows some view-serializable schedules that are not conflict-serializable. So yes, it is a modification of the timestamp-ordering protocol: only the handling of obsolete writes changes.
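The rule amounts to one changed branch in the timestamp-ordering write test, which can be sketched as follows (an illustrative model; the tables and value encoding are hypothetical):

```python
def to_write(ts, item, rts, wts, db, value, thomas=True):
    """Timestamp-ordering write of `item` by a transaction with timestamp
    `ts`. `rts`/`wts` map items to read/write timestamps; `db` holds the
    values. Returns 'ok', 'ignored' (Thomas' rule) or 'rollback'."""
    if ts < rts.get(item, 0):
        return "rollback"        # a younger transaction already read item
    if ts < wts.get(item, 0):
        if thomas:
            return "ignored"     # obsolete write: skip it, don't abort
        return "rollback"        # plain timestamp ordering would abort
    db[item] = value
    wts[item] = ts
    return "ok"

rts, wts, db = {}, {"Q": 10}, {"Q": "v10"}
# T1 (ts=5) writes Q after T2 (ts=10) has already written it:
assert to_write(5, "Q", rts, wts, db, "v5") == "ignored"
assert db["Q"] == "v10"          # T2's (more recent) value survives
# Without Thomas' rule the very same write forces a rollback:
assert to_write(5, "Q", rts, wts, db, "v5", thomas=False) == "rollback"
```

The example makes the justification concrete: the schedule is accepted under the rule but rejected by the unmodified protocol.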

Question 3: "In databases, is there a possibility of deadlocks?" If yes, why? In how many ways can deadlocks be handled?
Ans: Yes, in databases there is a possibility of deadlock, because two or more transactions can each hold a lock that another needs and wait for each other indefinitely.

Deadlock handling techniques fall into one of these three classes:
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery

1. Deadlock Prevention:

The basic idea of deadlock prevention is to deny at least one of the four conditions that are necessary for deadlocks. Of these conditions, mutual exclusion is usually very difficult to deny, as denying it would affect system performance, but we can consider the other three conditions.
(a) Eliminating Hold and Wait:
The hold-and-wait condition can be eliminated by forcing a process to release all its resources when it requests an unavailable resource. This can be done in two ways:
1. A process can request resources only when it is holding no resources at all.
2. A process can request resources step by step, in such a manner that it releases the resources it has acquired before requesting another resource that is unavailable.
The first approach looks easy to implement, but if we follow this strategy a process will block all its resources in advance and all the other processes will have to wait for these resources. This leads to major system degradation.
The second approach requires careful holding and releasing of resources. It avoids the disadvantage of the first approach, but some resources cannot be reacquired later, for example, files in temporary memory.
(b) Eliminating No Preemption:

The no-preemption condition can be avoided by allowing preemption. That means the system can revoke ownership of resources from a process, but this requires storing the state of the process before revoking a resource from it.

Preemption is possible for some types of resources, e.g. CPU and memory, whereas it cannot be applied to resources like a printer.

(c) Eliminating Circular Wait:

Circular wait can be avoided by a linear ordering of resources in the system. In linear ordering, the system resources are divided into different classes Ci, where i ranges from 1 to N, and a process may request a resource of class Ci only if it holds no resource of a class Cj with j ≥ i. Since every chain of waits must then move strictly upward through the classes, no cycle of waits can form.
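The ordering rule can be sketched in a few lines (a simplified illustration; real lock managers track this per process and per lock):

```python
def acquire_in_order(held_classes, requested_class):
    """Grant a request only if the requested resource's class is strictly
    greater than every class the process already holds. A circular wait
    then becomes impossible: any cycle would need at least one request
    going from a higher class back to a lower one, which is rejected."""
    if held_classes and requested_class <= max(held_classes):
        raise ValueError("request violates resource ordering")
    held_classes.add(requested_class)

held = set()
acquire_in_order(held, 1)       # class C1 first
acquire_in_order(held, 3)       # then C3: fine, the order increases
try:
    acquire_in_order(held, 2)   # C2 after C3: rejected
    violated = False
except ValueError:
    violated = True
assert violated
```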

2. Deadlock Avoidance:

The basic idea of deadlock avoidance is to grant only those resource requests that cannot possibly result in a state of deadlock. This strategy is implemented by having a resource allocator in the system, which examines the effects of allocating resources and grants access only if the allocation will not result in a state of deadlock; otherwise the requesting process is suspended until it is safe to grant access to the required resource.

To avoid deadlock, the system requires each process to specify its maximum resource needs before execution. Processes requesting more resources than their pre-stated limit are not admitted for execution.
A graph-based algorithm may also be used: a graph of the current system state and of the state after the grant is examined to decide whether to grant access to a requested resource.

3. Deadlock Detection and Recovery:

In deadlock detection and recovery, the system grants access to each requesting process freely. It occasionally checks for deadlocks in order to reclaim resources held by deadlocked processes.

When the system is checked for deadlock, the detection algorithm examines all possible completion sequences for the incomplete processes. If a completion sequence exists, then the system is not in a deadlock state; otherwise the system is deadlocked and the incomplete processes involved are blocked.
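In practice, detection is usually done by looking for a cycle in a wait-for graph (WFG), as mentioned again in Question 5. A minimal sketch with hypothetical transaction names:

```python
def has_deadlock(wfg):
    """`wfg` maps each transaction to the set of transactions it waits
    for. Returns True iff the wait-for graph contains a cycle, i.e. the
    system is deadlocked. Uses a depth-first search with three colours."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wfg}

    def visit(t):
        color[t] = GRAY
        for u in wfg.get(t, ()):
            if color.get(u, WHITE) == GRAY:
                return True                       # back edge -> cycle
            if color.get(u, WHITE) == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in wfg)

# T1 waits for T2 and T2 waits for T1: a deadlock.
assert has_deadlock({"T1": {"T2"}, "T2": {"T1"}})
# A simple chain of waits is not a deadlock.
assert not has_deadlock({"T1": {"T2"}, "T2": set()})
```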

Deadlock detection is only a part of the deadlock detection and recovery process. Detection only reveals a problem; it does not solve it. The system must then break the deadlock so that the processes can proceed.

The first step in deadlock recovery is to identify the deadlocked processes. The next step is to roll back or restart one or more of the processes causing the deadlock. Restarting loses the work done by the particular process, so generally those processes are chosen that are least costly to roll back. A process is rolled back to the point where the deadlock is released.
Such a facility is required by systems needing high reliability and/or availability, but these algorithms are dangerous when processes have made changes that cannot be rolled back.

PART-B

Question 4: Compare deferred and immediate database modifications with the help of an example.

Ans: Deferred Database Modification

 The deferred-modification technique ensures transaction atomicity by recording all database modifications in the log, but deferring all write operations of a transaction until the transaction partially commits.
 When we make changes in the database, the new or updated values of the data are not reflected until the transaction reaches its partially committed state.

In deferred database modification the transaction's writes are applied after it has completed its partially committed state, whereas in immediate database modification the database is updated while the transaction is still in the active state.
The deferred database modification scheme records all modifications to the log, but defers all the writes until after partial commit (i.e. until the final action of the transaction is executed). Assume that transactions execute serially.
 A transaction starts by writing a <Ti start> record to the log.
 A write(X) operation results in a log record <Ti, X, V> being written, where V is the new value for X. (Note: the old value is not needed for this scheme.) The write is not performed on X at this time, but is deferred.
 When Ti partially commits, <Ti commit> is written to the log.
 Finally, the log records are read and used to actually execute the previously deferred writes.
During recovery after a crash, a transaction needs to be redone if and only if both <Ti start> and <Ti commit> are in the log. Redoing a transaction Ti (redo(Ti)) sets the value of all data items updated by the transaction to the new values. Crashes can occur while the transaction is executing the original updates, or while the recovery action is being taken.
Example of transactions T0 and T1 (T0 executes before T1):

T0: read(A)              T1: read(C)
    A := A - 50              C := C - 100
    write(A)                 write(C)
    read(B)
    B := B + 50
    write(B)

Below we show the log as it appears at three instances of time.

If the log on stable storage at the time of the crash is as in case:

(a) no redo actions need to be taken;
(b) redo(T0) must be performed, since <T0 commit> is present;
(c) redo(T0) must be performed, followed by redo(T1), since <T0 commit> and <T1 commit> are present.
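The redo-only recovery rule above can be sketched as follows; the tuple-based log is an illustrative stand-in for the <Ti start>, <Ti, X, V> and <Ti commit> records, using the values from the example:

```python
def recover_deferred(log, db):
    """Deferred-modification recovery: redo Ti iff the log contains both
    <Ti start> and <Ti commit>. Records carry only the NEW value, because
    no uncommitted write ever reached the database."""
    committed = {t for kind, t, *_ in log if kind == "commit"}
    for kind, t, *rest in log:
        if kind == "write" and t in committed:
            item, new_value = rest
            db[item] = new_value      # idempotent: safe to repeat

db = {"A": 1000, "B": 2000, "C": 700}
log = [
    ("start", "T0"), ("write", "T0", "A", 950), ("write", "T0", "B", 2050),
    ("commit", "T0"),
    ("start", "T1"), ("write", "T1", "C", 600),   # crash: no <T1 commit>
]
recover_deferred(log, db)
assert db == {"A": 950, "B": 2050, "C": 700}      # T0 redone, T1 ignored
```

This corresponds to case (b): T0's commit record is present, so its writes are redone, while T1's deferred write is simply never applied.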
Immediate Database Modification

The immediate-update technique allows database modifications to be output to the database while the transaction is still in the active state. These modifications are called uncommitted modifications. In the event of a crash or transaction failure, the system must use the old-value field of the log records to restore the modified data items.

The immediate database modification scheme allows database updates of an uncommitted transaction to be made as the writes are issued.
 Since undoing may be needed, update log records must contain both the old value and the new value.
 An update log record must be written before the database item is written. We assume that the log record is output directly to stable storage. This can be extended to postpone log record output, so long as, prior to the execution of an output(B) operation for a data block B, all log records corresponding to items in B are flushed to stable storage.
 Output of updated blocks can take place at any time before or after transaction commit.
 The order in which blocks are output can be different from the order in which they are written.
Immediate Database Modification Example

Log                      Write       Output
<T0 start>
<T0, A, 1000, 950>
<T0, B, 2000, 2050>
                         A = 950
                         B = 2050
<T0 commit>
<T1 start>
<T1, C, 700, 600>
                         C = 600
                                     BB, BC
<T1 commit>
                                     BA

(BA, BB and BC denote the output of the disk blocks on which A, B and C reside.)
The recovery procedure has two operations instead of one:
 undo(Ti) restores the value of all data items updated by Ti to their old values, going backwards from the last log record for Ti.
 redo(Ti) sets the value of all data items updated by Ti to the new values, going forward from the first log record for Ti.
Both operations must be idempotent; that is, even if an operation is executed multiple times, the effect is the same as if it were executed once. This is needed since operations may get re-executed during recovery.
When recovering after failure:
 Transaction Ti needs to be undone if the log contains the record <Ti start> but does not contain the record <Ti commit>.
 Transaction Ti needs to be redone if the log contains both the record <Ti start> and the record <Ti commit>.
Undo operations are performed first, then redo operations.

Below we show the log as it appears at three instances of time.

Recovery actions in each case above are:

(a) undo(T0): B is restored to 2000 and A to 1000;
(b) undo(T1) and redo(T0): C is restored to 700, and then A and B are set to 950 and 2050 respectively;
(c) redo(T0) and redo(T1): A and B are set to 950 and 2050 respectively, then C is set to 600.
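The undo-then-redo procedure can be sketched with log records of the form <Ti, X, old, new>, again using the example's values (an illustrative model, not a production recovery manager):

```python
def recover_immediate(log, db):
    """Immediate-modification recovery: first undo uncommitted
    transactions (backwards, restoring OLD values), then redo committed
    ones (forwards, installing NEW values). Both passes are idempotent."""
    committed = {t for kind, t, *_ in log if kind == "commit"}
    started = {t for kind, t, *_ in log if kind == "start"}
    for kind, t, *rest in reversed(log):          # undo pass
        if kind == "write" and t in started - committed:
            item, old, _new = rest
            db[item] = old
    for kind, t, *rest in log:                    # redo pass
        if kind == "write" and t in committed:
            item, _old, new = rest
            db[item] = new

# State at the crash: both transactions' writes already hit the database.
db = {"A": 950, "B": 2050, "C": 600}
log = [
    ("start", "T0"), ("write", "T0", "A", 1000, 950),
    ("write", "T0", "B", 2000, 2050), ("commit", "T0"),
    ("start", "T1"), ("write", "T1", "C", 700, 600),  # T1 never committed
]
recover_immediate(log, db)
assert db == {"A": 950, "B": 2050, "C": 700}   # case (b): undo T1, redo T0
```

Contrast this with the deferred scheme: here the old values in the log are essential, because T1's uncommitted write to C is already on disk and must be undone.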
Question 5: Assume that the Railway reservation system is implemented using an RDBMS. What are the concurrency control measures one has to take in order to avoid concurrency-related problems in the above system? How can deadlock be avoided in this system?
Ans: Suppose the Railway reservation system is implemented using an RDBMS. First of all, what is
Concurrency Control:
Concurrency control refers to the process of managing simultaneous operations on the database (for example, two clerks booking the same seat at the same time) without having them interfere with one another.
Concurrency control measures
Lock-Based Protocol:

A lock is a variable associated with each data item that describes the status of the item with respect to the possible operations that can be applied to it. The process of manipulating the value of locks is called locking.

TWO TYPES OF LOCKS:

 Binary locks
 Shared/exclusive locks

Binary lock states:

 Locked: if lock(A) = 1, then A is locked.
 Unlocked: if lock(A) = 0, then A is unlocked.
Shared/Exclusive Locks:
Exclusive mode: denoted by X. If a transaction acquires an exclusive-mode lock on any data item, then the transaction can both read and write that data item.

Shared mode: denoted by S. If a transaction acquires a shared lock on any data item, then the transaction can read but cannot write that data item.
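The S/X compatibility rules can be sketched as a small lock on one data item, say a seat record (a toy illustration with hypothetical transaction names; a real lock manager would also queue and upgrade requests):

```python
class SXLock:
    """Toy shared/exclusive lock on one data item:
    many readers may share it, but a writer needs it alone."""

    def __init__(self):
        self.sharers = set()   # transactions holding the S lock
        self.writer = None     # transaction holding the X lock, if any

    def lock_s(self, txn):
        if self.writer is not None:
            return False       # S is incompatible with a held X lock
        self.sharers.add(txn)
        return True

    def lock_x(self, txn):
        if self.writer is not None or self.sharers - {txn}:
            return False       # X is incompatible with any other holder
        self.sharers.discard(txn)   # allow an S -> X upgrade by txn itself
        self.writer = txn
        return True

seat = SXLock()
assert seat.lock_s("T1") and seat.lock_s("T2")  # two readers share a seat
assert not seat.lock_x("T3")                    # a booking must wait
```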

TIME-STAMP BASED PROTOCOL

Ways of generating a timestamp:

 Use a counter that is incremented each time a timestamp is assigned to a transaction.
 Use the current date/time of the system clock.

Each data item X has two timestamp (TS) values:

READ_TS(X): the largest timestamp among all the timestamps of transactions that have successfully read item X. After a successful read by transaction T, READ_TS(X) = TS(T) if TS(T) is the largest so far.

WRITE_TS(X): the largest of all the timestamps of transactions that have successfully written item X. After a successful write by transaction T, WRITE_TS(X) = TS(T).

VALIDATION CONCURRENCY CONTROL

The validation technique is also known as the optimistic or certification technique. It has three phases:

 Read phase: in this phase a transaction reads the items from the database into its local variables.
 Validation phase: in this phase a validation test is performed to check whether the updated values of the local variables can be copied into the database without causing any inconsistencies.
 Write phase: if the validation phase is successful for a transaction, then the database is updated with the new values of the local variables.
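The core of the validation test can be sketched as a check against transactions that committed while Ti was in its read phase. This is a deliberately simplified sketch (real validators also order transactions by validation timestamps); the seat names are hypothetical:

```python
def validate(read_set, write_sets_of_overlapping_committed):
    """Optimistic validation: Ti passes only if no transaction that
    committed during Ti's read phase wrote an item that Ti read.
    Otherwise Ti is restarted instead of entering its write phase."""
    return all(read_set.isdisjoint(ws)
               for ws in write_sets_of_overlapping_committed)

# Ti read seat_12A; Tj committed meanwhile and wrote seat_12A: restart Ti.
assert not validate({"seat_12A"}, [{"seat_12A", "seat_12B"}])
# No overlap: Ti's local updates may be copied into the database.
assert validate({"seat_14C"}, [{"seat_12A"}])
```

For a reservation system this is attractive when conflicts are rare (most bookings touch different seats), since no locks are held during the read phase.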

Deadlock occurs when each transaction T in a set of two or more transactions is waiting for some item that is locked by some other transaction T' in the set.

DEADLOCK AVOIDANCE:

 One way to prevent deadlock is to use a deadlock prevention protocol. The deadlock prevention protocol used with two-phase locking requires that every transaction lock all the items it needs in advance; if any of the items cannot be obtained, none of the items are locked.

 A simple way to detect a state of deadlock is for the system to construct and maintain a wait-for graph.

 Another simple scheme is the use of timeouts. This method is practical because of its low overhead and simplicity.

Question 6: "Shadow paging uses the concept of the paging scheme (in operating systems)". Do you agree? Justify your answer.

Ans: Yes. Shadow paging is an alternative to log-based recovery techniques, which has both advantages and disadvantages. It may require fewer disk accesses, but it is hard to extend paging to allow multiple concurrent transactions. The paging is very similar to the paging schemes used by the operating system for memory management.

The idea is to maintain two page tables during the life of a transaction: the current page table and the shadow page table. When the transaction starts, both tables are identical. The shadow page table is never changed during the life of the transaction. The current page table is updated with each write operation. Each table entry points to a page on the disk. When the transaction commits, the shadow page table entry becomes a copy of the current page table entry and the disk block with the old data is released. If the shadow page table is stored in nonvolatile storage and a system crash occurs, then the shadow page table is copied to the current page table. This guarantees that the shadow page table points to the database pages corresponding to the state of the database prior to any transaction that was active at the time of the crash, making aborts automatic.

There are drawbacks to the shadow-page technique:

• Commit overhead. The commit of a single transaction using shadow paging requires multiple blocks to be output: the current page table, the actual data, and the disk address of the current page table. Log-based schemes need to output only the log records.
• Data fragmentation. Shadow paging causes database pages to change location (therefore, they are no longer contiguous).
• Garbage collection. Each time that a transaction commits, the database pages
containing the old version of data changed by the transactions must become
inaccessible. Such pages are considered to be garbage since they are not part
of the free space and do not contain any usable information. Periodically it is
necessary to find all of the garbage pages and add them to the list of free
pages. This process is called garbage collection and imposes additional
overhead and complexity on the system.

To commit a transaction:
* Flush all modified pages in main memory to disk.
* Output the current page table to disk.
* Make the current page table the new shadow page table, as follows:
1. Keep a pointer to the shadow page table at a fixed (known) location on disk.
2. To make the current page table the new shadow page table, simply update the pointer to point to the current page table on disk.
- Once the pointer to the shadow page table has been written, the transaction is committed.
- No recovery is needed after a crash: new transactions can start right away, using the shadow page table.
- Pages not pointed to from the current/shadow page table should be freed.
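The commit sequence above boils down to copy-on-write plus one atomic pointer swing, which can be sketched as follows (Python dicts stand in for disk pages and page tables; this is a conceptual illustration, not a disk layout):

```python
# Toy shadow paging: "disk" holds pages plus a durable page table.
disk_pages = {0: "old A", 1: "old B"}
shadow_table = {"A": 0, "B": 1}        # durable table, never touched
current_table = dict(shadow_table)     # copied at transaction start

def write(item, value):
    """Copy-on-write: put the new value in a FRESH page and update only
    the current page table; the shadow table keeps pointing at old data."""
    new_page = max(disk_pages) + 1
    disk_pages[new_page] = value
    current_table[item] = new_page

def commit():
    """The commit is one pointer update: the current table becomes the
    new shadow table. Before this point a crash loses nothing."""
    global shadow_table
    shadow_table = dict(current_table)

write("A", "new A")
# A crash here is harmless: the shadow table still maps A to the old page.
assert disk_pages[shadow_table["A"]] == "old A"
commit()
assert disk_pages[shadow_table["A"]] == "new A"
```

This mirrors the OS paging analogy: like a virtual-memory page table, the table is just a level of indirection, and recovery works by swapping which table the fixed disk pointer designates.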
