CONCURRENCY CONTROL
• Serial execution of transactions results in inadequate resource utilization
• Hence concurrent transactions are used
• Concurrent transactions involve interleaved transactions and
may lead to inconsistency of the database
• Hence concurrency control techniques are required to control
the interaction among concurrent transactions and achieve
serializability
Concurrency Control Protocols
• Lock Based Protocols
• Timestamp Based Protocols
• Validation Based Protocols
• Multi Version Techniques
LOCKING (Lock Based Approach to Control Concurrency)
• Whenever a data item is being accessed by a transaction, no
other transaction must be able to modify the data item
• In order to ensure this a transaction must acquire a LOCK on
the data item
• A lock is a variable associated with each data item that
indicates whether a READ or WRITE operation can be applied
to the data item
• Acquiring the lock by modifying its value is called locking
• Locking controls concurrent access to and manipulation of the
locked data item by other transactions and thus maintains the
consistency of the database
• Data items can be locked in two modes :
1. exclusive (X) mode (or Write or Update Lock)
– Data item can be both read as well as written
– No other transaction can read or write the data item
2. shared (S) mode (or Read Lock)
– Data item can only be read
– Other transactions can also read the data item
– No transaction can write to the data item
• Lock requests are made to concurrency-control manager by
the transactions
• Transaction can proceed only after request is granted.
• If a transaction Ti is granted a shared mode lock on a data item
Q, then Ti can read, but not write Q
• If a transaction Ti has obtained an exclusive mode lock on a
data item Q, then Ti can both read and write Q
• X-lock is requested using lock-X instruction.
• E.g. Lock-X(Q), Q is the data item being locked
• S-lock is requested using lock-S instruction.
• E.g. Lock-S(Q), Q is the data item being locked
• Lock is released using the Unlock instruction
• E.g. Unlock(Q), Q is the data item being unlocked
• Any number of transactions can acquire the shared lock on
the same data item simultaneously
• Only one transaction can acquire an exclusive lock on a data
item at a time
– No transaction is allowed to access a data item which is exclusively
locked by another transaction (not even for reading)
– The transaction has to wait for the exclusive lock to be released
Lock-compatibility matrix
        S       X
S       true    false
X       false   false
• A transaction may be granted a lock on an item if the requested lock
is compatible with locks already held on the item by other
transactions
• If a lock cannot be granted, the requesting transaction is made to
wait till all incompatible locks held by other transactions have been
released. The lock is then granted.
• Example of a transaction performing locking:
T2: lock-S(A);
read (A);
unlock(A);
lock-S(B);
read (B);
unlock(B);
display(A+B)
• A locking protocol is a set of rules followed by all transactions
while requesting and releasing locks.
• Locking protocols restrict the set of possible schedules.
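The S/X locking rules above can be sketched as a tiny lock table. This is an illustrative Python sketch under simplified assumptions (single-threaded simulation, no real blocking); the names LockTable, acquire and release are not from any real DBMS API.

```python
# A minimal sketch of a lock table supporting shared (S) and exclusive (X)
# modes. "Waiting" is modelled simply by returning False.

class LockTable:
    def __init__(self):
        # data item -> list of (transaction, mode) currently holding the lock
        self.held = {}

    def acquire(self, txn, item, mode):
        """Return True if the lock is granted, False if txn must wait."""
        holders = self.held.setdefault(item, [])
        # S is compatible only with S; X is compatible with nothing
        if mode == "S" and all(m == "S" for _, m in holders):
            holders.append((txn, mode))
            return True
        if mode == "X" and not holders:
            holders.append((txn, mode))
            return True
        return False  # incompatible: the transaction has to wait

    def release(self, txn, item):
        self.held[item] = [(t, m) for t, m in self.held.get(item, []) if t != txn]

lt = LockTable()
assert lt.acquire("T1", "Q", "S")      # granted
assert lt.acquire("T2", "Q", "S")      # S locks are shared: also granted
assert not lt.acquire("T3", "Q", "X")  # X conflicts with the existing S locks
lt.release("T1", "Q"); lt.release("T2", "Q")
assert lt.acquire("T3", "Q", "X")      # granted once all S locks are released
```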
Pitfalls in Locking
1. Deadlock
2. Starvation
Deadlock
• Consider the partial schedule
• Neither T3 nor T4 can make progress — executing lock-S(B)
causes T4 to wait for T3 to release its lock on B, while
executing lock-X(A) causes T3 to wait for T4 to release its
lock on A.
• Such a situation is called a DEADLOCK.
– To handle a deadlock one of T3 or T4 must be rolled back and its
locks released.
• Deadlock is a situation that occurs when all the transactions in
a set of two or more transactions are in a simultaneous wait
state, each waiting for the release of a data item held by one
of the other waiting transactions in the set
• None of the transactions can proceed until at least one of the
waiting transactions releases its lock on the data item
Starvation
• Whenever a transaction requests a lock on a data item in a
particular mode, and no other transaction has a lock on the
same data item in a conflicting mode, the lock can be granted
• Now consider the following scenario:
• Transaction T2 has a shared mode lock on data item Q
• Another transaction T1 requests an exclusive mode lock on Q. It
has to wait for T2 to release the shared mode lock
• In the meantime transaction T3 requests a shared mode lock on Q.
Lock is granted (due to compatibility with lock held by T2)
• At this time T2 releases the lock, but T1 still has to wait for T3 to
release the lock
• Now T4 requests a shared mode lock on Q and lock is granted
• T3 releases the shared mode lock but T1 still waits as T4 is holding
a shared mode lock on Q
• Such repeated granting of shared mode locks on Q to other
transactions and their subsequent releasing of locks may result in
a situation where T1 never gets the exclusive mode lock on Q
• Thus T1 is said to starve and the phenomenon is called STARVATION
T2: Lock-S(Q)   (granted)
T1: Lock-X(Q)   (must wait; STARVED)
T3: Lock-S(Q)   (granted)
T2: Unlock(Q)
T4: Lock-S(Q)   (granted)
T3: Unlock(Q)
• Starvation may be avoided by getting locks in the following
manner:
– When a transaction Ti requests a lock on data item Q in a particular
mode M, the concurrency control manager grants the lock provided
that
• There is no other transaction holding a lock on Q in a mode that conflicts with
M
• There is no other transaction that is waiting for a lock on Q and that made its
lock request before Ti
– Thus a lock request will never get blocked by a lock request that is
made later
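The starvation-free granting rule above can be sketched as follows. The held/waiting structures and function names are illustrative assumptions, not a real lock-manager interface.

```python
from collections import deque

# Sketch of the rule: a request on a data item is granted only if
# (a) no held lock conflicts with it, AND
# (b) no earlier request on the item is still waiting.

def _compatible(mode, holders):
    if mode == "S":
        return all(m == "S" for _, m in holders)
    return not holders  # X is compatible only with an empty holder list

def request(held, waiting, txn, mode):
    """held: list of (txn, mode); waiting: FIFO deque of (txn, mode)."""
    if _compatible(mode, held) and not waiting:
        held.append((txn, mode))
        return "granted"
    waiting.append((txn, mode))
    return "waiting"

held, waiting = [], deque()
assert request(held, waiting, "T2", "S") == "granted"
assert request(held, waiting, "T1", "X") == "waiting"   # conflicts with T2's S
# T3's S request is compatible with T2's S lock, but T1 asked first,
# so T3 must wait behind T1 -- this is what prevents T1 from starving.
assert request(held, waiting, "T3", "S") == "waiting"
```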
THE TWO-PHASE LOCKING PROTOCOL
• This is a protocol which ensures conflict-serializable schedules.
• Each transaction issues lock and unlock requests in two phases :
• Phase 1: Growing Phase (Expanding)
– transaction can only obtain locks
– transaction cannot release locks
• Phase 2: Shrinking Phase (Contracting)
– transaction can only release locks
– transaction cannot obtain locks
Consider the following transaction:
Lock-X(B)   \
R(B)         |
W(B)         |  Growing Phase
Lock-X(A)    |
R(A)         |
W(A)        /
Unlock(B)   \   Shrinking Phase
Unlock(A)   /
The transaction is in two phase
• This transaction is not in two phase
Lock-X(B)
R(B)
W(B)
Unlock(B)
Lock-X(A)
R(A)
W(A)
Unlock(A)
• The unlock instructions need not appear only at
the end of the transaction
• This transaction is also in two phase
Lock-X(B)
R(B)
W(B)
Lock-X(A)
Unlock(B)
R(A)
W(A)
Unlock(A)
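Whether a transaction obeys the two-phase property can be checked mechanically. The sketch below (the function name and operation strings are illustrative) tests the three example transactions above.

```python
# Checker for the two-phase property: once a transaction has issued any
# unlock, it must not issue another lock request.

def is_two_phase(ops):
    """ops: list of strings like 'Lock-X(B)', 'Unlock(B)', 'R(B)', 'W(A)'."""
    shrinking = False
    for op in ops:
        if op.startswith("Unlock"):
            shrinking = True
        elif op.startswith("Lock") and shrinking:
            return False  # a lock acquired after an unlock: not two phase
    return True

# The three example transactions above:
t_a = ["Lock-X(B)", "R(B)", "W(B)", "Lock-X(A)", "R(A)", "W(A)",
       "Unlock(B)", "Unlock(A)"]
t_b = ["Lock-X(B)", "R(B)", "W(B)", "Unlock(B)",
       "Lock-X(A)", "R(A)", "W(A)", "Unlock(A)"]
t_c = ["Lock-X(B)", "R(B)", "W(B)", "Lock-X(A)", "Unlock(B)",
       "R(A)", "W(A)", "Unlock(A)"]
assert is_two_phase(t_a)
assert not is_two_phase(t_b)   # locks A after unlocking B
assert is_two_phase(t_c)       # unlocks need not come only at the end
```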
• The protocol assures serializability.
• It can be proved that the transactions can be serialized in the
order of their lock points (i.e. the point where a transaction
acquired its final lock).
• Two-phase locking does not ensure freedom
from deadlocks
T3: Lock-X(B)
T3: R(B)
T3: W(B)
T4: Lock-S(A)
T4: R(A)
T4: Lock-S(B)   (waiting for T3 to release B)
T3: Lock-X(A)   (waiting for T4 to release A)
• Transactions T3 and T4 are in two phase, but
they are deadlocked
• Cascading roll-back is possible under
two-phase locking.
T5: Lock-X(A)
T5: R(A)
T5: W(A)
T5: Lock-S(B)
T5: R(B)
T5: W(B)
T5: Unlock(A)
T6: Lock-X(A)
T6: R(A)
T6: W(A)
T6: Unlock(A)
T7: Lock-S(A)
T7: R(A)
• All transactions are in two phase
• But failure of T5 after Read(A) of T7 leads to a cascading roll
back of T6 and T7
• Cascading rollbacks can be avoided by a two phase locking
protocol called the strict two phase locking protocol
strict two-phase locking:
• Here a transaction must hold all its exclusive locks till it
commits/aborts.
• This ensures that any data item written by an uncommitted
transaction is locked in exclusive mode until the
transaction commits, preventing any other transaction from
reading the data
Rigorous two-phase locking
• Another, stricter variant of the two phase locking protocol
• Here all locks are held till commit/abort.
• In this protocol transactions can be serialized in the order in
which they commit.
• Most database systems implement either strict or rigorous two
phase locking protocols
LOCK CONVERSIONS
• Consider two transactions T8 and T9:
T8: Lock-X(A1); Lock-S(A2); Lock-S(A3); R(A1); R(A2); R(A3); W(A1)
T9: Lock-S(A1); Lock-S(A2); Lock-S(A3); R(A1); R(A2); R(A3)
• Say T9 wants to read the value of A1 while T8 is reading it; T9 cannot do so
because T8 is holding an exclusive (X) lock on A1
• However T8 requires the X lock on A1 only towards the end of its transaction,
when it writes A1
• Thus if T8 could initially lock A1 in shared mode and later change the lock to exclusive mode,
we could get more concurrency, since then T8 and T9 would be able to access A1
simultaneously for reading
• This observation leads to a refinement of the two phase locking protocol in which lock
conversions are permitted
• Upgrading
– A shared lock can be upgraded to an exclusive lock
– Can take place only in the growing phase
• Downgrading
– An exclusive lock can be downgraded to a shared lock
– Can take place only during the shrinking phase
T1: Lock-S(A1)
T2: Lock-S(A1)
T1: Lock-S(A2)
T2: Lock-S(A2)
T2: Unlock(A1)
T2: Unlock(A2)
T1: Upgrade(A1)
• Two-phase locking with lock conversions:
First Phase (Growing Phase)
– can acquire a lock-S on item
– can acquire a lock-X on item
– can convert a lock-S to a lock-X (upgrade)
Second Phase (Shrinking Phase)
– can release a lock-S
– can release a lock-X
– can convert a lock-X to a lock-S (downgrade)
Acquisition of Locks
A transaction Ti issues the standard read/write instruction,
without explicit locking calls.
• The operation read(D) is processed as:
if Ti has a lock on D
then
read(D)
else begin
if necessary wait until no other
transaction has a lock-X on D
grant Ti a lock-S on D;
read(D)
end
• write(D) is processed as:
if Ti has a lock-X on D
then
write(D)
else begin
if necessary wait until no other transaction has any lock
on D,
if Ti has a lock-S on D
then
upgrade lock on D to lock-X
else
grant Ti a lock-X on D
write(D)
end;
• All locks are released after commit or abort
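The read(D)/write(D) processing above can be sketched in Python for a single simulated transaction. The wait branches are only indicated in comments, since other transactions are not modelled here; the function names are illustrative.

```python
# Runnable sketch of implicit lock acquisition for one transaction Ti.
# locks maps each data item to the mode ('S' or 'X') Ti currently holds.

def read_item(locks, D):
    if D not in locks:
        # (wait here until no other transaction holds lock-X on D, then:)
        locks[D] = "S"          # grant Ti a lock-S on D
    return f"read({D})"

def write_item(locks, D):
    if locks.get(D) != "X":
        # (wait here until no other transaction holds any lock on D, then:)
        if locks.get(D) == "S":
            locks[D] = "X"      # upgrade the existing lock-S to lock-X
        else:
            locks[D] = "X"      # grant Ti a fresh lock-X on D
    return f"write({D})"

locks = {}
read_item(locks, "D")
assert locks["D"] == "S"        # the read acquired a shared lock implicitly
write_item(locks, "D")
assert locks["D"] == "X"        # the write upgraded it to exclusive
```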
Implementation of Locking (Lock Manager)
• A lock manager is a process used for the implementation of locking
• The lock manager receives lock and unlock messages from the transactions and sends
messages in reply
• The lock manager replies to a lock request by either
– sending a lock grant message, or
– sending a message asking the transaction to roll back, in case of a deadlock
• The requesting transaction waits until its request is answered
• For Unlock messages, the lock manager only sends an acknowledgement
• Also the unlock may result in grant message to a waiting transaction
• The lock manager maintains a data-structure called a lock
table to record granted locks and pending requests
(Lock table figure: the white rectangles labelled 17, 123, 1912, 14 and 144 are
the data items)
• For each data item that is currently locked, the lock table maintains a linked list
of records, one for each request, in order in which the requests arrived
• lock table also shows which transactions have been granted locks and which are
waiting
• Lock table also records the type of lock granted or requested (not shown in this
table)
• New request is added to the end of the queue of requests for the data item, and
granted if it is compatible with all earlier locks
• Unlock requests result in the request being deleted, and later requests are
checked to see if they can now be granted
• If transaction aborts, all waiting or granted requests of the transaction are
deleted
– lock manager may keep a list of locks held by each transaction, to implement
this efficiently
• The algorithm guarantees freedom from starvation because a
request is never granted while a request received earlier is
waiting
Multiple Granularity
• The granularity of a data item is defined as its size
– Coarse or high granularity – e.g. entire database
– Fine granularity – e.g. a field or a record
• The multiple granularity scheme allows data items to be of various sizes
and defines a hierarchy of data granularities, where the small
granularities are nested within larger ones
• Can be represented graphically as a tree
• The database may be considered a hierarchy consisting of the
following nodes
– The entire database
– Some designated portion of the database (Area)
– A relation
– A record (tuple)
– A field of a record (attribute)
Example of Granularity Hierarchy
– Coarse granularity: low locking overhead, as the lock manager has to handle only
one data item
• Drawback is low concurrency, as only one transaction can run in exclusive mode at a time even
though it may need only a very small portion of the database
– Fine granularity: high concurrency
• Drawback is high locking overhead, as a transaction that needs many records and fields will
have to request many locks
• When a transaction locks a node in the tree explicitly, it
implicitly locks all the descendents of the node in the same
mode.
• The various locking modes are summarised below
• S or Shared Lock :
– Node in question is locked explicitly in shared mode
– All descendent nodes are locked implicitly in shared mode
– Locked nodes (explicitly or implicitly) are accessible only for reading
– No transaction can update these nodes
• X or Exclusive Lock :
– Node in question is locked explicitly in exclusive mode
– All descendent nodes are locked implicitly in exclusive mode
– No transaction can concurrently access these nodes
• Hierarchical organization of the database however increases the
overhead in determining whether or not a request for a lock from a
transaction can be accepted
• Consider a portion of the database that is under the hierarchy specified
by the node N
• Suppose a transaction To needs a share lock on this portion of the
database
• How can the lock manager know efficiently if any other transaction has
already locked some portion of the database rooted by N, and if so,
whether the node is compatible with the request of transaction To
• Checking each data item under N is inefficient
• A new mode of locking, the intention mode, is introduced in this
scheme in order to achieve higher degree of concurrency
• A transaction can lock a hierarchically structured data item in
intention mode
– This implies that the transaction intends to explicitly lock a lower
portion of the hierarchy
– In effect intention locks are placed on all ancestors of a node until the
node that is to be locked explicitly is reached
– The intention mode simply indicates that the transaction intends to lock
the lower level in some mode
– intention Share (IS)
– intention Exclusive (IX)
• IS or Intention Share :
– A node is locked in IS mode when a transaction Ta intends to lock
the lower level in the shared mode
– Node locked in IS mode and its descendents cannot be locked in
exclusive mode
– Descendents of the node (locked in IS mode) may be locked
individually in shared or IS mode
– Descendents of the node that is locked in IS mode are not locked
implicitly
• IX or Intention Exclusive
– A node is locked in IX mode when a transaction Ta intends to lock the lower
level in the exclusive mode
– The node itself cannot be exclusively locked
– However any descendents, if not already locked, can be locked in any of
the locking modes (S or X)
– Descendents of the node that is locked in IX mode are not locked implicitly
SIX or Shared and Intention Exclusive
• The intention lock locks a node to indicate that the lower level nodes
are being locked either in shared or exclusive mode, but it does no
implicit locking of the lower level nodes
• Each lower level has to be locked explicitly in the mode required by
the transaction
• This adds a fairly large overhead if a transaction needs to access a
subtree of the database and modify only a small portion of the
subtree rooted at the intentionally locked node
• SIX lock locks the node in question in SIX mode and all lower
level nodes implicitly in shared mode
• Any of the descendants can be explicitly locked in
– Exclusive mode
– Intention exclusive mode
– SIX mode
• Higher concurrency can be achieved in this manner
• Example
– A transaction To needs to read an entire subtree and modify only a few of its nodes
– Using intention locks, the locks will be applied as follows
• The root node and all nodes in the path leading to the subtree are locked
in IX mode (explicitly)
• Each node of the subtree that is read or modified is locked explicitly in S or X mode
• Using SIX locks, the locks will be applied as follows
• The root of the subtree is locked in SIX mode (explicitly)
• All descendants are locked in shared mode (implicitly)
• Any of the descendants can be explicitly locked in exclusive mode, IX mode
or SIX mode
– Thus more concurrency is achieved, as the descendant nodes that are to be
modified can be locked explicitly in any one of the following modes
• IX
• Exclusive
• SIX
• The privilege of the modes of locking is as shown
• The exclusive mode has the highest privilege and the IS mode
has the lowest privilege
Compatibility Matrix with Intention Lock Modes
• The compatibility matrix for all lock modes is:
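Since the matrix figure is not reproduced in these notes, the standard compatibility matrix for the five modes is written out below as a lookup table (a sketch; the dictionary layout is an assumption). True means the requested mode (outer key) can be granted alongside the held mode (inner key).

```python
# Standard multiple-granularity lock compatibility matrix.
MODES = ["IS", "IX", "S", "SIX", "X"]
COMPAT = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

assert COMPAT["IS"]["IX"]        # intention modes coexist freely with IS
assert not COMPAT["S"]["IX"]     # S conflicts with IX: a writer below
assert COMPAT["SIX"]["IS"]       # SIX still allows readers entering via IS
assert all(not COMPAT["X"][m] for m in MODES)  # X is compatible with nothing
# The matrix is symmetric:
assert all(COMPAT[a][b] == COMPAT[b][a] for a in MODES for b in MODES)
```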
Multiple Granularity Locking Scheme
• Transaction Ti can lock a node Q, using the following rules:
1. Two phase locking protocol is followed i.e. a transaction is not allowed to request for additional
locks if it has released a lock
2. The lock compatibility matrix must be observed.
3. A transaction is required to request locks in the root-to-leaf direction and release locks in the
leaf-to-root direction. Consequently a transaction cannot unlock a node if it is currently holding a
lock on one of the descendants of the node. Similarly a transaction cannot lock a node unless it
already has a compatible lock on the ancestors of the node
4. A node Q can be locked by Ti in S or IS mode only if the parent of Q is currently locked by Ti in
either IX or IS mode.
5. A node Q can be locked by Ti in X, SIX, or IX mode only if the parent of Q is currently locked by Ti in
either IX or SIX mode.
This locking scheme ensures serializability
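Rules 4 and 5 above can be sketched as a small predicate. The function name and argument convention are illustrative assumptions.

```python
# Before locking node Q in a given mode, the transaction must hold an
# appropriate lock on Q's parent (the root has no parent to check).

def can_lock(requested, parent_mode):
    """parent_mode: the mode this transaction holds on Q's parent, or None."""
    if parent_mode is None:              # Q is the root
        return True
    if requested in ("S", "IS"):         # rule 4: parent must be in IS or IX
        return parent_mode in ("IS", "IX")
    if requested in ("X", "SIX", "IX"):  # rule 5: parent must be in IX or SIX
        return parent_mode in ("IX", "SIX")
    return False

assert can_lock("S", "IS")       # read a record under an IS-locked relation
assert can_lock("X", "IX")       # write a record under an IX-locked relation
assert not can_lock("X", "IS")   # IS on the parent does not permit X below
assert can_lock("X", None)       # root itself may be locked directly
```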
TIMESTAMP BASED PROTOCOLS
• Two phase locking ensures serializability of schedules based on
the order in which transactions acquire locks on data items
• Another method to ensure serializability is the timestamp
ordering protocol
– A serial order is created among the concurrent transactions by
assigning to each transaction a unique timestamp
• A unique fixed timestamp TS(Ti) is associated with each
transaction when it enters the system
• If an old transaction Ti has time-stamp TS(Ti), a new
transaction Tj is assigned time-stamp TS(Tj) such that TS(Ti)
<TS(Tj).
• Ti is said to be older than Tj i.e. a transaction with a smaller
timestamp value is considered older than another
transaction with a larger timestamp value
• Timestamps are assigned to transactions in two ways
– Value of the system clock
– A logical counter
• The serial order that the system enforces is the
chronological order of the timestamps of the concurrent
transactions
• If two transactions Ti and Tj with timestamps TS(Ti) and
TS(Tj) respectively such that TS(Ti) < TS(Tj) are to run
concurrently, then the schedule produced by the system is
equivalent to running the older transaction Ti first,
followed by the younger transaction Tj
• The contention problem between the two transactions in the
TSO system is resolved by rolling back one of the conflicting
transactions
• A conflict is said to occur when
– An older transaction tries to read a value that is written by a younger
transaction
– OR when an older transaction tries to modify a value already read or
written by a younger transaction
• Both actions indicate that the older transaction was “too late”
in performing the required read/write operations and it could
be using values from different “generations” for different data
items
• In order for the system to determine if an older transaction is attempting to
process(read/write) a data item already read or written by a younger transaction,
each data item is assigned two time stamps
– W-timestamp(Q) or Write TS is the largest timestamp of any transaction that
executed write(Q) successfully, also denoted by W-TS(Q)
– R-timestamp(Q) or Read TS is the largest timestamp of any transaction that
executed read(Q) successfully, also denoted by R-TS(Q)
• Q in both cases is the data item
• These Timestamps are updated whenever a new read(Q) or write(Q) instruction is
executed
Timestamp-Based Protocol
• Whenever a transaction Ti needs to perform a Read(Q) or a Write(Q) operation, it requests
permission from the system to perform the operation
• The system handles the request in the following manner
When Ti issues a Read(Q)
1. If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten
by a younger transaction
● Hence, the read operation is rejected, and Ti is rolled back and restarted with a new
system generated timestamp
2. If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to
max(R-timestamp(Q), TS(Ti)).
When Ti issues a Write(Q)
1. If TS(Ti) < R-timestamp(Q), then a younger transaction has already read the value
of Q and it would be an error to update the value of Q
● Ti is not allowed to update the value of Q and is rolled back and restarted with a new
system generated timestamp
2. If TS(Ti) < W-timestamp(Q), then a younger transaction has already written a
value of Q and Ti would be writing an obsolete value of Q
● Hence, the write operation is rejected, and Ti is rolled back and restarted with a new
system generated timestamp
3. Otherwise (TS(Ti) ≥ R-timestamp(Q) and TS(Ti) ≥ W-timestamp(Q)), the transactions that
last read and modified Q are older than Ti, so the write operation is executed and
W-timestamp(Q) is set to TS(Ti)
• A transaction Ti that is rolled back, is automatically restarted
by the system and assigned a new timestamp
Correctness of Timestamp-Ordering Protocol
• The timestamp-ordering protocol guarantees serializability since all the arcs in the precedence
graph are of the form:
Thus, there will be no cycles in the precedence graph
• Timestamp protocol ensures freedom from deadlock as no transaction ever waits.
• The read or write is either successfully performed or the transaction is rolled back
• But the schedule may not be free from cascading roll back
– Cascading rollback could be avoided by disallowing access to the values modified by a
transaction until the transaction commits (as in the strict or rigorous locking schemes).
This adds additional overhead and waiting, as in the locking scheme. Also
waiting can cause a deadlock
• Advantages of TSO protocol
– Deadlock free
– Free from locking overheads
• Disadvantages of TSO Protocol
– Cascading Rollback
– Non recoverable schedules
– There is a possibility that a transaction may be rolled back again and
again and hence face starvation
– When a transaction is rolled back, the work performed by it is
undone. Redoing it requires additional work, hence bringing down the
system throughput
Thomas’ Write Rule
• A modification of the TSO protocol that allows greater level
of concurrency
• Consider the following schedule
T1: R(Q)
T2: W(Q)
T1: W(Q)
• Assuming that TS(T1) < TS(T2) and applying the TSO protocol,
– R(Q) of T1 succeeds and R-TS(Q) = TS(T1)
– W(Q) of T2 succeeds as TS(T2) > R-TS(Q)
• W-TS(Q) = TS(T2)
– When T1 attempts W(Q), then
• TS(T1) < W-TS(Q)
• W(Q) of T1 is rejected and transaction T1 is rolled back
• Thus we see that the TSO protocol requires the rollback of T1
• But this rollback is unnecessary as the value of Q written by T1
will never be read because
– Any transaction Ti with TS(Ti) < TS(T1) that attempts to perform R(Q)
will be rolled back as TS(Ti) < W-TS(Q)
– Any transaction Tj with TS(Tj) > TS(T2) will read the value of Q written
by T2 rather than the value written by T1
• This observation leads to a modified version of the TSO protocol
called Thomas’ Write Rule
Thomas’ Write Rule
• Under this protocol, the obsolete write operations can be
ignored under certain circumstances
• The protocol rules for the Read operations remain
unchanged
• The protocol rules for the Write operations are as follows
– Suppose transaction Ti issues a W(Q)
• If TS(Ti) < R-TS(Q), then the value of Q that Ti is producing was previously
needed, hence W(Q) is rejected and Ti is rolled back
• If TS(Ti) < W-TS(Q), then Ti is attempting to write an obsolete value of Q.
Hence the W(Q) is ignored
• Otherwise the W(Q) is executed and W-TS(Q) = TS(Ti)
• Thomas’ Write Rule makes use of View Serializability to achieve serializable schedules
• The schedule below is not conflict serializable (draw the precedence graph and check) and
hence is not possible under the two phase locking or TSO protocols, as these protocols
ensure conflict serializability
T1: R(Q)
T2: W(Q)
T1: W(Q)
• Under Thomas’ Write Rule, W(Q) of T1 is ignored.
• The result is a schedule that is view equivalent to the serial schedule T1-> T2
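The write rule can be sketched by modifying the TSO write test so that an obsolete write is ignored instead of forcing a rollback. Names are illustrative; the data item carries the same r_ts/w_ts fields as in the TSO sketch.

```python
# Thomas' Write Rule: same as TSO for reads; for writes, an obsolete write
# (TS(Ti) < W-TS(Q)) is silently ignored rather than rolled back.

def thomas_write(ts, q):
    if ts < q.r_ts:
        return "rollback"   # the value Ti is producing was previously needed
    if ts < q.w_ts:
        return "ignored"    # obsolete write: skip it, do NOT roll back
    q.w_ts = ts
    return "write"

class Q:  # tiny stand-in data item
    r_ts, w_ts = 0, 0

q = Q()
# The schedule from the example, with TS(T1) = 1 < TS(T2) = 2:
q.r_ts = 1                             # T1: R(Q) sets R-TS(Q) = 1
assert thomas_write(2, q) == "write"   # T2: W(Q) sets W-TS(Q) = 2
assert thomas_write(1, q) == "ignored" # T1: W(Q) ignored instead of rollback
```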
VALIDATION BASED PROTOCOLS (Optimistic Techniques)
• Lock based and timestamp based concurrency control
techniques require that the transaction either waits or rolls
back and these are referred to as pessimistic techniques
• Also these techniques require a check before executing a read
or write, and this is expensive
• An alternative technique is the validation based technique
• The validation based protocol assumes that each transaction
executes in three phases as follows :
• Read and execution phase:
– Starts with the activation of the transaction
– All data items are read into local variables and any modifications that
are made are to these local copies only
• Validation phase:
– The DBMS checks that the modifications made to the data
items will not cause any inconsistencies in the database
• Any possibility of inconsistency due to modifications causes
the transaction to rollback
• Write Phase :
– If the transaction has passed the validation phase, the
modifications made by the transaction are committed
• Each transaction Ti has 3 timestamps
– Start(Ti) : the time when Ti started its execution and hence its Read phase
– Validation(Ti) : the time when Ti finished its Read phase and entered its Validation
phase
• All modifications are made to the local copies of the database items and these local copies will not
be accessible to other concurrent transactions
– Finish(Ti) : the time when Ti finished its write phase
• After the write phase, all modifications are reflected in the database
• The serializability order is determined by the timestamp given at validation time, to
increase concurrency.
– Thus TS(Ti) is given the value of Validation(Ti).
• This protocol is useful and gives a greater degree of concurrency because relatively
few transactions will have to be rolled back.
Validation for Transaction Tj
• A transaction Tj can complete its validation phase successfully if, for all Ti with TS(Ti) <
TS(Tj), one of the following conditions holds:
1. finish(Ti) < start(Tj) i.e. older transaction Ti must complete all its phases before start
of Tj
2. Finish(Ti) < Validation(Tj) so that write of Ti does not overlap with Read of Tj or Tj
reads after Ti has finished writing
3. Validation(Ti) < Validation(Tj) i.e. Ti completes its Read before Tj completes its Read
so that Ti does not overlap with Read of Tj
• For the set of conditions, to validate Tj the conditions are
checked in the sequence of 1,2,3
– If the first condition does not hold, then the second is checked. If the
second condition is false, the third is checked
• If any of the validation conditions is true, then there is no
conflict and Tj is validated successfully
• If none of the conditions holds, then it means conflict, the
validation of Tj fails and it is rolled back and re-started
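The three conditions can be sketched as a predicate. Note this sketch checks only the timestamp conditions as stated above; it does not model read/write sets. The dictionary layout and the name validates are illustrative assumptions.

```python
# Validation test for Tj against each older Ti (TS(Ti) < TS(Tj)).

def validates(ti, tj):
    """ti, tj: dicts with 'start', 'validation', 'finish'
    ('finish' may be None if the write phase has not completed yet)."""
    # Condition 1: Ti finished all three phases before Tj started
    if ti["finish"] is not None and ti["finish"] < tj["start"]:
        return True
    # Condition 2: Ti finished writing before Tj entered validation
    if ti["finish"] is not None and ti["finish"] < tj["validation"]:
        return True
    # Condition 3: Ti entered validation before Tj entered validation
    if ti["validation"] < tj["validation"]:
        return True
    return False   # conflict: Tj is rolled back and restarted

t_old = {"start": 1, "validation": 4, "finish": 6}
t_new = {"start": 7, "validation": 9, "finish": None}
assert validates(t_old, t_new)           # condition 1 holds: 6 < 7

t_new2 = {"start": 5, "validation": 8, "finish": None}
assert validates(t_old, t_new2)          # condition 2 holds: 6 < 8
```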
MULTIVERSION SCHEMES
• In concurrency control schemes discussed so far, any
modification to the data required that the transaction have
exclusive use of the data and other transactions would be
locked out or aborted until the data item could be accessed
• In the multi version scheme, each Write on the data item X is
achieved by making a new copy or version of the data item
• Thus a history of the evolution of the value of the data item is
recorded in the database
• As far as users are concerned, their transactions running on a
system with multiple versions will work in an identical manner to a
single version system
• When a transaction needs to read a data item, say Q, for which multi versions exist,
the DBMS selects one of the versions of Q
• The value read by the transaction must be consistent with some serial execution of
the transaction with a single version of the data item
• Thus the concurrency control problem is transferred into the selection of the
correct version from the multiple versions of the data item
• Many schemes such as
– Multiversion Timestamp Ordering Protocol
– Multiversion Locking Protocol
– And others have been proposed using the multiversion scheme
• We will discuss the Multiversion Timestamp Ordering Protocol
• Suppose that transaction Ti issues a read(Q) or write(Q) operation. Let Qk denote the
version of Q whose write timestamp is the largest write timestamp less than or equal to
TS(Ti).
1. If transaction Ti issues a read(Q), then the value returned is the content of version Qk.
2. If transaction Ti issues a write(Q):
– if TS(Ti) < R-timestamp(Qk), then transaction Ti is rolled back
– if TS(Ti) = W-timestamp(Qk), the contents of Qk are overwritten
– otherwise, a new version of Q is created with R-timestamp(Q) = TS(Ti)
and W-timestamp(Q) = TS(Ti)
• The multiversion scheme never allows a read operation to be
delayed
• However the overhead of a read is the update of the read
timestamp of the data item.
– This is advantageous if the majority of operations are reads, and hence
only one version is likely to exist for most data items
• Serializability is achieved by rollback which may result in
cascading rollback and hence expensive.
– Cascading rollback can be avoided by not allowing other transactions
to use the versions created by uncommitted transactions
• Problem of deadlock does not occur as no transaction is made
to wait
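The version-selection and write rules above can be sketched as follows. The version records and the function names are illustrative assumptions.

```python
# Multiversion timestamp ordering: each version of Q stores (value, w_ts,
# r_ts); reads are never rejected, only writes can force a rollback.

def latest_version(versions, ts):
    """The version with the largest W-TS <= TS(Ti): this is Qk in the text."""
    return max((v for v in versions if v["w_ts"] <= ts),
               key=lambda v: v["w_ts"])

def mv_read(versions, ts):
    qk = latest_version(versions, ts)
    qk["r_ts"] = max(qk["r_ts"], ts)   # the only overhead of a read
    return qk["value"]

def mv_write(versions, ts, value):
    qk = latest_version(versions, ts)
    if ts < qk["r_ts"]:
        return "rollback"          # a younger reader has already used Qk
    if ts == qk["w_ts"]:
        qk["value"] = value        # Ti overwrites its own version
        return "overwrite"
    versions.append({"value": value, "w_ts": ts, "r_ts": ts})
    return "new version"

versions = [{"value": 100, "w_ts": 0, "r_ts": 0}]
assert mv_read(versions, 5) == 100              # T(5) reads initial version
assert mv_write(versions, 10, 200) == "new version"
assert mv_read(versions, 7) == 100              # T(7) still sees the old one
assert mv_write(versions, 3, 50) == "rollback"  # older than reader T(7)
```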
Multiversion Two-Phase Locking
• Differentiates between read-only transactions and update transactions
• Update transactions acquire read and write locks, and hold all locks up to the end of the
transaction. That is, update transactions follow rigorous two-phase locking.
– Each successful write results in the creation of a new version of the data item written.
– each version of a data item has a single timestamp whose value is obtained from a
counter ts-counter that is incremented during commit processing.
• Read-only transactions are assigned a timestamp by reading the current value of
ts-counter before they start execution; they follow the multiversion timestamp-ordering
protocol for performing reads.
• When an update transaction wants to read a data item:
– it obtains a shared lock on it, and reads the latest version.
• When it wants to write an item
– it obtains an X lock on it; it then creates a new version of the item and sets this version's
timestamp to ∞.
• When update transaction Ti completes, commit processing occurs:
– Ti sets timestamp on the versions it has created to ts-counter + 1
– Ti increments ts-counter by 1
• Read-only transactions that start after Ti increments ts-counter will see the values
updated by Ti.
• Read-only transactions that start before Ti increments the
ts-counter will see the value before the updates by Ti. (According to TSO)
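Commit processing with ts-counter can be sketched as follows. Locking itself is not modelled here, and all names are illustrative assumptions.

```python
import math

# Multiversion 2PL commit processing: update transactions create versions
# stamped with infinity, then stamp them with ts-counter + 1 at commit.

ts_counter = 0
versions = [{"value": 100, "ts": 0}]

def update_write(value):
    """An update transaction creates a new version stamped with infinity."""
    v = {"value": value, "ts": math.inf}
    versions.append(v)
    return v

def commit(created):
    """Stamp the created versions with ts_counter + 1, then bump the counter."""
    global ts_counter
    for v in created:
        v["ts"] = ts_counter + 1
    ts_counter += 1

def readonly_read(ts):
    """Read-only transactions follow multiversion timestamp ordering."""
    return max((v for v in versions if v["ts"] <= ts),
               key=lambda v: v["ts"])["value"]

snapshot_ts = ts_counter          # a read-only txn starts before the update
v = update_write(200)
assert readonly_read(snapshot_ts) == 100   # infinity-stamped version invisible
commit([v])
assert ts_counter == 1
assert readonly_read(ts_counter) == 200    # starts after commit: sees update
assert readonly_read(snapshot_ts) == 100   # earlier snapshot still sees 100
```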
DEADLOCK HANDLING
• System is deadlocked if there is a set of transactions such that every transaction in the set is waiting for another
transaction in the set.
• A schedule with deadlock was shown earlier (transactions T3 and T4)
• Two principal methods for dealing with deadlocks
– Deadlock Prevention
• Ensures that the system will never enter a deadlock
– Deadlock Detection and Recovery from Deadlock
• Allows the system to enter a deadlock state and then tries to recover from
deadlock
– Both methods involve transaction rollback
– Rollback may be partial i.e. transaction is rolled back to the point
where it obtained a lock whose release resolves the deadlock
• Prevention is commonly used if the probability that the system will
enter a deadlock is relatively high; otherwise detection and recovery
are more efficient
• The overhead of ensuring that there is no deadlock is very
high; so if deadlocks are rare, detection of deadlock and
recovery from it are preferred
• Also, deadlock avoidance/prevention schemes avoid all
potential deadlocks, even those that would not have turned into an
actual deadlock
Deadlock Prevention
• Care is taken to ensure that a deadlock never occurs
• The two-phase locking protocol ensures serializability but does not guarantee a
deadlock-free schedule
• Some prevention strategies :
1. Require that each transaction locks all the data items it requires before it begins
execution (predeclaration) :
• Disadvantage is that degree of concurrency is lowered considerably
• A transaction may need a data item for a very short period and locking all data
items during the entire duration of the transaction may make the data items
inaccessible to other concurrent transactions
2. Impose partial ordering of all data items and require that a transaction can lock data items
only in the order specified by the partial order (graph-based protocol):
– Say data items may be ordered having ranks 1,2,3…
– A transaction requiring data items say A(with rank i) and B(with rank j, j>i) must first
request a lock for A(with lower rank) and then only it can request a lock for B(with
higher rank)
– All transactions follow the same protocol, even if within a particular
transaction the data items are actually needed in a different order
– This scheme too reduces concurrency, but not to the same extent as the previous
one
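The rank-ordering rule of strategy 2 can be sketched as follows; the item names and ranks are invented for the illustration. A transaction that needs items in some arbitrary order still acquires the locks in ascending rank, so no cycle of waits can form:

```python
import threading

# Illustrative ranks for the data items (strategy 2: a fixed partial order).
RANK = {"A": 1, "B": 2, "C": 3}
LOCKS = {item: threading.Lock() for item in RANK}

def lock_in_order(items):
    """Acquire locks on `items` in ascending rank, regardless of the
    order in which the transaction will actually use them."""
    ordered = sorted(items, key=RANK.get)
    for item in ordered:
        LOCKS[item].acquire()
    return ordered

def unlock(items):
    for item in items:
        LOCKS[item].release()

# A transaction that uses C first and A later must still lock A before C.
held = lock_in_order(["C", "A"])
print(held)            # ['A', 'C']
unlock(held)
```

Because every transaction requests locks in the same global order, no transaction can hold a high-rank lock while waiting for a low-rank one, which rules out the cycle a deadlock requires.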
3. Another set of approaches is to decide whether to make a
transaction wait or roll back. The decision is controlled by
timestamp values
– Rolled back transactions retain their timestamp values and
hence their “seniority”
– So in subsequent recurrence they get a “higher priority”
– Two protocols under this approach
» WAIT DIE
» WOUND WAIT
wait-die scheme — non-preemptive
– If the requesting transaction Ti is older than the transaction Tj that is holding
a lock on the requested data item, the requesting transaction (older) is
allowed to wait
• Ti waits if TS(Ti) < TS(Tj)
– If the requesting transaction is younger than the transaction that is holding a
lock on the requested data item, the requesting transaction (younger) is
rolled back (it “dies”)
• Ti is rolled back if TS(Ti) > TS(Tj)
– The rolled back transaction retains its old timestamp when
restarted
– A transaction may die several times before acquiring the needed data item
• Example :
• Say transactions T1, T2, T3 have timestamps 5, 10 and 15
respectively
– If T1 requests a data item held by T2, then T1 waits
– If T3 requests a data item held by T2, then T3 will be rolled back
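The wait-die decision can be captured in a small illustrative function (the function name and return values are assumptions for this sketch; smaller timestamp means older):

```python
def wait_die(ts_requester, ts_holder):
    """Wait-die (non-preemptive): an older requester waits,
    a younger requester is rolled back ('dies')."""
    return "wait" if ts_requester < ts_holder else "rollback"

# T1, T2, T3 have timestamps 5, 10 and 15, as in the example above.
print(wait_die(5, 10))    # T1 (older) requests item held by T2 -> 'wait'
print(wait_die(15, 10))   # T3 (younger) requests item held by T2 -> 'rollback'
```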
• wound-wait scheme — preemptive
– If the requesting transaction Ti is older than the transaction Tj that is
holding a lock on the requested data item, the younger transaction Tj is
rolled back
• The younger transaction Tj is wounded by the older
transaction Ti and dies
• Tj is rolled back if TS(Ti) < TS(Tj)
• The rolled back transaction retains its old timestamp
when restarted
– If the requesting transaction Ti is younger than the transaction Tj that is
holding a lock on the requested data item, the requesting transaction
(younger) is allowed to wait
• Ti waits if TS(Ti) > TS(Tj)
• Example :
• Say transactions T1, T2, T3 have timestamps 5, 10 and 15
respectively
– If T1 requests a data item held by T2, then T2 is rolled back (T1 wounds T2)
– If T3 requests a data item held by T2, then T3 will wait
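Symmetrically, the wound-wait decision can be sketched as an illustrative function (names and return values are assumptions; smaller timestamp means older):

```python
def wound_wait(ts_requester, ts_holder):
    """Wound-wait (preemptive): an older requester preempts ('wounds')
    the younger holder, which is rolled back; a younger requester waits."""
    return "wound holder" if ts_requester < ts_holder else "wait"

# T1, T2, T3 have timestamps 5, 10 and 15, as in the example above.
print(wound_wait(5, 10))    # T1 (older) requests item held by T2 -> T2 is rolled back
print(wound_wait(15, 10))   # T3 (younger) requests item held by T2 -> T3 waits
```

Note that the two schemes make opposite choices for the same request: wait-die rolls back the younger requester, while wound-wait rolls back the younger holder.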
• In both schemes, the older transaction never rolls back
• Thus the older transaction would eventually get all the data
items it needs
• Thus problem of starvation is minimized
WAIT-DIE
• If the requesting transaction Ti (younger) is rolled back, it restarts
with the same timestamp
• If the transaction Tj (older) is still holding the lock on the data
item, Ti will be rolled back again
• Thus Ti may die several times before acquiring the needed data item
WOUND-WAIT
• Tj (younger) is rolled back because Ti (older) is requesting the data
item Tj holds
• When Tj (younger) restarts and requests the data item being held by
Ti (older), Tj waits
• Thus there are fewer rollbacks
• Thus Wound-Wait is generally the better scheme
Deadlock Detection and Recovery
• Deadlock detection is based on a data structure called the wait-for graph
• a wait-for graph, which consists of a pair G = (V,E),
– V is a set of vertices (all the transactions in the system)
– E is a set of edges; each element is an ordered pair Ti →Tj.
• If Ti → Tj is in E, then there is a directed edge from Ti to Tj,
implying that Ti is waiting for Tj to release a data item.
• When Ti requests a data item currently being held by Tj,
the edge Ti → Tj is inserted in the wait-for graph.
• This edge is removed only when Tj is no longer holding a data
item needed by Ti.
• The system is in a deadlock state if and only if the wait-for
graph has a cycle.
• Each transaction involved in the cycle is said to be deadlocked
• Example (graph figures omitted):
– First graph: T17 is waiting for T18 and T19; T18 is waiting
for T20; T19 is waiting for T18
– There is no cycle, hence there exists no deadlock
– Second graph: the graph contains a cycle, so transactions
T18, T19 and T20 are deadlocked
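Cycle detection on the wait-for graph can be sketched with a depth-first search. The first graph below reproduces the example above; the edge T20 → T19 used to close the cycle in the second graph is an assumption for illustration (the notes only state that T18, T19 and T20 are deadlocked):

```python
def find_cycle(edges):
    """edges: dict mapping a waiting txn to the list of txns it waits for.
    Returns the transactions on a cycle, or [] if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / on stack / finished
    color = {t: WHITE for t in edges}
    stack = []

    def dfs(t):
        color[t] = GRAY
        stack.append(t)
        for u in edges.get(t, []):
            if color.get(u, WHITE) == GRAY:        # back edge: cycle found
                return stack[stack.index(u):]
            if color.get(u, WHITE) == WHITE:
                cycle = dfs(u)
                if cycle:
                    return cycle
        color[t] = BLACK
        stack.pop()
        return []

    for t in list(edges):
        if color[t] == WHITE:
            cycle = dfs(t)
            if cycle:
                return cycle
    return []

# First example graph: no cycle, hence no deadlock.
g1 = {"T17": ["T18", "T19"], "T18": ["T20"], "T19": ["T18"]}
print(find_cycle(g1))                      # []
# Adding the (assumed) edge T20 -> T19 closes a cycle: T18, T19, T20 deadlock.
g2 = {**g1, "T20": ["T19"]}
print(sorted(find_cycle(g2)))              # ['T18', 'T19', 'T20']
```

The system would run such a check periodically; every transaction returned is deadlocked, and one of them must be chosen as the victim to roll back.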
Recovery from Deadlock
• To recover from deadlock, the cycles in the
wait for graph must be broken
• The common method for doing this is to roll
back one or more transactions involved in the
deadlock until the system exhibits no further
deadlock situation
• The selection of the transaction (victim) to be rolled back is based
on the following considerations so that transaction that has
incurred minimum cost is rolled back:
– The progress of the transaction : It is generally preferable to roll back a
transaction that has just started
– The number of data items that the transaction has modified : It is
generally preferable to roll back a transaction that has not modified any
data item or has modified few data items
– The amount of computing remaining for the transaction : It is preferable
not to roll back a transaction if it has almost run to completion
– The number of data items that have yet to be accessed by the
transaction : It is preferable not to roll back a transaction if it needs very
few additional data items before completion
– The number of transactions that will be involved in the roll back
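The considerations above can be folded into a single cost function and the minimum-cost transaction chosen as the victim. The field names and weights below are assumptions made for this sketch, not a standard formula:

```python
def pick_victim(cycle_txns):
    """Choose the minimum-cost transaction on the deadlock cycle.
    The cost model is illustrative only."""
    def cost(t):
        return (t["work_done"]               # progress so far: cheap to redo if small
                + 10 * t["items_modified"]   # undo effort for modified data items
                - t["work_remaining"])       # prefer victims with much left to do
    return min(cycle_txns, key=cost)["name"]

# Hypothetical statistics for the deadlocked transactions of the example.
deadlocked = [
    {"name": "T18", "work_done": 80, "items_modified": 5, "work_remaining": 2},
    {"name": "T19", "work_done": 5,  "items_modified": 0, "work_remaining": 90},
    {"name": "T20", "work_done": 40, "items_modified": 2, "work_remaining": 30},
]
print(pick_victim(deadlocked))   # T19: barely started and has modified nothing
```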
• Once the victim has been selected, the particular transaction
must be rolled back
• The rollback may be total or partial
– Rollback the transaction to its start i.e. abort the transaction and
restart it (Total Rollback)
– More effective to roll back transaction only as far as necessary to
break deadlock (Partial Rollback)
• Starvation happens if the same transaction is always chosen as the victim
• The system must ensure that a transaction is picked as the victim only a small
number of times (e.g. by including the number of rollbacks in the cost factor)
• In order to detect deadlocks, the system needs to maintain the
wait-for graph and invoke a deadlock-detection algorithm
periodically to look for cycles.