Chapter 7: Process Synchronization
Background
The Critical-Section Problem
Semaphores
Classical Problems of Synchronization
Background
Concurrent access to shared data may result in data
inconsistency.
Maintaining data consistency requires mechanisms to
ensure the orderly execution of cooperating processes.
The shared-memory solution to the bounded-buffer problem
(Chapter 4) allows at most n – 1 items in the buffer at the
same time. A solution where all n buffers are used is not
simple.
Suppose that we modify the producer-consumer code by
adding a variable counter, initialized to 0 and incremented
each time a new item is added to the buffer
Bounded-Buffer
Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
Bounded-Buffer
Producer process
item nextProduced;
while (1) {
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Bounded-Buffer
Consumer process
item nextConsumed;
while (1) {
while (counter == 0)
; /* do nothing */
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
}
Bounded Buffer
The statements
counter++;
counter--;
must be performed atomically.
Atomic operation means an operation that completes in
its entirety without interruption.
Bounded Buffer
The statement “counter++” may be implemented in
machine language as:
register1 = counter
register1 = register1 + 1
counter = register1
The statement “counter--” may be implemented as:
register2 = counter
register2 = register2 - 1
counter = register2
Bounded Buffer
If both the producer and consumer attempt to update the
buffer concurrently, the assembly language statements
may get interleaved.
Interleaving depends upon how the producer and
consumer processes are scheduled.
Bounded Buffer
Assume counter is initially 5. One interleaving of
statements is:
producer: register1 = counter (register1 = 5)
producer: register1 = register1 + 1 (register1 = 6)
consumer: register2 = counter (register2 = 5)
consumer: register2 = register2 – 1 (register2 = 4)
producer: counter = register1 (counter = 6)
consumer: counter = register2 (counter = 4)
The value of counter may be either 4 or 6, where the
correct result should be 5.
Race Condition
Race condition: the situation where several processes
access and manipulate shared data concurrently; the
final value of the shared data depends upon which
process finishes last.
To prevent race conditions, concurrent processes must
be synchronized.
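As an illustration (not taken from the text), the following sketch uses two POSIX threads in place of the producer and consumer processes; because counter++ and counter-- are not atomic, the program frequently prints a nonzero final value even though the increments and decrements balance. The thread names and loop counts are arbitrary.

/* Illustrative race on a shared counter; compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;                  /* shared data */

static void *producer_like(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* not atomic: load, add, store */
    return NULL;
}

static void *consumer_like(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter--;                       /* races with the increment above */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer_like, NULL);
    pthread_create(&c, NULL, consumer_like, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0)\n", counter);
    return 0;
}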
The Critical-Section Problem
n processes all competing to use some shared data
Each process has a code segment, called critical section,
in which the shared data is accessed.
Problem – ensure that when one process is executing in
its critical section, no other process is allowed to execute
in its critical section.
Solution to Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections.
2. Progress. If no process is executing in its critical section
and there exist some processes that wish to enter their
critical section, then the selection of the processes that
will enter the critical section next cannot be postponed
indefinitely.
3. Bounded Waiting. A bound must exist on the number of
times that other processes are allowed to enter their
critical sections after a process has made a request to
enter its critical section and before that request is
granted.
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n
processes.
Initial Attempts to Solve Problem
Only 2 processes, P0 and P1
General structure of process Pi (other process Pj)
do {
entry section
critical section
exit section
remainder section
} while (1);
Processes may share some common variables to
synchronize their actions.
Algorithm 1
Shared variables:
int turn;
initially turn = 0
turn = 0 ⇒ P0 can enter its critical section
Process P0:
do {
    while (turn != 0) ;
    critical section
    turn = 1;
    remainder section
} while (1);

Process P1:
do {
    while (turn != 1) ;
    critical section
    turn = 0;
    remainder section
} while (1);
Satisfies mutual exclusion,
but not progress
Algorithm 2
Shared variables
boolean flag[2];
initially flag[0] = flag[1] = false
flag[0] = true ⇒ P0 is ready to enter its critical section
Process P0:
do {
    flag[0] = true;
    while (flag[1]) ;
    critical section
    flag[0] = false;
    remainder section
} while (1);

Process P1:
do {
    flag[1] = true;
    while (flag[0]) ;
    critical section
    flag[1] = false;
    remainder section
} while (1);
Satisfies mutual
exclusion, but not
progress requirement.
Algorithm 3
Combined shared variables of
algorithms 1 and 2.
Process P0:
do {
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1) ;
    critical section
    flag[0] = false;
    remainder section
} while (1);

Process P1:
do {
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0) ;
    critical section
    flag[1] = false;
    remainder section
} while (1);
Meets all three requirements;
solves the critical-section
problem for two processes.
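The algorithm maps directly onto threads. Below is a minimal runnable sketch of Algorithm 3 (Peterson's algorithm), assuming POSIX threads and C11 atomics, neither of which appears in the slides; sequentially consistent atomics are used because plain shared variables give no ordering guarantees on modern hardware. Identifiers such as worker and shared_count are illustrative.

/* Sketch of Algorithm 3 for two threads; compile with -pthread. */
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];
static atomic_int  turn;
static long shared_count = 0;            /* protected by the algorithm */

static void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);    /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                            /* busy wait */
        shared_count++;                  /* critical section */
        atomic_store(&flag[i], false);   /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_count = %ld (expected 200000)\n", shared_count);
    return 0;
}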
Semaphores
Synchronization tool that does not require busy waiting.
Semaphore S – integer variable
can only be accessed via two indivisible (atomic)
operations
wait(S):
    while (S <= 0)
        ; /* no-op */
    S--;
signal(S):
    S++;
Critical Section of n Processes
Shared data:
semaphore mutex; //initially mutex = 1
Process Pi:
do {
wait(mutex);
critical section
signal(mutex);
remainder section
} while (1);
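A minimal sketch of this usage with POSIX semaphores and threads (an assumption; the slides speak of abstract processes): wait(mutex) maps to sem_wait and signal(mutex) to sem_post, with mutex initialized to 1. The shared variable and loop count are illustrative.

/* Mutual exclusion with a POSIX semaphore; compile with -pthread. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;                      /* semaphore used as a mutex */
static int shared = 0;

static void *proc(void *arg) {
    for (int k = 0; k < 100000; k++) {
        sem_wait(&mutex);                /* wait(mutex) */
        shared++;                        /* critical section */
        sem_post(&mutex);                /* signal(mutex) */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);              /* initially mutex = 1 */
    pthread_t a, b;
    pthread_create(&a, NULL, proc, NULL);
    pthread_create(&b, NULL, proc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&mutex);
    return 0;
}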
Semaphore Implementation
Define a semaphore as a record
typedef struct {
int value;
struct process *L;
} semaphore;
Assume two simple operations:
block suspends the process that invokes it.
wakeup(P) resumes the execution of a blocked process P.
Implementation
Semaphore operations now defined as
wait(S):
S.value--;
if (S.value < 0) {
add this process to S.L;
block;
}
signal(S):
S.value++;
if (S.value <= 0) {
remove a process P from S.L;
wakeup(P);
}
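A user-level sketch of this blocking semaphore, assuming POSIX threads: a pthread mutex protects value and a condition variable stands in for the waiting list S.L and the block/wakeup primitives. The test is done before the decrement, a slight departure from the slide's ordering, so that value never goes negative; this fits condition-variable semantics and avoids lost wakeups. Names like sem_wait_op are illustrative.

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  queue;               /* stands in for the list S.L */
} semaphore;

void sem_init_op(semaphore *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void sem_wait_op(semaphore *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                /* would be "block" in the slide */
        pthread_cond_wait(&s->queue, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void sem_signal_op(semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->queue);      /* wakeup one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}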
Semaphore as a General Synchronization Tool
Execute B in Pj only after A executed in Pi
Use semaphore flag initialized to 0
Code:
Pi:
    A
    signal(flag);
Pj:
    wait(flag);
    B
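A runnable sketch of this ordering constraint with POSIX semaphores and threads (an assumption): flag is created with value 0, so Pj blocks in wait(flag) until Pi has executed A and signalled.

/* Enforce "B runs only after A"; compile with -pthread. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t flag;

static void *Pi(void *arg) {
    printf("A\n");                       /* statement A */
    sem_post(&flag);                     /* signal(flag) */
    return NULL;
}

static void *Pj(void *arg) {
    sem_wait(&flag);                     /* wait(flag): blocks until A is done */
    printf("B\n");                       /* statement B */
    return NULL;
}

int main(void) {
    sem_init(&flag, 0, 0);               /* initially flag = 0 */
    pthread_t ti, tj;
    pthread_create(&tj, NULL, Pj, NULL);
    pthread_create(&ti, NULL, Pi, NULL);
    pthread_join(ti, NULL);
    pthread_join(tj, NULL);
    sem_destroy(&flag);
    return 0;
}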
Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for
an event that can be caused by only one of the waiting
processes.
Let S and Q be two semaphores initialized to 1
P0:
    wait(S);
    wait(Q);
    signal(S);
    signal(Q);
P1:
    wait(Q);
    wait(S);
    signal(Q);
    signal(S);
Starvation – indefinite blocking. A process may never be
removed from the semaphore queue in which it is suspended.
Two Types of Semaphores
Counting semaphore – integer value can range over
an unrestricted domain.
Binary semaphore – integer value can range only
between 0 and 1; can be simpler to implement.
Can implement a counting semaphore S as a binary
semaphore.
Implementing S as a Binary Semaphore
Data structures:
binary-semaphore S1, S2;
int C;
Initialization:
S1 = 1
S2 = 0
C = initial value of semaphore S
Implementing S
wait operation
wait(S1);
C--;
if (C < 0) {
signal(S1);
wait(S2);
}
signal(S1);
signal operation
wait(S1);
C ++;
if (C <= 0)
signal(S2);
else
signal(S1);
Classical Problems of Synchronization
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem
Bounded-Buffer Problem
Shared data
semaphore full, empty, mutex;
Initially:
full = 0, empty = n, mutex = 1
Bounded-Buffer Problem Producer Process
do {
…
produce an item in nextp
…
wait(empty);
wait(mutex);
…
add nextp to buffer
…
signal(mutex);
signal(full);
} while (1);
Bounded-Buffer Problem Consumer Process
do {
wait(full);
wait(mutex);
…
remove an item from buffer to nextc
…
signal(mutex);
signal(empty);
…
consume the item in nextc
…
} while (1);
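The complete scheme above can be tried directly with POSIX semaphores and threads (an assumption; the slides use abstract processes and an unspecified buffer type). In this sketch the buffer holds plain ints, and nextp/nextc are represented by the local variable item.

/* Bounded buffer with semaphores; compile with -pthread. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 10

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty, full, mutex;         /* empty = n, full = 0, mutex = 1 */

static void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);                /* wait(empty) */
        sem_wait(&mutex);                /* wait(mutex) */
        buffer[in] = item;               /* add nextp to buffer */
        in = (in + 1) % N;
        sem_post(&mutex);                /* signal(mutex) */
        sem_post(&full);                 /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int k = 0; k < 100; k++) {
        sem_wait(&full);                 /* wait(full) */
        sem_wait(&mutex);                /* wait(mutex) */
        int item = buffer[out];          /* remove an item into nextc */
        out = (out + 1) % N;
        sem_post(&mutex);                /* signal(mutex) */
        sem_post(&empty);                /* signal(empty) */
        printf("consumed %d\n", item);   /* consume the item */
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}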
Readers-Writers Problem
Shared data
semaphore mutex, wrt;
int readcount;
Initially
mutex = 1, wrt = 1, readcount = 0
Readers-Writers Problem Writer Process
wait(wrt);
…
writing is performed
…
signal(wrt);
Readers-Writers Problem Reader Process
wait(mutex);
readcount++;
if (readcount == 1)
wait(wrt);
signal(mutex);
…
reading is performed
…
wait(mutex);
readcount--;
if (readcount == 0)
signal(wrt);
signal(mutex);
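A runnable sketch of this first readers-writers solution, assuming POSIX semaphores and threads; as in the slide version, a steady stream of readers can starve a writer. The thread counts and the shared integer are illustrative.

/* First readers-writers solution; compile with -pthread. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex, wrt;                 /* mutex = 1, wrt = 1 */
static int readcount = 0;
static int shared_data = 0;

static void *writer(void *arg) {
    sem_wait(&wrt);                      /* wait(wrt) */
    shared_data++;                       /* writing is performed */
    sem_post(&wrt);                      /* signal(wrt) */
    return NULL;
}

static void *reader(void *arg) {
    sem_wait(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);                  /* first reader locks out writers */
    sem_post(&mutex);

    printf("read %d\n", shared_data);    /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);                  /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_t r1, r2, w;
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    pthread_join(w, NULL);
    return 0;
}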
Dining-Philosophers Problem
Shared data
semaphore chopstick[5];
Initially all values are 1
Dining-Philosophers Problem
Philosopher i:
do {
wait(chopstick[i]);
wait(chopstick[(i+1) % 5]);
…
eat
…
signal(chopstick[i]);
signal(chopstick[(i+1) % 5]);
…
think
…
} while (1);
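A thread-based sketch of this loop with POSIX semaphores (an assumption). Note that the plain version can deadlock if all five philosophers pick up their left chopstick at once; as one common remedy, not given on the slide, the sketch lets the last philosopher pick up the right chopstick first.

/* Dining philosophers; compile with -pthread. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5
static sem_t chopstick[N];               /* all initialized to 1 */

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    int first = i, second = (i + 1) % N;
    /* Asymmetry to avoid the all-pick-left deadlock. */
    if (i == N - 1) { first = (i + 1) % N; second = i; }
    for (int round = 0; round < 3; round++) {
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        printf("philosopher %d eats\n", i);   /* eat */
        sem_post(&chopstick[first]);
        sem_post(&chopstick[second]);
        /* think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}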
Chapter 8: Deadlocks
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Combined Approach to Deadlock Handling
The Deadlock Problem
A set of blocked processes each holding a resource and
waiting to acquire a resource held by another process in
the set.
Example
System has 2 tape drives.
P1 and P2 each hold one tape drive and each needs another
one.
Example
semaphores A and B, initialized to 1
P0:
    wait(A);
    wait(B);
P1:
    wait(B);
    wait(A);
Bridge Crossing Example
Traffic only in one direction.
Each section of a bridge can be viewed as a resource.
If a deadlock occurs, it can be resolved if one car backs
up (preempt resources and rollback).
Several cars may have to be backed up if a deadlock
occurs.
Starvation is possible.
System Model
Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
request
use
release
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously.
Mutual exclusion: only one process at a time can use a
resource.
Hold and wait: a process holding at least one resource
is waiting to acquire additional resources held by other
processes.
No preemption: a resource can be released only
voluntarily by the process holding it, after that process
has completed its task.
Circular wait: there exists a set {P0, P1, …, Pn} of
waiting processes such that P0 is waiting for a resource
that is held by P1, P1 is waiting for a resource that is held
by P2, …, Pn–1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0.
Resource-Allocation Graph
A set of vertices V and a set of edges E.
V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in
the system.
R = {R1, R2, …, Rm}, the set consisting of all resource types
in the system.
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi
Resource-Allocation Graph (Cont.)
Process: Pi
Resource type with 4 instances: Rj
Pi requests an instance of Rj: request edge Pi → Rj
Pi is holding an instance of Rj: assignment edge Rj → Pi
Example of a Resource Allocation Graph
Resource Allocation Graph With A Deadlock
Resource Allocation Graph With A Cycle But No Deadlock
Basic Facts
If graph contains no cycles ⇒ no deadlock.
If graph contains a cycle ⇒
    if only one instance per resource type, then deadlock.
    if several instances per resource type, possibility of deadlock.
Methods for Handling Deadlocks
Ensure that the system will never enter a deadlock state.
Allow the system to enter a deadlock state and then
recover.
Ignore the problem and pretend that deadlocks never
occur in the system; used by most operating systems,
including UNIX.
Deadlock Prevention
Restrain the ways request can be made.
Mutual Exclusion – not required for sharable resources;
must hold for nonsharable resources.
Hold and Wait – must guarantee that whenever a
process requests a resource, it does not hold any other
resources.
Require process to request and be allocated all its
resources before it begins execution, or allow process to
request resources only when the process has none.
Low resource utilization; starvation possible.
Deadlock Prevention (Cont.)
No Preemption –
If a process that is holding some resources requests
another resource that cannot be immediately allocated to it,
then all resources currently being held are released.
Preempted resources are added to the list of resources for
which the process is waiting.
Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
Circular Wait – impose a total ordering of all resource
types, and require that each process requests resources
in an increasing order of enumeration.
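A small sketch of this circular-wait rule with two pthread mutexes standing in for resources (the names res_a and res_b and the numbering are illustrative, and pthreads are an assumption): because every thread acquires the lower-numbered resource first, no cycle of waits can form.

/* Resource ordering to prevent circular wait; compile with -pthread. */
#include <pthread.h>

static pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;   /* order 1 */
static pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;   /* order 2 */

static void *task(void *arg) {
    pthread_mutex_lock(&res_a);          /* always lower-numbered first */
    pthread_mutex_lock(&res_b);
    /* use both resources */
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}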
Deadlock Avoidance
Requires that the system has some additional a priori information
available.
Simplest and most useful model requires that each
process declare the maximum number of resources of
each type that it may need.
The deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that there can
never be a circular-wait condition.
Resource-allocation state is defined by the number of
available and allocated resources, and the maximum
demands of the processes.
Safe State
When a process requests an available resource, system must
decide if immediate allocation leaves the system in a safe state.
System is in safe state if there exists a safe sequence of all
processes.
Sequence <P1, P2, …, Pn> is safe if for each Pi, the resources
that Pi can still request can be satisfied by currently available
resources + resources held by all the Pj, with j < i.
If Pi resource needs are not immediately available, then Pi can wait
until all Pj have finished.
When Pj is finished, Pi can obtain needed resources, execute,
return allocated resources, and terminate.
When Pi terminates, Pi+1 can obtain its needed resources, and so
on.
Basic Facts
If a system is in safe state ⇒ no deadlocks.
If a system is in unsafe state ⇒ possibility of deadlock.
Avoidance ⇒ ensure that a system will never enter an
unsafe state.
Safe, Unsafe, Deadlock State
Resource-Allocation Graph Algorithm
Claim edge Pi → Rj indicates that process Pi may request
resource Rj; represented by a dashed line.
Claim edge converts to request edge when a process
requests a resource.
When a resource is released by a process, assignment
edge reconverts to a claim edge.
Resources must be claimed a priori in the system.
Resource-Allocation Graph For Deadlock Avoidance
Unsafe State In Resource-Allocation Graph
Banker’s Algorithm
Multiple instances.
Each process must a priori claim maximum use.
When a process requests a resource it may have to wait.
When a process gets all its resources it must return them
in a finite amount of time.
Data Structures for the Banker’s Algorithm
Let n = number of processes, and m = number of resource types.
Available: Vector of length m. If available [j] = k, there are
k instances of resource type Rj available.
Max: n x m matrix. If Max [i,j] = k, then process Pi may
request at most k instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] = k then Pi is
currently allocated k instances of Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k
more instances of Rj to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Safety Algorithm
1. Let Work and Finish be vectors of length m and n,
respectively. Initialize:
    Work = Available
    Finish[i] = false for i = 1, 2, …, n.
2. Find an i such that both:
    (a) Finish[i] == false
    (b) Needi ≤ Work
    If no such i exists, go to step 4.
3. Work = Work + Allocationi
    Finish[i] = true
    go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe
state.
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k,
then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error
condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must
wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying
the state as follows:
    Available = Available – Requesti;
    Allocationi = Allocationi + Requesti;
    Needi = Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi.
• If unsafe ⇒ Pi must wait, and the old resource-allocation
state is restored.
Example of Banker’s Algorithm
5 processes P0 through P4; 3 resource types: A (10 instances),
B (5 instances), and C (7 instances).
Snapshot at time T0:
        Allocation   Max      Available
        A B C        A B C    A B C
P0      0 1 0        7 5 3    3 3 2
P1      2 0 0        3 2 2
P2      3 0 2        9 0 2
P3      2 1 1        2 2 2
P4      0 0 2        4 3 3
Example (Cont.)
The content of the matrix Need is defined to be Max –
Allocation.
        Need
        A B C
P0      7 4 3
P1      1 2 2
P2      6 0 0
P3      0 1 1
P4      4 3 1
The system is in a safe state since the sequence < P1, P3, P4,
P2, P0> satisfies safety criteria.
Example P1 Request (1,0,2) (Cont.)
Check that Requesti ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒
true.
        Allocation   Need     Available
        A B C        A B C    A B C
P0      0 1 0        7 4 3    2 3 0
P1      3 0 2        0 2 0
P2      3 0 2        6 0 0
P3      2 1 1        0 1 1
P4      0 0 2        4 3 1
Executing safety algorithm shows that sequence <P1, P3, P4,
P0, P2> satisfies safety requirement.
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?
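The safety algorithm and the resource-request algorithm can be put together in a short program. The sketch below (function and variable names such as is_safe and request_resources are illustrative, not from the slides) hard-codes the snapshot from this example and then applies P1's request for (1,0,2); the two questions above can be checked the same way by calling request_resources with (3,3,0) for P4 and (0,2,0) for P0.

/* Banker's algorithm sketch for the example snapshot. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define N 5                              /* processes */
#define M 3                              /* resource types A, B, C */

int available[M];
int max_claim[N][M];
int allocation[N][M];
int need[N][M];                          /* need = max_claim - allocation */

/* Safety algorithm: returns true if a safe sequence exists. */
bool is_safe(void) {
    int work[M];
    bool finish[N] = { false };
    memcpy(work, available, sizeof work);
    for (int count = 0; count < N; count++) {
        bool found = false;
        for (int i = 0; i < N && !found; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                    /* Pi can run to completion */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                found = true;
            }
        }
        if (!found) break;
    }
    for (int i = 0; i < N; i++)
        if (!finish[i]) return false;
    return true;
}

/* Resource-request algorithm for process i. */
bool request_resources(int i, const int request[M]) {
    for (int j = 0; j < M; j++)
        if (request[j] > need[i][j]) return false;   /* error: exceeded claim */
    for (int j = 0; j < M; j++)
        if (request[j] > available[j]) return false; /* Pi must wait */
    for (int j = 0; j < M; j++) {        /* pretend to allocate */
        available[j]     -= request[j];
        allocation[i][j] += request[j];
        need[i][j]       -= request[j];
    }
    if (is_safe()) return true;          /* safe: grant the request */
    for (int j = 0; j < M; j++) {        /* unsafe: restore old state */
        available[j]     += request[j];
        allocation[i][j] -= request[j];
        need[i][j]       += request[j];
    }
    return false;
}

int main(void) {
    int alloc0[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max0[N][M]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail0[M]    = {3,3,2};
    memcpy(allocation, alloc0, sizeof allocation);
    memcpy(max_claim, max0, sizeof max_claim);
    memcpy(available, avail0, sizeof available);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max_claim[i][j] - allocation[i][j];

    printf("initial state safe? %s\n", is_safe() ? "yes" : "no");
    int req[M] = {1, 0, 2};              /* P1 requests (1,0,2) */
    printf("grant P1 (1,0,2)? %s\n",
           request_resources(1, req) ? "yes" : "no");
    return 0;
}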
Deadlock Detection
Allow system to enter deadlock state
Detection algorithm
Recovery scheme
Single Instance of Each Resource Type
Maintain wait-for graph
Nodes are processes.
Pi → Pj if Pi is waiting for Pj.
Periodically invoke an algorithm that searches for a cycle
in the graph.
An algorithm to detect a cycle in a graph requires on the
order of n² operations, where n is the number of vertices
in the graph.
Resource-Allocation Graph and Wait-for Graph
Resource-allocation graph (left) and corresponding wait-for graph (right).
Several Instances of a Resource Type
Available: A vector of length m indicates the number of
available resources of each type.
Allocation: An n x m matrix defines the number of
resources of each type currently allocated to each
process.
Request: An n x m matrix indicates the current request
of each process. If Request[i,j] = k, then process Pi is
requesting k more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n,
respectively. Initialize:
    (a) Work = Available
    (b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
    Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both:
    (a) Finish[i] == false
    (b) Requesti ≤ Work
    If no such i exists, go to step 4.
Detection Algorithm (Cont.)
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in
a deadlock state. Moreover, if Finish[i] == false, then Pi is
deadlocked.
The algorithm requires on the order of O(m × n²) operations to detect
whether the system is in a deadlocked state.
Example of Detection Algorithm
Five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances).
Snapshot at time T0:
        Allocation   Request   Available
        A B C        A B C     A B C
P0      0 1 0        0 0 0     0 0 0
P1      2 0 0        2 0 2
P2      3 0 3        0 0 0
P3      2 1 1        1 0 0
P4      0 0 2        0 0 2
Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true
for all i.
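A sketch of the detection algorithm applied to the snapshot above (function and variable names are illustrative, not from the slides); with this data every process can finish, so no deadlock is reported. Raising P2's request by one instance of C, as in the continuation of the example, makes the algorithm report the deadlock among P1 through P4.

/* Deadlock detection sketch for the example snapshot. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define N 5                              /* processes */
#define M 3                              /* resource types A, B, C */

int available[M];
int allocation[N][M];
int request[N][M];

/* Returns the number of deadlocked processes; marks them in deadlocked[]. */
int detect_deadlock(bool deadlocked[N]) {
    int work[M];
    bool finish[N];
    memcpy(work, available, sizeof work);
    for (int i = 0; i < N; i++) {
        bool has_alloc = false;
        for (int j = 0; j < M; j++)
            if (allocation[i][j] != 0) has_alloc = true;
        finish[i] = !has_alloc;          /* processes holding nothing finish */
    }
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { ok = false; break; }
            if (ok) {                    /* Pi's request can be met now */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    int count = 0;
    for (int i = 0; i < N; i++) {
        deadlocked[i] = !finish[i];
        if (deadlocked[i]) count++;
    }
    return count;
}

int main(void) {
    /* Snapshot at time T0 from the example above. */
    int alloc0[N][M] = {{0,1,0},{2,0,0},{3,0,3},{2,1,1},{0,0,2}};
    int req0[N][M]   = {{0,0,0},{2,0,2},{0,0,0},{1,0,0},{0,0,2}};
    int avail0[M]    = {0,0,0};
    memcpy(allocation, alloc0, sizeof allocation);
    memcpy(request, req0, sizeof request);
    memcpy(available, avail0, sizeof available);

    bool dl[N];
    int k = detect_deadlock(dl);
    printf("%d deadlocked process(es)\n", k);   /* expected: 0 */
    return 0;
}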
Example (Cont.)
P2 requests an additional instance of type C.
Request
ABC
P0 0 0 0
P1 2 0 1
P2 0 0 1
P3 1 0 0
P4 0 0 2
State of system?
Can reclaim resources held by process P0, but there are insufficient
resources to fulfill the other processes' requests.
Deadlock exists, consisting of processes P1, P2, P3, and P4.
Detection-Algorithm Usage
When, and how often, to invoke depends on:
How often a deadlock is likely to occur?
How many processes will need to be rolled back?
one for each disjoint cycle
If detection algorithm is invoked arbitrarily, there may be
many cycles in the resource graph and so we would not
be able to tell which of the many deadlocked processes
“caused” the deadlock.
Recovery from Deadlock: Process Termination
Abort all deadlocked processes.
Abort one process at a time until the deadlock cycle is
eliminated.
In which order should we choose to abort?
Priority of the process.
How long process has computed, and how much longer to
completion.
Resources the process has used.
Resources process needs to complete.
How many processes will need to be terminated.
Is process interactive or batch?
Recovery from Deadlock: Resource Preemption
Selecting a victim – minimize cost.
Rollback – return to some safe state, restart process for
that state.
Starvation – the same process may always be picked as the
victim; include the number of rollbacks in the cost factor.
Combined Approach to Deadlock Handling
Combine the three basic approaches
prevention
avoidance
detection
allowing the use of the optimal approach for each class of
resources in the system.
Partition resources into hierarchically ordered classes.
Use most appropriate technique for handling deadlocks
within each class.
Reference
Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, Operating System Concepts, Sixth Edition, John Wiley & Sons, Inc., 2002.