OS - Unit-2
Process Synchronization: Critical Section Problem, Bakery Algorithm and Peterson’s Solution;
Synchronization Hardware, Semaphores; Classic Problems of Synchronization- Readers and
Writers Problem, Dining Philosophers Problem; Monitors.
Process Synchronization
Process synchronization is the task of coordinating the execution of processes so that no two
processes access the same shared data or resource at the same time.
It is especially needed in a multi-process system, where multiple processes run together and
more than one process may try to access the same shared resource or data concurrently.
Concurrent access can leave shared data inconsistent: a change made by one process is not necessarily
visible when another process reads the same data. To avoid this kind of inconsistency,
the processes need to be synchronized with each other.
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: The execution of one process does not affect the execution of
other processes.
Cooperative Process: A process that can affect or be affected by other processes executing
in the system.
The process synchronization problem arises only with cooperative processes, because
resources are shared among cooperative processes.
Race Condition:
A race condition happens when two or more processes try to use or change the same data at the same
time. This can cause mistakes because the result depends on which process acts first.
To prevent this, we can use locks or make the important part of the code (called the critical section)
run like a single step that can't be interrupted. This keeps the data safe and correct.
Sections of a Program
Here are the four essential elements of the critical section:
Entry Section: The part of the process that requests permission to enter the critical section.
Critical Section: The part that allows one process at a time to enter and modify the shared variable.
Exit Section: Allows other processes waiting in the entry section to enter the critical section.
It also ensures that a process that has finished its execution leaves through this section.
Remainder Section: All other parts of the code, which are not in the critical, entry, or
exit sections, are known as the remainder section.
In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter in the critical section next, and the
selection cannot be postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
There are two main types of kernels to manage critical sections in operating systems:
1. Preemptive Kernel:
o Allows a running process to be paused and replaced by another process, even when it's
in kernel mode.
o This ensures better responsiveness but may require more complex synchronization
methods.
2. Non-Preemptive Kernel:
o Does not allow a process in kernel mode to be interrupted.
o Since only one process runs in the kernel at a time, it's easier to manage and avoids
race conditions.
Preemptive kernels are common in systems like Linux and Solaris, while non-preemptive kernels are
simpler but less flexible.
Peterson’s Solution:
Peterson’s Solution is a classical software-based solution to the critical section problem for two
processes. It uses two shared variables:
flag[2]: flag[i] is true when process Pi wants to enter the critical section.
turn: indicates whose turn it is to enter the critical section.
How it Works:
1. Each process sets a flag when it wants to enter the critical section.
2. A TURN variable decides whose turn it is to enter.
3. If both processes want to enter, the one whose turn it is gets priority.
4. Once a process finishes, it gives turn to the other process.
This ensures mutual exclusion (one process at a time), progress, and bounded waiting (each process
gets a turn).
Code Explanation:
1. flag[i] = true;
o This process (Process Pi) sets its own flag to true, indicating that it wants to enter the
critical section.
2. turn = j;
o This gives the turn to the other process (Process Pj), signaling that Pj should get
priority if both processes want to enter.
3. while (flag[j] && turn == j);
o This is a waiting loop (also called a busy wait).
o The process will keep waiting if:
The other process (Pj) has its flag set to true (meaning it also wants to enter).
The turn is set to Pj (giving Pj priority).
o The process will only proceed when either:
Pj no longer wants to enter (flag[j] = false), or
It's Pi's turn.
4. Critical Section
o Once the loop condition is false, Pi enters the critical section and performs its task.
5. flag[i] = false;
o After finishing its work in the critical section, Pi sets its flag back to false, indicating it
no longer needs the resource.
6. Remainder Section
o This is the remaining part of the code where no shared resources are involved.
7. while (true);
o The process keeps running continuously, repeating the above steps.
Peterson’s Solution preserves all three conditions:
Mutual Exclusion is assured as only one process can access the critical section at any time.
Progress is also assured, as a process outside the critical section does not block
other processes from entering the critical section.
Bounded Waiting is preserved as every process gets a fair chance.
Limitations:
It involves busy waiting. (In Peterson’s solution, the statement "while (flag[j] &&
turn == j);" is responsible for this. Busy waiting is not favored because it wastes CPU
cycles that could be used to perform other tasks.)
It is limited to 2 processes.
Peterson’s solution is not guaranteed to work on modern CPU architectures, because
compilers and processors may reorder memory operations.
Synchronization Hardware:
Sometimes the critical section problem is also resolved by hardware. Some operating
systems offer a lock functionality where a process acquires a lock when entering the critical section
and releases the lock after leaving it.
When another process tries to enter the critical section, it cannot do so because the lock is
taken. It can enter only when the lock is free, by acquiring the lock itself.
Two common hardware instructions are:
1. TestAndSet() 2. Swap()
TestAndSet():
The TestAndSet() hardware instruction is an atomic instruction. Atomic means both the test
operation and the set operation are executed in one machine cycle, as a single step.
If two different processes execute TestAndSet() simultaneously, each on a different CPU, the
instructions are executed sequentially in some arbitrary order.
How It Works:
1. Test: It checks the value of a memory location (like a lock).
2. Set: If the location’s value is 0, it changes it to 1 (indicating the lock is taken).
boolean lock = false;

boolean TestAndSet(boolean *target)
{
    boolean rv = *target;   // test: read the old value
    *target = true;         // set: mark the lock as taken
    return rv;
}

do
{
    while (TestAndSet(&lock))
        ;                   // busy wait until the old value was false
    // critical section
    lock = false;
    // remainder section
} while (true);
How It Works:
1. A process tries to enter the critical section.
2. If TestAndSet(lock) returns false, the process enters the critical section.
3. While inside, the lock remains true, so no other process can enter.
4. When the process finishes, it sets the lock back to false, allowing other processes to proceed.
Ensures only one process enters at a time (Mutual Exclusion).
No queue system; processes keep checking until the lock becomes available.
Swap() :
The Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting lock to true,
a local variable key is set to true and then swapped with lock.
How It Works:
1. A variable called key is set to true.
2. The key value is then swapped with the lock value.
3. If lock = false initially, swapping makes lock = true and key = false. This means the process
can enter the critical section.
4. While one process is in the critical section, the next process will keep swapping but will get
key = true, meaning they must wait.
5. When the first process exits the critical section, it sets lock = false, allowing other processes to
proceed.
Ensures Mutual Exclusion (only one process in the critical section at a time).
Uses swapping logic for synchronization.
Swap() pseudocode:
boolean lock = false;
boolean key;

void swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

do
{
    key = true;
    while (key)
        swap(&lock, &key);   // keep swapping until the old lock value was false
    // critical section
    lock = false;
    // remainder section
} while (true);
Mutex Locks
A mutex (short for mutual exclusion) is a software-based locking mechanism used to control access
to a critical section (a part of code where shared resources are accessed).
How It Works:
1. A lock is placed on the critical section.
2. When a process wants to enter, it must acquire the lock.
3. Once inside, other processes are blocked from entering.
4. When the process finishes, it releases the lock, allowing another process to enter.
Ensures mutual exclusion (one process at a time).
Useful in multi-threading and multi-processor environments.
do
{
acquire lock
critical section
release lock
remainder section
}
while(TRUE)
Acquire:
acquire()
{
while(!available)
; /*busy wait */
available = false;
}
Release:
release()
{
available=true;
}
How It Works:
1. Acquire Lock:
o Before entering the critical section, a process tries to get the lock.
o If the lock is already taken, the process keeps waiting (this is called busy waiting).
2. Critical Section:
o The process executes its important code safely.
3. Release Lock:
o After finishing, the process releases the lock so others can enter.
Key Problem (Spinlock):
In the acquire phase, a waiting process loops continuously and wastes CPU cycles. A lock
that is waited for in this way is called a spinlock.
Spinlocks are wasteful on single-CPU systems, because the spinning process occupies the very
CPU that the lock holder needs in order to finish and release the lock.
Semaphores:
Semaphores are integer variables that are primarily used to solve the critical section
problem by combining two atomic operations, wait() and signal(), for synchronization.
A semaphore is a signaling mechanism: a thread waiting on a semaphore can be signaled
by another thread. This is different from a mutex, which can be released only by the thread
that acquired it.
A semaphore uses two atomic operations, "wait" and "signal", for process synchronization.
1. wait() – Decreases the semaphore value and may block a thread if the resource is not
available.
2. signal() – Increases the semaphore value and allows waiting threads to proceed.
Wait
The wait operation decrements the value of its argument S when it is positive. If S is zero or
negative, the process waits until S becomes positive before decrementing it.
wait(S)
{
while (S<=0);
S--;
}
Signal
The signal operation increments the value of its argument S, allowing a waiting process to proceed.
signal(S)
{
    S++;
}
Binary Semaphores: A binary semaphore (or mutex lock) is a synchronization tool that ensures
only one process accesses a shared resource at a time. It has two states:
1. 1 (Unlocked) – The resource is available. A process can enter its critical section.
2. 0 (Locked) – The resource is in use. Other processes must wait.
How it works:
Initially, the semaphore is set to 1 (resource is free). A process calls wait() before entering
its critical section, which sets the semaphore to 0 (locked). When it leaves, it calls signal(),
which sets the semaphore back to 1 and wakes any waiting process.
Counting Semaphores: A counting semaphore is used to manage access to a resource that has
multiple instances available. Unlike a binary semaphore (which is just 0 or 1), a counting semaphore
can have any positive value.
How it works:
1. The semaphore is initialized to the total number of available resource instances.
2. When a process wants to use the resource:
o It checks if the semaphore value is greater than 0.
o If yes, it decreases the semaphore by 1 and enters the critical section.
o If no, it waits until a resource becomes available.
3. When the process finishes using the resource:
o It increases the semaphore by 1, making the resource available for others.
This allows multiple processes to use the resource simultaneously, as long as the limit is not
exceeded.
Advantages of Semaphores
Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are more efficient than some other methods of
synchronization.
When wait() is implemented by blocking (putting the process to sleep) rather than busy
waiting, no processor time is wasted repeatedly checking whether a process may enter
the critical section.
Semaphores are implemented in the machine-independent code of the microkernel, so
they are machine independent.
Disadvantages of Semaphores
Semaphores are complicated, so the wait and signal operations must be implemented in
the correct order to prevent deadlocks.
Semaphores are impractical for large-scale use as their use leads to loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout
for the system.
Semaphores may lead to priority inversion, where low-priority processes access
the critical section first and high-priority processes later.
Bounded Buffer (Producer-Consumer) Problem:
A bounded buffer lets multiple producers and multiple consumers share a single buffer of fixed
size. Producers write data to the buffer and consumers read data from the buffer. Three
semaphores are used: mutex (initialized to 1) for mutual exclusion on the buffer, empty
(initialized to the buffer size) counting empty slots, and full (initialized to 0) counting
filled slots.
Producer Process:
do
{
    // produce an item
    wait(empty);     // wait for an empty slot
    wait(mutex);     // lock the buffer
    // add the item to the buffer
    signal(mutex);   // unlock the buffer
    signal(full);    // one more filled slot
} while (1);
Consumer Process:
do
{
    wait(full);      // wait for a filled slot
    wait(mutex);     // lock the buffer
    // remove an item from the buffer
    signal(mutex);   // unlock the buffer
    signal(empty);   // one more empty slot
    // consume the item
} while (1);
Readers and Writers Problem:
1. Multiple readers can read simultaneously (since reading does not modify data).
2. Only one writer can write at a time (to prevent data corruption).
3. If a writer is writing, no other process (reader or writer) can access the resource until it
finishes.
This problem occurs in operating systems, databases, and file systems, where efficient handling of
read and write operations is necessary to avoid data inconsistency and performance bottlenecks.
Semaphore w: Ensures that only one writer, or multiple readers, can access the resource at a
time.
Semaphore m: Protects updates to the read_count variable.
Variable read_count: Keeps track of how many readers are currently accessing the resource.
Writer Process:
while(TRUE)
{
    wait(w);
    // write the data
    signal(w);
}
Reader Process:
while(TRUE)
{
    // acquire lock
    wait(m);
    read_count++;
    if (read_count == 1)
        wait(w);       // first reader blocks writers
    // release lock
    signal(m);
    // read the data
    // acquire lock
    wait(m);
    read_count--;
    if (read_count == 0)
        signal(w);     // last reader unblocks writers
    // release lock
    signal(m);
}
Code Explanation:
Writer Process:
1. Wait (wait(w)) – Ensures no readers are reading.
2. Write the data.
3. Release (signal(w)) – Allows readers/writers to proceed.
Reader Process:
1. Acquire lock (wait(m)), increase read_count.
2. If it’s the first reader, it blocks writers (wait(w)).
3. Release lock (signal(m)).
4. Read the data.
5. Acquire lock (wait(m)), decrease read_count.
6. If it’s the last reader, it unblocks writers (signal(w)).
7. Release lock (signal(m)).
Bakery Algorithm
The Bakery Algorithm is a method used in computer science to ensure that multiple processes (or
programs) can take turns accessing a shared resource without interfering with each other. This is
called solving the mutual exclusion problem.
In computing, this method helps multiple processes take turns using a resource (like a printer or a file)
in a fair and orderly way, without causing errors or conflicts.
The Bakery Algorithm works like a real bakery where customers take a ticket before being served.
Here’s how it works step by step:
1. Taking a Ticket: When a process (a program trying to use a shared resource) wants to enter
the critical section (the part where it uses the shared resource), it picks a ticket number. This
number is like a queue number in a bakery.
2. Waiting for Turn: All processes compare their ticket numbers. The process with the smallest
number gets to go first.
3. Handling Ties: If two processes get the same number, the one with the smaller ID (like a
smaller customer number in line) gets priority.
4. Entering the Critical Section: The process with the smallest number enters the critical
section and uses the shared resource.
5. Finishing Up: Once done, the process leaves, and the next smallest-numbered process gets its
turn.
This method ensures fairness—first come, first served—so no process is left waiting forever!
Algorithm Pseudocode –
repeat
choosing[i] := true;
number[i] := max(number[0], number[1], ..., number[n - 1])+1;
choosing[i] := false;
for j := 0 to n - 1
do begin
while choosing[j] do no-op;
while number[j] != 0
and (number[j], j) < (number[i], i) do no-op;
end;
critical section
number[i] := 0;
remainder section
until false;
Code Explanation:
choosing[i] is true while process Pi picks its ticket, so other processes wait until the
choice is complete.
number[i] is set to one more than the largest ticket currently held by any process.
For every other process Pj, Pi first waits while Pj is choosing, and then while Pj holds a
nonzero ticket (number[j], j) that is smaller than Pi's own (number[i], i); ties in ticket
number are broken in favour of the smaller process ID.
After leaving the critical section, Pi sets number[i] to 0, giving up its ticket.
DEADLOCKS
A deadlock is a situation where each process in a set waits for a resource that is assigned to
another process in the set. None of the processes gets executed, since the resource each one
needs is held by another process that is itself waiting for some other resource to be
released.
Let us assume that there are three processes P1, P2, and P3, and three different resources
R1, R2, and R3. R1 is assigned to P1, R2 is assigned to P2, and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't
complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops its
execution because it can't continue without R3. P3 then demands R1, which is being held by P1,
so P3 also stops its execution.
In this scenario, a cycle is formed among the three processes. None of the processes makes
progress; all of them are waiting. The computer becomes unresponsive since all the processes
are blocked.
Consider an example when two trains are coming toward each other on the same track and
there is only one track, none of the trains can move once they are in front of each other. A
similar situation occurs in operating systems when there are two or more processes that hold
some resources and wait for resources held by other(s).
For example, in the below diagram, Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting for resource 1.
Figure: Deadlock in OS
Difference between Starvation and Deadlock
Deadlock: A set of processes wait for each other in a cycle; none of them can ever proceed
without outside intervention.
Starvation: A single process waits indefinitely because other (typically higher-priority)
processes keep getting the resource, while the rest of the system still makes progress. A
starved process could run if scheduling were fair; a deadlocked process cannot.
Deadlock Characterization
A deadlock is a situation where a set of processes are blocked because each process is waiting for a
resource that another process is holding. This prevents any of the processes from making progress.
Causes of Deadlocks
Deadlocks occur when system resources are tied up and cannot be used by other processes.
If the system cannot resolve the deadlock, processes may never complete execution
a. Mutual Exclusion : At least one resource must be held in a non-shareable mode, so only
one process can use it at a time.
b. Hold and Wait : A process waits for some resources while holding another resource at the
same time. A process can hold multiple resources and still request more resources from other
processes which are holding them.
In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting the
Resource 1 which is held by Process 1.
c. Circular Wait : All the processes must be waiting for the resources in a cyclic manner so that
the last process is waiting for the resource which is being held by the first process.
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource held
by the first process. This forms a circular chain.
For example: Process 1 is allocated Resource2 and it is requesting Resource 1. Similarly,
Process 2 is allocated Resource 1 and it is requesting Resource2. This forms a circular wait
loop
d. No Preemption: A resource cannot be forcibly taken (preempted) from a process. A
process releases a resource only voluntarily, after it has finished using it.
In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be
released when Process 1 relinquishes it voluntarily after its execution is complete.
Operating System Unit-II
a. Introduction
A Resource-Allocation Graph (RAG) is a directed graph that represents the allocation of
resources to processes in a system. It helps in analyzing the possibility of deadlocks, where
processes wait indefinitely for resources.
b. Components of the Graph
Processes are represented as circles and resources as rectangles. A request edge points from
a process to a resource, and an assignment edge points from a resource to a process.
c. Representation of Resources
If a resource has multiple instances, each instance is represented as a dot inside the
rectangle.
When a process requests a resource, a request edge is added.
When the resource is allocated, the request edge converts into an assignment edge.
Example (from the figure):
Resource instances: R₁ has one instance.
Process states: P₁ is holding R₂ and waiting for R₁.
Deadlock condition: P₂ is waiting for R₃, which is held by P₃.
Deadlock Ignorance:
The cost of preventing a deadlock might be higher than the inconvenience of dealing with it
when it occurs.
Example: If a deadlock occurs once in 100 years, a simple system reboot might be more
practical than implementing strict deadlock prevention mechanisms.
3. Deadlock Avoidance
The system ensures that deadlocks never occur by scheduling resource allocation carefully.
It uses algorithms like the Banker's Algorithm, which checks before granting resources
whether it could lead to a deadlock.
This requires prior knowledge of maximum resource demands from all processes.
4. Deadlock Prevention
The system prevents deadlocks before they occur by enforcing strict rules.
Deadlock Prevention
A deadlock happens when processes (tasks) get stuck, waiting for each other to release resources. We
can prevent this by removing one of the four conditions that cause deadlocks.
Eliminating Hold and Wait: Instead of letting a process hold some resources and wait for
more, we prevent it by:
1. Giving all required resources at once before execution.
2. Forcing the process to release all resources before asking for new ones.
Drawback: This may waste resources, because a process receives all its resources even if it
does not need them immediately.
Eliminating No Preemption: We can force a process to release its resources if it asks for
more and they are unavailable. This allows other processes to use those resources.
Drawback: Processes may have to redo or restart their work, causing delays, and a process
that keeps releasing and re-requesting resources may starve.
Eliminating Circular Wait: We force processes to request resources in a fixed order (e.g.,
always request lower-numbered resources first). This stops circular waiting and prevents
deadlocks.
Drawback: It may be inflexible for some systems.
Deadlock Avoidance
Deadlock avoidance prevents deadlocks before they happen by checking each resource request
before granting it.
The system only gives a resource if it ensures that no deadlock will occur.
If granting a resource leads to an unsafe state, the system denies the request and waits for a
safe condition.
The system uses special algorithms (like the Banker’s Algorithm) to check if granting
resources will keep the system in a safe state.
This method dynamically manages resource allocation to avoid circular waiting.
a. Safe State
A system is in a safe state if it can allocate resources to processes in some order without
leading to a deadlock.
This means that even if all processes request their maximum needed resources, the system can
still manage them safely.
In simple words, a safe state ensures that every process will eventually complete without
getting stuck.
Safe state = No deadlock risk.
b. Unsafe State
A system is in an unsafe state if it might lead to a deadlock.
This does not mean a deadlock has already happened, but there is a risk.
In this state, the system cannot guarantee that all processes will finish safely.
Unsafe state = Possible deadlock risk.
Key Concept
The system represents processes and resources as a graph with nodes and edges
(connections).
Processes (P) and Resources (R) are connected based on requests and allocations.
The system checks for cycles in the graph to determine if a deadlock might occur.
3. Assignment Edge (R → P)
o Shows that a resource has been allocated to a process.
o If a requested resource is assigned, the request edge turns into an assignment edge.
o Example: R₁ → P₁ means resource R₁ is given to process P₁.
How it Works:
1. Before execution, processes must declare all possible resource requests (Claim edges must
be predefined).
2. When a process requests a resource:
o The system uses a Cycle Detection Algorithm to check if granting it will create a
cycle in the graph.
o If no cycle exists, the request is approved, and the resource is assigned.
o If a cycle exists in the graph, it means the system is in an unsafe state, and a
deadlock may occur, the request is denied, and the process must wait.
Example Scenario
Banker's Algorithm
The Banker's Algorithm is used for deadlock avoidance in systems where multiple instances of
each resource type exist. It ensures that resources are allocated safely so that the system never runs
out of resources and enters a deadlock.
Why is it Called "Banker's Algorithm"?
Just like a bank never lends all its money at once to avoid bankruptcy, the system does not
allocate all resources immediately.
It checks before granting a request to ensure there are enough resources left for other
processes to finish.
How It Works
Bankers Algorithm
1. Safety Algorithm
2. Resource Algorithm
1. Safety Algorithm:
The algorithm for finding out whether or not a system is in a safe state can be described as follows:
1. Let Work and Finish be vectors of length 'm' and 'n' respectively. Initialize:
   Work = Available
   Finish[i] = false for i = 0, 1, ..., n - 1
2. Find an index i such that Finish[i] == false and Need[i] <= Work. If no such i exists,
   go to step 4.
3. Work = Work + Allocation[i]; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, the system is in a safe state.
a. Resource-Request Algorithm
Let Request[i] be the request array for process Pᵢ. Request[i][j] = k means process Pᵢ wants k
instances of resource type Rⱼ. When a process requests resources, the following steps are taken:
1. If Request[i] <= Need[i], go to step 2; otherwise, raise an error condition, since the
process has exceeded its maximum claim.
2. If Request[i] <= Available, go to step 3; otherwise, Pᵢ must wait, since the resources are
not available.
3. Have the system pretend to have allocated the requested resources to process Pᵢ by modifying
the state as follows:
   Available = Available - Request[i]
   Allocation[i] = Allocation[i] + Request[i]
   Need[i] = Need[i] - Request[i]
Then run the safety algorithm. If the resulting state is safe, the allocation is made;
otherwise, Pᵢ must wait and the old state is restored.
Deadlock Detection
A deadlock occurs when a group of processes are stuck, waiting for resources that other processes are
holding. This means they can never proceed.
To detect deadlocks, the system regularly checks if processes are waiting indefinitely.
How it Works:
The Wait-for Graph is created by removing resource nodes from the Resource Allocation
Graph and keeping only process dependencies.
If a process P1 is waiting for another process P2 to release a resource, an edge is drawn from
P1 → P2.
Wait-for Graph:
It is a simplified version of the Resource Allocation Graph (RAG): only processes are shown,
and edges represent waiting relationships between processes.
Example
Process 1 (P1) has Resource 1 but is waiting for Resource 2, which is held by Process 2
(P2).
Process 2 (P2) has Resource 2 but is waiting for Resource 1, which is held by Process 1
(P1).
This forms a cycle: R1 → P1 → R2 → P2 → R1, confirming that a deadlock has occurred.
When a system has multiple instances of a resource type, the traditional wait-for graph (which works
for single-instance resources) is not sufficient for deadlock detection and resource management.
Instead, more sophisticated algorithms and data structures are used.
Algorithm:
Step 1:
Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
For i = 0, 1, ..., n - 1: if Allocation[i] is not equal to 0, then Finish[i] = false;
else Finish[i] = true.
Step 2:
Find an index i such that both
Finish[i] == false
Request[i] <= Work
If no such i exists then go to step 4.
Step 3:
Perform the following:
Work = Work + Allocation[i]
Finish[i] = true
Go to step 2.
Step 4:
If Finish[i] == false for some i, 0 <= i < n, then the system is in a deadlocked state;
moreover, each process Pᵢ with Finish[i] == false is deadlocked.
The Deadlock Detection Algorithm checks whether a set of processes is stuck in a circular
wait, preventing system progress.
Deadlock Recovery
When a deadlock happens, the system must find a way to recover from it. There are two main
approaches:
1. Process Termination: Either abort all deadlocked processes at once, or abort one
process at a time until the deadlock cycle is broken.
Aborting one process at a time is slower, because after stopping each process the system
must check again whether the deadlock is fixed, but it reduces lost work compared to
stopping all processes at once.
Summary: Either stop all deadlocked processes quickly or remove them one by one slowly to fix
the issue.
2. Resource Preemption: Take resources away from some processes and give them to others
until the deadlock cycle is broken. Steps involved:
1. Choosing which process loses its resources
o The system decides which process should give up resources based on:
How many resources it holds
How much CPU time it used
How important the process is
2. Rollback (Restarting the process if needed)
o If a process loses its resources, it cannot continue normally.
o The system may need to restart the process from a previous checkpoint or
completely restart it.
3. Avoid starvation (Fairness in preemption)
o The system must ensure that the same process does not always lose resources.
o If a process is always chosen to give up resources, it will never finish (this is called
starvation).
Summary: Instead of stopping processes, this method moves resources around carefully to fix the
deadlock.
*********************************************************