
UNIT-II

Process Synchronization: Critical Section Problem, Bakery Algorithm and Peterson’s Solution;
Synchronization Hardware, Semaphores; Classic Problems of Synchronization- Readers and
Writers Problem, Dining Philosophers Problem; Monitors.

Deadlocks: System Model; Deadlocks Characterization; Methods for Handling Deadlocks;


Deadlock Prevention; Deadlock Avoidance; Deadlock Detection; and Recovery from Deadlock.

Process Synchronization

Process Synchronization is the task of coordinating the execution of processes so that no
two processes access the same shared data or resource at the same time.

It is especially needed in a multi-process system, where multiple processes run together and
more than one process tries to gain access to the same shared resource or data at the same time.

This can lead to inconsistency of shared data: a change made by one process is not necessarily
reflected when other processes access the same shared data. To avoid this type of data
inconsistency, the processes need to be synchronized with each other.

How Does Process Synchronization Work?


For example, suppose process A is changing the data in a memory location while another process B is trying
to read the data from the same memory location. There is a high probability that the data read by the
second process will be erroneous.

On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of
other processes.

 Cooperative Process: A process that can affect or be affected by other processes executing
in the system.

The process synchronization problem arises with cooperative processes, because
resources are shared among cooperative processes.
Race Condition:
A race condition happens when two or more processes try to use or change the same data at the same
time. This can cause mistakes because the result depends on which process acts first.
To prevent this, we can use locks or make the important part of the code (called the critical section)
run like a single step that can't be interrupted. This keeps the data safe and correct.

Critical Section Problem:


A critical section is a code segment that can be accessed by only one process at a time. The critical
section contains shared variables that need to be synchronized to maintain the consistency of data
variables. So the critical section problem means designing a way for cooperative processes to
access shared resources without creating data inconsistencies.

Sections of a Program
Here are the four essential sections of a program with respect to the critical section:

 Entry Section: It is the part of the process that requests and decides the entry of a particular process into the critical section.
 Critical Section: This part allows one process to enter and modify the shared variable.
 Exit Section: The exit section allows the other processes that are waiting in the entry section
to enter the critical section. It also ensures that a process that has finished its execution
leaves through this section.
 Remainder Section: All other parts of the code, which are not in the critical, entry, or
exit section, are known as the remainder section.

In the entry section, the process requests entry into the critical section.

Any solution to the critical section problem must satisfy three requirements:

 Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in its critical section.
 Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter in the critical section next, and the
selection cannot be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

There are two main types of kernels to manage critical sections in operating systems:
1. Preemptive Kernel:
o Allows a running process to be paused and replaced by another process, even when it's
in kernel mode.
o This ensures better responsiveness but may require more complex synchronization
methods.
2. Non-Preemptive Kernel:
o Does not allow a process in kernel mode to be interrupted.
o Since only one process runs in the kernel at a time, it's easier to manage and avoids
race conditions.
Preemptive kernels are common in systems like Linux and Solaris, while non-preemptive kernels are
simpler but less flexible.

Common Solutions to the Critical Section Problem:


1. Peterson’s Solution
2. Synchronization Hardware
3. Mutex Lock
4. Semaphores Solution

Peterson’s Solution:
Peterson’s Solution is a classical software-based solution to the critical section problem. In
Peterson’s solution, we have two shared variables:

 boolean flag[2]: Initialized to FALSE; initially no process is interested in entering the critical section.
 int turn: Indicates whose turn it is to enter the critical section.

How it Works:
1. Each process sets a flag when it wants to enter the critical section.
2. A TURN variable decides whose turn it is to enter.
3. If both processes want to enter, the one whose turn it is gets priority.
4. Once a process finishes, it gives the turn to the other process.
This ensures mutual exclusion (one process at a time), progress, and bounded waiting (each process
gets a turn).
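The structure of each process in Peterson's solution can be sketched in C-style pseudocode as follows (a minimal sketch for two processes Pi and Pj, where j = 1 - i; the Code Explanation below walks through these statements):

boolean flag[2];   // both false initially: no one wants to enter
int turn;          // whose turn it is to enter

// Structure of process Pi (Pj is symmetric, with i and j swapped)
do
{
    flag[i] = true;                 // Pi announces that it wants to enter
    turn = j;                       // give the turn to the other process
    while (flag[j] && turn == j);   // busy-wait while Pj wants in and has the turn

    // critical section

    flag[i] = false;                // Pi is done; Pj may now enter

    // remainder section
}
while (true);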
Code Explanation:
1. flag[i] = true;
o This process (Process Pi) sets its own flag to true, indicating that it wants to enter the
critical section.
2. turn = j;
o This gives the turn to the other process (Process Pj), signaling that Pj should get
priority if both processes want to enter.
3. while (flag[j] && turn == j);
o This is a waiting loop (also called a busy wait).
o The process will keep waiting if:
 The other process (Pj) has its flag set to true (meaning it also wants to enter).
 The turn is set to Pj (giving Pj priority).
o The process will only proceed when either:
 Pj no longer wants to enter (flag[j] = false), or
 It's Pi's turn.
4. Critical Section
o Once the loop condition is false, Pi enters the critical section and performs its task.
5. flag[i] = false;
o After finishing its work in the critical section, Pi sets its flag back to false, indicating it
no longer needs the resource.
6. Remainder Section
o This is the remaining part of the code where no shared resources are involved.
7. while (true);
o The process keeps running continuously, repeating the above steps.
Peterson’s Solution preserves all three conditions:

 Mutual Exclusion is assured as only one process can access the critical section at any time.
 Progress is also assured, as a process outside the critical section does not block
other processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s solution:

 It involves busy waiting. (In Peterson's solution, the code statement "while (flag[j] &&
turn == j);" is responsible for this. Busy waiting is not favored because it wastes CPU
cycles that could be used to perform other tasks.)
 It is limited to 2 processes.
 Peterson’s solution is not guaranteed to work on modern CPU architectures, which may reorder memory reads and writes.

Synchronization Hardware:
Sometimes the problems of the critical section are also resolved by hardware. Some operating
systems offer a lock functionality where a process acquires a lock when entering the critical section
and releases the lock after leaving it.

So when another process tries to enter the critical section, it will not be able to enter because the
section is locked. It can only do so once the lock is free, by acquiring the lock itself.

There are two kinds of hardware instructions:

1. TestAndSet()
2. Swap()

TestAndSet():

The TestAndSet() hardware instruction is an atomic instruction. Atomic means that both the test operation and
the set operation are executed in one machine cycle, without interruption.
If two different processes execute it simultaneously, each on a different CPU, the instructions will
still be executed sequentially in some arbitrary order.
How It Works:
1. Test: It checks the value of a memory location (like a lock).
2. Set: If the location’s value is 0, it changes it to 1 (indicating the lock is taken).

The TestAndSet() instruction can be defined as in the code below:

The definition of the TestAndSet() instruction:

boolean TestAndSet(boolean *target)
{
    boolean rv = *target;   // save the old value of the lock
    *target = true;         // set the lock unconditionally
    return rv;              // report whether the lock was already held
}

Mutual exclusion using TestAndSet():

boolean lock = false;

do
{
    while (TestAndSet(&lock));   // spin until the returned old value is false
    // critical section
    lock = false;
    // remainder section
}
while (true);

How It Works:
1. A process tries to enter the critical section.
2. If TestAndSet(lock) returns false, the process enters the critical section.
3. While inside, the lock remains true, so no other process can enter.
4. When the process finishes, it sets the lock back to false, allowing other processes to proceed.
Ensures only one process enters at a time (Mutual Exclusion).
No queue system; processes keep checking until the lock becomes available.
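For comparison, modern C exposes an equivalent atomic primitive in <stdatomic.h>; the sketch below (assuming a C11 compiler) builds the same kind of spinlock from atomic_flag_test_and_set():

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   // initially clear (unlocked)

void enter_critical_section(void)
{
    // atomic_flag_test_and_set() atomically sets the flag and returns
    // its OLD value: false means the lock was free and we just took it.
    while (atomic_flag_test_and_set(&lock))
        ;   // busy-wait until the lock is released
}

void exit_critical_section(void)
{
    atomic_flag_clear(&lock);          // set the lock back to false
}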

Swap() :

The Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting lock to true,
a local variable key is set to true and then swapped with lock.

How It Works:
1. A variable called key is set to true.
2. The key value is then swapped with the lock value.
3. If lock = false initially, swapping makes lock = true and key = false. This means the process
can enter the critical section.
4. While one process is in the critical section, the next process will keep swapping but will get
key = true, meaning they must wait.
5. When the first process exits the critical section, it sets lock = false, allowing other processes to
proceed.
Ensures Mutual Exclusion (only one process in the critical section at a time).
Uses swapping logic for synchronization.

Swap() pseudocode:

boolean lock;   // shared, initially false
boolean key;    // local to each process

void swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

while (true)
{
    key = true;
    while (key == true)
        swap(&lock, &key);   // keep swapping until lock was observed false
    // critical section
    lock = false;
    // remainder section
}
Mutex Locks
A mutex (short for mutual exclusion) is a software-based locking mechanism used to control access
to a critical section (a part of code where shared resources are accessed).

How It Works:
1. A lock is placed on the critical section.
2. When a process wants to enter, it must acquire the lock.
3. Once inside, other processes are blocked from entering.
4. When the process finishes, it releases the lock, allowing another process to enter.
Ensures mutual exclusion (one process at a time).
Useful in multi-threading and multi-processor environments.

Structure of Mutex Lock Instruction:

do
{
acquire lock

critical section

release lock

remainder section
}
while(TRUE)

Acquire and Release pseudocode:

Acquire:
acquire()
{
    while (!available)
        ;   /* busy wait */
    available = false;
}

Release:
release()
{
    available = true;
}
How It Works:
1. Acquire Lock:
o Before entering the critical section, a process tries to get the lock.
o If the lock is already taken, the process keeps waiting (this is called busy waiting).
2. Critical Section:
o The process executes its important code safely.
3. Release Lock:
o After finishing, the process releases the lock so others can enter.
Key Problem (Spinlock):
 In the acquire phase, if a process keeps waiting, it wastes CPU cycles. This is called a
spinlock.
 Spinlocks are bad for single-CPU systems since they block the entire system.
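As a concrete illustration, POSIX threads provide a mutex directly; the fragment below is a minimal sketch (the shared counter and the function name worker are illustrative):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
long shared_counter = 0;      // shared data protected by the mutex (illustrative)

void *worker(void *arg)
{
    pthread_mutex_lock(&m);   // acquire: blocks if the lock is already held
    shared_counter++;         // critical section
    pthread_mutex_unlock(&m); // release: another thread may now enter
    return NULL;
}

Unlike the busy-waiting acquire() above, pthread_mutex_lock() puts a waiting thread to sleep, which avoids the spinlock problem on a single CPU.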

Semaphores:

A semaphore is an integer variable that is used to solve the critical section problem by
combining two atomic operations, wait() and signal(), for synchronization.
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled
by another thread. This is different from a mutex, as a mutex can be released only by the thread
that called the wait (acquire) function.

A semaphore uses two atomic operations, "wait" and "signal", for process synchronization.
1. wait() – Decreases the semaphore value and may block a thread if the resource is not
available.
2. signal() – Increases the semaphore value and allows waiting threads to proceed.

The definitions of wait and signal are as follows –

 Wait
The wait operation decrements the value of its argument S once S becomes positive. While S is zero
or negative, the process keeps waiting and no decrement is performed.
wait(S)
{
while (S<=0);
S--;
}

 Signal

The signal operation increments the value of its argument S.


signal(S)
{
S++;
}

There are two types of semaphores:


 Binary Semaphores
 Counting Semaphores.

Binary Semaphores: A binary semaphore (or mutex lock) is a synchronization tool that ensures
only one process accesses a shared resource at a time. It has two states:
1. 1 (Unlocked) – The resource is available. A process can enter its critical section.
2. 0 (Locked) – The resource is in use. Other processes must wait.

How it works:
 Initially, the semaphore is set to 1 (resource is free).

 A process that wants to use the resource checks the semaphore:


o If it’s 1, the process sets it to 0 and enters the critical section.
o If it’s 0, the process waits until it becomes 1.
 Once the process is done, it resets the semaphore to 1, allowing another process to enter.
This ensures mutual exclusion, preventing multiple processes from accessing the resource at the
same time.

Counting Semaphores: A counting semaphore is used to manage access to a resource that has
multiple instances available. Unlike a binary semaphore (which is just 0 or 1), a counting semaphore
can have any positive value.

How it works:
1. The semaphore is initialized to the total number of available resource instances.
2. When a process wants to use the resource:
o It checks if the semaphore value is greater than 0.
o If yes, it decreases the semaphore by 1 and enters the critical section.
o If no, it waits until a resource becomes available.
3. When the process finishes using the resource:
o It increases the semaphore by 1, making the resource available for others.
This allows multiple processes to use the resource simultaneously, as long as the limit is not
exceeded.
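A counting semaphore of this kind can be sketched with POSIX semaphores (assuming <semaphore.h> on a Linux-like system; the pool size of 3 and the function names are illustrative):

#include <semaphore.h>

sem_t pool;                  // counting semaphore guarding 3 resource instances

void init_pool(void)
{
    sem_init(&pool, 0, 3);   // 0 = shared between threads; 3 = initial count
}

void use_resource(void)
{
    sem_wait(&pool);         // wait(): decrements the count; blocks while it is 0
    /* use one instance of the resource */
    sem_post(&pool);         // signal(): increments the count; wakes a waiter
}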

Advantages of Semaphores
 Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.
 When semaphores are implemented with a waiting queue instead of busy waiting, there is no
resource wastage: processor time is not wasted unnecessarily checking whether a condition is
fulfilled before a process may access the critical section.
 Semaphores are implemented in the machine independent code of the microkernel. So
they are machine independent.

Disadvantages of Semaphores
 Semaphores are complicated so the wait and signal operations must be implemented in
the correct order to prevent deadlocks.
 Semaphores are impractical for large scale use as their use leads to a loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout
for the system.
 Semaphores may lead to a priority inversion where low priority processes may access
the critical section first and high priority processes later.

Classic Problems of Synchronization


These problems occur when multiple processes share resources and need to coordinate their
actions.

The following problems of synchronization are considered as classical problems:


1. Bounded-buffer (or Producer-Consumer) Problem
2. Readers and Writers Problem
3. Dining Philosophers Problem

Bounded-buffer (or Producer-Consumer) Problem:


The bounded buffer problem is also called the producer-consumer problem; it is the
generalized form of that problem. The solution is to create
two counting semaphores, "full" and "empty", to keep track of the current number of full and
empty buffer slots respectively. Producers produce items and consumers consume them,
but both use one slot of the buffer at a time.

A bounded buffer lets multiple producers and multiple consumers share a single buffer. Producers
write data to the buffer and consumers read data from the buffer.

 Producers must block if the buffer is full.


 Consumers must block if the buffer is empty.
Key Rules for Synchronization
1. Producer cannot add if the buffer is full.
2. Consumer cannot remove if the buffer is empty.
3. Mutual exclusion must be maintained to avoid data corruption.

Solution Using Semaphores


 Semaphores Used:

o mutex → Ensures mutual exclusion.


o empty → Tracks available empty slots in the buffer.
o full → Tracks filled slots in the buffer.

Producer Process:
do
{
//PRODUCE ITEM

wait(empty);
wait(mutex);

//PUT ITEM IN BUFFER

signal(mutex);
signal(full);
}
while(1);

Consumer Process:

do
{
wait(full);
wait(mutex);

//REMOVE ITEM FROM BUFFER

signal(mutex);
signal(empty);

//CONSUME ITEM

}
while(1);

Code explanation:

Producer Process (Steps)


1. Produce an item.
2. Wait (empty) → Check if space is available.
3. Wait (mutex) → Ensure exclusive access.
4. Add the item to the buffer.
5. Signal (mutex) → Release access.
6. Signal (full) → Indicate an item is added.

Consumer Process (Steps)


1. Wait (full) → Check if an item is available.
2. Wait (mutex) → Ensure exclusive access.
3. Remove the item from the buffer.
4. Signal (mutex) → Release access.
5. Signal (empty) → Indicate an empty slot is available.
6. Consume the item.
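Putting both processes together, the pseudocode above can be turned into a runnable sketch using POSIX threads and semaphores (the buffer size of 5 and the item count of 20 are illustrative choices):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                       // buffer capacity (illustrative)

int buffer[N], in = 0, out = 0;   // circular buffer with insert/remove indices
sem_t empty, full, mutex;         // empty = N, full = 0, mutex = 1

void *producer(void *arg)
{
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);                 // wait for a free slot
        sem_wait(&mutex);                 // exclusive access to the buffer
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);                  // one more filled slot
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);                  // wait for a filled slot
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);                 // one more free slot
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}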

Readers and Writers Problem:


The Readers-Writers Problem is a synchronization problem in concurrent computing where
multiple processes need to access a shared resource. The challenge is to manage access so that:

1. Multiple readers can read simultaneously (since reading does not modify data).
2. Only one writer can write at a time (to prevent data corruption).
3. If a writer is writing, no other process (reader or writer) can access the resource until it
finishes.
This problem occurs in operating systems, databases, and file systems, where efficient handling of
read and write operations is necessary to avoid data inconsistency and performance bottlenecks.

Solution Using Semaphores


We use:
 Semaphore m (mutex): Controls updates to read_count.

 Semaphore w: Ensures that only one writer or multiple readers access the resource at a
time.
 Variable read_count: Keeps track of how many readers are currently accessing the resource.

Writer Process:

while(TRUE)
{

wait(w);

//perform the write operation

signal(w) ;
}

Reader Process:

while(TRUE)
{

//Acquire lock

wait(m);
read_count++;
if(read_count ==1)
wait(w);

//Release lock

signal(m);

//perform reading operation

//Acquire lock

wait(m);
read_count--;
if(read_count ==0)
signal(w);

//Release lock

signal(m);
}

Code Explanation:

Writer Process:
1. Wait (wait(w)) – Ensures no readers are reading.
2. Write the data.
3. Release (signal(w)) – Allows readers/writers to proceed.

Reader Process:
1. Acquire lock (wait(m)), increase read_count.
2. If it’s the first reader, it blocks writers (wait(w)).
3. Release lock (signal(m)).
4. Read the data.
5. Acquire lock (wait(m)), decrease read_count.
6. If it’s the last reader, it unblocks writers (signal(w)).
7. Release lock (signal(m)).
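The same scheme can be sketched with POSIX semaphores (a minimal sketch; both semaphores m and w are assumed to be initialized to 1 with sem_init):

#include <pthread.h>
#include <semaphore.h>

sem_t m, w;            // m protects read_count; w grants exclusive access
int read_count = 0;    // assumed: sem_init(&m, 0, 1); sem_init(&w, 0, 1);

void *writer(void *arg)
{
    sem_wait(&w);      // exclusive: no readers and no other writer
    /* perform the write operation */
    sem_post(&w);
    return NULL;
}

void *reader(void *arg)
{
    sem_wait(&m);
    if (++read_count == 1)   // first reader locks out writers
        sem_wait(&w);
    sem_post(&m);

    /* perform the read operation */

    sem_wait(&m);
    if (--read_count == 0)   // last reader lets writers back in
        sem_post(&w);
    sem_post(&m);
    return NULL;
}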

Bakery Algorithm

The Bakery Algorithm is a method used in computer science to ensure that multiple processes (or
programs) can take turns accessing a shared resource without interfering with each other. This is
called solving the mutual exclusion problem.
In computing, this method helps multiple processes take turns using a resource (like a printer or a file)
in a fair and orderly way, without causing errors or conflicts.

The Bakery Algorithm works like a real bakery where customers take a ticket before being served.
Here’s how it works step by step:

1. Taking a Ticket: When a process (a program trying to use a shared resource) wants to enter
the critical section (the part where it uses the shared resource), it picks a ticket number. This
number is like a queue number in a bakery.
2. Waiting for Turn: All processes compare their ticket numbers. The process with the smallest
number gets to go first.
3. Handling Ties: If two processes get the same number, the one with the smaller ID (like a
smaller customer number in line) gets priority.
4. Entering the Critical Section: The process with the smallest number enters the critical
section and uses the shared resource.
5. Finishing Up: Once done, the process leaves, and the next smallest-numbered process gets its
turn.
This method ensures fairness—first come, first served—so no process is left waiting forever!

Algorithm Pseudocode –

repeat
choosing[i] := true;
number[i] := max(number[0], number[1], ..., number[n - 1])+1;
choosing[i] := false;
for j := 0 to n - 1
do begin
while choosing[j] do no-op;
while number[j] != 0
and (number[j], j) < (number[i], i) do no-op;
end;
critical section
number[i] := 0;

remainder section
until false;

Code Explanation:

1. Process Requests a Ticket:


o choosing[i] := true; → The process marks that it is picking a ticket.
o number[i] := max(number[0], number[1], ..., number[n - 1]) + 1;
→ The process picks a ticket one higher than the current maximum ticket.
o choosing[i] := false; → The process finishes choosing its ticket.
2. Process Waits for Its Turn:
o It checks if any other process is still choosing a number:
while choosing[j] do no-op; (wait if another process is choosing).
o It waits until all processes with smaller tickets have entered first:
while number[j] != 0 and (number[j], j) < (number[i], i) do no-op;
(If another process has a smaller ticket number, wait).
3. Process Enters the Critical Section:
o The process now enters the critical section (the shared resource).
4. Process Finishes and Resets Ticket:
o number[i] := 0; → The process resets its ticket to 0, signaling it is done.
5. Process Does Other Work (Remainder Section):
o After leaving the critical section, the process does its own work before repeating the
loop.
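The pseudocode above translates into C roughly as follows (a sketch for n processes; like Peterson's solution, a faithful implementation on modern hardware would need atomic variables and memory barriers):

#define n 4                              // number of processes (illustrative)

int choosing[n];                         // 1 while Pi is picking a ticket
int number[n];                           // ticket numbers, 0 = not waiting

int max_ticket(void)                     // max(number[0], ..., number[n-1])
{
    int m = 0;
    for (int k = 0; k < n; k++)
        if (number[k] > m) m = number[k];
    return m;
}

void process(int i)
{
    while (1) {
        choosing[i] = 1;
        number[i] = max_ticket() + 1;    // take the next ticket
        choosing[i] = 0;
        for (int j = 0; j < n; j++) {
            while (choosing[j]);         // wait while Pj is still choosing
            while (number[j] != 0 &&     // wait while Pj holds a smaller ticket,
                   (number[j] < number[i] ||
                    (number[j] == number[i] && j < i)));  // ties: smaller ID wins
        }
        // critical section
        number[i] = 0;                   // return the ticket
        // remainder section
    }
}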

DEADLOCKS

Deadlock in Operating System:


Every process needs some resources to complete its execution, and resources are granted in
a sequential order:

1. The process requests a resource.

2. The OS grants the resource if it is available; otherwise the process waits.

3. The process uses the resource and releases it on completion.

A deadlock is a situation where each process in a set waits for a resource that is assigned to
some other process. In this situation, none of the processes gets executed, since the resource
each one needs is held by another process that is itself waiting for yet another resource to be
released.
Let us assume that there are three processes P1, P2 and P3. There are three different resources
R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't
complete without R2. P2 demands R3, which is being used by P3; P2 also stops its execution
because it can't continue without R3. P3 demands R1, which is being used by P1, so
P3 also stops its execution.
In this scenario, a cycle is formed among the three processes. None of the processes is
progressing; they are all waiting. The computer becomes unresponsive since all the processes
are blocked.

Consider two trains coming toward each other on a single track: once they are in front of
each other, neither train can move. A similar situation occurs in operating systems when
two or more processes hold some resources and wait for resources held by the other(s).

For example, in the below diagram, Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting for resource 1.

Figure: Deadlock in OS

Deadlock Characterization
A deadlock is a situation where a set of processes are blocked because each process is waiting for a
resource that another process is holding. This prevents any of the processes from making progress.

Causes of Deadlocks
 Deadlocks occur when system resources are tied up and cannot be used by other processes.

 If the system cannot resolve the deadlock, processes may never complete execution.

1. Necessary Conditions for Deadlocks (Coffman Conditions)


For a deadlock to occur, the following four conditions (Coffman conditions) must hold
simultaneously. All four must be present for a deadlock to happen; if any one of them is
absent, a deadlock cannot occur.

The Coffman conditions are given as follows:

a. Mutual Exclusion: A resource can be shared only in a mutually exclusive manner. This
implies that two processes cannot use the same resource at the same time; there must be a
resource that can be held by only one process at a time.
In the diagram below, there is a single instance of Resource 1 and it is held by Process 1
only.

b. Hold and Wait : A process waits for some resources while holding another resource at the
same time. A process can hold multiple resources and still request more resources from other
processes which are holding them.
In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting the
Resource 1 which is held by Process 1.

c. Circular Wait : All the processes must be waiting for the resources in a cyclic manner so that
the last process is waiting for the resource which is being held by the first process.
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource held
by the first process. This forms a circular chain.
For example: Process 1 is allocated Resource2 and it is requesting Resource 1. Similarly,
Process 2 is allocated Resource 1 and it is requesting Resource2. This forms a circular wait
loop

d. No Preemption: A resource, once allocated to a process, cannot be forcibly taken back by
the operating system or by another process.
A resource cannot be preempted from a process by force. A process can only release a resource
voluntarily.
In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be
released when Process 1 relinquishes it voluntarily after its execution is complete.

2. Resource-Allocation Graph and Deadlocks

a. Introduction
A Resource-Allocation Graph (RAG) is a directed graph that represents the allocation of
resources to processes in a system. It helps in analyzing the possibility of deadlocks, where
processes wait indefinitely for resources.

b. Components of a Resource-Allocation Graph


 Processes (P): Represented as circles (P₁, P₂, P₃, ...).
 Resources (R): Represented as rectangles (R₁, R₂, R₃, ...).
 Edges (E):
 Request Edge (Pᵢ → Rⱼ): Indicates that process Pᵢ is requesting resource Rⱼ.
 Assignment Edge (Rⱼ → Pᵢ): Shows that resource Rⱼ is assigned to process Pᵢ.

c. Representation of Resources
 If a resource has multiple instances, each instance is represented as a dot inside the
rectangle.
 When a process requests a resource, a request edge is added.
 When the resource is allocated, the request edge converts into an assignment edge.

Example of a Resource-Allocation Graph

 Processes: P = {P₁, P₂, P₃}


 Resources: R = {R₁, R₂, R₃, R₄}
 Edges (Requests & Assignments):
 P₁ → R₁, P₂ → R₃, R₁ → P₂
 R₂ → P₂, P₃ → R₃

Resource Instances
 R₁ has one instance.

 R₂ has two instances.


 R₃ has one instance.
 R₄ has three instances.

Process States
 P₁ is holding R₂ and waiting for R₁.

 P₂ is holding R₁ and R₂ while waiting for R₃.


 P₃ is holding R₃.

d. Deadlock Detection using Resource-Allocation Graph


 If the graph has no cycles, then no process is deadlocked.

 If the graph has a cycle, deadlock may exist:


 If each resource type has one instance, a cycle always means deadlock.
 If resources have multiple instances, a cycle does not necessarily mean deadlock.

Example of a Deadlock Situation

The following cycles exist in the system:


 P₁ → R₁ → P₂ → R₃ → P₃ → R₂ → P₁
 P₂ → R₃ → P₃ → R₂ → P₂

Deadlock Condition
 P₂ is waiting for R₃, which is held by P₃.

 P₃ is waiting for P₁ or P₂ to release R₂.


 P₁ is waiting for P₂ to release R₁.
Since no process can proceed, a deadlock has occurred.

e. Cycle Without Deadlock

 Consider another resource-allocation graph with a cycle:


P₁ → R₁ → P₃ → R₂ → P₁
 Here, there is no deadlock because:
o P₄, which also holds an instance of R₂, may release it, allowing P₃ to proceed.
o The cycle can be broken, and the system can continue execution.
Conclusion
 If a cycle exists, the system may or may not be in a deadlock.

 If no cycle exists, the system is definitely not in a deadlock.

Methods for Handling Deadlocks


Deadlocks occur when processes are stuck waiting for resources that are held by other processes.
There are four main methods to handle deadlocks:

1. Ignoring the Issue


 Sometimes, it is preferable to ignore deadlocks if the likelihood of occurrence is very low.

 The cost of preventing a deadlock might be higher than the inconvenience of dealing with it
when it occurs.
 Example: If a deadlock occurs once in 100 years, a simple system reboot might be more
practical than implementing strict deadlock prevention mechanisms.

2. Deadlock Detection and Recovery


 The resource scheduler monitors resources and detects deadlocks.

 If a deadlock is detected, it can be resolved by:


o Terminating processes involved in the deadlock (but this destroys all progress made).
o Pre-empting resources, i.e., taking resources from some processes and assigning
them to others to break the deadlock.

3. Deadlock Avoidance
 The system ensures that deadlocks never occur by scheduling resource allocation carefully.

 It uses algorithms like the Banker's Algorithm, which checks before granting resources
whether it could lead to a deadlock.
 This requires prior knowledge of maximum resource demands from all processes.

4. Deadlock Prevention
 The system prevents deadlocks before they occur by enforcing strict rules.

 Every transaction is checked before execution to ensure it doesn’t lead to a deadlock.


 If there is any chance of deadlock, the transaction is not allowed to execute.
 Common prevention techniques include:
o Breaking one of the four necessary deadlock conditions (Mutual Exclusion, Hold
and Wait, No Preemption, Circular Wait).

Deadlock Prevention

A deadlock happens when processes (tasks) get stuck, waiting for each other to release resources. We
can prevent this by removing one of the four conditions that cause deadlocks.

1. Elimination of the "Mutual Exclusion" Condition:


 Some resources must be used by only one process at a time, like printers.

 But some resources can be shared, like a read-only file.


 If we make resources sharable, deadlocks won’t happen for those resources.

2. Elimination of "Hold and Wait" Condition:



 Instead of letting a process hold some resources and wait for more, we prevent it by:
1. Giving all required resources at once before execution.
2. Forcing the process to release all resources before asking for new ones.
 This stops deadlocks, but its drawback is that it may waste resources: a process receives
all of its resources at once even if it does not need them immediately.

3. Eliminating "No Pre-emption" Condition:


 Normally, a process keeps its resources until it’s done.

 To prevent deadlocks, we can force a process to release its resources if it asks for more and
they are unavailable.
 This allows other processes to use those resources, but it can cause delays (starvation) if a
process keeps releasing and re-requesting resources.
 Drawback: Processes may have to restart their work, causing delays.

4. Eliminating "Circular Wait" Condition:


 Deadlocks happen when processes wait for resources in a loop (e.g., P1 → P2 → P3 → P1).

 To prevent this, we force processes to request resources in a fixed order (e.g., always
request lower-numbered resources first).
 This stops circular waiting and prevents deadlocks.
 Drawback: It may be inflexible for some systems.

Deadlock Avoidance
Deadlock avoidance prevents deadlocks before they happen by checking each resource request
before granting it.
 The system only gives a resource if it ensures that no deadlock will occur.

 If granting a resource leads to an unsafe state, the system denies the request and waits for a
safe condition.

 The system uses special algorithms (like the Banker’s Algorithm) to check if granting
resources will keep the system in a safe state.
 This method dynamically manages resource allocation to avoid circular waiting.

1. Safe State and Unsafe State

a. Safe State
 A system is in a safe state if it can allocate resources to processes in some order without
leading to a deadlock.
 This means that even if all processes request their maximum needed resources, the system can
still manage them safely.
 In simple words, a safe state ensures that every process will eventually complete without
getting stuck.
 Safe state = No deadlock risk.
b. Unsafe State
 A system is in an unsafe state if it might lead to a deadlock.

 This does not mean a deadlock has already happened, but there is a risk.
 In this state, the system cannot guarantee that all processes will finish safely.
 Unsafe state = Possible deadlock risk.

2. Resource-Allocation Graph Algorithm


The Resource-Allocation Graph (RAG) Algorithm is used for deadlock avoidance in systems
where each resource type has only one instance. It helps decide whether a process should be
granted a requested resource by checking if it leads to a deadlock.

Key Concept
 The system represents processes and resources as a graph with nodes and edges
(connections).
 Processes (P) and Resources (R) are connected based on requests and allocations.
 The system checks for cycles in the graph to determine if a deadlock might occur.

Types of Edges in the Graph


The graph uses three types of edges to represent resource usage:
1. Claim Edge (P → R)
o Shows that a process may need a resource in the future.
o Represented as a dashed line.
o Example: P₁ → R₁ means process P₁ may request resource R₁ later.
2. Request Edge (P → R)
o Shows that a process is currently requesting a resource.
o If a process actually requests a resource, the claim edge converts into a request edge
(solid line).
o Example: P₂ → R₂ means P₂ is asking for R₂ now.

3. Assignment Edge (R → P)
o Shows that a resource has been allocated to a process.
o If a requested resource is assigned, the request edge turns into an assignment edge.
o Example: R₁ → P₁ means resource R₁ is given to process P₁.

How it Works:
1. Before execution, processes must declare all possible resource requests (Claim edges must
be predefined).
2. When a process requests a resource:
o The system uses a Cycle Detection Algorithm to check if granting it will create a
cycle in the graph.
o If no cycle exists, the request is approved, and the resource is assigned.
o If a cycle would be created, the system would enter an unsafe state in which a
deadlock may occur; the request is therefore denied, and the process must wait.

3. When a process releases a resource:


o The assignment edge is removed, and it turns back into a claim edge.

Example Scenario

 P₂ requests R₂, and R₂ is currently free.


 But if R₂ is given to P₂, a cycle forms in the graph: P₁ → R₁ → P₂ → R₂ → P₁ (back to P₁).
 Since a cycle means deadlock, P₂’s request is denied, and it must wait.

Banker's Algorithm
The Banker's Algorithm is used for deadlock avoidance in systems where multiple instances of
each resource type exist. It ensures that resources are allocated safely so that the system never runs
out of resources and enters a deadlock.

Why is it Called "Banker's Algorithm"?
 Just like a bank never lends all its money at once to avoid bankruptcy, the system does not
allocate all resources immediately.
 It checks before granting a request to ensure there are enough resources left for other
processes to finish.

How It Works

1. Each process declares its maximum resource needs at the beginning.


2. When a process requests resources, the system:
o Checks if the resources are available.
o Simulates granting the request to see if it keeps the system safe.
o Uses a safety check (cycle detection) to ensure that at least one sequence exists where
all processes can finish execution.
3. If granting the request keeps the system safe → allocate the resources.
4. If granting the request makes the system unsafe → the process must wait.

Data Structures Used in Banker's Algorithm



To manage resources properly, the system keeps track of four tables:


 Available: Number of free instances of each resource type.

o Example: If Available[R1] = 3, it means there are 3 instances of resource R1 free.


 Max: The maximum resources a process may request.
o Example: If Max[P1][R1] = 5, process P1 may need up to 5 instances of resource R1.
 Allocation: The resources currently assigned to a process.
o Example: If Allocation[P1][R1] = 2, process P1 is using 2 instances of R1.
 Need: Resources a process still requires to complete execution.
o Formula: Need = Max - Allocation
o Example: If Need[P1][R1] = 3, it means P1 still needs 3 instances of R1 to finish.
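A small worked example of these tables (the numbers are hypothetical, with three resource types A, B, and C):

Process    Max (A B C)    Allocation (A B C)    Need = Max - Allocation
P1         7 4 3          0 1 0                 7 3 3
P2         3 2 2          2 0 0                 1 2 2
P3         5 0 2          3 0 2                 2 0 0

With Available = (3 3 2), the sequence <P2, P3, P1> is safe: P2's Need fits within Available; when P2 finishes it returns its allocation, making Available = (5 3 2), then P3 can run, making Available = (8 3 4), and finally P1 fits.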

The Banker's Algorithm consists of two parts:

1. Safety Algorithm

2. Resource-Request Algorithm

1. Safety Algorithm:

The algorithm for finding out whether or not a system is in a safe state can be described as follows:

1. Let Work and Finish be vectors of length m and n respectively. Initialize:

Work = Available

Finish[i] = false for i = 1, 2, ..., n

2. Find an i such that both

(a) Finish[i] = false

(b) Need[i] <= Work

If no such i exists, go to step (4).

3. Work = Work + Allocation[i]

Finish[i] = true

Go to step (2).

4. If Finish[i] = true for all i,

then the system is in a safe state.
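A direct translation of this algorithm into C (a sketch; N, M, and the global matrices mirror the data structures described above):

#define N 5   // number of processes (illustrative)
#define M 3   // number of resource types (illustrative)

int Available[M];
int Allocation[N][M];
int Need[N][M];       // Need = Max - Allocation

// Safety algorithm: returns 1 if the system is in a safe state, else 0.
int is_safe(void)
{
    int Work[M], Finish[N] = { 0 };
    for (int j = 0; j < M; j++)
        Work[j] = Available[j];                    // Work = Available

    for (int done = 0; done < N; ) {
        int found = 0;
        for (int i = 0; i < N; i++) {
            if (Finish[i]) continue;
            int fits = 1;                          // Need[i] <= Work ?
            for (int j = 0; j < M; j++)
                if (Need[i][j] > Work[j]) { fits = 0; break; }
            if (fits) {                            // Pi can run to completion:
                for (int j = 0; j < M; j++)
                    Work[j] += Allocation[i][j];   // it returns its resources
                Finish[i] = 1;
                found = 1;
                done++;
            }
        }
        if (!found) return 0;   // no process can proceed: unsafe state
    }
    return 1;                   // Finish[i] true for all i: safe state
}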

2. Resource-Request Algorithm

Let Request[i] be the request array for process Pᵢ. Request[i][j] = k means process Pᵢ wants k
instances of resource type Rⱼ. When a process requests resources, the following steps are taken:

1. If Request[i] <= Need[i],

go to step (2); otherwise, raise an error condition, since the process has exceeded its maximum
claim.

2. If Request[i] <= Available,

go to step (3); otherwise, Pi must wait, since the resources are not available.

3. Have the system pretend to have allocated the requested resources to process Pi by modifying
the state as follows:

Available = Available – Request[i]

Allocation[i] = Allocation[i] + Request[i]

Need[i] = Need[i] – Request[i]

The safety algorithm is then run on this new state. If the state is safe, the resources are actually
allocated to Pi; if it is unsafe, Pi must wait and the old resource-allocation state is restored.

Deadlock Detection
A deadlock occurs when a group of processes are stuck, waiting for resources that other processes are
holding. This means they can never proceed.
To detect deadlocks, the system regularly checks if processes are waiting indefinitely.

How does Deadlock Detection Work?


There are two cases based on the type of resources:
1. If each resource type has only one instance:
o The system creates a Wait-for Graph, which shows which process is waiting for
which resource.
o It checks for cycles in the graph. If a cycle is found, a deadlock has occurred.
2. If resources have multiple instances:
o The system uses the Banker’s Algorithm (Safety Algorithm) to check if resources
are stuck and cannot be allocated.

a. Single Instance of Each Resource Type


When each resource has only one instance, a deadlock can happen if processes are waiting for each
other in a circular chain. To detect this, we use a Wait-for Graph.

How it Works:
 The Wait-for Graph is created by removing resource nodes from the Resource Allocation
Graph and keeping only process dependencies.

 If a process P1 is waiting for another process P2 to release a resource, an edge is drawn from
P1 → P2.

Wait-for Graph:
 It is a simplified version of the Resource Allocation Graph (RAG).

 In this graph, only processes are shown, and edges represent waiting relationships between
processes.
 If Process P1 is waiting for Process P2 to release a resource, an edge is drawn from P1 → P2.

How to Detect a Deadlock?


1. The system creates a Wait-for Graph by removing resource nodes from the original
allocation graph.
2. The system regularly checks for cycles in the Wait-for Graph.
 If there is a cycle in this graph, it means that the processes are waiting for each other in a loop,
leading to a deadlock. The detection process takes O(n²) time, where n is the number of
processes.

Example

 Process 1 (P1) has Resource 1 but is waiting for Resource 2, which is held by Process 2
(P2).
 Process 2 (P2) has Resource 2 but is waiting for Resource 1, which is held by Process 1
(P1).
 This forms a cycle: R1 → P1 → R2 → P2 → R1, confirming that a deadlock has occurred.
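Cycle detection in a wait-for graph can be sketched as a depth-first search (assuming the graph is stored as an adjacency matrix where waits_for[i][j] = 1 means Pi waits for Pj):

#define N 5                      // number of processes (illustrative)

int waits_for[N][N];             // waits_for[i][j] = 1 if Pi waits for Pj

// DFS colors: 0 = unvisited, 1 = on the current path, 2 = fully explored
static int color[N];

static int dfs(int i)
{
    color[i] = 1;                        // Pi is on the current DFS path
    for (int j = 0; j < N; j++) {
        if (!waits_for[i][j]) continue;
        if (color[j] == 1) return 1;     // back edge: a cycle, hence deadlock
        if (color[j] == 0 && dfs(j)) return 1;
    }
    color[i] = 2;
    return 0;
}

int has_deadlock(void)
{
    for (int i = 0; i < N; i++) color[i] = 0;
    for (int i = 0; i < N; i++)
        if (color[i] == 0 && dfs(i)) return 1;   // some cycle exists
    return 0;                                    // no cycle: no deadlock
}

With an adjacency matrix each edge check is constant time, so the whole scan costs O(n²), matching the bound quoted above.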

b. Multiple instances of each resource type

When a system has multiple instances of a resource type, the traditional wait-for graph (which works
for single-instance resources) is not sufficient for deadlock detection and resource management.
Instead, more sophisticated algorithms and data structures are used.

Key Data Structures:


1. Available:
o Indicates how many instances of each resource type are currently available.
o It is an array of length m (where m represents the number of different resource
types).
2. Allocation:
o Represents how many instances of each resource type have been allocated to each
process.
o It is an n × m matrix (where n is the number of processes and m is the number of
resource types).
o Each entry in this matrix represents the number of instances of a particular resource
type currently allocated to a process.
3. Request Matrix
o Represents how many more instances of each resource a process needs.
o It is an n × m matrix that stores resource requests made by processes.
o If Request[i][j] = k, it means process Pi is requesting k more instances of resource type Rj.

Algorithm:
Step 1:
Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, ..., n - 1.
If Allocation[i] is not equal to 0, then Finish[i] = false; else Finish[i] = true
Step 2:
Find an index i such that both
Finish[i] == false
Request[i] <= Work
If no such i exists then go to step 4.
Step 3:
Perform the following:
Work = Work + Allocation[i]
Finish[i] = true
Go to step 2.
Step 4:
If Finish[i] == false for some i, 0 <= i < n, then it means the system is in a deadlocked state.

Moreover, if Finish[i] == false, then process Pi is deadlocked.

Handling Multiple Instances:


 The Banker’s Algorithm is used for deadlock avoidance when multiple instances exist.

 The Deadlock Detection Algorithm checks whether a set of processes is stuck in a circular
wait, preventing system progress.

Usage of Deadlock Detection Algorithm


The frequency of using a deadlock detection algorithm depends on:

1. How often deadlocks happen


2. How badly deadlocks affect other processes

Ways to Use Deadlock Detection:


1. Frequent Checking:
o If deadlocks happen often, the system should run the algorithm frequently.
o This prevents resource blocking and keeps processes running smoothly.
2. On Every Request:
o Run the detection algorithm whenever a process requests a resource.
o Helps identify deadlocks immediately.
o But this slows down the system because checking every request takes time.
3. At Fixed Time Intervals:
o Run the algorithm every 30 minutes or so.
o Reduces system load but might delay deadlock detection.

Deadlock Recovery
When a deadlock happens, the system must find a way to recover from it. There are two main ways
to do this:

1. Process Termination (Ending Processes)


This method removes processes to break the deadlock. There are two ways to do this:

a. Terminate all deadlocked processes


 This immediately removes all processes stuck in the deadlock.

 It fixes the problem quickly.


 But it may waste CPU time and progress if some processes were running for a long time.
b. Terminate one process at a time
 Instead of stopping everything, we remove one process at a time until the deadlock is gone.

 This is a slower method because after stopping each process, we need to check again if the
deadlock is fixed.
 But it helps in reducing data loss compared to stopping all processes at once.
Summary: Either stop all deadlocked processes quickly or remove them one by one slowly to fix
the issue.

2. Resource Preemption (Taking Back Resources)


Instead of stopping processes, this method takes resources from some processes and gives them to
others.

Steps involved:
1. Choosing which process loses its resources
o The system decides which process should give up resources based on:
 How many resources it holds
 How much CPU time it used
 How important the process is
2. Rollback (Restarting the process if needed)
o If a process loses its resources, it cannot continue normally.
o The system may need to restart the process from a previous checkpoint or
completely restart it.
3. Avoid starvation (Fairness in preemption)
o The system must ensure that the same process does not always lose resources.
o If a process is always chosen to give up resources, it will never finish (this is called
starvation).
Summary: Instead of stopping processes, this method moves resources around carefully to fix the
deadlock.

*********************************************************