
Operating System (CS1103)

Unit-3
Concurrency
Concurrency is the ability of a system to execute multiple tasks or processes simultaneously
without affecting the final outcome. It allows different parts of a program (or multiple programs)
to execute independently while still coordinating to produce a correct final outcome.
Concurrency in Programming
Programming languages and operating systems provide various mechanisms to handle
concurrency, such as:
 Threads: Lightweight processes that can run concurrently.
 Processes: Independent execution units with separate memory space.
 Synchronization Mechanisms: Mutex locks, semaphores, and monitors to avoid race
conditions.
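
For example, a minimal sketch of two threads created with POSIX threads; the function name worker and the printed messages are illustrative, not part of these notes:

#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function concurrently with the other. */
void *worker(void *arg) {
    const char *name = (const char *)arg;
    for (int i = 0; i < 3; i++)
        printf("%s: step %d\n", name, i);   /* output from the two threads may interleave */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);                 /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}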

Difference Between Concurrency and Parallelism

 Definition: Concurrency is the ability to handle multiple tasks by switching between them; parallelism is the ability to execute multiple tasks simultaneously.
 Execution: Concurrent tasks run in an interleaved manner (one at a time, but switching quickly); parallel tasks run at the same time on multiple processors/cores.
 System Requirement: Concurrency can happen on a single-core CPU (via time slicing); parallelism requires a multi-core or multi-processor system.
 Main Challenge: Concurrency must manage shared resources, avoid race conditions, and ensure fairness; parallelism must synchronize tasks so that they do not interfere with each other.

Dining Philosophers Problem (Using Chopsticks)


The Setup:
 Imagine five philosophers sitting at a round table.
 Each philosopher has a bowl of noodles.
 Five chopsticks are placed between them—one chopstick between each pair of
philosophers.
 Rules of eating:

o Philosophers alternate between thinking and eating.
o A philosopher must have two chopsticks (left & right) to eat.
o A chopstick can only be held by one philosopher at a time.
o After eating, the philosopher must put both chopsticks back so that the
chopsticks become available to others.
o A philosopher can take the chopstick on their right or the one on their left as they
become available, but cannot start eating before getting both chopsticks.
o There are infinite noodles and infinite hunger, meaning philosophers will always
want to eat at some point.

The Problem:
How do we ensure that all philosophers get a fair chance to eat without deadlock (no progress) or
starvation (some never eat)? That is, ensure each philosopher can forever continue to alternate
between eating and thinking, assuming that no philosopher can know when others may want to
eat or think.

Why does this fail?


 If every philosopher picks up their left chopstick at the same time, they all wait forever
for their right chopstick, which never becomes available.
 This creates a deadlock—no one can eat because everyone is stuck waiting.
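
A sketch of this naive approach in C, using one binary semaphore per chopstick (the names philosopher and chopstick are chosen for this illustration). If all five threads take their left chopstick at the same time, the program can hang exactly as described above:

#include <pthread.h>
#include <semaphore.h>

#define N 5
static sem_t chopstick[N];                   /* one binary semaphore per chopstick */

static void *philosopher(void *arg) {
    int i = (int)(long)arg;
    while (1) {
        /* think ... */
        sem_wait(&chopstick[i]);             /* pick up left chopstick  */
        sem_wait(&chopstick[(i + 1) % N]);   /* pick up right chopstick (may wait forever) */
        /* eat ... */
        sem_post(&chopstick[(i + 1) % N]);   /* put both chopsticks back */
        sem_post(&chopstick[i]);
    }
    return NULL;
}

int main(void) {
    pthread_t p[N];
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (long i = 0; i < N; i++) pthread_create(&p[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++) pthread_join(p[i], NULL);
    return 0;
}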

A Working Solution (Chandy & Misra's Approach, 1984)


The Main Idea
 Philosophers need two chopsticks to eat.
 Chopsticks can be either clean or dirty.
 A philosopher must request chopsticks from neighbors when needed.
 Chopsticks are exchanged fairly to avoid deadlock.

Step 1: Initial Setup


 Each pair of neighboring philosophers shares a chopstick.
 At the start, each chopstick is assigned to the philosopher with the lower ID (number).

Example:
If we have 5 philosophers (P1, P2, P3, P4, P5), the chopsticks are assigned as:
 P1 gets chopstick 1
 P2 gets chopstick 2
 P3 gets chopstick 3
 P4 gets chopstick 4
 P5 gets chopstick 5

Step 2: Requesting a Chopstick

 If a philosopher wants to eat, they check if they already have both chopsticks.
 If they don’t have both, they request the missing one from their neighbor.
 The neighbor only gives the chopstick if it is "dirty".

Why?

 If the chopstick is dirty, it means the neighbor already ate and should pass it.
 If the chopstick is clean, the neighbor keeps it because they haven't eaten yet.

Step 3: Eating Process

 Once a philosopher has both chopsticks, they start eating.


 While eating, they ignore any chopstick requests from other philosophers.
 After finishing their meal, they mark both chopsticks as dirty.

Step 4: Returning Chopsticks

 After eating, if a philosopher receives a request for a chopstick, they clean it and give
it to the requester.

Why This Works

 No Deadlock: Philosophers do not all hold one chopstick forever. If they finish eating,
they return chopsticks fairly.
 No Starvation: Every philosopher eventually gets both chopsticks and eats.
 Fair Resource Sharing: A philosopher can’t hold onto chopsticks indefinitely.

Understanding Process Interactions & Concurrency

Types of Process Interactions

1. Processes that don’t know about each other


o These processes work independently and don’t interact.
2. Processes that are indirectly aware of each other
o These processes don’t directly communicate but share some common resource
(like a file or memory space).
3. Processes that are directly aware of each other
o These processes communicate and work together for a common task.

Important Terms in Concurrency

1. Atomic Operation: A sequence of one or more statements that appears to be indivisible;
i.e., no other process can see an intermediate state or interrupt the operation.
2. Race Condition: A race condition is a situation that occurs in a multi-threaded or multi-
process system, where two or more processes/threads try to access and modify shared
data at the same time, and the final result depends on the order in which they execute.
This can lead to unexpected and incorrect results.
3. Critical Section: A part of a program where shared resources (like memory or files) are
accessed. Only one process should use it at a time to prevent errors.
4. Deadlock: A situation where two or more processes are stuck, each waiting for the other
to do something.
5. Livelock: Similar to deadlock, but instead of being stuck, the processes keep changing
states without actually making any progress.
6. Mutual Exclusion: Only one process at a time can access a shared resource to prevent
conflicts.
7. Starvation: When a process is always ready to run but never gets chosen by the system,
so it never gets executed.

Errors Due to Concurrent Execution

When multiple processes work together and execute simultaneously without proper coordination,
errors can happen. These errors are mainly due to unpredictable timing (also called timing
errors).

 The mistake isn't caused by a single process; instead, it's due to how the processes are
interleaved (mixed together during execution).
 Each process might work fine on its own, but when executed together, they may interfere
with each other, causing incorrect results.

For example, if two bank clerks update the same account balance at the same time without
coordination, they might overwrite each other’s changes, leading to incorrect balances.

Mutual Exclusion

Mutual exclusion ensures that only one process at a time can access a shared resource (such as a
variable or a printer). This prevents data corruption and ensures system reliability.

Critical Section (CS)

A critical section is a part of a program that accesses shared resources. It must be executed
without interruption to avoid errors.

How It Works:

 ENTER CS: A process enters the critical section and starts executing.
 EXECUTE CS: It modifies shared data while other processes wait.
 EXIT CS: The process finishes and allows the next process to enter.

Example of a Critical Section (CS):

LOAD Rx, Count ; Load the shared variable Count into register Rx

INC Rx ; Increment the value of Rx

STORE Count, Rx ; Store the updated value back into memory

If multiple processes execute this code without synchronization, they may read the same value
before updating it, causing incorrect results.
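
The same race can be demonstrated in C: two threads increment a shared counter without any synchronization, and some updates are lost. The variable and function names here are illustrative:

#include <pthread.h>
#include <stdio.h>

static long count = 0;               /* shared variable, like Count above */

static void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        count++;                     /* compiles to a load / increment / store sequence */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %ld\n", count);  /* often less than 200000, because updates overwrite each other */
    return 0;
}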

Benefits of Mutual Exclusion

1. Prevents Data Corruption: Only one process modifies a shared variable at a time,
avoiding inconsistent values.
2. Ensures Proper Resource Use: Shared resources like printers won’t mix outputs from
different users.
3. Avoids Unpredictable Behavior: Ensures processes execute in an orderly manner.

Example: If two print jobs (one from a PDF and another from MS Word) run without mutual
exclusion, their contents may get mixed up on the same page.

Principles of Mutual Exclusion

To be effective, mutual exclusion must:

 Ensure only one process enters the critical section at a time.


 Work regardless of process speed or priority.

 Prevent deadlock (where no process can proceed) and starvation (where some processes
never get access).
 Ensure processes don’t wait indefinitely to access the resource.

Mutual Exclusion: Design Requirements

A good mutual exclusion mechanism must:

 Allow only one process to enter CS at a time.


 Ensure halted processes do not block others.
 Prevent deadlock or starvation.
 Allow quick access if no one else is in the CS.

Stages of Mutual Exclusion

Every process follows these three steps while accessing a shared resource:

 Negotiation Protocol → Decides which process can enter CS (Winner proceeds).


 Critical Section → The chosen process executes the CS (Exclusive access).
 Release Protocol → The process exits, allowing others to enter (Ownership
relinquished).

Semaphores – A Tool for Mutual Exclusion

A semaphore is a special variable (signaling mechanism) used to control access to shared
resources in a concurrent system.

 Only one process can access the semaphore at a time.


 Others must wait until it is released.
 Before accessing a resource, a process must acquire the semaphore.
 After finishing, it must release the semaphore so others can access the resource.

It is the application’s responsibility to correctly use semaphores to manage shared resources.

A Semaphore contains:

 A counter (integer)
 A waiting queue of processes
 Two operations:

o wait(S)
o signal(S)

wait(S):
while (S <= 0) do ; // busy wait
S := S - 1; // acquire

signal(S):
S := S + 1; // release

These operations must be atomic (indivisible) to avoid race conditions.

Usage for Critical Section (CS) Access

Entry Protocol:

 Process calls wait(S) → If S > 0, it enters the CS.


 If S == 0, the process waits.

Exit Protocol:

 Process finishes CS → calls signal(S) → S increments.


 One of the waiting processes can now proceed.

Example: Mutual Exclusion

Assume S = 1 (Binary Semaphore)

while (true) {
// non-critical code
wait(S); // Enter CS
// critical section
signal(S); // Exit CS
// remaining code
}
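
A runnable sketch of the same pattern using POSIX semaphores, where sem_wait/sem_post play the roles of wait(S)/signal(S); the shared variable is illustrative:

#include <pthread.h>
#include <semaphore.h>

static sem_t S;                      /* binary semaphore protecting the critical section */
static int shared_count = 0;         /* shared data (name chosen for this sketch) */

static void *task(void *arg) {
    for (int i = 0; i < 1000; i++) {
        /* non-critical code */
        sem_wait(&S);                /* wait(S): enter CS  */
        shared_count++;              /* critical section   */
        sem_post(&S);                /* signal(S): exit CS */
        /* remaining code */
    }
    return NULL;
}

int main(void) {
    sem_init(&S, 0, 1);              /* S = 1 -> binary semaphore */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}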

Semaphore for Event Synchronization

Used when one process must wait for an event in another.

Scenario: P2 must execute s2 after P1 executes s1.

Process P1: s1; signal(sync);
Process P2: wait(sync); s2;

Initialize sync = 0.
signal() and wait() are in different processes.
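
A sketch of this signalling pattern with a POSIX semaphore and two threads standing in for P1 and P2 (the thread and variable names are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t sync_sem;               /* the "sync" semaphore, initialized to 0 */

static void *P1(void *arg) {
    printf("s1\n");                  /* statement s1 */
    sem_post(&sync_sem);             /* signal(sync) */
    return NULL;
}

static void *P2(void *arg) {
    sem_wait(&sync_sem);             /* wait(sync): blocks until P1 has signalled */
    printf("s2\n");                  /* statement s2 runs only after s1 */
    return NULL;
}

int main(void) {
    sem_init(&sync_sem, 0, 0);       /* sync = 0 */
    pthread_t t1, t2;
    pthread_create(&t2, NULL, P2, NULL);   /* start P2 first to show that it waits */
    pthread_create(&t1, NULL, P1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}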

Operational Flow (for the mutual-exclusion example above, with S = 1):

1. P1 calls wait(S) → S becomes 0 → enters CS.


2. P2 calls wait(S) → waits (S = 0).
3. P1 exits CS → calls signal(S) → S = 1.
4. P2 proceeds into CS.

Types of Semaphores

1. Binary Semaphore (Mutex)

 Value: 0 (Busy) or 1 (Free)


 Used to allow only one process at a time.

2. Counting Semaphore

 Value: >= 0
 Used when multiple instances of a resource are available.

E.g., 3 printers → semaphore initialized to 3.
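
A small sketch of the printer example with a counting semaphore initialized to 3 (the names are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t printers;               /* counting semaphore: 3 identical printers */

static void *print_job(void *arg) {
    sem_wait(&printers);             /* acquire one printer (blocks if all 3 are busy) */
    printf("job %ld is printing\n", (long)arg);
    sem_post(&printers);             /* release the printer */
    return NULL;
}

int main(void) {
    sem_init(&printers, 0, 3);       /* initial value 3 = number of printer instances */
    pthread_t t[5];
    for (long i = 0; i < 5; i++) pthread_create(&t[i], NULL, print_job, (void *)i);
    for (int i = 0; i < 5; i++)  pthread_join(t[i], NULL);
    return 0;
}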


Queue-based Semaphore

To avoid busy waiting, semaphores can use blocking queues.

wait(S):
if (S <= 0)
suspend process;
else
S--;

signal(S):
S++;
if (any process is waiting)
resume one;

Implementation Notes

 Initial value of 1 for a CS semaphore ensures mutual exclusion.


 Semaphores must be used carefully to avoid:
o Deadlocks (especially when using multiple semaphores)
o Starvation (if a process waits too long)

Criticisms on Semaphores

 Semaphores are low-level and error-prone if not used carefully.


 A missing or misplaced semaphore may crash the whole program.
 Hard to debug due to lack of structure.
 Modern high-level languages do not rely only on semaphores.
 Not ideal for real-time systems where predictability is crucial.

Problems with Semaphores

 Accidental Release: Any process can signal a semaphore, even if it did not acquire it.
 Recursive Deadlock: If a process calls wait() on a semaphore it already holds, it waits forever.
 Process-Death Deadlock: If a process holding a semaphore crashes, the semaphore is never released.
 Priority Inversion: Can happen when a low-priority process holds a semaphore needed by a high-priority one (to be covered later).


Mutex (Mutual Exclusion Lock)

A Mutex is like a binary semaphore but with ownership:

 Only the owner can release it.


 OS enforces this, preventing accidental misuse.

Advantages over Semaphore

 Accidental Release: Not allowed; only the owner can unlock.
 Recursive Deadlock: The OS detects it and can return an error.
 Process-Death Deadlock: The OS notifies waiting processes if the owner crashes.
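
A minimal sketch with a POSIX mutex protecting a shared variable (the variable name is illustrative). Only the thread that locked the mutex is supposed to unlock it; an error-checking mutex (PTHREAD_MUTEX_ERRORCHECK) can report an unlock attempt by a non-owner:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int balance = 0;              /* shared data (name chosen for this sketch) */

static void *deposit(void *arg) {
    pthread_mutex_lock(&lock);       /* only the locking thread may later unlock */
    balance += 100;                  /* critical section */
    pthread_mutex_unlock(&lock);     /* unlocking from another thread is an error */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}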

Quiz1: Why should an error message be given to all the waiting processes while waking them
all up, when a process owning a Mutex dies while owning it?

a) Just to inform the next process which is going to own the Mutex and going to enter the CS
about the past event.
b) All the waiting processes are woken up so that all of them can enter the CS without any
restrictions.
c) Since the process owning the Mutex has died while inside the CS it could have left the CS in
an unstable state. All the waiting processes need to be informed, so that they take any
corrective action based on what the application warrants.
d) All the waiting processes can work together to restart the process which got killed while
owning the Mutex.

Correct Answer:
(c) - The CS may be in an unstable state; others must take corrective action.

Quiz2: Can a Mutex be used to synchronize occurrence of an event between two cooperating
processes?

a) Yes, similar to Semaphore, mutex can also be used to protect CS as well as synchronize
event between two processes.
b) No, it is not possible because OS knows which process is owning the Mutex and it cannot be
released by another process.
c) Yes, it is possible for a process that is not the current owner to release the Mutex,
because the application is using it only for synchronization and not for a CS.
d) No, the OS does not have the knowledge of what is the purpose for which an application is
using the Mutex.

Correct Answers:
(b) OS restricts unlocking by non-owners.
(d) OS doesn’t know how application intends to use it.

Key Insight: Mutex is strictly for mutual exclusion, not event synchronization.

Why Mutex lock is not used for Synchronization

 Mutex has strict lock/unlock pairing and ownership rules, which make it unsuitable for
event synchronization.
 It is not possible for another process to release it for signaling purposes.
 OS does not distinguish between usage types (e.g., signaling vs. locking).
 Keeps implementation simple and clean. (Doesn’t confuse mutual exclusion with
synchronization)

Spinlocks (Busy-Wait Locks)

What is a Spinlock?

A spinlock is a type of lock used in multithreaded or multiprocessing environments where a
thread "spins" in a loop, repeatedly checking if the lock is available.

 It does not yield the CPU or go to sleep.


 It keeps actively checking (busy-waiting) until the lock is released.

This makes it lightweight and fast—but only under certain conditions.

How Spinlock Works – Step by Step

1. A thread tries to acquire the spinlock.


2. If the lock is already held by another thread:
o It does not block or go to sleep.
o Instead, it loops continuously, checking the lock.
3. Once the lock is free:
o The thread acquires the lock and enters the critical section.
4. After finishing the critical section:
o It releases the lock, allowing other threads to acquire it.
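
A minimal spinlock sketch built on a C11 atomic flag; the type and function names are chosen for this illustration and are not a library API:

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock_t;

static void spin_lock(spinlock_t *s) {
    /* test-and-set in a loop: keep spinning (busy-waiting) until the flag was clear */
    while (atomic_flag_test_and_set_explicit(&s->locked, memory_order_acquire))
        ;                            /* do nothing: burn CPU until the lock becomes free */
}

static void spin_unlock(spinlock_t *s) {
    atomic_flag_clear_explicit(&s->locked, memory_order_release);
}

int main(void) {
    static spinlock_t lk = { ATOMIC_FLAG_INIT };
    spin_lock(&lk);
    /* very short critical section */
    spin_unlock(&lk);
    return 0;
}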

Real-world analogy

Imagine you're waiting for a restroom:

 You knock, and it's occupied.


 Instead of going and sitting down or waiting in a queue,
 You keep knocking every second to see if it's free.

If the person inside takes too long — you're wasting time and energy. But if they come out
quickly — your aggressive waiting pays off.

That's what spinlocks do.

Why Use Spinlocks?

Spinlocks are useful in short-duration critical sections, especially when:

 Lock-hold time is very short (e.g., few CPU cycles).


 Context switching (saving thread state, switching stack, scheduler invocation, etc.) is
more expensive than just spinning for a brief moment.

When Not to Use Spinlocks

Avoid spinlocks if:

 Lock is held for a long time.


 You're on a single-core system – the spinning thread prevents the lock holder from
running

In such cases, use mutexes or semaphores instead.

Bounded Buffer Problem (Producer-Consumer)

Problem Statement

 Two processes:
o Producer: Produces data and adds it to a shared buffer.
o Consumer: Takes data from the buffer and processes it.
 Both run in infinite loops.
 The buffer has a fixed size (N), hence the term bounded buffer.

Why Synchronization Is Needed

 If buffer is full, the producer must wait.


 If buffer is empty, the consumer must wait.
 Without synchronization, there can be:
o Data inconsistency
o Overflows / Underflows
o Race conditions

Implementation Using Sleep() and Wakeup()

Code:

Initially count = 0;

Producer()
{
while (1)
{
produce_item();
if (count == N) sleep(); // Wait if buffer is full
count++;
if (count == 1) wakeup(consumer); // Wake up consumer if it was sleeping
}
}

Consumer()
{
while (1)
{
if (count == 0) sleep(); // Wait if buffer is empty
consume_item();
count--;
if (count == N - 1) wakeup(producer); // Wake up producer if space created
}
}

The Lost Wakeup Problem

Scenario:

1. Consumer checks count == 0 and is about to sleep.


2. Producer adds an item and calls wakeup(consumer).
3. But consumer was not sleeping yet, so the wakeup is lost.
4. Now consumer goes to sleep forever, even though the buffer is not empty!

Cause:

 The check (count == 0) and sleep() are not atomic (they are two distinct steps).
 There’s a race condition between the test and the action.


Solution: Use Semaphores for Synchronization

Semaphores make operations atomic and allow safe inter-process communication.

Required Semaphores:

 mutex: Ensures mutual exclusion (value = 1)


 emptyCount: Tracks empty slots in buffer (value = N)
 fullCount: Tracks full slots in buffer (value = 0)

Semaphore Initialization:
mutex = 1
emptyCount = N
fullCount = 0

Producer Code with Semaphores:

do {
produce_item();
wait(emptyCount); // Wait if buffer is full
wait(mutex); // Lock critical section

add_item_to_buffer();

signal(mutex); // Unlock critical section


signal(fullCount); // Signal consumer
} while (true);

Consumer Code with Semaphores:

do {
wait(fullCount); // Wait if buffer is empty
wait(mutex); // Lock critical section

remove_item_from_buffer();

signal(mutex); // Unlock critical section


signal(emptyCount); // Signal producer

consume_item();
} while (true);


Why mutex is Necessary?


 When 0 < buffer < N, both producer and consumer may be active.
 They may try to access shared variables at the same time.
 mutex ensures only one process modifies the buffer at a time → avoids race conditions.

Event Sequencing

 Buffer full → the producer should block.
 Buffer empty → the consumer should block.
 0 < buffer < N → both can proceed, protected by mutex.

Danger of Wrong Semaphore Order

 After acquiring the mutex, the producer can get blocked on wait(emptyCount) because the buffer is full.
 When the consumer gets a chance to run, it sees that the buffer is full of data and tries to
acquire the mutex to take an item out of the buffer, but it cannot, because the producer is
holding the mutex while blocked.
 The result is a deadlock.
 The sequence of mutex/semaphore calls is therefore critical.

Always:
 Acquire semaphores in correct order.
 Release semaphores properly.

Use of Semaphores in OS (Interrupt Handling)

Device Manager waits (P(s)) until interrupt arrives. Interrupt Handler executes and signals
(V(s)). Device Manager wakes up and processes data.

What Is the Readers-Writers Problem?

The Readers-Writers Problem deals with managing concurrent access to a shared
resource—usually a database or a file—by multiple processes:

 Readers can read simultaneously without affecting each other.


 Writers need exclusive access to write data correctly (i.e., no one else should read or
write at the same time).


The challenge is to coordinate access to avoid:

 Data inconsistency (if reading while writing).


 Wasted time (if unnecessarily blocking readers).
 Starvation of either readers or writers.
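
A sketch of the classic readers-preference solution with two semaphores; the names wrt, mutex, and read_count follow the usual textbook convention. Note that this version can starve writers, which is exactly what the writer-priority variant described next addresses:

#include <pthread.h>
#include <semaphore.h>

static sem_t wrt;                    /* exclusive access for writers (taken by first/last reader) */
static sem_t mutex;                  /* protects read_count */
static int read_count = 0;           /* number of readers currently reading */

static void *reader(void *arg) {
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)             /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... read the shared data (many readers may be here at once) ... */

    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)             /* last reader lets writers in again */
        sem_post(&wrt);
    sem_post(&mutex);
    return NULL;
}

static void *writer(void *arg) {
    sem_wait(&wrt);                  /* exclusive access */
    /* ... write the shared data ... */
    sem_post(&wrt);
    return NULL;
}

int main(void) {
    sem_init(&wrt, 0, 1);
    sem_init(&mutex, 0, 1);
    pthread_t r, w;
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(r, NULL);
    pthread_join(w, NULL);
    return 0;
}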

Solution: Writer Priority

To prevent writer starvation, modify logic so that:

 Once a writer is waiting, no new readers are allowed to enter.


 Only existing readers can finish.

Deadlocks

A deadlock is a situation where multiple processes are waiting for resources that are held by
other processes, creating a cycle of dependency that prevents any process from proceeding.

 Each process in the system requests resources to execute.


 If a resource is already assigned to another process, the requesting process must wait.
 If a circular wait condition occurs, processes get stuck indefinitely, leading to deadlock.

In simple words, each process waits for a resource that has been assigned to another process. No
process can execute, because the resource it needs is held by some other process that is itself
waiting for yet another resource to be released. None of the processes makes progress; they are all
waiting. The system appears unresponsive because all the involved processes are blocked.

Example Scenario:

Assume three processes P1, P2, and P3 and three resources R1, R2, and R3:

 P1 holds R1 and waits for R2.
 P2 holds R2 and waits for R3.
 P3 holds R3 and waits for R1.

A cycle forms, preventing any process from executing further.

Real-life Example:

Two students need a pen and paper to write a note:

 Student A holds the pen but needs paper.


 Student B holds the paper but needs the pen.

Since each is waiting for the other, neither can proceed, leading to a deadlock.

Deadlock Operations

Every process follows these stages when using resources:

1. Request: The process asks for a resource. If unavailable, the process must wait.
2. Use: Once acquired, the process operates on the resource.
3. Release: The process frees the resource after use, allowing other processes to access it.

Necessary Conditions for Deadlocks (Characteristics)

A deadlock can occur only if all the following four Coffman conditions hold simultaneously:

1. Mutual Exclusion
o Definition: A resource cannot be used by more than one process simultaneously.
o Example:
 Pen and paper are non-shareable resources.
 Student A holds the pen, so Student B cannot use it.
 Student B holds the paper, so Student A cannot use it.
2. Hold and Wait
o Definition: A process holds at least one resource while waiting to acquire
additional resources held by other processes.
o Example:
 Student A holds the pen and waits for the paper.
 Student B holds the paper and waits for the pen.
 Neither releases their current resource until they acquire the missing one.
3. No Preemption
o Definition: Resources cannot be forcibly taken away from a process; they must be
released voluntarily.
o Example:
 Student A cannot snatch the paper from Student B.
 Student B cannot snatch the pen from Student A.

 Resources (pen/paper) are only released when the student chooses to do
so.
4. Circular Wait
o Definition: A circular chain of processes exists where each process waits for a
resource held by the next process in the chain.
o Example:
 Student A waits for Student B’s paper.
 Student B waits for Student A’s pen.
 This creates a cycle of dependencies, preventing both from proceeding.

Methods for Handling Deadlocks

There are mainly four methods for handling deadlocks:

1. Deadlock Ignorance
2. Deadlock Prevention
3. Deadlock Avoidance
4. Detection and Recovery

1. Deadlock Ignorance

 The most commonly used method.


 Assumes that deadlocks do not occur and relies on users restarting processes if necessary.
 Used because handling deadlocks is expensive and complex.
 A lot of code would need to be changed, which would reduce performance, so for less critical
jobs deadlocks are simply ignored.
 Ostrich Algorithm is used to ignore deadlocks.
 Used in operating systems like Windows and Linux.

2. Deadlock Prevention

To prevent deadlocks, the system ensures that at least one of the four necessary conditions never
holds, i.e., at least one condition is always false.

Methods of Deadlock Prevention

1. Eliminating Mutual Exclusion

 Some resources (e.g., read-only files) can be shared among multiple processes,
removing the need for mutual exclusion.
 However, most resources (e.g., printers, CPU, memory) require mutual exclusion,
making this method impractical in many cases.


2. Eliminating Hold and Wait

 A process must request all the required resources at once, before execution begins.
If the resources are not available, the process must wait. Alternatively, a process may
request resources only when it is holding none.
 This leads to low resource utilization and possible starvation.

3. Eliminating No Preemption

 If a process holding resources requests another unavailable resource, it must


release all held resources.
 The process restarts only when it can acquire all required resources.
 Can cause system inefficiency due to frequent rollbacks

4. Eliminating Circular Wait


1. Imposes an ordering on resource requests (e.g., processes must request resources
in a predefined sequence).
2. Example: If resources are ordered as R1 < R2 < R3, a process can request R2
only if it is not holding R3.
3. Requires system-wide agreement on resource ordering, which may not be
practical in all scenarios.

 Example: Printer (R1) and Scanner (R2)


 Scenario:
 Process A needs Printer (R1) and Scanner (R2).
 Process B needs Scanner (R2) and Printer (R1).
 Deadlock Risk Without Ordering:
 Process A holds R1 and waits for R2.
 Process B holds R2 and waits for R1.
 Circular wait → deadlock.
 With Resource Ordering:
 Both processes must request R1 first (R1 < R2).
 Process A requests and acquires R1.
 Process B requests R1 but must wait.
 Process A requests R2, completes its task, and releases both
resources.
 Process B then acquires R1, then R2, and completes execution.

Result: No deadlock! The strict ordering breaks the cycle.
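
A sketch of the same ordering rule with two POSIX mutexes standing in for the printer and scanner (the names are illustrative): both threads lock R1 before R2, so no circular wait can form:

#include <pthread.h>

/* Fixed ordering: R1 (printer) is always acquired before R2 (scanner). */
static pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;   /* printer */
static pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;   /* scanner */

static void *process_A(void *arg) {
    pthread_mutex_lock(&R1);         /* lower-ordered resource first */
    pthread_mutex_lock(&R2);
    /* ... use printer and scanner ... */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

static void *process_B(void *arg) {
    /* B also requests R1 first, even though it "wants" the scanner first;
       the common ordering is what breaks the circular wait. */
    pthread_mutex_lock(&R1);
    pthread_mutex_lock(&R2);
    /* ... use scanner and printer ... */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, process_A, NULL);
    pthread_create(&b, NULL, process_B, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}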

3. Deadlock Avoidance

Deadlock avoidance is a technique where the operating system dynamically analyzes resource
allocation to ensure that the system never enters a deadlocked state. Unlike deadlock
prevention, which restricts resource allocation upfront, avoidance allows more flexible
allocation but ensures that the system always remains in a safe state.

There are 3 states of the system: safe, unsafe, and starvation.

 Safe State: A system is in a safe state if resources can be allocated without causing a
deadlock.
 Unsafe State: If a system cannot allocate resources safely, it is in an unsafe state,
increasing the risk of deadlock; i.e., if resource allocation continues, a deadlock may occur.
 Starvation: A condition where a process waits indefinitely for resources because other
processes are given priority.

Methods for Deadlock Avoidance

a) Single instance of a resource type: Resource Allocation Graph


b) Multiple instances of a resource type: Banker's Algorithm

Resource Allocation Graph (RAG)

A Resource Allocation Graph is a graphical representation used to describe how
processes and resources interact in a system. It helps in detecting deadlocks and understanding
resource allocation. If adding a new resource request creates a cycle in the graph, the request is
denied to avoid deadlock.

Example: If P1 is assigned R1 and P2 is assigned R2, and then P1 requests R2 while P2
requests R1, this would form a cycle, indicating a potential deadlock. The OS would deny one
of these requests to avoid deadlock.

1. Graph Components

 Processes (P1, P2, …, Pn): Represented as circles.


 Resources (R1, R2, …, Rn): Represented as rectangles.
 Edges:
o Claim Edge (Pi → Rj): A process may request a resource.
o Request Edge (Pi → Rj): A process is waiting for a resource.
o Assignment Edge (Rj → Pi): A resource is allocated to a process.

A cycle in a single instance resource type means a deadlock exists.

Example: Single Instance Resource Type


 Processes: P1, P2
 Resources: R1, R2
 If P1 holds R1 and waits for R2, while P2 holds R2 and waits for R1
 A cycle forms → deadlock occurs.

Example: Multiple Instance Resource Type


 Processes: P1, P2, P3, P4
 Resources: R1 (2 instances), R2 (2 instances)
 If one instance of R2 is assigned to P1 and another to P4, and they are waiting for R1,
while P2 and P3 hold R1 but wait for R2, the system may still avoid deadlock depending
on allocation.


Drawback of RAG:

 Works only when one instance of each resource type exists.


 For multiple instances, the detection of cycles is more complex.

Banker's Algorithm

The Banker’s Algorithm is a deadlock-avoidance method: it ensures that a system remains in a
safe state by allocating resources only if they do not lead to a potential deadlock.

Data Structures for the Banker's Algorithm

Assuming there are n processes and m resource types, the following data structures are used:

1. Available: A vector of length m that indicates the number of available instances of each
resource.
o Available[j] = k means there are k instances of resource Rj available.
2. Max: An n x m matrix that defines the maximum demand of each process.
o Max[i, j] = k means process Pi may request at most k instances of Rj.
3. Allocation: An n x m matrix that shows the number of resources currently allocated to
each process.
o Allocation[i, j] = k means Pi currently holds k instances of Rj.
4. Need: An n x m matrix that represents the remaining resource need of each process.
o Need[i, j] = Max[i, j] – Allocation[i, j].
5. Finish: A Boolean vector of size n, initialized as false, indicating whether a process has
completed execution.

Steps in Banker’s Algorithm

The algorithm works in two phases:

1. Safety Algorithm (Checking if the state is safe)

 Maintain vectors for Available, Max, Allocation, and Need.


 Check if there exists a process whose Need can be satisfied with available resources.
 If such a process exists, assume it finishes and releases its resources.
 Repeat until all processes can finish.
 If all can finish, the state is safe, otherwise unsafe.
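
A sketch of this safety check in C, using the data structures listed above; the sizes and the example values in main are illustrative:

#include <stdbool.h>
#include <stdio.h>

#define NPROC 5   /* n processes (sizes chosen for this sketch) */
#define NRES  3   /* m resource types */

/* Safety algorithm: returns true if the current state is safe. */
bool is_safe(int available[NRES],
             int max[NPROC][NRES],
             int allocation[NPROC][NRES]) {
    int need[NPROC][NRES], work[NRES];
    bool finish[NPROC] = { false };

    for (int j = 0; j < NRES; j++) work[j] = available[j];
    for (int i = 0; i < NPROC; i++)
        for (int j = 0; j < NRES; j++)
            need[i][j] = max[i][j] - allocation[i][j];    /* Need = Max - Allocation */

    for (int done = 0; done < NPROC; ) {
        bool found = false;
        for (int i = 0; i < NPROC && !found; i++) {
            if (finish[i]) continue;
            bool can_run = true;                          /* is Need[i] <= Work ? */
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < NRES; j++)            /* Pi finishes, releases its allocation */
                    work[j] += allocation[i][j];
                finish[i] = true;
                found = true;
                done++;
            }
        }
        if (!found) return false;                         /* no runnable process left -> unsafe */
    }
    return true;                                          /* all processes could finish -> safe */
}

int main(void) {
    int available[NRES]            = {3, 3, 2};
    int max[NPROC][NRES]           = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int allocation[NPROC][NRES]    = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    printf("state is %s\n", is_safe(available, max, allocation) ? "safe" : "unsafe");
    return 0;
}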


2. Resource-Request Algorithm (Allocating resources safely)

 A process requests resources.


 Check if the request is within the Need of the process and within Available resources.
 Temporarily allocate resources and check if the system remains in a safe state.
 If safe, grant the request; otherwise, deny it.

Deadlock Recovery

When a system enters a deadlock state, it must recover using various techniques. Deadlock
recovery can be classified into two main approaches:

1. Resource Preemption and Rollback


2. Process Termination

1. Resource Preemption and Rollback

When a deadlock occurs, the system can recover by taking back allocated resources from one
or more processes and reassigning them to break the deadlock.

Resource Preemption

 Snatching resources from a process and allocating them to another process.


 The expectation is that the new process will complete sooner and release the resources
back.
 Challenges:
o Deciding which resource to preempt.
o Ensuring that preempting a resource does not cause starvation.
o Handling the process that lost the resource to prevent further issues.

Rollback to a Safe State

 The system tracks safe states before entering deadlock.


 The OS maintains checkpoints at various execution points.
 If deadlock occurs, the system rolls back to the last safe state and resumes execution
from there.
 Challenges:
o Requires checkpointing at regular intervals, which increases overhead.
o Might cause loss of computation if frequent rollbacks occur.

2. Process Termination

When resource preemption and rollback are not sufficient, the OS may terminate one or more
processes to recover from deadlock.

Kill a Process

 The OS selects a process to terminate, breaking the circular wait condition.


 A common strategy is to kill the process that has done the least work to minimize
computation loss.
 Challenges:

o Selecting the right process to kill.
o Ensuring the system remains stable after termination.

Kill All Processes

 Extreme approach: Terminate all processes in the system.


 This guarantees deadlock removal but results in inefficiency, as all processes must restart.
 Not recommended unless no better solution exists.
