
||JAI SRI GURUDEV||

S.J.C. INSTITUTE OF TECHNOLOGY, CHICKBALLAPUR


Department of Computer Science & Engineering

TUTORIAL-II

Sem: 3rd SEM Sub Name: OPERATING SYSTEMS [BCS303] Date: 25/01/2024

1. Explain Dining Philosopher’s problem.

The Dining Philosophers Problem is a classic synchronization problem in computer science that
demonstrates the challenges of resource sharing and the potential for deadlock in a multi-threaded
environment. In this problem, five philosophers are seated around a circular table, each with a bowl
of rice and a chopstick placed between each pair of adjacent philosophers. Philosophers alternate
between two states: thinking and eating. To eat, a philosopher needs two chopsticks: one to their left
and one to their right. When a philosopher becomes hungry, they try to pick up the chopsticks on
either side. If both chopsticks are available, they pick them up and begin eating. After eating, the
philosopher puts down both chopsticks and returns to thinking. The challenge lies in ensuring that
the philosophers can eat without deadlock or starvation. Deadlock could occur if all philosophers
simultaneously pick up one chopstick and wait for the second, causing the system to freeze. One
simple solution to this problem uses semaphores to represent the chopsticks. Each chopstick is
associated with a binary semaphore (either 0 or 1), which indicates whether the chopstick is
available. When a philosopher wants to eat, they must acquire the semaphores for both their left and
right chopsticks. If both are available (i.e., the semaphores are set to 1), the philosopher can eat, and
the semaphores are then set to 0 to indicate that the chopsticks are in use. Once the philosopher
finishes eating, the semaphores are released (set back to 1), allowing other philosophers to use the
chopsticks.

A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore; she
releases her chopsticks by executing the signal() operation on the appropriate semaphores.
Thus, the shared data are semaphore chopstick[5], where all the elements of chopstick are initialized
to 1. The structure of philosopher i is shown below:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    // think
} while (TRUE);

 The disadvantage is that it can create a deadlock. Suppose that all five philosophers become hungry
simultaneously and each grabs her left chopstick. All the elements of chopstick will now be equal to
0. When each philosopher tries to grab her right chopstick, she will be delayed forever.
 Several possible remedies to the deadlock problem are available.
 Allow at most four philosophers to be sitting simultaneously at the table.
 Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she
must pick them up in a critical section).
 Use an asymmetric solution; that is, an odd philosopher picks up first her left chopstick and then her
right chopstick, whereas an even philosopher picks up her right chopstick and then her left
chopstick.

The dining philosophers problem can also be solved using a monitor, with a deadlock-free approach.
Each philosopher's state (thinking, hungry, or eating) is tracked in an array state[5], and a
philosopher may start eating only if neither neighbor is eating. The pickup() function sets the
philosopher's state to hungry and checks whether both neighbors are not eating; if either neighbor is
eating, the philosopher waits. The putdown() function updates the state to thinking and notifies the
neighbors. Semaphores and condition variables control mutual exclusion and synchronization: a mutex
ensures exclusive access to the monitor, and a next semaphore manages process signaling. Condition
variables are implemented using x_sem for waiting and signaling, and a next_count tracks suspended
processes.
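A compact pseudocode sketch of this monitor, following the standard textbook formulation (the helper test() checks whether philosopher i may start eating):

monitor DiningPhilosophers {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    pickup(int i) {
        state[i] = HUNGRY;
        test(i);                    // try to acquire both chopsticks
        if (state[i] != EATING)
            self[i].wait();         // block until a neighbor signals
    }

    putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);          // let the left neighbor try to eat
        test((i + 1) % 5);          // let the right neighbor try to eat
    }

    test(int i) {
        if (state[(i + 4) % 5] != EATING &&
            state[i] == HUNGRY &&
            state[(i + 1) % 5] != EATING) {
            state[i] = EATING;
            self[i].signal();       // wake philosopher i if waiting
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}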

2. Show how a semaphore provides a solution to the reader-writer problem.


The Reader-Writer Problem is a classic synchronization problem in computer science where a
shared resource (like a database or file) is accessed by multiple processes. The challenge arises
when some processes are readers (which can read the resource without modifying it), and others are
writers (which modify the resource). The goal is to design a synchronization mechanism that allows
readers to read concurrently (since reading does not interfere with other readers) but prevents
writers from accessing the resource while any reader is reading or while another writer is writing.
1. Readers: Multiple readers can access the resource simultaneously as long as there are no
writers.
2. Writers: A writer needs exclusive access to the resource, meaning no readers or other writers
can access it while the writer is working.
Using Semaphores to Solve the Reader-Writer Problem
We can use semaphores, together with a shared counter, to control access to the shared resource.
Specifically, we need:
 A mutex to ensure mutual exclusion when updating the shared read_count variable.
 A read_count counter to keep track of the number of readers currently accessing the resource.
 A write_semaphore to ensure exclusive access for writers.
Variables Used:
mutex: A binary semaphore used to protect the read_count variable (which keeps track of the
number of readers).
read_count: An integer variable (not a semaphore) that tracks how many readers are currently
accessing the resource.
write_semaphore: A binary semaphore used to give exclusive access to writers.
Solution Outline Using Semaphores:
1. Readers:
 When a reader wants to read, it must first acquire the mutex semaphore (to safely update the
read_count variable).
 If the reader is the first to start reading (i.e., read_count is 0), it will acquire the write_semaphore,
preventing any writers from writing.
 After updating read_count, the reader can access the resource.
 When done, it decreases the read_count. If it is the last reader (i.e., read_count becomes 0), it will
release the write_semaphore.
2. Writers:
 When a writer wants to write, it must acquire the write_semaphore, ensuring that no other readers or
writers are accessing the resource.
 Once the writer has finished, it releases the write_semaphore to allow either readers or other writers
to proceed.
Pseudo code for the Reader-Writer Problem Using Semaphores
Reader Process:
wait(mutex);                 // Protect access to read_count variable
read_count++;                // Increment the number of readers
if (read_count == 1)         // If this is the first reader, lock the write_semaphore
    wait(write_semaphore);
signal(mutex);               // Release mutex to allow other readers

// Read the shared resource here

wait(mutex);                 // Protect access to read_count variable
read_count--;                // Decrement the number of readers
if (read_count == 0)         // If this is the last reader, release the write_semaphore
    signal(write_semaphore);
signal(mutex);               // Release mutex
Writer Process:
wait(write_semaphore);       // Acquire exclusive access
// Write to the shared resource here
signal(write_semaphore);     // Release exclusive access
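A minimal runnable C sketch using POSIX semaphores (the thread bodies and printf calls are illustrative; the synchronization mirrors the pseudocode above):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;            // protects read_count
sem_t write_semaphore;  // exclusive access for writers (and the first/last reader)
int read_count = 0;

void* reader(void* arg) {
    sem_wait(&mutex);
    if (++read_count == 1)
        sem_wait(&write_semaphore);   // first reader locks out writers
    sem_post(&mutex);

    printf("reading the shared resource\n");   // read here

    sem_wait(&mutex);
    if (--read_count == 0)
        sem_post(&write_semaphore);   // last reader lets writers back in
    sem_post(&mutex);
    return NULL;
}

void* writer(void* arg) {
    sem_wait(&write_semaphore);
    printf("writing the shared resource\n");   // write here
    sem_post(&write_semaphore);
    return NULL;
}

int main(void) {
    pthread_t r1, r2, w;
    sem_init(&mutex, 0, 1);
    sem_init(&write_semaphore, 0, 1);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL);
    pthread_join(w, NULL);
    pthread_join(r2, NULL);
    return 0;
}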

3. Explain the critical-section problem. What are the requirements that a solution to the
critical-section problem must satisfy?
The critical-section problem

 Consider a system consisting of n processes {P0, P1... Pn-1}. Each process has a segment of code,
called a critical section, in which the process may be changing the common variables, updating a
table, writing a file, and so on.
 The important feature of the system is that, when one process is executing in its critical section, no
other process is to be allowed to execute in its critical section. That is, no two processes are
executing in their critical sections at the same time.
 The critical-section problem is to design a protocol that the processes can use to cooperate.
 Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section.
 The critical section may be followed by an exit section.
 The remaining code is the remainder section.
 The general structure of a typical process Pi is sketched below.
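In place of the original figure, the standard structure (as given in the textbook) is:

do {
    entry section
        critical section
    exit section
        remainder section
} while (true);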

 A solution to the critical-section problem must satisfy the following three


requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can
be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter their
critical sections, then only those processes that are not executing in their remainder sections can
participate in the decision on which will enter its critical section next, and this selection cannot be
postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical section
and before that request is granted.
 We assume that each process is executing at a nonzero speed. We can make no assumption
concerning the relative speed of the n processes.
 Two general approaches are used to handle critical sections in operating systems:
1. Preemptive kernels
2. Nonpreemptive kernels

 A preemptive kernel allows a process to be preempted while it is running in kernel mode. It is
difficult to design for SMP architectures, since in these environments it is possible for two kernel-
mode processes to run simultaneously on different processors. It is more suitable for real-time
programming, as it will allow a real-time process to preempt a process currently running in the
kernel. A preemptive kernel may also be more responsive, since there is less risk that a kernel-
mode process will run for an arbitrarily long period before giving up the processor to waiting
processes.
 A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a
kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the
CPU. It is essentially free from race conditions on kernel data structures, as only one process is
active in the kernel at a time.

4. Describe the resource allocation graph


i) With deadlock ii) With a cycle but no deadlock.
A Resource Allocation Graph (RAG) is a directed graph used to represent the allocation of
resources to processes in a computer system. It helps in detecting deadlock and analyzing the
system's state by visually modeling the relationships between processes and resources.
In a Resource Allocation Graph:
 Processes are represented by nodes.
 Resources are represented by nodes (resource types).
 Edges are used to represent the relationships:
A request edge (P → R) indicates that a process P is requesting a resource R.
An assignment edge (R → P) indicates that a resource R is currently allocated to process P.
The primary purpose of the Resource Allocation Graph is to help detect deadlocks or potential
deadlocks in a system by analysing the presence of cycles and requests.
i) Resource Allocation Graph with Deadlock
A deadlock occurs in a system when there is a set of processes, each of which is waiting for a
resource held by another process in the set, causing the system to enter a state where no process can
proceed. In the context of the Resource Allocation Graph, a deadlock can be identified when there
is a cycle in the graph that involves processes and resources, where every process is waiting for a
resource held by another process in the cycle.
Example with Deadlock:
Consider a system with two processes, P1 and P2, and two resources, R1 and R2.
 P1 holds R1 and requests R2.
 P2 holds R2 and requests R1.
In the Resource Allocation Graph, the edges would look like this:
P1 → R2 (P1 requests R2)
P2 → R1 (P2 requests R1)
R1 → P1 (R1 is assigned to P1)
R2 → P2 (R2 is assigned to P2)
Graph Representation (the cycle):
P1 → R2 → P2 → R1 → P1
 Cycle Detection:
There is a cycle in the graph: P1 → R2 → P2 → R1 → P1.
Deadlock: This cycle means that P1 is waiting for R2, while P2 is waiting for R1. Neither process
can proceed, resulting in deadlock.
ii) Resource Allocation Graph with a Cycle but No Deadlock
A cycle in the Resource Allocation Graph does not always indicate a deadlock. If every resource type
has only a single instance, a cycle does imply deadlock; but when resource types have multiple
instances, a cycle is only a necessary condition for deadlock, not a sufficient one. If some process
holding an instance of a resource in the cycle is not itself waiting, it can finish and release that
instance, allowing a waiting process in the cycle to proceed.
Example with a Cycle but No Deadlock:
Consider two resources, R1 and R2, each with two instances, and three processes:
 P1 holds one instance of R1 and requests R2.
 P2 holds one instance of R2 and requests R1.
 P3 holds the second instance of R1 and the second instance of R2, and requests nothing further.
In the Resource Allocation Graph, the edges would look like this:
P1 → R2 (P1 requests R2)
P2 → R1 (P2 requests R1)
R1 → P1, R1 → P3 (the two instances of R1 are assigned to P1 and P3)
R2 → P2, R2 → P3 (the two instances of R2 are assigned to P2 and P3)
Graph Representation (the cycle):
P1 → R2 → P2 → R1 → P1
 Cycle Detection:
There is a cycle, P1 → R2 → P2 → R1 → P1, but P3 is not waiting for anything. When P3 finishes, it
releases one instance of R1 and one instance of R2; the released instance of R1 can be assigned to
P2 and the released instance of R2 to P1, so both processes can complete. The cycle is broken and no
deadlock occurs. This is an example of a cycle that does not result in deadlock.

5. Draw the Gantt chart and calculate the average waiting time and turnaround time for
the following snapshot of processes using i) FCFS ii) SJF iii) Priority iv) RR (quantum = 2 ms)
scheduling algorithms.

Process ID  Burst Time  Arrival Time  Priority
P1          6           0             3
P2          3           1             2
P3          1           2             1
P4          4           3             0
1. First-Come, First-Served (FCFS) Scheduling Algorithm
The FCFS scheduling algorithm executes processes in the order of their arrival times. The process
that arrives first is executed first, and each process runs to completion before the next one starts.
Given processes:
Process ID  Burst Time  Arrival Time
P1          6           0
P2          3           1
P3          1           2
P4          4           3
Steps for FCFS Scheduling:
 P1 arrives at time 0 and runs for 6 units of time.
 P2 arrives at time 1 and runs after P1, for 3 units of time.
 P3 arrives at time 2 and runs after P2, for 1 unit of time.
 P4 arrives at time 3 and runs after P3, for 4 units of time.
Gantt chart for FCFS:
| P1 | P2 | P3 | P4 |
0 6 9 10 14
Calculation of Waiting Time and Turnaround Time for FCFS:
Waiting Time: Waiting time for a process is the total time it spends waiting in the ready queue, i.e.,
the time from arrival to start of execution.

Turnaround Time: Turnaround time for a process is the total time from arrival to completion.
Process ID  Arrival Time  Burst Time  Start Time  Completion Time  Waiting Time  Turnaround Time
P1          0             6           0           6                0             6
P2          1             3           6           9                5             8
P3          2             1           9           10               7             8
P4          3             4           10          14               7             11
Average Waiting Time for FCFS = (0 + 5 + 7 + 7) / 4 = 19/4 = 4.75 ms
Average Turnaround Time for FCFS = (6 + 8 + 8 + 11) / 4 = 33/4 = 8.25 ms
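These numbers can be checked mechanically. A small C sketch that reproduces the FCFS figures (the arrival and burst arrays come from the table above; everything else is illustrative):

#include <stdio.h>

int main(void) {
    // Process data from the table, in arrival order
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {6, 3, 1, 4};
    int n = 4;

    int time = 0;
    double total_wait = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i];   // CPU idles until the process arrives
        int wait = time - arrival[i];               // waiting time = start - arrival
        int tat  = wait + burst[i];                 // turnaround = waiting + burst
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat  += tat;
        time += burst[i];                           // FCFS: run to completion
    }
    printf("Average waiting time    = %.2f ms\n", total_wait / n);
    printf("Average turnaround time = %.2f ms\n", total_tat / n);
    return 0;
}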

2. Shortest Job First (SJF) Scheduling Algorithm


The SJF scheduling algorithm executes the process with the smallest burst time first, among the
processes that are ready at the same time. If there are multiple processes with the same burst time,
FCFS is used to break the tie.
Steps for SJF Scheduling:
 Sort the processes by their burst time and execute the process with the shortest burst time.
 If two or more processes have the same burst time, use FCFS to break the tie.
 Note: The processes are considered "ready" when their arrival time is less than or equal to the
current time. The Gantt chart for SJF needs to carefully consider the processes' arrival times and
burst times.
Order of Execution for SJF (non-preemptive):
 At time 0, only P1 has arrived, so it runs first for 6 units.
 At time 6, P2, P3 and P4 are all waiting; P3 has the shortest burst time (1 unit), so it runs next.
 P2 (burst time 3) runs after P3.
 P4 (burst time 4) runs last.
Gantt Chart for SJF:
| P1 | P3 | P2 | P4 |
0 6 7 10 14
Waiting times (start − arrival): P1 = 0, P3 = 6 − 2 = 4, P2 = 7 − 1 = 6, P4 = 10 − 3 = 7.
Average Waiting Time for SJF = (0 + 6 + 4 + 7) / 4 = 17/4 = 4.25 ms
Turnaround times (completion − arrival): P1 = 6, P3 = 5, P2 = 9, P4 = 11.
Average Turnaround Time for SJF = (6 + 9 + 5 + 11) / 4 = 31/4 = 7.75 ms
3. Priority Scheduling Algorithm (Non-Preemptive)
In Priority Scheduling, the process with the highest priority (smallest priority number) is selected
first. Once a process starts, it runs to completion without interruption.
Steps for Priority Scheduling (priorities from the table: P1 = 3, P2 = 2, P3 = 1, P4 = 0):
 At time 0, only P1 has arrived, so it starts first and runs until time 6.
 By time 6, P2 (priority 2), P3 (priority 1) and P4 (priority 0) are all waiting; P4 has the
smallest priority number, so it runs next.
 P3 (priority 1) runs after P4 finishes at time 10.
 P2 (priority 2) runs last.
Gantt Chart for Priority Scheduling:
| P1 | P4 | P3 | P2 |
0 6 10 11 14
Waiting Time and Turnaround Time for Priority Scheduling:
Waiting times (start − arrival): P1 = 0, P4 = 6 − 3 = 3, P3 = 10 − 2 = 8, P2 = 11 − 1 = 10.
Average Waiting Time = (0 + 10 + 8 + 3) / 4 = 21/4 = 5.25 ms
Turnaround times (completion − arrival): P1 = 6, P4 = 7, P3 = 9, P2 = 13.
Average Turnaround Time = (6 + 13 + 9 + 7) / 4 = 35/4 = 8.75 ms
4. Round Robin (RR) Scheduling Algorithm (Time Quantum = 2 ms)
In Round Robin (RR) scheduling, each process is assigned a fixed time quantum (here, 2 ms). If a
process does not finish within its time quantum, it is preempted and placed at the back of the ready
queue. (Convention assumed below: a process that arrives while another runs joins the queue ahead
of the process preempted at the same instant.)
Steps for Round Robin Scheduling:
 At time 0, P1 runs for 2 ms (4 ms remaining) and is preempted; P2 and P3 have arrived by then,
so the queue is P2, P3, P1.
 P2 runs for 2 ms (1 ms remaining) and is preempted; P4 arrived at time 3, so the queue is
P3, P1, P4, P2.
 P3 runs for 1 ms and finishes at time 5 (its burst time is only 1 ms).
 P1 runs for 2 ms (2 ms remaining) and is preempted.
 P4 runs for 2 ms (2 ms remaining) and is preempted.
 P2 runs its remaining 1 ms and finishes at time 10.
 P1 runs its remaining 2 ms and finishes at time 12.
 P4 runs its remaining 2 ms and finishes at time 14.
Gantt Chart for Round Robin Scheduling (Quantum = 2 ms):
| P1 | P2 | P3 | P1 | P4 | P2 | P1 | P4 |
0 2 4 5 7 9 10 12 14
Waiting time = turnaround time − burst time: P1 = 14 − 6 = 8, P2 = 9 − 3 = 6, P3 = 3 − 1 = 2,
P4 = 11 − 4 = 7.
Average Waiting Time for RR = (8 + 6 + 2 + 7) / 4 = 23/4 = 5.75 ms
Average Turnaround Time for RR = (14 + 9 + 3 + 11) / 4 = 37/4 = 9.25 ms
6. Define the hardware instructions test(), set() and swap(). Give algorithms to
implement mutual exclusion with these instructions.
These hardware instructions are commonly used to implement mutual exclusion in concurrent
programming, where multiple processes must be prevented from simultaneously accessing shared
resources or critical sections. These operations are typically atomic, meaning they are executed in a
single, uninterruptible step, which is crucial for synchronizing access to shared resources.
1. test():
 The test() instruction is used to check the state of a variable (typically a lock or flag) without
modifying it.
 It returns a boolean value indicating whether a certain condition is met (e.g., whether the lock is
free or taken).
 Example usage: Testing if a lock is set (1) or not set (0).
2. set():
 The set() instruction is used to set a variable to a specific value, typically indicating that a resource
is in use or that a critical section is entered.
 It is often used to set the value of a lock (e.g., to mark the critical section as occupied).
 Example usage: Setting a flag (e.g., lock) to 1, indicating that a process has acquired the lock.
3. swap():
 The swap() instruction atomically swaps the values of two variables.
 This operation is typically used for atomic changes to variables, ensuring that a process can change
the state of a flag or resource without interference from other processes.
 Example usage: Swapping the value of a lock with a variable representing the lock’s state.

Algorithms for Mutual Exclusion Using test(), set(), and swap() Instructions
1. Peterson's Algorithm
Peterson's algorithm is a classic mutual exclusion algorithm that uses two shared flags and a turn
variable to ensure mutual exclusion between two processes. Note that, as written below, it does not
actually require swap() or any other special instruction: it relies only on simple atomic loads and
stores of the shared variables.
Peterson's Algorithm
flag[0] and flag[1] are the flags for the two processes (P0 and P1).
turn is a shared variable that indicates which process's turn it is to enter the critical section.
Algorithm: // Initialization
flag[0] = flag[1] = false; // Both processes are initially not interested
turn = 0; // Let process 0 go first
// Process 0 (P0)
do {
flag[0] = true; // P0 wants to enter the critical section
turn = 1; // Give priority to process 1
while (flag[1] && turn == 1); // P0 waits if P1 wants to enter and it's P1's turn
// Critical Section for P0
flag[0] = false; // P0 is done, set its flag to false
// Remainder Section for P0
} while (true);
// Process 1 (P1)
do {
flag[1] = true; // P1 wants to enter the critical section
turn = 0; // Give priority to process 0
while (flag[0] && turn == 0); // P1 waits if P0 wants to enter and it's P0's turn

// Critical Section for P1


flag[1] = false; // P1 is done, set its flag to false
// Remainder Section for P1
} while (true);
 The turn variable ensures mutual exclusion by giving each process a turn to enter the critical
section.
 The flag[i] (for process i) indicates whether process i is interested in entering the critical section. If
flag[0] = true, P0 wants to enter, and if flag[1] = true, P1 wants to enter.
 If both processes attempt to enter the critical section at the same time, the turn variable helps decide
who should go first, ensuring no simultaneous access to the critical section.
2. Mutual Exclusion Using test() and set() Instructions (Simple Lock Mechanism)
We can use the test() and set() instructions to implement a simple lock that ensures mutual
exclusion between processes. This mechanism is based on a busy-waiting approach where the
process continuously checks the lock status using the test() function.
Algorithm Using test() and set() (Simple Lock)
lock is a shared variable that represents the lock status. If lock = 0, the lock is free; if lock = 1, the
lock is taken.
Algorithm:
// Initialize lock to 0 (unlocked)
lock = 0;
// Process i (P0 or P1)
do {
while (test(lock) == 1); // Wait until the lock is free (test returns 0)
set(lock, 1); // Set the lock to 1 (take the lock)
// Critical Section
set(lock, 0); // Release the lock by setting it to 0
// Remainder Section
} while (true);
The test(lock) checks if the lock is currently taken (i.e., lock = 1).
If the lock is free (lock = 0), the process sets the lock to 1 using the set(lock, 1) instruction to enter
the critical section.
After completing the critical section, the process releases the lock by setting lock = 0, making it
available for other processes.
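Note that the test and the set above must execute as a single atomic action: if another process can run between them, two processes may both observe lock == 0 and enter the critical section together. Real hardware therefore provides a combined TestAndSet instruction; a sketch of its behaviour and use, in the same pseudocode notation (lock is a shared boolean, initially false):

// Executed atomically by the hardware
boolean TestAndSet(boolean *target) {
    boolean rv = *target;   // remember the old value
    *target = true;         // unconditionally set the lock
    return rv;              // true means the lock was already held
}

// Mutual exclusion with TestAndSet
do {
    while (TestAndSet(&lock));   // spin until the old value was false
    // Critical Section
    lock = false;                // Release the lock
    // Remainder Section
} while (true);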
3. Swap-Based Mutual Exclusion (Using swap() Instruction)
Another classic mutual exclusion algorithm uses the swap() instruction. This method is a lock-
based approach that ensures mutual exclusion by atomically exchanging the value of the lock
variable.
Algorithm Using swap() for Mutual Exclusion
lock is a shared variable representing the lock status.
The initial value of lock is 0 (unlocked).
Algorithm:
// Initialize lock to 0 (unlocked)
lock = 0;
// Process i (P0 or P1)
do {
while (swap(lock, 1) == 1); // Attempt to acquire the lock, busy-wait if locked

// Critical Section
lock = 0; // Release the lock
// Remainder Section
} while (true);
The swap(lock, 1) instruction atomically attempts to set lock to 1 and returns the previous value of
lock.
If the returned value is 1, it means the lock is already held by another process, so the current process
waits.
If the returned value is 0, it means the lock was free, and the current process successfully acquired
the lock and can proceed to the critical section.
After finishing the critical section, the process releases the lock by setting lock = 0.
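In C11, this swap-based lock can be written with the standard atomic_exchange operation, which atomically stores a new value and returns the old one (a minimal sketch; the critical-section body is only illustrative):

#include <stdatomic.h>
#include <stdio.h>

atomic_int lock = 0;   // 0 = unlocked, 1 = locked

void acquire(void) {
    // atomically store 1 and get the previous value;
    // spin while the previous value was 1 (lock already held)
    while (atomic_exchange(&lock, 1) == 1)
        ;   // busy-wait
}

void release(void) {
    atomic_store(&lock, 0);   // release the lock
}

int main(void) {
    acquire();
    printf("in the critical section\n");
    release();
    return 0;
}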

7. Discuss deadlock in detail. Point out its necessary conditions with an example.
Deadlock is a situation in computer systems, particularly in concurrent processing environments,
where a set of processes are blocked because they are each waiting for a resource that is held by
another process in the set. As a result, none of the processes can proceed, leading to a standstill.
Deadlock is a critical issue in multi-threaded and multi-processing systems that need to share
resources, and its occurrence can severely affect system performance and resource utilization.
A deadlock occurs when a group of processes get into a state where:

 Each process is waiting for a resource held by another process in the set.
 No process can release any resources, because they are all waiting for resources held by others.

This leads to a cycle of waiting processes, where none of the processes can continue execution or
release resources. Deadlocks are most commonly associated with operating systems, database
management systems, and distributed systems, where resource allocation is handled dynamically.

Necessary Conditions for Deadlock


There are four necessary conditions that must hold simultaneously for a deadlock to occur. These
are referred to as the Coffman conditions, after the researchers who first identified them. The
presence of all four conditions creates the possibility of a deadlock. If any one of these conditions is
not met, deadlock is avoided. Let’s discuss each condition in detail:
1. Mutual Exclusion: Mutual Exclusion ensures that only one process can access a resource at a
time, preventing interference between processes. This is crucial for resources like printers or files,
where concurrent access could lead to data corruption or inconsistency. A resource is considered
non-shareable if it cannot be used simultaneously by multiple processes. While mutual exclusion
guarantees exclusive access, it also introduces the potential for deadlock. If multiple processes hold
some resources and are waiting for others, they can enter a deadlock state where no process can
proceed. Thus, while mutual exclusion maintains resource integrity, it also becomes a necessary
condition for deadlock. Therefore, managing mutual exclusion carefully is essential in concurrent
systems.
2. Hold and Wait is a condition where a process that already holds one or more resources is
waiting to acquire additional resources that are currently being held by other processes. In this
scenario, a process does not release the resources it already holds while requesting others, leading to
a situation where it holds certain resources and waits for others to be freed. This creates a potential
for deadlock, as it can lead to a circular wait. For instance, if Process A holds Resource R1 and
requests Resource R2, while Process B holds Resource R2 and requests Resource R1, a cycle is
formed where neither process can proceed. This circular dependency can result in deadlock, where
all involved processes are stuck, waiting for resources that will never be released. Hence, hold and
wait is a key condition contributing to deadlock in resource allocation systems.
3. No Preemption

No Preemption is a condition where resources cannot be forcibly taken away from a process
holding them; instead, resources can only be released voluntarily by the process once it has finished
using them. This means that once a process acquires a resource, it will hold on to it until it
completes its task and voluntarily releases it. While this condition ensures that a process is not
interrupted in its execution, it can contribute to deadlock. If a process is holding certain resources
and waiting for others that are held by different processes, it cannot release the resources it
currently holds to allow others to proceed. This prevents the system from recovering resources that
could potentially break a deadlock cycle. As a result, no preemption increases the likelihood that
processes will remain stuck in a state of waiting, which can lead to deadlock if the resources they
need are never freed.

4. Circular Wait: Circular Wait is a condition in which a set of processes exist, where each
process in the set is waiting for a resource held by the next process, ultimately forming a cycle.
For example, if process P1 is waiting for a resource held by P2, P2 is waiting for a resource held
by P3, and so on, the last process in the cycle (e.g., Pn) will be waiting for a resource held by the
first process (P1). This creates a circular chain of dependencies, where each process is waiting
for a resource that is held by another process in the cycle. This condition is the most critical for
deadlock because it means that no process can make progress, as they are all waiting for each
other in a closed loop. None of the processes can release the resources they hold, because they
are all blocked waiting for resources held by others. As a result, all processes are stuck in the
cycle, unable to proceed, which creates a deadlock state. Circular wait, therefore, is a necessary
condition for deadlock to occur in a system.
Example of Deadlock
Let’s consider a simple system with two processes P1 and P2, and two resources R1 and R2.
Both processes need both resources to complete their tasks.
 P1 holds R1 and requests R2.
 P2 holds R2 and requests R1.
In this case, we have the following conditions:
 Mutual Exclusion: Both R1 and R2 are non-shareable, so only one process can use them
at any time.
 Hold and Wait: P1 holds R1 and is waiting for R2, and P2 holds R2 and is waiting for
R1.
 No Preemption: Neither process can be forced to release the resources they hold, as
resources can only be released voluntarily by the process.
 Circular Wait: There is a circular wait where P1 is waiting for R2 (held by P2), and P2
is waiting for R1 (held by P1), thus forming a cycle.
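The same two-process, two-resource scenario can be reproduced with two mutexes locked in opposite order (a deliberately broken sketch; the mutex names R1 and R2 stand in for the two resources, and the sleep only makes the deadlocking interleaving likely, not certain):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

void* p1(void* arg) {
    pthread_mutex_lock(&R1);      // P1 holds R1
    sleep(1);                     // give P2 time to grab R2
    printf("P1 waiting for R2\n");
    pthread_mutex_lock(&R2);      // blocks forever: R2 is held by P2
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

void* p2(void* arg) {
    pthread_mutex_lock(&R2);      // P2 holds R2
    sleep(1);                     // give P1 time to grab R1
    printf("P2 waiting for R1\n");
    pthread_mutex_lock(&R1);      // blocks forever: R1 is held by P1
    pthread_mutex_unlock(&R1);
    pthread_mutex_unlock(&R2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);       // never returns: circular wait
    pthread_join(t2, NULL);
    return 0;
}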

8. Using Bankers algorithm determines whether the following system is in a safe


state.
Process   Allocation   Max        Available
          A  B  C      A  B  C    A  B  C
P0        0  0  2      0  0  4    1  0  2
P1        1  0  0      2  0  1
P2        1  3  5      1  3  7
P3        6  3  2      8  4  2
P4        1  4  3      1  5  7
If a request from process P2 arrives for (0, 0, 2), can the request be granted immediately?

1. Compute the Need Matrix (Need = Max − Allocation):
P0: (0, 0, 4) − (0, 0, 2) = (0, 0, 2)
P1: (2, 0, 1) − (1, 0, 0) = (1, 0, 1)
P2: (1, 3, 7) − (1, 3, 5) = (0, 0, 2)
P3: (8, 4, 2) − (6, 3, 2) = (2, 1, 0)
P4: (1, 5, 7) − (1, 4, 3) = (0, 1, 4)

2. Check if the System is in a Safe State

To check if the system is in a safe state, we use the Banker's safety algorithm:
1. Available Resources:
Available resources are given as (1, 0, 2). This means there is 1 unit of resource A, 0 units of
resource B, and 2 units of resource C available in the system.
2. Safety Algorithm:
We attempt to find a process that can proceed with its execution, meaning its Need is less than or
equal to the Available resources.
Start with Available = (1, 0, 2).
P0's Need (0, 0, 2): P0 can proceed because its need is less than or equal to the available
resources. After P0 runs, it will release its allocated resources, updating Available to:
Available = Available + Allocation (P0) = (1, 0, 2) + (0, 0, 2) = (1, 0, 4).
P1's Need (1, 0, 1): P1 can proceed because its need is less than or equal to Available = (1, 0, 4).
After P1 runs, it will release its allocated resources, updating Available to:
Available = Available + Allocation (P1) = (1, 0, 4) + (1, 0, 0) = (2, 0, 4).
P2's Need (0, 0, 2): P2 can proceed because its need is less than or equal to Available = (2, 0, 4).
After P2 runs, it will release its allocated resources, updating Available to:
Available = Available + Allocation (P2) = (2, 0, 4) + (1, 3, 5) = (3, 3, 9).
P3's Need (2, 1, 0): P3 can proceed because its need is less than or equal to Available = (3, 3, 9).
After P3 runs, it will release its allocated resources, updating Available to:
Available = Available + Allocation (P3) = (3, 3, 9) + (6, 3, 2) = (9, 6, 11).
P4's Need (0, 1, 4): P4 can proceed because its need is less than or equal to Available = (9, 6,
11). After P4 runs, it will release its allocated resources, updating Available to:
Available = Available + Allocation (P4) = (9, 6, 11) + (1, 4, 3) = (10, 10, 14).
Since all processes can eventually finish, the system is in a safe state; one safe sequence is <P0, P1, P2, P3, P4>.

3. Can the Request from P2 for (0, 0, 2) be Granted Immediately?


Now, we need to check whether the request from P2 for (0, 0, 2) resources can be granted
immediately. The request is for (0, 0, 2) resources.
 Available Resources = (1, 0, 2).
For the request to be granted immediately:
1. Request ≤ Need: P2's request (0, 0, 2) ≤ P2's need (0, 0, 2), so the request is valid.
2. Request ≤ Available: P2's request (0, 0, 2) ≤ Available (1, 0, 2), so the resources can be
allocated provisionally.
After provisionally granting the request:
Available = Available − Request = (1, 0, 2) − (0, 0, 2) = (1, 0, 0).
New Allocation (P2) = (1, 3, 5) + (0, 0, 2) = (1, 3, 7), and New Need (P2) = (0, 0, 0).
3. Re-run the safety algorithm on the new state: with Available = (1, 0, 0), P2 can finish first,
releasing (1, 3, 7) and giving Available = (2, 3, 7); then P0, P1, P3 and P4 can all finish in turn.
The sequence <P2, P0, P1, P3, P4> is safe, so the request from P2 can be granted immediately.

9. Illustrate with an example Peterson's solution for the critical-section problem and
prove that the mutual exclusion property is preserved.
Peterson’s Solution for the Critical Section Problem: The Critical Section Problem arises in
concurrent programming when multiple processes need to access shared resources (critical
sections). The challenge is to ensure that only one process can execute in the critical section at a
time, preventing race conditions. Peterson’s Solution is a well-known algorithm used to solve
the critical section problem for two processes. It guarantees mutual exclusion, progress, and
bounded waiting without the need for hardware support, such as semaphores or locks.
Peterson’s Algorithm: Peterson's solution was developed by Gary Peterson in 1981 to ensure
mutual exclusion for two processes, which we will denote as P0 and P1. The algorithm uses two
shared variables:
1. flag[]: An array of two boolean values, where flag[i] indicates if process i is interested in
entering the critical section.
2. turn: A variable that indicates whose turn it is to enter the critical section. It can take the
values 0 or 1.
Peterson’s Algorithm (for two processes P0 and P1)
// Shared variables
boolean flag[2]; // flag[0] for P0, flag[1] for P1
int turn; // turn is either 0 or 1
// Process P0:
flag[0] = true; // P0 wants to enter critical section
turn = 1; // Give turn to P1
while (flag[1] && turn == 1); // Wait until P1 is not interested or it's P0's turn
// Critical Section (P0 accesses shared resource)
flag[0] = false; // P0 is leaving the critical section

// Process P1:
flag[1] = true; // P1 wants to enter critical section
turn = 0; // Give turn to P0
while (flag[0] && turn == 0); // Wait until P0 is not interested or it's P1's turn
// Critical Section (P1 accesses shared resource)
flag[1] = false; // P1 is leaving the critical section
Peterson's algorithm ensures mutual exclusion using the two shared variables flag[] and turn.
Initially, neither process is interested in entering the critical section. When a process wants to
enter, it sets its flag to true and gives the other process a chance by setting the turn variable; it
then waits while the other process is interested and it is the other process's turn. Mutual exclusion
can be proved by contradiction: suppose P0 and P1 are both in the critical section. Then flag[0] =
flag[1] = true, and each process must have exited its while loop. Since flag[1] == true, P0 can only
have exited after observing turn == 0; likewise, P1 can only have exited after observing turn == 1.
But turn is a single variable, and neither process modifies it after entering its wait loop, so both
conditions cannot hold at once, a contradiction. Progress is ensured, as at least one process will
enter the critical section when it is free. Bounded waiting is satisfied because once a process sets
turn in favour of the other, it waits for at most one entry by the other process before entering
itself; no process waits indefinitely.
Example:
 P0 wants to enter the critical section, so it sets flag[0] = true and turn = 1, giving P1 the turn
to enter if it wants to.
 P1 wants to enter as well, so it sets flag[1] = true and turn = 0, giving P0 the turn to enter if
it wants to.
 Suppose P1's assignment turn = 0 is the later of the two. Then P0's wait condition
(flag[1] && turn == 1) is false, so P0 enters the critical section while P1 spins. After P0 finishes,
it sets flag[0] = false, allowing P1 to enter.

10. Consider the following snapshot of a system:

Answer the following questions using the banker's algorithm:


a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from process Pi arrives for (0, 4, 2, 0), can the request be granted immediately?

11. Show how a semaphore provides solution to producer and consumer problem.
The Producer-Consumer Problem is a classic synchronization problem in computer science,
where two types of processes, producers and consumers, share a common, finite-sized buffer. The
producers produce items and place them into the buffer, while consumers remove items from the
buffer. The challenge lies in ensuring that the producers do not add items when the buffer is full,
and that consumers do not attempt to remove items from an empty buffer.

The solution to this problem requires synchronization mechanisms that can handle mutual exclusion
and ensure proper coordination between producers and consumers to prevent race conditions,
deadlocks, and data inconsistencies.

Semaphores in Producer-Consumer Problem

In the producer-consumer scenario, semaphores can be used effectively to synchronize access to the
shared buffer. We will use three semaphores to solve this problem:

1. mutex (binary semaphore): Ensures mutual exclusion when accessing the shared buffer. This
prevents both the producer and the consumer from accessing the buffer simultaneously.

2. empty (counting semaphore): Keeps track of how many empty slots are available in the buffer.
It is initialized to the size of the buffer (say N), indicating how many slots are available for
producers to place items.

3. full (counting semaphore): Keeps track of how many filled slots are in the buffer. It is
initialized to 0, indicating that the buffer starts empty.

Producer-Consumer Semaphores Solution

Producer Process

A producer process works as follows:

1. Wait on the empty semaphore: This ensures that the producer waits if the buffer is full (i.e.,
no empty slots are available).

2. Wait on the mutex semaphore: This ensures mutual exclusion, preventing both the producer
and the consumer from accessing the buffer at the same time.

3. Produce an item: The producer generates an item and places it in the buffer.

4. Signal on the mutex semaphore: The producer releases the mutex, allowing consumers to
access the buffer.

5. Signal on the full semaphore: The producer increments the count of filled slots, signaling that
there is a new item available for consumption.

Consumer Process
A consumer process works as follows:

1. Wait on the full semaphore: This ensures that the consumer waits if the buffer is empty (i.e.,
no items are available for consumption).

2. Wait on the mutex semaphore: This ensures mutual exclusion, preventing both the producer
and the consumer from accessing the buffer simultaneously.

3. Consume an item: The consumer removes an item from the buffer.

4. Signal on the mutex semaphore: The consumer releases the mutex, allowing producers to
access the buffer.

5. Signal on the empty semaphore: The consumer increments the count of empty slots, signaling
that there is space for producers to place new items.
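Before the code, the shared declarations it relies on (a minimal set of assumptions: the buffer size, the indices, the stand-in produce/add helpers, and the POSIX semaphores initialized as described above):

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define BUFFER_SIZE 5

int buffer[BUFFER_SIZE];
int in = 0, out = 0;            // next free slot / next filled slot
sem_t mutex, empty, full;       // initialized in main():
                                // sem_init(&mutex, 0, 1);
                                // sem_init(&empty, 0, BUFFER_SIZE);
                                // sem_init(&full,  0, 0);

int produce_item(void) { return rand() % 100; }   // stand-in producer
void add_to_buffer(int item) {
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    printf("Produced item: %d\n", item);
}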

Producer Code
void* producer(void* param) {
    while (1) {
        int item = produce_item();   // Produce an item
        sem_wait(&empty);            // Wait for an empty slot
        sem_wait(&mutex);            // Enter critical section
        add_to_buffer(item);         // Add item to buffer
        sem_post(&mutex);            // Exit critical section
        sem_post(&full);             // Signal that buffer has a new item
        sleep(1);                    // Simulate time taken to produce
    }
}

Consumer Code
void* consumer(void* param) {
    while (1) {
        sem_wait(&full);               // Wait for a filled slot
        sem_wait(&mutex);              // Enter critical section
        int item = buffer[out];        // Consume item from buffer
        out = (out + 1) % BUFFER_SIZE; // Update buffer index
        printf("Consumed item: %d\n", item);
        sem_post(&mutex);              // Exit critical section
        sem_post(&empty);              // Signal that there is an empty slot
        sleep(1);                      // Simulate time taken to consume
    }
}
Semaphores play a crucial role in managing synchronization in the Producer-Consumer problem.
The mutex (binary semaphore) ensures mutual exclusion, allowing only one process (either
producer or consumer) to access the buffer at a time, preventing race conditions. The producer waits
for the mutex before adding an item to the buffer and signals it afterward, while the consumer does
the same before and after consuming an item. The empty semaphore, a counting semaphore, tracks
available empty slots in the buffer. It is initialized to the buffer size and is decremented by the
producer when an item is added, and incremented by the consumer when an item is consumed.
Similarly, the full semaphore, also a counting semaphore, tracks filled slots in the buffer. It is
initialized to zero and is incremented by the producer when it adds an item, while the consumer
waits on it to ensure there is something to consume. Together, these semaphores synchronize access
to the shared buffer, ensuring proper coordination between producer and consumer processes.

12. Demonstrate multiprocessor scheduling and multi-level feedback queue scheduling
with a diagram.
Multiple-Processor Scheduling

One approach to CPU scheduling in a multiprocessor system is where all scheduling decisions, I/O
processing, and other system activities are handled by a single processor i.e., the master server.
The other processors execute only user code. This asymmetric multiprocessing is simple because
only one processor accesses the system data structures, reducing the need for data sharing.

A second approach uses symmetric multiprocessing (SMP), where each processor is self-
scheduling. All processes may be in a common ready queue, or each processor may have its own
private queue of ready processes.

Some of the issues related to SMP are,

1. Processor Affinity

The data most recently accessed by the process is populated in the cache for the processor and
successive memory accesses by the process are often satisfied in cache memory.

If the process migrates to another processor, the contents of cache memory must be invalidated for
the processor being migrated from, and the cache for the processor being migrated to must be re-
populated. Because of the high cost of invalidating and re-populating caches, most SMP systems
try to avoid migration of processes from one processor to another and instead tries to keep a
process running on the same processor. This is known as processor affinity, i.e., a process has an
affinity for the processor on which it is currently running.

Processor affinity takes several forms. When an operating system has a policy of attempting to
keep a process running on the same processor but not guaranteeing that it will do so, a situation is
known as soft affinity. Here, it is possible for a process to migrate between processors.

Some systems such as Linux provide system calls that support hard affinity, thereby allowing a
process to specify that it must not migrate to other processors.
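On Linux, hard affinity can be requested with the sched_setaffinity system call. A minimal sketch that pins the calling process to CPU 0 (the choice of CPU is arbitrary here):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);      // start with an empty CPU set
    CPU_SET(0, &set);    // allow CPU 0 only
    // pid 0 means "the calling process"
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("process pinned to CPU 0\n");
    return 0;
}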

2. Load Balancing

On SMP systems, it is important to keep the workload balanced among all processors to utilize the
benefits of having more than one processor. Otherwise, one or more processors may sit idle while
other processors have high workloads along with lists of processes awaiting the CPU.

Load balancing attempts to keep the workload evenly distributed across all processors in an SMP
system. There are two general approaches to load balancing: push migration and pull migration.
With push migration, a specific task periodically checks the load on each processor and, if it finds
an imbalance, evenly distributes the load by moving (or pushing) processes from overloaded to idle
or less-busy processors. Pull migration occurs when an idle processor pulls a waiting task from a
busy processor.

3. Symmetric Multithreading

SMP systems allow several threads to run concurrently by providing multiple physical processors.

An alternative strategy is to provide multiple logical processors rather than physical processors.
Such a strategy is known as symmetric multithreading (or SMT).

The idea behind SMT is to create multiple logical processors on the same physical processor,
presenting a view of several logical processors to the operating system, even on a system with only
a single physical processor.

Each logical processor has its own architecture state, which includes general- purpose and machine-
state registers and is responsible for its own interrupt handling, meaning that interrupts are delivered
to and handled by logical processors rather than physical ones. Otherwise, each logical processor
shares the resources of its physical processor, such as cache memory and buses.

A typical SMT architecture has two physical processors, each housing two logical processors. From
the operating system's perspective, four processors are then available for work on such a system.

Multi-Level Feedback Queue (MLFQ) Scheduling


Multi-Level Feedback Queue (MLFQ) is a complex CPU scheduling algorithm designed to
optimize process management by dynamically adjusting the priority of processes based on their
behavior and aging. It attempts to balance between fairness, turnaround time, and response
time by moving processes between different queues, each with its own scheduling policies.

Multiple Queues:
MLFQ consists of several queues, each with a different priority.
The processes are initially placed in the highest priority queue.
Processes in higher-priority queues are allocated CPU time before those in lower-priority queues.

Time Quantum:
Each queue has its own time quantum (time slice) for process execution.
The higher-priority queues have smaller time quanta to ensure that short jobs are processed
quickly.
The lower-priority queues have larger time quanta, allowing longer processes more time to
execute.

Feedback Mechanism:
If a process consumes all its time quantum in a high-priority queue (i.e., it doesn't complete), it is
moved to a lower-priority queue.
If a process doesn’t use up its time quantum and finishes early, it stays in the same queue (or
may move to a higher-priority queue based on specific rules).
This feedback helps balance between short, interactive jobs and long-running, CPU-bound tasks.

Preemption:
When a higher-priority process arrives, it preempts the CPU, pushing the current process back to
a lower-priority queue.

Aging:
To prevent starvation, processes in lower-priority queues can be periodically moved to higher-
priority queues (aging).

Steps in MLFQ Scheduling:


Process Arrival:
A process is initially placed in the highest-priority queue.

Queue Execution:
The scheduler picks the process at the front of the highest-priority non-empty queue.

Time Quantum Expiry:


If a process runs out of its time quantum, it is moved to a lower-priority queue.

Completion or Preemption:
If a process completes before its quantum expires, it exits, or if a new high-priority process
arrives, preemption occurs.

Example:
Let’s take an example where there are 3 queues, each with different time quanta.
 Queue 1 (Highest Priority): Time Quantum = 2 ms
 Queue 2 (Medium Priority): Time Quantum = 4 ms
 Queue 3 (Lowest Priority): Time Quantum = 8 ms
Processes arrive and are initially placed in Queue 1.

Process  Arrival Time  CPU Burst Time  Queue (Initially)
P1       0             5               Queue 1
P2       1             6               Queue 1
P3       2             8               Queue 1

1. P1 runs for 2 ms in Queue 1 (remaining time 3 ms); it is moved to Queue 2.
2. P2 runs for 2 ms in Queue 1 (remaining time 4 ms); it is moved to Queue 2.
3. P3 runs for 2 ms in Queue 1 (remaining time 6 ms); it is moved to Queue 2.
4. P1 runs for its remaining 3 ms in Queue 2 and completes (it does not need the full 4 ms quantum).
5. P2 runs for its full 4 ms quantum in Queue 2 and completes, since exactly 4 ms of its burst remained.
6. P3 runs for 4 ms in Queue 2 (remaining time 2 ms); it is moved to Queue 3.
7. P3 runs its remaining 2 ms in Queue 3 and completes.

MLFQ Diagram
Below is a simplified diagram illustrating how processes move between queues in a Multi-Level
Feedback Queue (MLFQ):

1. Queue 1 (High Priority): This queue serves the processes with the highest priority and the
shortest time quantum (e.g., 2 ms).

2. Queue 2 (Medium Priority): Processes that use up their time quantum in Queue 1 are
moved here. It has a larger time quantum (e.g., 4 ms).

3. Queue 3 (Low Priority): If processes still haven't completed after their time quantum in
Queue 2, they are moved to Queue 3, which has the largest time quantum (e.g., 8 ms).

4. Feedback and Aging: If a process in a lower-priority queue is not completed, it might be
"aged" back into a higher-priority queue.

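Since the original diagram is not reproduced here, a simplified text sketch of the queue structure (quanta as in the example above):

new process
     |
     v
[Queue 1: q = 2 ms] --quantum expires--> [Queue 2: q = 4 ms] --quantum expires--> [Queue 3: q = 8 ms]
     ^                                                                                    |
     +--------------------------- aging (promote starved processes) ---------------------+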
13. Demonstrate Bankers Algorithm and Resource Request Algorithm with


example
The Banker's Algorithm is a resource allocation and deadlock avoidance algorithm. It checks
the safety of a system by simulating resource allocation and ensuring that every process can
eventually complete without causing a deadlock. It works by maintaining a set of data structures
to represent resource availability, allocation, and the maximum resource demand of each process.

Steps Involved in the Banker's Algorithm:

1. Data Structures:
 Available[]: The number of available instances of each resource type.
 Maximum[][]: The maximum demand of each process for each resource type.
 Allocation[][]: The resources currently allocated to each process.
 Need[][]: The remaining resources required by each process to complete (Need[i][j] =
Maximum[i][j] - Allocation[i][j]).

2. Safety Check:

 The algorithm uses a work array and a finish[] array to determine if the system is in a
safe state.
 Work[]: Initially, it is set to the value of Available[], representing the resources currently
available.
 Finish[]: Initially, all entries are set to False, representing that no processes have
finished.

3. Algorithm Process:

 The system repeatedly searches for a process i such that Finish[i] == False and
Need[i][j] <= Work[j] for all resources j. This means that process i can be completed
with the current available resources.
 If such a process i is found, it is simulated to finish, and the resources allocated to it are
added to Work[].
 This process is repeated until all processes can finish (i.e., Finish[i] = True for all i).

4. Safe State Check:

 If all processes can eventually finish, the system is in a safe state.


 If no process can be found to finish, the system is in an unsafe state and may lead to
deadlock.

Resource Request Algorithm


The Resource Request Algorithm is used when a process requests resources. The algorithm
checks if granting the request would keep the system in a safe state.

Steps:

1. Request[i]: The request made by process i for each resource.

2. If Request[i] ≤ Need[i], it is a valid request.

3. If Request[i] ≤ Available, it can be granted immediately. Otherwise, the process will have
to wait.

Safety Check After Allocation:


 Temporarily allocate the resources to process i.

 Check if the system is still in a safe state using the Banker's Algorithm.

 If the system is in a safe state, grant the request.

 If the system is not in a safe state, revert the allocation and make the process wait.
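As a worked example, here is a minimal C sketch of the safety algorithm, using the snapshot from question 8 (the data arrays come from that table; everything else is illustrative):

#include <stdio.h>
#include <stdbool.h>

#define P 5  // number of processes
#define R 3  // number of resource types

int main(void) {
    // Snapshot from question 8
    int alloc[P][R] = {{0,0,2},{1,0,0},{1,3,5},{6,3,2},{1,4,3}};
    int max[P][R]   = {{0,0,4},{2,0,1},{1,3,7},{8,4,2},{1,5,7}};
    int avail[R]    = {1,0,2};

    // Need = Max - Allocation
    int need[P][R];
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    int work[R];
    for (int j = 0; j < R; j++) work[j] = avail[j];
    bool finish[P] = {false};
    int seq[P], count = 0;

    // Repeatedly look for an unfinished process whose Need fits in Work
    while (count < P) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];   // P_i finishes and releases its resources
                finish[i] = true;
                seq[count++] = i;
                found = true;
            }
        }
        if (!found) { printf("System is NOT in a safe state\n"); return 0; }
    }
    printf("Safe sequence: ");
    for (int i = 0; i < P; i++) printf("P%d ", seq[i]);
    printf("\n");
    return 0;
}

Running it prints the safe sequence <P0, P1, P2, P3, P4> found by hand in question 8.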

14. Describe the various methods of recovery from deadlock.


Deadlock Recovery Methods
Deadlock recovery is a mechanism used by operating systems to recover from a deadlock
situation where a set of processes are stuck, waiting for each other indefinitely. There are several
methods for recovering from deadlock, each with its advantages and drawbacks. Below are the
key methods in detail:
1. Process Termination
In process termination, one or more processes involved in the deadlock are terminated to break
the deadlock cycle. This method aims to remove the processes that are blocking the system.
Terminate all processes: This approach involves terminating all processes involved in the
deadlock. While it guarantees a resolution, it results in the loss of all work done by the processes.
Terminate one process at a time: The system can selectively terminate processes. The process
chosen for termination can be selected based on different criteria, such as:
Priority: Lower priority processes may be terminated first.
Resources held: Terminating a process that holds fewer resources, thus minimizing resource loss.
Process age: Older processes may be terminated first to allow newer processes to proceed.

2. Resource Preemption
Resource preemption is a recovery technique that involves forcibly taking resources away from
one or more processes and reallocating them to other processes in order to break the deadlock
cycle. The goal is to prevent any process from holding onto resources indefinitely and allow the
system to continue executing. By interrupting the deadlock cycle and ensuring that resources are
released, preemption helps restore progress in the system, but it may cause the interrupted
processes to be delayed or require additional work.
Preempt resources from processes: In this approach, the resources held by processes involved
in the deadlock are forcibly taken and reassigned to other processes that need them. For example,
if process P1 is holding R1 and waiting for R2 (held by P2), the system can preempt R2 from P2 and
give it to P1; P1 can then finish and release both resources, after which P2 can reacquire R2 and
continue. This reassignment breaks the deadlock by making resources available to other processes,
but it may result in processes being delayed or needing to retry their operations.
Rollback: If preempting resources leads to the system entering an inconsistent state, the process
may be rolled back to a checkpoint before the deadlock occurred, allowing the resources to be
reallocated safely.

3. Rollback
Rollback is a recovery technique used to resolve deadlock by restoring a process or the entire
system to a previous, consistent state. This is typically done by undoing the changes made by
processes during their execution, particularly when those changes have contributed to a
deadlock. By rolling back to a checkpoint (a saved state), processes can release any resources
they were holding and reattempt execution from a safe point, thereby avoiding the circular wait
conditions that caused the deadlock.
System-level rollback: In this approach, the entire system's state is reverted to a point before the
deadlock occurred. This means all processes, memory states, and resources are restored to their
condition prior to the deadlock, effectively undoing all actions taken by processes since that
time. While this ensures that deadlock is cleared, it can result in the loss of a large amount of
work and time spent.
Process-level rollback: Here, only the processes that are part of the deadlock cycle are rolled
back to their last checkpoint or safe state. These processes are restarted from that point, allowing
them to attempt execution again while releasing resources they were holding. This method is
more granular and minimizes the amount of lost work, but it may not always be sufficient if the
deadlock involves complex interdependencies between many processes.

15. Illustrate Bounded-Buffer Synchronization problem with pseudocode.
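The bounded-buffer problem is the producer-consumer problem of question 11 stated with a buffer of fixed size N: producers must wait when all N slots are full, and consumers must wait when the buffer is empty. A compact pseudocode sketch using the same three semaphores (mutex initialized to 1, empty to N, full to 0):

Producer:
do {
    // produce an item in next_produced
    wait(empty);      // wait for a free slot
    wait(mutex);      // enter critical section
    // add next_produced to the buffer
    signal(mutex);    // leave critical section
    signal(full);     // one more filled slot
} while (true);

Consumer:
do {
    wait(full);       // wait for a filled slot
    wait(mutex);      // enter critical section
    // remove an item from the buffer into next_consumed
    signal(mutex);    // leave critical section
    signal(empty);    // one more free slot
    // consume the item in next_consumed
} while (true);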


16. Explain wait-for graph with an example.
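A brief sketch: a wait-for graph is a variant of the resource allocation graph used for deadlock detection when every resource type has a single instance. It is obtained by removing the resource nodes and collapsing the edges: an edge Pi → Pj means that process Pi is waiting for a resource currently held by process Pj. The system is deadlocked if and only if the wait-for graph contains a cycle, so the detection algorithm periodically searches the graph for cycles. For example, in the scenario of question 7, P1 holds R1 and waits for R2 (held by P2), while P2 waits for R1 (held by P1); the wait-for graph is P1 → P2 → P1, a cycle, so the two processes are deadlocked.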
