OS Tutorials 2
TUTORIAL-II
Sem: 3rd SEM Sub Name: OPERATING SYSTEMS [BCS303] Date: 25/01/2024
The Dining Philosophers Problem is a classic synchronization problem in computer science that
demonstrates the challenges of resource sharing and the potential for deadlock in a multi-threaded
environment. In this problem, five philosophers are seated around a circular table, each with a bowl
of rice and a chopstick placed between each pair of adjacent philosophers. Philosophers alternate
between two states: thinking and eating. To eat, a philosopher needs two chopsticks: one to their left
and one to their right. When a philosopher becomes hungry, they try to pick up the chopsticks on
either side. If both chopsticks are available, they pick them up and begin eating. After eating, the
philosopher puts down both chopsticks and returns to thinking. The challenge lies in ensuring that
the philosophers can eat without deadlock or starvation. Deadlock could occur if all philosophers
simultaneously pick up one chopstick and wait for the second, causing the system to freeze. One
simple solution to this problem uses semaphores to represent the chopsticks. Each chopstick is
associated with a binary semaphore (either 0 or 1), which indicates whether the chopstick is
available. When a philosopher wants to eat, they must acquire the semaphores for both their left and
right chopsticks. If both are available (i.e., the semaphores are set to 1), the philosopher can eat, and
the semaphores are then set to 0 to indicate that the chopsticks are in use. Once the philosopher
finishes eating, the semaphores are released (set back to 1), allowing other philosophers to use the
chopsticks.
A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore; she
releases her chopsticks by executing the signal() operation on the appropriate semaphores.
Thus, the shared data are semaphore chopstick[5]; where all the elements of chopstick are
initialized to 1. The structure of philosopher i is shown below:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
The disadvantage is that it could create a deadlock. Suppose that all five philosophers become hungry
simultaneously and each grabs her left chopstick. All the elements of chopstick will now be equal to
0. When each philosopher tries to grab her right chopstick, she will be delayed forever.
Several possible remedies to the deadlock problem are available.
Allow at most four philosophers to be sitting simultaneously at the table.
Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she
must pick them up in a critical section).
Use an asymmetric solution; that is, an odd philosopher picks up first her left chopstick and then her
right chopstick, whereas an even philosopher picks up her right chopstick and then her left
chopstick.
The dining philosopher’s problem can be solved using a monitor with a deadlock-free approach.
Each philosopher's state (thinking, hungry, or eating) is tracked in an array state[5], and a
philosopher can start eating only if both chopsticks are available. The pickup() function sets the
philosopher's state to hungry and checks whether both neighbors are not eating; if either neighbor
is eating, the philosopher waits. The putdown() function resets the state to thinking and notifies
the neighbors. When the monitor is implemented with semaphores, a mutex semaphore ensures
exclusive access to the monitor, a next semaphore manages signaling between processes, each
condition variable x is implemented using an x_sem semaphore for waiting and signaling, and a
next_count counter tracks the number of suspended processes.
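The tutorial gives only the outline of this monitor; below is a minimal C sketch of the same
pickup()/putdown()/test() logic, using a pthread mutex in place of the monitor lock and
per-philosopher condition variables in place of the monitor's condition variables (this mapping,
and the LEFT/RIGHT helper names, are assumptions, not the tutorial's own code):
#include <pthread.h>

enum pstate { THINKING, HUNGRY, EATING };

static enum pstate state[5];
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t self[5] = {
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER
};

#define LEFT(i)  (((i) + 4) % 5)
#define RIGHT(i) (((i) + 1) % 5)

/* Let philosopher i start eating if she is hungry and neither
   neighbor is eating; called with monitor_lock held. */
static void test(int i) {
    if (state[LEFT(i)] != EATING && state[i] == HUNGRY &&
        state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void pickup(int i) {
    pthread_mutex_lock(&monitor_lock);
    state[i] = HUNGRY;
    test(i);
    while (state[i] != EATING)            /* both chopsticks not yet free */
        pthread_cond_wait(&self[i], &monitor_lock);
    pthread_mutex_unlock(&monitor_lock);
}

void putdown(int i) {
    pthread_mutex_lock(&monitor_lock);
    state[i] = THINKING;
    test(LEFT(i));                        /* a neighbor may now be able to eat */
    test(RIGHT(i));
    pthread_mutex_unlock(&monitor_lock);
}
Because a philosopher proceeds only when both chopsticks are free, adjacent philosophers never
eat at the same time and the circular hold-and-wait of the semaphore version cannot occur.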
3. Explain the critical-section problem. What are the requirements that a solution to the
critical-section problem must satisfy?
The critical-section problem
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of
code, called a critical section, in which the process may be changing common variables, updating
a table, writing a file, and so on.
The important feature of the system is that, when one process is executing in its critical section, no
other process is to be allowed to execute in its critical section. That is, no two processes are
executing in their critical sections at the same time.
The critical-section problem is to design a protocol that the processes can use to cooperate.
Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section.
The critical section may be followed by an exit section.
The remaining code is the remainder section.
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: If process Pi is executing in its critical section, then no other process can
be executing in its critical section.
2. Progress: If no process is executing in its critical section and some processes wish to enter
their critical sections, then only processes not executing in their remainder sections can
participate in deciding which will enter next, and this selection cannot be postponed indefinitely.
3. Bounded waiting: There is a bound on the number of times other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
The general structure of a typical process Pi is shown below.
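Since the figure is not reproduced here, a textual sketch of that standard structure:
do {
    [entry section]
        critical section
    [exit section]
        remainder section
} while (true);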
5. Draw the Gantt chart and calculate the average waiting time and turnaround time for
the following snapshot of processes using i) FCFS ii) SJF iii) Priority iv) RR (2 ms)
scheduling algorithms.
Turnaround Time: Turnaround time for a process is the total time from arrival to completion.
Process ID  Arrival Time  Burst Time  Start Time  Completion Time  Waiting Time  Turnaround Time
P1          0             6           0           6                0             6
P2          1             3           6           9                5             8
P3          2             1           9           10               7             8
P4          3             4           10          14               7             11
Gantt Chart (FCFS): | P1 | P2 | P3 | P4 | with boundaries at 0, 6, 9, 10, 14.
Average Waiting Time for FCFS = (0 + 5 + 7 + 7) / 4 = 4.75 ms
Average Turnaround Time for FCFS = (6 + 8 + 8 + 11) / 4 = 8.25 ms
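For illustration, a small C program that reproduces these FCFS numbers (the array values are
taken from the table above; this helper is not part of the tutorial):
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {6, 3, 1, 4};
    int n = 4, time = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {            /* processes already in arrival order */
        if (time < arrival[i]) time = arrival[i];
        int waiting    = time - arrival[i];  /* start time - arrival time */
        int turnaround = waiting + burst[i]; /* completion time - arrival time */
        total_wt  += waiting;
        total_tat += turnaround;
        time += burst[i];
    }
    printf("Avg WT = %.2f ms, Avg TAT = %.2f ms\n",
           total_wt / n, total_tat / n);     /* prints 4.75 ms and 8.25 ms */
    return 0;
}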
do {
    while (swap(lock, 1) == 1)
        ; // Busy-wait: the lock is held by another process
    // Critical Section
    lock = 0; // Release the lock
    // Remainder Section
} while (true);
The swap(lock, 1) instruction atomically attempts to set lock to 1 and returns the previous value of
lock.
If the returned value is 1, it means the lock is already held by another process, so the current process
waits.
If the returned value is 0, it means the lock was free, and the current process successfully acquired
the lock and can proceed to the critical section.
After finishing the critical section, the process releases the lock by setting lock = 0.
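C11's atomic_exchange provides exactly this atomic swap; below is a runnable sketch of the same
lock under that assumption (an illustration, not the tutorial's own code):
#include <stdatomic.h>

atomic_int lock = 0;

void acquire(void) {
    /* Atomically set lock to 1 and test the previous value;
       spin while it was already 1 (held by another process). */
    while (atomic_exchange(&lock, 1) == 1)
        ;
}

void release(void) {
    lock = 0; /* atomic store: release the lock */
}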
7. Discuss in detail about deadlock. Point out and explain its necessary conditions with an
example.
Deadlock is a situation in computer systems, particularly in concurrent processing environments,
where a set of processes are blocked because they are each waiting for a resource that is held by
another process in the set. As a result, none of the processes can proceed, leading to a standstill.
Deadlock is a critical issue in multi-threaded and multi-processing systems that need to share
resources, and its occurrence can severely affect system performance and resource utilization.
A deadlock occurs when a group of processes get into a state where:
Each process is waiting for a resource held by another process in the set.
No process can release any resources, because they are all waiting for resources held by others.
This leads to a cycle of waiting processes, where none of the processes can continue execution or
release resources. Deadlocks are most commonly associated with operating systems, database
management systems, and distributed systems, where resource allocation is handled dynamically.
Deadlock can arise only if the following four necessary conditions hold simultaneously:
1. Mutual Exclusion: At least one resource must be held in a non-shareable mode, so that only
one process can use it at a time; any other process requesting that resource must wait until it is
released.
2. Hold and Wait: A process must be holding at least one resource while waiting to acquire
additional resources that are currently held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process
holding them; instead, resources can only be released voluntarily by the process once it has finished
using them. This means that once a process acquires a resource, it will hold on to it until it
completes its task and voluntarily releases it. While this condition ensures that a process is not
interrupted in its execution, it can contribute to deadlock. If a process is holding certain resources
and waiting for others that are held by different processes, it cannot release the resources it
currently holds to allow others to proceed. This prevents the system from recovering resources that
could potentially break a deadlock cycle. As a result, no preemption increases the likelihood that
processes will remain stuck in a state of waiting, which can lead to deadlock if the resources they
need are never freed.
4. Circular Wait: Circular Wait is a condition in which a set of processes exist, where each
process in the set is waiting for a resource held by the next process, ultimately forming a cycle.
For example, if process P1 is waiting for a resource held by P2, P2 is waiting for a resource held
by P3, and so on, the last process in the cycle (e.g., Pn) will be waiting for a resource held by the
first process (P1). This creates a circular chain of dependencies, where each process is waiting
for a resource that is held by another process in the cycle. This condition is the most critical for
deadlock because it means that no process can make progress, as they are all waiting for each
other in a closed loop. None of the processes can release the resources they hold, because they
are all blocked waiting for resources held by others. As a result, all processes are stuck in the
cycle, unable to proceed, which creates a deadlock state. Circular wait, therefore, is a necessary
condition for deadlock to occur in a system.
Example of Deadlock
Let’s consider a simple system with two processes P1 and P2, and two resources R1 and R2.
Both processes need both resources to complete their tasks.
P1 holds R1 and requests R2.
P2 holds R2 and requests R1.
In this case, we have the following conditions:
Mutual Exclusion: Both R1 and R2 are non-shareable, so only one process can use them
at any time.
Hold and Wait: P1 holds R1 and is waiting for R2, and P2 holds R2 and is waiting for
R1.
No Preemption: Neither process can be forced to release the resources they hold, as
resources can only be released voluntarily by the process.
Circular Wait: There is a circular wait where P1 is waiting for R2 (held by P2), and P2
is waiting for R1 (held by P1), thus forming a cycle.
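A small pthread-based illustration (hypothetical code, not from the tutorial): two threads acquire
the mutexes R1 and R2 in opposite orders, which can reproduce exactly this cycle.
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

void* p1(void* arg) {
    pthread_mutex_lock(&R1);      /* P1 holds R1 */
    pthread_mutex_lock(&R2);      /* ...and requests R2: may block forever */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

void* p2(void* arg) {
    pthread_mutex_lock(&R2);      /* P2 holds R2 */
    pthread_mutex_lock(&R1);      /* ...and requests R1: circular wait */
    pthread_mutex_unlock(&R1);
    pthread_mutex_unlock(&R2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);       /* never returns if the deadlock occurs */
    pthread_join(t2, NULL);
    printf("finished (no deadlock this run)\n");
    return 0;
}
Making both threads acquire R1 before R2 imposes a global lock ordering and removes the
circular-wait condition.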
9. Illustrate with examples the Peterson’s solution for critical section problem and
prove that mutual exclusion property is preserved.
Peterson’s Solution for the Critical Section Problem: The Critical Section Problem arises in
concurrent programming when multiple processes need to access shared resources (critical
sections). The challenge is to ensure that only one process can execute in the critical section at a
time, preventing race conditions. Peterson’s Solution is a well-known algorithm used to solve
the critical section problem for two processes. It guarantees mutual exclusion, progress, and
bounded waiting without the need for special hardware instructions or OS primitives such as
semaphores or locks.
Peterson’s Algorithm: Peterson's solution was developed by Gary Peterson in 1981 to ensure
mutual exclusion for two processes, which we will denote as P0 and P1. The algorithm uses two
shared variables:
1. flag[]: An array of two boolean values, where flag[i] indicates if process i is interested in
entering the critical section.
2. turn: A variable that indicates whose turn it is to enter the critical section. It can take the
values 0 or 1.
Peterson’s Algorithm (for two processes P0 and P1)
// Shared variables
boolean flag[2]; // flag[0] for P0, flag[1] for P1
int turn; // turn is either 0 or 1
// Process P0:
flag[0] = true; // P0 wants to enter critical section
turn = 1; // Give turn to P1
while (flag[1] && turn == 1); // Wait until P1 is not interested or it's P0's turn
// Critical Section (P0 accesses shared resource)
flag[0] = false; // P0 is leaving the critical section
// Process P1:
flag[1] = true; // P1 wants to enter critical section
turn = 0; // Give turn to P0
while (flag[0] && turn == 0); // Wait until P0 is not interested or it's P1's turn
// Critical Section (P1 accesses shared resource)
flag[1] = false; // P1 is leaving the critical section
Peterson's algorithm ensures mutual exclusion by using the two shared variables flag[] and turn.
Initially, both processes are not interested in entering the critical section. When a process wants
to enter, it sets its flag to true and gives the other process a chance by setting the turn variable.
The process then waits if the other process is also interested and it is not its own turn. Mutual
exclusion is guaranteed: if both P0 and P1 were inside their critical sections at the same time,
both flags would be true, so P0 must have left its while loop because turn == 0 and P1 must have
left its loop because turn == 1; since turn holds a single value, this is a contradiction. Progress is
ensured, as at least one process will enter the critical section when it is not occupied. Bounded
waiting is satisfied because after a process starts waiting, the other process can enter the critical
section at most once before it does, so no process waits indefinitely.
Example:
P0 wants to enter the critical section, so it sets flag[0] = true and turn = 1, giving P1 the turn
to enter if it wants to.
P1 wants to enter as well, so it sets flag[1] = true and turn = 0, giving P0 the turn to enter if
it wants to.
Because P1 wrote turn last, its final value is 0, so P0's wait condition (flag[1] && turn == 1) is
false and P0 enters the critical section, while P1 spins because flag[0] && turn == 0 holds. After
P0 finishes, it sets flag[0] = false, allowing P1 to enter.
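A runnable C sketch of the algorithm (an illustration; the _Atomic qualifiers are an addition,
needed because Peterson's argument assumes loads and stores become visible in program order,
which modern compilers and CPUs only guarantee for sequentially consistent atomics):
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];
atomic_int turn;
long counter = 0;                  /* shared resource */

void enter(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], true);  /* process i wants to enter */
    atomic_store(&turn, other);    /* give the other process the turn */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                          /* busy-wait */
}

void leave(int i) {
    atomic_store(&flag[i], false);
}

void* worker(void* arg) {
    int i = *(int*)arg;
    for (int k = 0; k < 100000; k++) {
        enter(i);
        counter++;                 /* critical section */
        leave(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}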
11. Show how a semaphore provides solution to producer and consumer problem.
The Producer-Consumer Problem is a classic synchronization problem in computer science,
where two types of processes, producers and consumers, share a common, finite-sized buffer. The
producers produce items and place them into the buffer, while consumers remove items from the
buffer. The challenge lies in ensuring that the producers do not add items when the buffer is full,
and that consumers do not attempt to remove items from an empty buffer.
The solution to this problem requires synchronization mechanisms that can handle mutual exclusion
and ensure proper coordination between producers and consumers to prevent race conditions,
deadlocks, and data inconsistencies.
In the producer-consumer scenario, semaphores can be used effectively to synchronize access to the
shared buffer. We will use three semaphores to solve this problem:
1. mutex (binary semaphore): Ensures mutual exclusion when accessing the shared buffer. This
prevents both the producer and the consumer from accessing the buffer simultaneously.
2. empty (counting semaphore): Keeps track of how many empty slots are available in the buffer.
It is initialized to the size of the buffer (say N), indicating how many slots are available for
producers to place items.
3. full (counting semaphore): Keeps track of how many filled slots are in the buffer. It is
initialized to 0, indicating that the buffer starts empty.
Producer Process
A producer process works as follows:
1. Wait on the empty semaphore: This ensures that the producer waits if the buffer is full (i.e.,
no empty slots are available).
2. Wait on the mutex semaphore: This ensures mutual exclusion, preventing both the producer
and the consumer from accessing the buffer at the same time.
3. Produce an item: The producer generates an item and places it in the buffer.
4. Signal on the mutex semaphore: The producer releases the mutex, allowing consumers to
access the buffer.
5. Signal on the full semaphore: The producer increments the count of filled slots, signaling that
there is a new item available for consumption.
Consumer Process
A consumer process works as follows:
1. Wait on the full semaphore: This ensures that the consumer waits if the buffer is empty (i.e.,
no items are available for consumption).
2. Wait on the mutex semaphore: This ensures mutual exclusion, preventing both the producer
and the consumer from accessing the buffer simultaneously.
3. Remove an item: The consumer removes an item from the buffer and consumes it.
4. Signal on the mutex semaphore: The consumer releases the mutex, allowing producers to
access the buffer.
5. Signal on the empty semaphore: The consumer increments the count of empty slots, signaling
that there is space for producers to place new items.
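The producer and consumer routines below rely on the following shared declarations; this is a
sketch using POSIX semaphores (the buffer capacity N and the init helper are assumptions):
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define N 10 /* buffer capacity (assumed) */

sem_t mutex, empty, full;

void init_semaphores(void) {
    sem_init(&mutex, 0, 1); /* binary semaphore guarding the buffer */
    sem_init(&empty, 0, N); /* N empty slots initially */
    sem_init(&full,  0, 0); /* no filled slots initially */
}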
Producer Code
void* producer(void* param) {
while (1) {
int item = produce_item(); // Produce an item
sem_wait(&empty); // Wait for an empty slot
sem_wait(&mutex); // Enter critical section
add_to_buffer(item); // Add item to buffer
sem_post(&mutex); // Exit critical section
sem_post(&full); // Signal that buffer has a new item
sleep(1); // Simulate time taken to produce
}
}
Consumer Code
void* consumer(void* param) {
    while (1) {
        sem_wait(&full); // Wait for a filled slot
        sem_wait(&mutex); // Enter critical section
        int item = remove_from_buffer(); // Remove item from buffer
        sem_post(&mutex); // Exit critical section
        sem_post(&empty); // Signal that a slot is now free
        consume_item(item); // Consume outside the critical section
        sleep(1); // Simulate time taken to consume
    }
}
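A minimal driver wiring the two routines together (hypothetical; produce_item, add_to_buffer,
remove_from_buffer, and consume_item must be supplied by the surrounding program):
int main(void) {
    pthread_t prod, cons;
    init_semaphores(); /* from the sketch above */
    pthread_create(&prod, NULL, producer, NULL);
    pthread_create(&cons, NULL, consumer, NULL);
    pthread_join(prod, NULL); /* both loop forever; interrupt to stop */
    pthread_join(cons, NULL);
    return 0;
}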
12. Demonstrate multiprocessor scheduling and multi-level feedback queue scheduling
with a diagram.
Multiple-Processor Scheduling
One approach to CPU scheduling in a multiprocessor system has all scheduling decisions, I/O
processing, and other system activities handled by a single processor, the master server.
The other processors execute only user code. This asymmetric multiprocessing is simple because
only one processor accesses the system data structures, reducing the need for data sharing.
A second approach uses symmetric multiprocessing (SMP), where each processor is self-
scheduling. All processes may be in a common ready queue, or each processor may have its own
private queue of ready processes.
1. Processor Affinity
The data most recently accessed by the process is populated in the cache for the processor and
successive memory accesses by the process are often satisfied in cache memory.
If the process migrates to another processor, the contents of cache memory must be invalidated for
the processor being migrated from, and the cache for the processor being migrated to must be re-
populated. Because of the high cost of invalidating and re-populating caches, most SMP systems
try to avoid migrating processes from one processor to another and instead try to keep a
process running on the same processor. This is known as processor affinity, i.e., a process has an
affinity for the processor on which it is currently running.
Processor affinity takes several forms. When an operating system has a policy of attempting to
keep a process running on the same processor but not guaranteeing that it will do so, the situation
is known as soft affinity. Here, it is possible for a process to migrate between processors.
Some systems such as Linux provide system calls that support hard affinity, thereby allowing a
process to specify that it must not migrate to other processors.
2. Load Balancing
On SMP systems, it is important to keep the workload balanced among all processors to utilize the
benefits of having more than one processor. Otherwise, one or more processors may sit idle while
other processors have high workloads along with lists of processes awaiting the CPU.
Load balancing attempts to keep the workload evenly distributed across all processors in an SMP
system. There are two general approaches to load balancing: push migration and pull migration.
With push migration, a specific task periodically checks the load on each processor and, if it finds
an imbalance, evenly distributes the load by moving (or pushing) processes from overloaded to
idle or less-busy processors. Pull migration occurs when an idle processor pulls a waiting task
from a busy processor.
3. Symmetric Multithreading
SMP systems allow several threads to run concurrently by providing multiple physical processors.
An alternative strategy is to provide multiple logical processors rather than physical processors.
Such a strategy is known as symmetric multithreading (or SMT).
The idea behind SMT is to create multiple logical processors on the same physical processor,
presenting a view of several logical processors to the operating system, even on a system with only
a single physical processor.
Each logical processor has its own architecture state, which includes general-purpose and
machine-state registers, and is responsible for its own interrupt handling, meaning that interrupts
are delivered to and handled by logical processors rather than physical ones. Otherwise, each
logical processor shares the resources of its physical processor, such as cache memory and buses.
A typical SMT architecture (figure 5.5) has two physical processors, each housing two logical
processors. From the operating system's perspective, four processors are then available for work
on the system.
Multi-Level Feedback Queue (MLFQ) Scheduling
Multiple Queues:
MLFQ consists of several queues, each with a different priority.
The processes are initially placed in the highest priority queue.
Processes in higher-priority queues are allocated CPU time before those in lower-priority queues.
Time Quantum:
Each queue has its own time quantum (time slice) for process execution.
The higher-priority queues have smaller time quanta to ensure that short jobs are processed
quickly.
The lower-priority queues have larger time quanta, allowing longer processes more time to
execute.
Feedback Mechanism:
If a process consumes all its time quantum in a high-priority queue (i.e., it doesn't complete), it is
moved to a lower-priority queue.
If a process doesn’t use up its time quantum and finishes early, it stays in the same queue (or
may move to a higher-priority queue based on specific rules).
This feedback helps balance between short, interactive jobs and long-running, CPU-bound tasks.
Preemption:
When a higher-priority process arrives, it preempts the CPU, pushing the current process back to
a lower-priority queue.
Aging:
To prevent starvation, processes in lower-priority queues can be periodically moved to higher-
priority queues (aging).
Queue Execution:
The scheduler picks the process at the front of the highest-priority non-empty queue.
Completion or Preemption:
If a process completes before its quantum expires, it exits, or if a new high-priority process
arrives, preemption occurs.
Example:
Let’s take an example where there are 3 queues, each with different time quanta.
Queue 1 (Highest Priority): Time Quantum = 2 ms
Queue 2 (Medium Priority): Time Quantum = 4 ms
Queue 3 (Lowest Priority): Time Quantum = 8 ms
Processes arrive and are initially placed in Queue 1:
Process  Arrival Time  Burst Time  Initial Queue
P1       0             5           Queue 1
P2       1             6           Queue 1
P3       2             8           Queue 1
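A worked trace of this example (assuming strict priority between queues, FIFO order within a
queue, and no preemption in the middle of a quantum):
Queue 1 (quantum 2 ms): P1 runs 0-2 (3 ms left), P2 runs 2-4 (4 ms left), P3 runs 4-6
(6 ms left); all three are demoted to Queue 2.
Queue 2 (quantum 4 ms): P1 runs 6-9 and completes, P2 runs 9-13 and completes, P3 runs
13-17 (2 ms left) and is demoted to Queue 3.
Queue 3 (quantum 8 ms): P3 runs 17-19 and completes.
Gantt chart: | P1 | P2 | P3 | P1 | P2 | P3 | P3 | with boundaries at 0, 2, 4, 6, 9, 13, 17, 19.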
MLFQ Diagram
How processes move between queues in a Multi-Level Feedback Queue (MLFQ) can be
summarized as follows:
1. Queue 1 (High Priority): This queue serves the processes with the highest priority and the
shortest time quantum (e.g., 2 ms).
2. Queue 2 (Medium Priority): Processes that use up their time quantum in Queue 1 are
moved here. It has a larger time quantum (e.g., 4 ms).
3. Queue 3 (Low Priority): If processes still haven't completed after their time quantum in
Queue 2, they are moved to Queue 3, which has the largest time quantum (e.g., 8 ms).
Banker's Algorithm
1. Data Structures:
Available[]: The number of available instances of each resource type.
Maximum[][]: The maximum demand of each process for each resource type.
Allocation[][]: The resources currently allocated to each process.
Need[][]: The remaining resources required by each process to complete (Need[i][j] =
Maximum[i][j] - Allocation[i][j]).
2. Safety Check:
The algorithm uses a Work[] array and a Finish[] array to determine if the system is in a
safe state.
Work[]: Initially, it is set to the value of Available[], representing the resources currently
available.
Finish[]: Initially, all entries are set to False, representing that no processes have
finished.
3. Algorithm Process:
The system repeatedly searches for a process i such that Finish[i] == False and
Need[i][j] <= Work[j] for all resources j. This means that process i can be completed
with the current available resources.
If such a process i is found, it is simulated to finish, and the resources allocated to it are
added to Work[].
This process is repeated until all processes can finish (i.e., Finish[i] = True for all i).
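A sketch of this safety check in C (the process and resource counts NPROC and NRES are
assumptions for illustration):
#include <stdbool.h>

#define NPROC 5   /* number of processes (assumed) */
#define NRES  3   /* number of resource types (assumed) */

/* Returns true if the system is in a safe state. */
bool is_safe(int available[NRES],
             int allocation[NPROC][NRES],
             int need[NPROC][NRES]) {
    int work[NRES];
    bool finish[NPROC] = { false };

    for (int j = 0; j < NRES; j++)
        work[j] = available[j];          /* Work = Available */

    for (;;) {
        bool advanced = false;
        for (int i = 0; i < NPROC; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {               /* simulate process i finishing */
                for (int j = 0; j < NRES; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                advanced = true;
            }
        }
        if (!advanced) break;            /* no further process can proceed */
    }
    for (int i = 0; i < NPROC; i++)
        if (!finish[i]) return false;    /* some process can never finish */
    return true;
}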
Steps (when a process Pi makes a request Request[i]):
1. If Request[i] ≤ Need[i], go to step 2; otherwise, the process has exceeded its maximum claim.
2. If Request[i] ≤ Available, go to step 3; otherwise, the process will have to wait.
3. Pretend the requested resources are allocated, then check if the system is still in a safe state
using the Banker's safety algorithm above.
4. If the system is not in a safe state, revert the allocation and make the process wait.
2. Resource Preemption
Resource preemption is a recovery technique that involves forcibly taking resources away from
one or more processes and reallocating them to other processes in order to break the deadlock
cycle. The goal is to prevent any process from holding onto resources indefinitely and allow the
system to continue executing. By interrupting the deadlock cycle and ensuring that resources are
released, preemption helps restore progress in the system, but it may cause the interrupted
processes to be delayed or require additional work.
Preempt resources from processes: In this approach, the resources held by processes involved
in the deadlock are forcibly taken and reassigned to other processes that need them. For example,
if process P1 is holding resource R1 and needs R2 (held by P2), preemption could force P1 to
release R1 so that P2 can acquire it, finish its work, and release both R1 and R2, after which P1
can reacquire the resources it needs and continue. This reassignment breaks the deadlock by
making resources available to other processes, but it may result in processes being delayed or
needing to retry their operations.
Rollback: If preempting resources leads to the system entering an inconsistent state, the process
may be rolled back to a checkpoint before the deadlock occurred, allowing the resources to be
reallocated safely.
3. Rollback
Rollback is a recovery technique used to resolve deadlock by restoring a process or the entire
system to a previous, consistent state. This is typically done by undoing the changes made by
processes during their execution, particularly when those changes have contributed to a
deadlock. By rolling back to a checkpoint (a saved state), processes can release any resources
they were holding and reattempt execution from a safe point, thereby avoiding the circular wait
conditions that caused the deadlock.
System-level rollback: In this approach, the entire system's state is reverted to a point before the
deadlock occurred. This means all processes, memory states, and resources are restored to their
condition prior to the deadlock, effectively undoing all actions taken by processes since that
time. While this ensures that deadlock is cleared, it can result in the loss of a large amount of
work and time spent.
Process-level rollback: Here, only the processes that are part of the deadlock cycle are rolled
back to their last checkpoint or safe state. These processes are restarted from that point, allowing
them to attempt execution again while releasing resources they were holding. This method is
more granular and minimizes the amount of lost work, but it may not always be sufficient if the
deadlock involves complex interdependencies between many processes.