OS Unit 3 Digital Notes
This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only to the respective group /
learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender
immediately by e-mail if you have received this document by mistake and delete
this document from your system. If you are not the intended recipient you are
notified that disclosing, copying, distributing or taking any action in reliance on
the contents of this information is strictly prohibited.
22CS304
OPERATING SYSTEMS
(Lab Integrated)
UNIT III
CONTENTS

1. Contents
2. Course Objectives
3. Pre Requisites (Course Names with Code)
7. Lecture Plan
9. Lecture Notes
10. Lecture Slides
11. Lecture Videos
12. Assignments
13. Part A (Q & A)
14. Part B Qs
18. Assessment Schedule
COURSE OBJECTIVES
The course will enable learners to:
• Explain the basic concepts of operating systems and processes.
• Discuss threads and implement various CPU scheduling algorithms.
• Describe the concept of process synchronization and implement deadlock-handling algorithms.
• Analyze various memory management schemes.
• Describe I/O management and file systems.
PREREQUISITE
Nil

22CS304 - OPERATING SYSTEMS
SYLLABUS (2023)
PO's/PSO's

COs | PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 | PSO1 PSO2 PSO3
CO1 |  3   2   2   2   2   -   -   -   2   -    -    2   |  2    -    -
CO2 |  3   3   2   2   2   -   -   -   2   -    -    2   |  2    -    -
CO3 |  2   2   1   1   1   -   -   -   1   -    -    1   |  2    -    -
CO4 |  3   3   1   1   1   -   -   -   1   -    -    1   |  2    -    -
CO5 |  3   3   1   1   1   -   -   -   1   -    -    1   |  3    1    -
LECTURE PLAN

S No | Topic | Periods | Pertaining CO | Taxonomy Level | Mode of Delivery
1 | Process Synchronization - The critical-section problem | 1 | CO3 | K2 | ICT Tools
2 | Peterson's Solution - Synchronization hardware | 1 | CO3 | K2 | ICT Tools
3 | Mutex locks, Semaphores | 1 | CO3 | K3 | ICT Tools
4 | Monitors, Liveness | 1 | CO3 | K2 | ICT Tools
5 | Classic problems of synchronization - Bounded Buffer Problem, Readers & Writers Problem, Dining Philosophers Problem, Barber's shop problem | 1 | CO3 | K2 | ICT Tools
6 | Deadlock - System model - Deadlock characterization, Methods for handling deadlocks | 1 | CO3 | K2 | ICT Tools
7 | Deadlock prevention - Deadlock avoidance | 1 | CO3 | K3 | ICT Tools
Lecture Notes

3.1 Background
Consider the bounded-buffer producer-consumer scheme in which the producer and the consumer share an integer variable counter, initialized to 0; counter is incremented every time an item is added to the buffer and decremented every time an item is removed:

/* Producer */
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

/* Consumer */
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Although the producer and consumer routines shown above are correct separately, they may not function correctly when executed concurrently. As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and consumer processes concurrently execute the statements "counter++" and "counter--". Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer execute separately.
We can show that the value of counter may be incorrect as follows. Note that the statement "counter++" may be implemented in machine language as follows:

register1 = counter
register1 = register1 + 1
counter = register1

where register1 is one of the local CPU registers. Similarly, the statement "counter--" is implemented as follows:

register2 = counter
register2 = register2 − 1
counter = register2
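To make the race concrete, here is a minimal compilable sketch (not from the notes; thread names and iteration counts are illustrative) in which two POSIX threads perform the unsynchronized increments and decrements described above. Compiled with gcc -pthread, the final value is frequently nonzero because increments and decrements interleave at the register level:

#include <pthread.h>
#include <stdio.h>

int counter = 0;   /* shared, deliberately unprotected */

void *producer_like(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;          /* load, add, store: not atomic */
    return NULL;
}

void *consumer_like(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter--;          /* races with the increments above */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, producer_like, NULL);
    pthread_create(&t2, NULL, consumer_like, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* frequently not 0 */
    return 0;
}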
Reference Video
Introduction to Threads
https://youtu.be/LOfGJcVnvAk
3.2 THE CRITICAL-SECTION PROBLEM
Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has
a segment of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. The important feature of the system
is that, when one process is executing in its critical section, no other process is allowed to execute
in its critical section. Each process must request permission to enter its critical section.
The section of code implementing this request is the entry section. The critical
section may be followed by an exit section. The remaining code is the remainder section.
The general structure of a typical process Pi is shown in Figure 3.1.
3.3 Peterson's Solution
Peterson's solution requires the two processes to share two data items:

int turn;
boolean flag[2];

The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate if a process is ready to enter its critical section. For example, if flag[i] is true, Pi is ready to enter its critical section.
To enter the critical section, process Pi first sets flag[i] to be true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur but will be overwritten immediately. The eventual value of turn determines which of the two processes is allowed to enter its critical section first.
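The resulting structure of process Pi, in the standard form of Peterson's algorithm (with j == 1 − i denoting the other process), is:

while (true) {
    flag[i] = true;        /* Pi announces it is ready */
    turn = j;              /* give way to Pj on a tie */
    while (flag[j] && turn == j)
        ;                  /* busy wait */

    /* critical section */

    flag[i] = false;       /* Pi leaves its critical section */

    /* remainder section */
}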
If turn == j, then Pj will enter the critical section. However, once Pj exits its critical
section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to
true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while
executing the while statement, Pi will enter the critical section (progress) after at most one entry
by Pj (bounded waiting).
3.4 Hardware Support for Synchronization
Software-based solutions are not guaranteed to work on modern computer architectures. Three hardware primitives provide support for solving the critical-section problem. These primitive operations can be used directly as synchronization tools, or they can be used to form the foundation of more abstract synchronization mechanisms:
• Memory barriers
• Hardware instructions
• Atomic variables
1. Memory Barriers
A system may reorder instructions, a policy that can lead to unreliable data states. How a computer architecture determines what memory guarantees it will provide to an application program is known as its memory model. In general, a memory model falls into one of two categories:
1. Strongly ordered, where a memory modification on one processor is immediately visible to all other processors.
2. Weakly ordered, where modifications to memory on one processor may not be immediately visible to other processors.
Memory models vary by processor type, so kernel developers cannot make any
assumptions regarding the visibility of modifications to memory on a shared-memory
multiprocessor. To address this issue, computer architectures provide instructions that can
force any changes in memory to be propagated to all other processors, thereby ensuring that
memory modifications are visible to threads running on other processors.
Such instructions are known as memory barriers. In the example below (with x = 0 and flag = false initially), Thread 1's barrier guarantees that the value of flag is loaded before the value of x, and Thread 2's barrier ensures that the assignment to x becomes visible before the assignment to flag:

Thread 1:
while (!flag)
    memory_barrier();
print x;

Thread 2:
x = 100;
memory_barrier();
flag = true;
2. Hardware Instructions
Many modern systems provide special hardware instructions that either test and modify the content of a word or swap the contents of two words atomically. The important characteristic of the test_and_set() instruction is that it is executed atomically. Thus, if two test_and_set() instructions are executed simultaneously (each on a different core), they will be executed sequentially in some arbitrary order. If the machine supports the test_and_set() instruction, then we can implement mutual exclusion by declaring a boolean variable lock, initialized to false. The structure of process Pi is shown below.
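The standard definition of test_and_set() and the corresponding structure of process Pi are:

boolean test_and_set(boolean *target) {
    boolean rv = *target;   /* remember the old value */
    *target = true;         /* atomically set the lock word */
    return rv;
}

/* structure of process Pi, with lock initialized to false */
do {
    while (test_and_set(&lock))
        ;   /* busy wait: lock was already true */

    /* critical section */

    lock = false;

    /* remainder section */
} while (true);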
The compare_and_swap() instruction (CAS), just like the test_and_set() instruction, operates on two words atomically, but uses a different mechanism that is based on swapping the content of two words. The CAS instruction operates on three operands and is defined as below.
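The standard definition is:

int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;        /* CAS always returns the original value */
    if (*value == expected)
        *value = new_value;   /* swap only when the comparison holds */
    return temp;
}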
The operand value is set to new_value only if the expression (*value == expected) is true. Regardless, CAS always returns the original value of the variable value. The important characteristic of this instruction is that it is executed atomically. Thus, if two CAS instructions are executed simultaneously (each on a different core), they will be executed sequentially in some arbitrary order.
Mutual exclusion using CAS can be provided as follows: a global variable (lock) is declared and is initialized to 0. The first process that invokes compare_and_swap() will set lock to 1. It will then enter its critical section, because the original value of lock was equal to the expected value of 0. Subsequent calls to compare_and_swap() will not succeed, because lock now is not equal to the expected value of 0. When a process exits its critical section, it sets lock back to 0, which allows another process to enter its critical section. The structure of process Pi is shown below.
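/* structure of process Pi, with the global lock initialized to 0 */
while (true) {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ;   /* busy wait: another process holds the lock */

    /* critical section */

    lock = 0;   /* release the lock */

    /* remainder section */
}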
3. Atomic Variables
Typically, the compare_and_swap() instruction is not used directly for mutual exclusion; rather, it serves as a building block for other tools, such as an atomic variable, which provides atomic operations on basic data types such as integers and booleans. These atomic functions are often implemented using compare_and_swap() operations. As an example, the following increments the atomic integer sequence:

increment(&sequence);

where the increment() function is implemented using a CAS retry loop:

void increment(atomic_int *v)
{
    int temp;

    do {
        temp = *v;
    }
    while (temp != compare_and_swap(v, temp, temp + 1));
}
It is important to note that although atomic variables provide atomic
updates, they do not entirely solve race conditions in all circumstances.
3.5 Mutex Locks
Operating-system designers build higher-level software tools to solve the critical-section problem, the simplest of which is the mutex lock. A process must acquire the lock before entering a critical section and release the lock when it exits; a boolean variable available indicates whether the lock is available:

acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

release() {
    available = true;
}
The main disadvantage of the implementation given here is that it
requires busy waiting. While a process is in its critical section, any other process that
tries to enter its critical section must loop continuously in the call to acquire(). This
continual looping is clearly a problem in a real multiprogramming system, where a single
CPU core is shared among many processes. Busy waiting also wastes CPU cycles that
some other process might be able to use productively.
The type of mutex lock we have been describing is also called a spinlock
because the process “spins” while waiting for the lock to become available. Spinlocks do
have an advantage, however, in that no context switch is required when a process
must wait on a lock, and a context switch may take considerable time. In certain
circumstances on multicore systems, spinlocks are in fact the preferable choice for
locking. If a lock is to be held for a short duration, one thread can “spin” on one
processing core while another thread performs its critical section on another core.
3.6 Semaphores
Mutex locks are generally considered the simplest of synchronization tools; a semaphore is a more robust tool that can behave like a mutex lock but can also provide more sophisticated ways for processes to synchronize. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). Semaphores were introduced by the Dutch computer scientist Edsger Dijkstra, and as such, the wait() operation was originally termed P (from the Dutch proberen, "to test"); signal() was originally called V (from verhogen, "to increment"). The definition of wait() is as follows:
wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}

The definition of signal() is as follows:

signal(S) {
    S++;
}
All modifications to the integer value of the semaphore in the wait() and signal()
operations must be executed atomically. That is, when one process modifies the
semaphore value, no other process can simultaneously modify that same semaphore
value. In addition, in the case of wait(S), the testing of the integer value of S (S ≤ 0), as
well as its possible modification (S--), must be executed without interruption. We shall see below how these operations can be implemented. First, let's see how semaphores can be used.
3.6.1 Semaphore Usage
Operating systems often distinguish between counting and binary
semaphores. The value of a counting semaphore can range over an unrestricted domain.
The value of a binary semaphore can range only between 0 and 1. Thus, binary
semaphores behave similarly to mutex locks. On systems that do not provide mutex
locks, binary semaphores can be used instead for providing mutual exclusion.
For example, consider two concurrently running processes: P1 with a statement S1 and P2 with a statement S2. Suppose we require that S2 be executed only after S1 has completed. We can implement this scheme by letting P1 and P2 share a common semaphore synch, initialized to 0. In process P1, we insert the statements

S1;
signal(synch);

and in process P2, the statements

wait(synch);
S2;
However, rather than engaging in busy waiting, the process can suspend
itself. The suspend operation places a process into a waiting queue associated with the
semaphore, and the state of the process is switched to the waiting state. Then control is
transferred to the CPU scheduler, which selects another process to execute.
To implement semaphores under this definition, we define a semaphore as follows:

typedef struct {
    int value;
    struct process *list;
} semaphore;
Each semaphore has an integer value and a list of processes list. When a process must
wait on a semaphore, it is added to the list of processes. A signal() operation removes
one process from the list of waiting processes and awakens that process.
Now, the wait() semaphore operation can be defined as

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        sleep();
    }
}

and the signal() semaphore operation can be defined as

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
The sleep() operation suspends the process that invokes it. The wakeup(P) operation resumes the execution of a suspended process P. These two operations are provided by the operating system as basic system calls.
3.7 Monitors
Although semaphores provide a convenient and effective mechanism for
process synchronization, using them incorrectly can result in timing errors that are difficult
to detect, since these errors happen only if particular execution sequences take place, and
these sequences do not always occur.
Suppose that a program interchanges the order in which the wait() and signal()
operations on the semaphore mutex are executed, resulting in the following
execution:
signal(mutex);
...
critical section
...
wait(mutex);
In this situation, several processes may be executing in their critical sections
simultaneously, violating the mutual-exclusion requirement. This error may be
discovered only if several processes are simultaneously active in their critical sections.
Note that this situation may not always be reproducible.
Suppose that a program replaces signal(mutex) with wait(mutex). That is, it executes
wait(mutex);
...
critical section
...
wait(mutex);
In this case, the process will permanently block on the second call to wait(), as the
semaphore is now unavailable.
Suppose that a process omits the wait(mutex), or the signal(mutex), or both. In this
case, either mutual exclusion is violated or the process will permanently block.
These examples illustrate that various types of errors can be generated easily when
programmers use semaphores or mutex locks incorrectly to solve the critical-section
problem. One strategy for dealing with such errors is to incorporate simple
synchronization tools as high-level language constructs. This leads to one fundamental high-level synchronization construct—the monitor type.
3.7.1 Monitor Usage
An abstract data type—or ADT—encapsulates data with a set of functions to operate on that data that are independent of any specific implementation of the ADT. A monitor type is an ADT that includes a set of programmer-defined operations that are provided with mutual exclusion within the monitor. The monitor type also declares the variables whose values define the state of an instance of that type, along with the bodies of functions that operate on those variables.
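The general syntax of a monitor, in the textbook's pseudocode style, is:

monitor monitor_name
{
    /* shared variable declarations */

    function P1(...) {
        ...
    }

    function P2(...) {
        ...
    }

    initialization_code(...) {
        ...
    }
}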
The monitor construct ensures that only one process at a time is active within the monitor. Consequently, the programmer does not need to code this synchronization constraint explicitly.
The monitor construct as defined so far is not powerful enough to model some synchronization schemes, so additional mechanisms are needed. These mechanisms are provided by the condition construct. A programmer who needs to write a tailor-made synchronization scheme can define one or more variables of type condition:

condition x, y;
The only operations that can be invoked on a condition variable are wait() and signal().
The operation
x.wait();
means that the process invoking this operation is suspended until another process
invokes
x.signal();
The x.signal() operation resumes exactly one suspended process. If no process is
suspended, then the signal() operation has no effect; that is, the state of x is the same
as if the operation had never been executed. Contrast this operation with the signal()
operation associated with semaphores, which always affects the state of the semaphore.
Now suppose that, when the x.signal() operation is invoked by a process P, there exists a suspended process Q associated with condition x. If Q is allowed to resume its execution, P must wait; otherwise, both would be active simultaneously within the monitor. Two possibilities exist:

1. Signal and wait. P either waits until Q leaves the monitor or waits for another condition.
2. Signal and continue. Q either waits until P leaves the monitor or waits for another condition.
There are reasonable arguments in favor of adopting either option. On the one hand, since P was already executing in the monitor, the signal-and-continue method seems more reasonable. On the other hand, if we allow thread P to continue, then by the time Q is resumed, the logical condition for which Q was waiting may no longer hold. A compromise between these two choices exists as well: when thread P executes the signal operation, it immediately leaves the monitor. Hence, Q is immediately resumed.
For each monitor, a binary semaphore mutex (initialized to 1) is provided, so that a process must execute wait(mutex) before entering the monitor and signal(mutex) after leaving it. Since a signaling process must wait until the resumed process either leaves or waits, an additional binary semaphore, next, is introduced, initialized to 0. The signaling processes can use next to suspend themselves. An integer variable next_count is also provided to count the number of processes suspended on next. Thus, each external function F is replaced by

wait(mutex);
...
body of F
...
if (next_count > 0)
    signal(next);
else
    signal(mutex);

Mutual exclusion within the monitor is thereby ensured.

A process that needs to access a resource controlled by an instance R of a resource-allocator monitor must observe the following sequence, where t is the maximum time the process plans to use the resource:

R.acquire(t);
...
access the resource;
...
R.release();

Unfortunately, the monitor concept cannot guarantee that this sequence will be observed. In particular, the following problems can occur:
• A process might access a resource without first gaining access permission to the resource.
• A process might never release a resource once it has been granted access to the resource.
To ensure that the processes observe the appropriate sequences, we must inspect all the
programs that make use of the Resource Allocator monitor and its managed resource.
We must check two conditions to establish the correctness of this system.
First, user processes must always make their calls on the monitor in a correct
sequence.
Second, we must be sure that an uncooperative process does not simply ignore the
mutual-exclusion gateway provided by the monitor and try to access the shared
resource directly, without using the access protocols.
Only if these two conditions can be ensured can we guarantee that no time-
dependent errors will occur and that the scheduling algorithm will not be defeated.
Although this inspection may be possible for a small, static system, it is not reasonable for a large system or a dynamic system. This access-control problem can be solved only through the use of additional mechanisms.
3.8 Liveness
One consequence of using synchronization tools to coordinate access to critical sections is the possibility that a process attempting to enter its critical section will wait indefinitely. Recall the three criteria that solutions to the critical-section problem must satisfy: mutual exclusion, progress, and bounded waiting. Indefinite waiting violates two of these—the progress and bounded-waiting criteria.
There are many different forms of liveness failure; however, all are generally characterized by poor performance and responsiveness. A very simple example of a liveness failure is an infinite loop. A busy-wait loop presents the possibility of a liveness failure, especially if a process may loop an arbitrarily long period of time. Efforts at providing mutual exclusion using tools such as mutex locks and semaphores can often lead to such failures in concurrent programming.
Two situations that can lead to liveness failures are deadlock and priority inversion.
1. Deadlock
The implementation of a semaphore with a waiting queue may result in a
situation where two or more processes are waiting indefinitely for an event that
can be caused only by one of the waiting processes. The event in question is the
execution of a signal() operation. When such a state is reached, these processes are said to be
deadlocked.
To illustrate this, consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, set to the value 1:

P0:              P1:
wait(S);         wait(Q);
wait(Q);         wait(S);
...              ...
signal(S);       signal(Q);
signal(Q);       signal(S);

Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are deadlocked.

3.9 Classic Problems of Synchronization

3.9.1 The Bounded-Buffer Problem
In our problem, the producer and consumer processes share the following data structures:

int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
We assume that the pool consists of n buffers, each capable of holding one
item. The mutex binary semaphore provides mutual exclusion for accesses to the buffer
pool and is initialized to the value 1. The empty and full semaphores count the number of
empty and full buffers. The semaphore empty is initialized to the value n; thesemaphore
full is initialized to the value 0. Note the symmetry between the producer and the
consumer. We can interpret this code as the producer producing full buffers for the
consumer or as the consumer producing empty buffers for the producer.
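The producer and consumer structures in the standard bounded-buffer solution are sketched below; the comments mark where items are produced and consumed:

/* Producer: fills buffers */
while (true) {
    /* produce an item in next_produced */
    wait(empty);     /* wait for an empty slot */
    wait(mutex);     /* enter the critical section */
    /* add next_produced to the buffer */
    signal(mutex);
    signal(full);    /* one more full slot */
}

/* Consumer: empties buffers */
while (true) {
    wait(full);      /* wait for a full slot */
    wait(mutex);
    /* remove an item from the buffer to next_consumed */
    signal(mutex);
    signal(empty);   /* one more empty slot */
    /* consume the item in next_consumed */
}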
3.9.2 The Readers –Writers Problem
Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. Obviously, if two readers access the shared data simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue.
To ensure that these difficulties do not arise, we require that the writers
have exclusive access to the shared database while writing to the database. This
synchronization problem is referred to as the readers–writers problem. Since it was
originally stated, it has been used to test nearly every new synchronization primitive.
The readers–writers problem has several variations, all involving priorities.
The simplest one, referred to as the first readers–writers problem, requires that no
reader be kept waiting unless a writer has already obtained permission to use the
shared object.
In other words, no reader should wait for other readers to finish simply
because a writer is waiting. The second readers–writers problem requires that, once a
writer is ready, that writer perform its write as soon as possible.
For this reason, other variants of the problem have been proposed. Next, we present
a solution to the first readers–writers problem. See the bibliographical notes at the end of
the chapter for references describing starvation-free solutions to the second readers–
writers problem.
In the solution to the first readers–writers problem, the reader processes share the following data structures:

semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;

The binary semaphores mutex and rw_mutex are initialized to 1; read_count is an integer initialized to 0. The semaphore rw_mutex is common to both reader and writer processes. The mutex semaphore is used to ensure mutual exclusion when the variable read_count is updated.
The read count variable keeps track of how many processes are currently
reading the object. The semaphore rw mutex functions as a mutual exclusion
semaphore for the writers. It is also used by the first or last reader that enters or exits
the critical section. It is not used by readers that enter or exit while other readers are in
their critical sections.
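The standard structures of the writer and reader processes using these semaphores are:

/* Writer process */
while (true) {
    wait(rw_mutex);          /* exclusive access for the writer */
    /* writing is performed */
    signal(rw_mutex);
}

/* Reader process */
while (true) {
    wait(mutex);
    read_count++;
    if (read_count == 1)     /* first reader locks out writers */
        wait(rw_mutex);
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    read_count--;
    if (read_count == 0)     /* last reader readmits writers */
        signal(rw_mutex);
    signal(mutex);
}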
Note that, if a writer is in the critical section and n readers are waiting, then one reader is queued on rw_mutex, and n − 1 readers are queued on mutex. Also observe that, when a writer executes signal(rw_mutex), we may resume the execution of either the waiting readers or a single waiting writer. The selection is made by the scheduler. The readers–writers problem and its solutions have been generalized to provide reader–writer locks on some systems.
Reader–writer locks are most useful in applications that have more readers than writers. This is because reader–writer locks generally require more overhead to establish than semaphores or mutual-exclusion locks, and the increased concurrency of allowing multiple readers compensates for this overhead.
3.9.3 The Dining-Philosophers Problem

Semaphore Solution
One simple solution is to represent each chopstick with a semaphore (semaphore chopstick[5], with all elements initialized to 1). A philosopher grabs a chopstick by executing a wait() operation on the corresponding semaphore and releases it by executing the signal() operation. Although this guarantees that no two neighbors are eating simultaneously, it could create a deadlock:
• Suppose that all five philosophers become hungry at the same time and each grabs her left chopstick. All the elements of chopstick will now be equal to 0. When each philosopher tries to grab her right chopstick, she will be delayed forever.
Several possible remedies to the deadlock problem are the following:
• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).
• Use an asymmetric solution—that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick.
Monitor Solution
Let us see a deadlock-free solution to the dining-philosophers problem. This solution imposes the restriction that a philosopher may pick up her chopsticks only if both of them are available. To code this solution, we need to distinguish among three states in which we may find a philosopher. For this purpose, we introduce the following data structure:

enum {THINKING, HUNGRY, EATING} state[5];

Philosopher i can set the variable state[i] = EATING only if her two neighbors are not eating. We also need to declare

condition self[5];

This allows philosopher i to delay herself when she is hungry but is unable to obtain the chopsticks she needs.
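The full DiningPhilosophers monitor, in the standard textbook form, is sketched below; the test() helper implements the both-chopsticks rule, and self[i] suspends a hungry philosopher who cannot yet eat:

monitor DiningPhilosophers
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);                 /* try to eat immediately */
        if (state[i] != EATING)
            self[i].wait();      /* otherwise wait to be signaled */
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);       /* let the left neighbor try */
        test((i + 1) % 5);       /* let the right neighbor try */
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}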
Each philosopher, before starting to eat, must invoke the operation pickup() and, after eating, the operation putdown(), in the following sequence:

DiningPhilosophers.pickup(i);
...
eat
...
DiningPhilosophers.putdown(i);
It is easy to show that this solution ensures that no two neighbors are eating
simultaneously and that no deadlocks will occur. As we already noted, however, it is possible
for a philosopher to starve to death.
3.10 DEADLOCKS
3.10.1. System Model
A thread must request a resource before using it and must release the
resource after using it. A thread may request as many resources as it requires to carry
out its designated task. Obviously, the number of resources requested may not exceed
the total number of resources available in the system. In other words, a thread cannot
request two network interfaces if the system has only one.
Under the normal mode of operation, a thread may utilize a resource in only the following sequence:

1. Request. The thread requests the resource. If the request cannot be granted immediately (for example, if a mutex lock is currently held by another thread), then the requesting thread must wait until it can acquire the resource.
2. Use. The thread can operate on the resource (for example, if the resource is a mutex lock, the thread can access its critical section).
3. Release. The thread releases the resource.
A set of threads is in a deadlocked state when every thread in the set is waiting for an event that can be caused only by another thread in the set. The events with which we are mainly concerned here are resource acquisition and release. The resources are typically logical (for example, mutex locks, semaphores, and files); however, other types of events may result in deadlocks, including reading from a network interface or the IPC (interprocess communication) facilities.
3.10.2 Deadlock Characterization
The four conditions that characterize deadlock are listed below.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
1. Mutual exclusion. At least one resource must be held in a non-sharable mode; that
is, only one thread at a time can use the resource. If another thread requests that
resource, the requesting thread must be delayed until the resource has been released.
2. Hold and wait. A thread must be holding at least one resource and waiting to acquire additional resources that are currently being held by other threads.
3. No preemption. Resources cannot be preempted; that is, a resource can be released only voluntarily by the thread holding it, after that thread has completed its task.
4. Circular wait. A set {T0, T1, ..., Tn} of waiting threads must exist such that T0 is waiting for a resource held by T1, T1 is waiting for a resource held by T2, ..., Tn−1 is waiting for a resource held by Tn, and Tn is waiting for a resource held by T0.
Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph
called a system resource-allocation graph. This graph consists of a set of vertices V
and a set of edges E. The set of vertices V is partitioned into two different types of nodes:
T = {T1, T2, ..., Tn}, the set consisting of all the active threads in the system, and R =
{R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from thread Ti to resource type Rj is denoted by Ti →
Rj; it signifies that thread Ti has requested an instance of resource type Rj and
is currently waiting for that resource. A directed edge from resource type Rj to thread Ti
is denoted by Rj → Ti ; it signifies that an instance of resource type Rj has been
allocated to thread Ti. A directed edge Ti → Rj is called a request edge; a directed
edge Rj → Ti is called an assignment edge. We represent each thread Ti as a circle
and each resource type Rj as a rectangle.
When thread Ti requests an instance of resource type Rj, a request edge is inserted in the resource-allocation graph. When this request can be fulfilled, the request edge is instantaneously transformed to an assignment edge. When the thread no longer needs access to the resource, it releases the resource. As a result, the assignment edge is deleted.
If each resource type has exactly one instance, then a cycle implies that
a deadlock has occurred. If the cycle involves only a set of resource types, each of
which has only a single instance, then a deadlock has occurred. Each thread involved in
the cycle is deadlocked. In this case, a cycle in the graph is both a necessary and
a sufficient condition for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock. For example, suppose the graph contains the two cycles

T1 → R1 → T2 → R3 → T3 → R2 → T1
T2 → R3 → T3 → R2 → T2

Threads T1, T2, and T3 are deadlocked. Thread T2 is waiting for the resource R3, which is held by thread T3. Thread T3 is waiting for either thread T1 or thread T2 to release resource R2. In addition, thread T1 is waiting for thread T2 to release resource R1.
3.11 Methods for Handling Deadlocks
Generally speaking, we can deal with the deadlock problem in one of three ways:
• We can ignore the problem altogether and pretend that deadlocks never occur in the system.
• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state.
• We can allow the system to enter a deadlocked state, detect it, and recover.
The first solution is the one used by most operating systems, including Linux and Windows. It is then up to kernel and application developers to write programs that handle deadlocks, typically using approaches outlined in the second solution. Some systems—such as databases—adopt the third solution, allowing deadlocks to occur and then managing the recovery.
3.12 Deadlock Prevention
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme. Deadlock prevention provides
a set of methods to ensure that at least one of the necessary conditions cannot
hold. These methods prevent deadlocks by constraining how requests for resources can
be made.
1. Mutual Exclusion
The mutual-exclusion condition must hold. That is, at least one resource must be non-
sharable. Sharable resources do not require mutually exclusive access and thus
cannot be involved in a deadlock. Read-only files are a good example of a sharable
resource. If several threads attempt to open a read-only file at the same time, they can
be granted simultaneous access to the file. A thread never needs to wait for a sharable
resource. In general, however, we cannot prevent deadlocks by denying the
mutual-exclusion condition, because some resources are intrinsically
non-sharable. For example, a mutex lock cannot be simultaneously shared by several
threads.
3. No Preemption
The third necessary condition for deadlocks is that there be no preemption
of resources that have already been allocated. To ensure that this condition does not
hold, we can use the following protocol. If a thread is holding some resources and
requests another resource that cannot be immediately allocated to it (that is, the thread
must wait), then all resources the thread is currently holding are preempted. In other
words, these resources are implicitly released.
The preempted resources are added to the list of resources for which the
thread is waiting. The thread will be restarted only when it can regain its old resources,
as well as the new ones that it is requesting. Alternatively, if a thread requests some
resources, we first check whether they are available. If they are, we allocate them. If
they are not, we check whether they are allocated to some other thread that is waiting
for additional resources.
If so, we preempt the desired resources from the waiting thread and
allocate them to the requesting thread. If the resources are neither available nor held by
a waiting thread, the requesting thread must wait. While it is waiting, some of its resources
may be preempted, but only if another thread requests them. A thread can berestarted
only when it is allocated the new resources it is requesting and recovers any resources
that were preempted while it was waiting.
This protocol is often applied to resources whose state can be easily saved and restored later, such as CPU registers and database transactions. It cannot generally be applied to such resources as mutex locks and semaphores, precisely the type of resources where deadlock most commonly occurs.

4. Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a total ordering of all resource types and to require that each thread requests resources in an increasing order of enumeration. To illustrate, we let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each resource type a unique integer number, which allows us to compare two resources and to determine whether one precedes another in our ordering. Formally, we define a one-to-one function F: R → N, where N is the set of natural numbers. We can accomplish this scheme in an application program by developing an ordering among all synchronization objects in the system.
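As an illustration (a hypothetical sketch, not from the notes), the ordering F can simply be each lock's address, and a helper can enforce acquiring the lower-ordered lock first so that no circular wait can form between threads that lock the same pair:

#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in a fixed global order: F(lock) = its address. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if ((uintptr_t)a > (uintptr_t)b) {   /* swap so *a is lower-ordered */
        pthread_mutex_t *tmp = a;
        a = b;
        b = tmp;
    }
    pthread_mutex_lock(a);   /* always lower-numbered resource first */
    pthread_mutex_lock(b);
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    pthread_mutex_unlock(a); /* unlock order does not affect safety */
    pthread_mutex_unlock(b);
}

Two threads that call lock_pair() on the same two mutexes can never deadlock, because both will contend for the lower-ordered lock first.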
3.13 Deadlock Avoidance
Deadlock-prevention algorithms prevent deadlocks by limiting how
requests can be made. The limits ensure that at least one of the necessary
conditions for deadlock cannot occur. Possible side effects of preventing deadlocks
by this method, however, are low device utilization and reduced system
throughput.
An alternative method for avoiding deadlocks is to require additional
information about how resources are to be requested. For example, in a system with
resources R1 and R2, the system might need to know that thread T will request first R1
and then R2 before releasing both resources, whereas thread Q will request R2 and then
R1. With this knowledge of the complete sequence of requests and releases for each
thread, the system can decide for each request whether or not the thread should wait in
order to avoid a possible future deadlock.
Each request requires that in making this decision the system consider the
resources currently available, the resources currently allocated to each thread, and the
future requests and releases of each thread. The various algorithms that use this
approach differ in the amount and type of information required. The simplest and most
useful model requires that each thread declare the maximum number of resources of each
type that it may need. Given this a priori information, it is possible to construct an
algorithm that ensures that the system will never enter a deadlocked state.
3.13.1 Safe State
A state is safe if the system can allocate resources to each thread (up to its maximum) in some order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence of threads <T1, T2, ..., Tn> such that, for each Ti, the resource requests that Ti can still make can be satisfied by the currently available resources plus the resources held by all Tj, with j < i. When Ti terminates, Ti+1 can obtain its needed resources, and so on. If no such sequence exists, then the system state is said to be unsafe. A safe state is not a deadlocked state. Conversely, a deadlocked state is an unsafe state. Not all unsafe states are deadlocks, however. An unsafe state may lead to a deadlock. As long as the state is safe, the operating system can avoid unsafe (and deadlocked) states. In an unsafe state, the operating system cannot prevent threads from requesting resources in such a way that a deadlock occurs. The behavior of the threads controls unsafe states.
To illustrate, consider a system with twelve resources and three threads: T0, T1, and T2. Thread T0 requires ten resources at maximum, thread T1 may need as many as four, and thread T2 may need up to nine. Suppose that, at time t0, thread T0 is holding five resources, thread T1 is holding two, and thread T2 is holding two, leaving three resources free. At time t0, the system is in a safe state. The sequence <T1, T0, T2> satisfies the safety condition. Thread T1 can immediately be allocated all its resources and then return them (the system will then have five available resources); then thread T0 can get all its resources and return them (the system will then have ten available resources); and finally thread T2 can get all its resources and return them (the system will then have all twelve resources available).
A system can go from a safe state to an unsafe state. Suppose that, at time
t1, thread T2 requests and is allocated one more resource. The system is no longer in a
safe state. At this point, only thread T1 can be allocated all its resources. When it returns
them, the system will have only four available resources.
Since thread T0 is allocated five resources but has a maximum of ten, it may
request five more resources. If it does so, it will have to wait, because they are
unavailable. Similarly, thread T2 may request six additional resources and have to wait,
resulting in a deadlock. Our mistake was in granting the request from thread T2 for one
more resource. If we had made T2 wait until either of the other threads had finished and
released its resources, then we could have avoided the deadlock.
Given the concept of a safe state, we can define avoidance algorithms that ensure that
the system will never deadlock. The idea is simply to ensure that the system will always
remain in a safe state. Initially, the system is in a safe state.
Whenever a thread requests a resource that is currently available, the system must decide
whether the resource can be allocated immediately or the thread must wait. The request
is granted only if the allocation leaves the system in a safe state.
3.13.2 Resource-Allocation-Graph Algorithm
If we have a resource-allocation system with only one instance of each resource type, we can use a variant of the resource-allocation graph defined earlier for deadlock avoidance. In addition to the request and assignment edges already described, we introduce a new type of edge, called a claim edge. A claim edge Ti → Rj indicates that thread Ti may request resource Rj at some time in the future; it is represented in the graph by a dashed line.
Now suppose that thread Ti requests resource Rj. The request can be
granted only if converting the request edge Ti → Rj to an assignment edge Rj → Ti does
not result in the formation of a cycle in the resource-allocation graph. We check for safety
by using a cycle-detection algorithm. An algorithm for detecting a cycle in this graph
requires an order of n² operations, where n is the number of threads in the system.
If no cycle exists, then the allocation of the resource will leave the system
in a safe state. If a cycle is found, then the allocation will put the system in an unsafe
state. In that case, thread Ti will have to wait for its requests to be satisfied.
3.13. 3 Banker’s Algorithm
The resource-allocation-graph algorithm is not applicable to a resource
allocation system with multiple instances of each resource type. The deadlock-avoidance
algorithm is applicable to such a system but is less efficient than the resource-allocation
graph scheme. This algorithm is commonly known as the banker’s algorithm. The
name was chosen because the algorithm could be used in a banking system to ensure
that the bank never allocated its available cash in such a way that it could no longer satisfy
the needs of all its customers.
When a new thread enters the system, it must declare the maximum number of
instances of each resource type that it may need. This number may not exceed
the total number of resources in the system. When a user requests a set of
resources, the system must determine whether the allocation of these resources
will leave the system in a safe state. If it will, the resources are allocated; otherwise,
the thread must wait until some other thread releases enough resources. Several
data structures must be maintained to implement the banker’s algorithm.
These data structures encode the state of the resource-allocation system, where n is the number of threads in the system and m is the number of resource types:

• Available. A vector of length m indicating the number of available resources of each type.
• Max. An n × m matrix defining the maximum demand of each thread.
• Allocation. An n × m matrix defining the number of resources of each type currently allocated to each thread.
• Need. An n × m matrix indicating the remaining resource need of each thread, where Need[i][j] = Max[i][j] − Allocation[i][j].

When a request for resources is made by thread Ti, the following actions are taken:

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the thread has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Ti must wait, since the resources are not available.
3. Have the system pretend to have allocated the requested resources to thread Ti by modifying the state as follows:
   Available = Available − Requesti
   Allocationi = Allocationi + Requesti
   Needi = Needi − Requesti
If the resulting resource-allocation state is safe, the transaction is completed and thread Ti is allocated its resources. If the new state is unsafe, then Ti must wait for Requesti, and the old resource-allocation state is restored.
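A compact C sketch of the safety test at the heart of the banker's algorithm is shown below (illustrative only; the array sizes N and M and the function name are hypothetical). It repeatedly looks for a thread whose remaining need can be met from the work vector, pretends that thread finishes and releases its allocation, and reports the state safe only if every thread can finish this way:

#include <stdbool.h>
#include <string.h>

#define N 5   /* number of threads */
#define M 3   /* number of resource types */

bool is_safe(int available[M], int allocation[N][M], int need[N][M])
{
    int work[M];
    bool finish[N] = { false };
    memcpy(work, available, sizeof(work));   /* Work = Available */

    for (int count = 0; count < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            bool can_run = true;             /* Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j]; /* Ti finishes, releases */
                finish[i] = true;
                progressed = true;
                count++;
            }
        }
        if (!progressed)
            return false;   /* no remaining thread can finish: unsafe */
    }
    return true;            /* a safe sequence exists */
}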
3.14 Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system may provide:
• An algorithm that examines the state of the system to determine whether a deadlock has occurred
• An algorithm to recover from the deadlock
To simplify notation, we again treat the rows in the matrices Allocation and Request as vectors and refer to them as Allocationi and Requesti; here a vector X ≤ Y if and only if X[i] ≤ Y[i] for all i. The detection algorithm described here simply investigates every possible allocation sequence for the threads that remain to be completed.
You may wonder why we reclaim the resources of thread Ti (in step 3) as soon
as we determine that Requesti ≤ Work (in step 2b). We know that Ti is currently not involved
in a deadlock (since Requesti ≤ Work). Thus, we take an optimistic attitude and assume
that Ti will require no more resources to complete its task; it will thus soon return all currently
allocated resources to the system. If our assumption is incorrect, a deadlock may occur later.
That deadlock will be detected the next time the deadlock-detection algorithm is invoked.
3.14.3 Detection-Algorithm Usage
When should we invoke the detection algorithm? The answer depends on
two factors:
1. How often is a deadlock likely to occur?
2. How many threads will be affected by deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be
invoked frequently. Resources allocated to deadlocked threads
will be idle until the deadlock can be broken. In addition, the number of threads involved
in the deadlock cycle may grow.
Deadlocks occur only when some thread makes a request that cannot be
granted immediately. This request may be the final request that completes a chain of
waiting threads. In the extreme, then, we can invoke the deadlock detection algorithm
every time a request for allocation cannot be granted immediately. In this case, we can
identify not only the deadlocked set of threads but also the specific thread that
"caused" the deadlock. (In reality, each of the deadlocked threads is a link in the cycle in the resource graph, so all of them, jointly, caused the deadlock.) If there are many different resource types, one request may create many cycles in the resource graph, each cycle completed by the most recent request and "caused" by the one identifiable thread.
3.15 Recovery from Deadlock
When a detection algorithm determines that a deadlock exists, several alternatives are available. One possibility is to inform the operator that a deadlock has occurred and to let the operator deal with the deadlock manually. Another possibility is to let the system recover from the deadlock automatically. There are two options for breaking a deadlock. One is simply to abort one or more threads to break the circular wait. The other is to preempt some resources from one or more of the deadlocked threads.
Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods; in both, the system reclaims all resources allocated to the terminated processes.
• Abort all deadlocked processes. This method clearly will break the deadlock cycle, but at great expense: the deadlocked processes may have computed for a long time, and the results of these partial computations must be discarded and probably recomputed later.
• Abort one process at a time until the deadlock cycle is eliminated. This method incurs considerable overhead, since after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.
Aborting a process may not be easy. If the process was in the midst of
updating a file, terminating it may leave that file in an incorrect state. Similarly,
if the process was in the midst of updating shared data while holding a mutex lock, the
system must restore the status of the lock as being available, although no guarantees
can be made regarding the integrity of the shared data.
Resource Preemption
If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim. Which resources and which processes are to be preempted? As in process termination, we must determine the order of preemption to minimize cost. Cost factors may include such parameters as the number of resources a deadlocked process is holding and the amount of time the process has thus far consumed.
2. Rollback. If we preempt a resource from a process, what should be done with that process? Clearly, it cannot continue with its normal execution; it is missing some needed resource. We must roll back the process to some safe state and restart it from that state. Since, in general, it is difficult to determine what a safe state is, the simplest solution is a total rollback: abort the process and then restart it. Although it is more effective to roll back the process only as far as necessary to break the deadlock, this method requires the system to keep more information about the state of all running processes.
3. Starvation. How do we ensure that starvation will not occur? That is, how can we guarantee that resources will not always be preempted from the same process? In a system where victim selection is based primarily on cost factors, it may happen that the same process is always picked as a victim. As a result, this process never completes its designated task, a starvation situation any practical system must address. Clearly, we must ensure that a process can be picked as a victim only a (small) finite number of times. The most common solution is to include the number of rollbacks in the cost factor.
Lecture Slides
https://drive.google.com/drive/folders/1JfKycT1w27UIf8QudCcuI4Wdfgzd3pYV?usp=sharing

Lecture Videos
https://drive.google.com/drive/folders/1LW0_bSIJD2AzyHeKvpgK9PpIY87fFprN?usp=sharing
ASSIGNMENT

3. a. Show that, if the wait() and signal() semaphore operations are not executed atomically, then mutual exclusion may be violated.
   b. Write about Deadlock Prevention Methods.
Part A Q & A

1. What is the critical-section problem? (K1)(CO3)
Each process has a segment of code, called a critical section, in which it may be changing common variables, updating a table, or writing a file. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section; that is, no two processes are executing in their critical sections at the same time. A solution must satisfy three requirements: mutual exclusion, progress, and bounded waiting.
6. Define DeadLocks.(K1)(CO3)
A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able
to change state, because the resources it has requested are held by other waiting
processes. This situation is called a deadlock.
7. Define Starvation in deadlock? (K1)(CO3)
A problem related to deadlock is indefinite blocking or starvation, a situation where
processes wait indefinitely within a semaphore. Indefinite blocking may occur if we
add and remove processes from the list associated with a semaphore in LIFO order.
Under the normal mode of operation, a process may utilize a resource in only the following sequence: request, use, and release.
10. Give the conditions necessary for a deadlock situation to arise. (K1)(CO3)
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
• Mutual exclusion
• Hold and wait
• No preemption
• Circular wait
SUPPORTIVE ONLINE CERTIFICATION COURSES

S No | Course provider | Course title | Link
1 | Udemy | Operating Systems from scratch - Part 1 | https://www.udemy.com/course/operating-systems-from-scratch-part1/
2 | Udacity | Introduction to Operating Systems | https://www.udacity.com/course/introduction-to-operating-systems
3 | Coursera | | https://www.coursera.o
4 | edX | Computer Hardware and Operating Systems | https://www.edx.org/course/computer-hardware-and-operating-systems
REAL TIME APPLICATIONS IN DAY TO DAY LIFE AND TO INDUSTRY

https://www.freertos.org/Inter-Task-Communication.html
ASSESSMENT SCHEDULE

Proposed Dates: 18.09.2024, 08.10.2024, 17.09.2024, 07.10.2024
PRESCRIBED TEXT BOOKS AND REFERENCE BOOKS

TEXT BOOKS
1. Abraham Silberschatz, Peter Baer Galvin and Greg Gagne, "Operating System Concepts", 10th Edition, John Wiley and Sons Inc., 2018. [EBOOK]
2. Andrew S Tanenbaum, "Modern Operating Systems", 5th Edition, Pearson, New Delhi, 2022.
REFERENCE BOOKS
1. William Stallings, "Operating Systems: Internals and Design Principles", 7th Edition, Prentice Hall, 2018.
2. Achyut S. Godbole, Atul Kahate, "Operating Systems", McGraw Hill Education, 2016.
MINI PROJECT SUGGESTIONS

1. Producer-Consumer Problem:
Implement a simulation of the classic producer-consumer problem using multiple threads or processes. Ensure proper synchronization to avoid issues like race conditions and deadlocks.