OS Unit3
For example, if Process 1 and Process 2 run at the same time, user 2 may get the wrong
account balance Y, because Process 1's transaction began when the balance was X.
Inconsistency of data can occur when multiple processes share a common resource in a system, which is
why process synchronization is needed in the operating system.
Let us look at why exactly we need process synchronization. For example, if Process 1 is
trying to read the data present in a memory location while Process 2 is trying to change the data
at the same location, there is a high chance that the data read by Process 1 will be incorrect.
Race Condition
When more than one process runs the same code or modifies the same memory or other
shared data, the final value of that shared data may be incorrect, because all the processes
compete to access and modify it. Each process effectively races to have its result take effect;
this situation is called a race condition. Since many processes use the same data, the results
of the processes may depend on the order of their execution.
This situation mostly arises around the critical section. A race condition occurs when the end
result of multiple thread executions varies depending on the sequence in which the threads
execute.
The critical section cannot be executed by more than one process at the same time, so the operating
system faces the difficulty of deciding when to allow and disallow processes from entering the critical
section. The critical-section problem is about designing a protocol that guarantees that a race
condition among the processes can never arise.
In order to synchronize the cooperating processes, our main task is to solve the critical-section
problem. We need to provide a solution, following the general structure below, in such a way that the
following conditions are satisfied.
do {
    // entry section
    Critical section
    // exit section
    Remainder section
} while (true);
Mutual Exclusion
Only one process can be inside the critical section at any time. If one process is executing
its critical section, no other process may enter it.
Progress
If a process does not need to enter the critical section, it must not stop other processes
from getting into the critical section.
Bounded Waiting
There must be a bound on how long a process waits to get into the critical section. No
process should wait endlessly to enter the critical section.
Peterson's solution:
Peterson’s solution provides a good algorithmic description of solving the critical-section problem and
illustrates some of the complexities involved in designing software that addresses the requirements of
mutual exclusion, progress, and bounded waiting.
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
The structure of process Pi in Peterson’s solution is shown above. This solution is restricted to two
processes that alternate execution between their critical sections and remainder sections. The processes
are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process;
that is, j equals 1 − i. Peterson’s solution requires the two processes to share two data items –
int turn;
boolean flag[2];
The same solution written out for the two processes P(x) and P(y):

P(x):
do {
    flag[x] = true;
    turn = y;
    while (flag[y] && turn == y)
        ;
    // critical section
    flag[x] = false;
    // remainder section
} while (true);

P(y):
do {
    flag[y] = true;
    turn = x;
    while (flag[x] && turn == x)
        ;
    // critical section
    flag[y] = false;
    // remainder section
} while (true);
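To see the protocol in action, here is a minimal Python sketch of Peterson's algorithm (names such as process and counter are illustrative, not from the text). It relies on CPython's interpreter lock to make the individual loads and stores atomic, the way real hardware would under sequential consistency:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits resolve quickly

flag = [False, False]   # flag[i] is true while process i wants the critical section
turn = 0                # whose turn it is to yield
counter = 0             # shared data protected by the protocol
N = 200

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True           # entry section
        turn = j
        while flag[j] and turn == j:
            pass                 # busy wait
        counter += 1             # critical section
        flag[i] = False          # exit section
                                 # remainder section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 2 * N if mutual exclusion held, i.e. no increment was lost
```

If the entry section is removed, the two unsynchronized `counter += 1` updates can interleave and lose increments; with it, every update survives.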
Synchronization Hardware:
The process synchronization problem can be solved by software as well as by hardware. Peterson's
solution is one of the software solutions: it allows two processes to share a single-use resource
without any conflict. The hardware solutions are as follows:
1. Test and Set
2. Swap
3. Unlock and Lock
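The TestAndSet() routine referred to below is not listed in this text; a sketch of the usual definition (atomically return the old value of the lock and set it to true) can be emulated in Python. Real hardware does this in a single instruction; here an auxiliary threading.Lock only stands in for that atomicity, and all names are illustrative:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits resolve quickly

_atomic = threading.Lock()   # emulates the atomicity of the hardware instruction
lock = [False]               # the shared lock variable (a list so all threads share it)
counter = 0
N = 500

def TestAndSet(target):
    """Atomically set target[0] to True and return its previous value."""
    with _atomic:
        old = target[0]
        target[0] = True
        return old

def process():
    global counter
    for _ in range(N):
        while TestAndSet(lock):   # spin until the previous value was False
            pass
        counter += 1              # critical section
        lock[0] = False           # exit: release the lock

threads = [threading.Thread(target=process) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 4 * N: every increment happened under the spin lock
```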
In the test-and-set approach, the TestAndSet() function takes a boolean lock variable, atomically
sets it to true, and returns the value it held before.
When the lock variable is initially false, the condition while(TestAndSet(lock)) evaluates
TestAndSet(false). Since the function returns the previous value of its argument,
TestAndSet(false) returns false, the while loop breaks, and the process enters the critical
section.
As one process is inside the critical section and lock value is now 'true', if any other process
tries to enter the critical section then the new process checks for while(TestAndSet(true)) which will
return true inside while loop and as a result the other process keeps executing the while loop.
In the test-and-set algorithm, a process trying to enter the critical section does not wait
in a queue, so any process may get the chance to enter as soon as it finds the lock variable to be
false. A particular process may therefore never get the chance to enter the critical section and may
wait indefinitely; bounded waiting is not guaranteed.
Swap:
The Swap approach uses two boolean variables, lock and key, both initialized to false. The
Swap algorithm works much like the test-and-set algorithm: it uses a temporary exchange to set
lock to true when a process enters the critical section of the program.
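The swap-based lock described above is not listed as code in this text; a sketch, assuming the usual definition (atomically exchange the values of lock and key), can be emulated in Python. The auxiliary threading.Lock only stands in for the hardware's atomicity, and the names are illustrative:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits resolve quickly

_atomic = threading.Lock()   # emulates the atomicity of the hardware swap
lock = [False]               # shared lock variable (a list so all threads share it)
counter = 0
N = 500

def swap(a, b):
    """Atomically exchange a[0] and b[0]."""
    with _atomic:
        a[0], b[0] = b[0], a[0]

def process():
    global counter
    key = [True]             # per-process variable
    for _ in range(N):
        key[0] = True
        while key[0]:        # keep swapping until a False comes out of lock
            swap(lock, key)
        counter += 1         # critical section: this process holds the lock
        lock[0] = False      # exit: release the lock

threads = [threading.Thread(target=process) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 4 * N: every increment happened under the lock
```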
When a process P1 enters the critical-section code, it first executes the outer while loop.
As key is set to true just before the inner while loop, swap(lock, key) exchanges the values
of lock and key: lock becomes true and key becomes false. On the next check, the inner while loop
breaks and process P1 enters the critical section.
The values when P1 enters the critical section are lock = true and key = false.
Let's say another process, P2, tries to enter the critical section while P1 is in the critical section. Let's
take a look at what happens if P2 tries to enter the critical section.
key is set to true again at the top of the outer loop, i.e., while(1). Then the inner while
loop, i.e., while(key), is checked. As key is true, the process enters the inner loop and
swap(lock, key) is executed again. Since both key and lock are true, both remain true after
swapping. So the while loop keeps executing, and process P2 keeps spinning until
process P1 comes out of the critical section and makes lock false.
When process P1 comes out of the critical section, the value of lock is set back to false so that
other processes can enter. The incoming processes trying to enter the critical section are not
maintained in any order or queue, so any of the waiting processes may get the chance to enter as
lock becomes false. A process may therefore wait indefinitely; bounded waiting is not ensured in
the Swap algorithm either.
while (1) {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;
        CRITICAL SECTION CODE
    j = (i + 1) % n;
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;
    else
        waiting[j] = false;
        REMAINDER SECTION CODE
}
In the unlock-and-lock algorithm, the lock is not simply set to false when a process comes out of the
critical section. In the other algorithms, such as swap and test-and-set, the lock was set to false as
the process left the critical section so that any other process could enter.
In unlock-and-lock, once the i-th process comes out of the critical section, the algorithm
checks the waiting array for the next process waiting to enter the critical section, say the j-th
process. If there is a j-th process waiting to enter the critical section, waiting[j] is set to false
so that the j-th process's loop while(waiting[j] && key) terminates and the j-th process enters the
critical section.
If no process is waiting to enter the critical section, the algorithm sets the lock to false so
that any other arriving process can enter the critical section easily.
Since the waiting processes are always served in a fixed circular order, the unlock-and-lock
algorithm ensures bounded waiting.
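The unlock-and-lock loop above can be exercised directly; a Python sketch follows, where TestAndSet is emulated with an auxiliary lock (Python has no atomic hardware instruction) and all names are illustrative:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so the busy-waits resolve quickly

n = 3
waiting = [False] * n
lock = [False]
_atomic = threading.Lock()    # emulates the atomic TestAndSet instruction

def TestAndSet(target):
    with _atomic:
        old = target[0]
        target[0] = True
        return old

ROUNDS = 100
counter = 0

def process(i):
    global counter
    for _ in range(ROUNDS):
        waiting[i] = True
        key = True
        while waiting[i] and key:
            key = TestAndSet(lock)
        waiting[i] = False
        counter += 1                  # critical section
        j = (i + 1) % n               # scan for the next waiting process
        while j != i and not waiting[j]:
            j = (j + 1) % n
        if j == i:
            lock[0] = False           # nobody is waiting: free the lock
        else:
            waiting[j] = False        # hand the critical section to process j

threads = [threading.Thread(target=process, args=(i,)) for i in range(n)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # n * ROUNDS: all entries were mutually exclusive
```

Note how the exit section either releases the lock or hands it directly to the next waiter in circular order, which is exactly what bounds the waiting time.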
Semaphores:
We have seen that a race condition arises when two processes want to enter the critical
section at the same time, and that it can lead to problems such as deadlock. Hence we need proper
synchronization between processes, and to achieve it we use a signaling integer variable called a
semaphore. Since semaphores are integer variables, their value acts as a signal which allows or
does not allow a process to access the critical section of code or certain other resources.
Types of semaphores:
There are mainly two types of Semaphores, or two types of signaling integer variables:
1. Binary Semaphores
In this type of semaphore, the integer value can only be either 0 or 1. If the
value of the semaphore is 1, the process can proceed to the critical section (the common
section that the processes need to access). However, if the value of the binary semaphore is 0, the
process cannot continue to the critical section of the code. When a process is using the critical
section, we change the semaphore value to 0; when no process is using it, or we can
allow a process to access the critical section, we change the value of the semaphore to 1. A binary
semaphore is also called a mutex lock.
2. Counting Semaphores:
Counting semaphores are signalling integers that can take on any integer value. Using
these Semaphores we can coordinate access to resources and here the Semaphore count is the number
of resources available. If the value of the Semaphore is anywhere above 0, processes can access the
critical section, or the shared resources. The number of processes that can access the resources / code is
the value of the semaphore. However, if the value is 0, it means that there aren't any resources that are
available or the critical section is already being accessed by a number of processes and cannot be
accessed by more processes. Counting semaphores are generally used when a resource has more than
one instance, so that multiple processes can access the resource.
The wait operation (also called sleep, decrease, down, or P) is the semaphore operation that controls
the entry of a process into the critical section. If the value of the semaphore is positive, we
decrement it and let the process enter the critical section; otherwise the process waits.
wait(S) {
    while (S <= 0)
        ;        // busy wait
    S--;
}
Signal Operation
The signal operation (also called increase or up) is executed once a process has exited the critical
section: we update the value of the semaphore so that we can signal new processes to be able to
access the critical section.
Since the wait operation decreased the value of the semaphore by 1, here we simply increment it.
Note that signal is executed only after the process exits the critical section and cannot be executed
before the process enters the section.
signal(S) {
    S++;
}
If the value of the semaphore was initially 1, then on the entering of the process into the critical
section, the wait function would have decremented the value to 0 meaning that no more processes can
access the critical section (making sure of mutual exclusion -- only in binary semaphores). Once the
process exits the critical section, the signal operation is executed and the value of the semaphore is
incremented by 1, meaning that the critical section can now be accessed by another process.
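The wait/signal pair maps directly onto acquire/release of a semaphore object; a sketch using Python's threading.Semaphore as a binary semaphore (the deposit scenario and names are illustrative):

```python
import threading

s = threading.Semaphore(1)   # binary semaphore, initially 1
balance = 0

def deposit(amount, times):
    global balance
    for _ in range(times):
        s.acquire()          # wait(s): blocks while the value is 0, then decrements
        balance += amount    # critical section
        s.release()          # signal(s): increments, letting one waiter proceed

threads = [threading.Thread(target=deposit, args=(1, 10000)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)   # 30000: no update was lost
```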
1. Producer-Consumer Problem:
Certain conditions must be met by the Producer and the Consumer processes to have consistent data
synchronization:
1. The Producer process must not produce an item if the shared buffer is full.
2. The Consumer process must not consume an item if the shared buffer is empty.
3. Access to the shared buffer must be mutually exclusive; this means that at any given instance,
only one process should be able to access the shared buffer and make changes to it.
Producer:
int count = 0;
void producer(void)
{
    int itemp;
    while (1)
    {
        produce_item(itemp);    // produce a new item into itemp
        while (count == n)
            ;                   // busy wait while the buffer is full
        buff[in] = itemp;
        in = (in + 1) % n;
        count = count + 1;
    }
}
Consumer:
void consumer(void)
{
    int itemc;
    while (1)
    {
        while (count == 0)
            ;                   // busy wait while the buffer is empty
        itemc = buff[out];
        out = (out + 1) % n;
        count = count - 1;
    }
}
Solution
The solution to the Producer-Consumer problem involves three semaphore variables.
semaphore Full: Tracks the space filled by the Producer process. It is initialized with
a value of 0, as the buffer has 0 filled spaces at the beginning.
semaphore Empty: Tracks the empty space in the buffer. It is initially set
to buffer_size as the whole buffer is empty at the beginning.
semaphore mutex: Used for mutual exclusion so that only one process can access the
shared buffer at a time.
Using the signal() and wait() operations on these semaphores, we can arrive at a solution.
Let’s look at the code for the Producer and Consumer processes
void producer() {
    while (true) {
        produce(itemp);          // produce an item
        wait(Empty);             // one fewer empty slot
        wait(mutex);             // enter the critical section
        buff[in] = itemp;
        in = (in + 1) % n;
        signal(mutex);           // leave the critical section
        signal(Full);            // one more filled slot
    }
}

Here wait and signal are the usual semaphore operations:

wait(S) {
    while (S <= 0)
        ;
    S--;
}

signal(S) {
    S++;
}

The code for the Consumer process is as follows.

void consumer() {
    while (true) {
        wait(Full);              // one fewer filled slot
        wait(mutex);             // enter the critical section
        itemc = buff[out];
        out = (out + 1) % n;
        signal(mutex);           // leave the critical section
        signal(Empty);           // one more empty slot
    }
}
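The Full/Empty/mutex scheme can be run end to end in Python, with threading.Semaphore standing in for the semaphores and a deque as the bounded buffer (names and sizes are illustrative):

```python
import threading
from collections import deque

n = 5                                   # buffer capacity
buff = deque()
Empty = threading.Semaphore(n)          # counts empty slots, starts at n
Full = threading.Semaphore(0)           # counts filled slots, starts at 0
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer

ITEMS = 100
consumed = []

def producer():
    for i in range(ITEMS):
        Empty.acquire()                 # wait(Empty): block if the buffer is full
        mutex.acquire()                 # wait(mutex)
        buff.append(i)
        mutex.release()                 # signal(mutex)
        Full.release()                  # signal(Full)

def consumer():
    for _ in range(ITEMS):
        Full.acquire()                  # wait(Full): block if the buffer is empty
        mutex.acquire()                 # wait(mutex)
        consumed.append(buff.popleft())
        mutex.release()                 # signal(mutex)
        Empty.release()                 # signal(Empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(ITEMS)))   # True: all items arrive, in order, none lost
```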
2. Dining-Philosophers Problem:
The dining philosophers problem is a classical synchronization problem: five philosophers sit
around a circular table, and their job is to think and eat alternately. A bowl of noodles is placed at
the center of the table, along with five chopsticks, one between each pair of philosophers. To eat, a
philosopher needs both their left and their right chopstick. A philosopher can eat only if both the
immediate left and right chopsticks are available. If they are not, the philosopher puts down
whichever chopstick (left or right) they are holding and starts thinking again.
The dining philosopher demonstrates a large class of concurrency control problems hence it's a
classic synchronization problem.
Semaphore: A semaphore is an integer variable in S, that apart from initialization is accessed by only
two standard atomic operations - wait and signal, whose definitions are as follows:
1. wait( S )
{
    while (S <= 0)
        ;        // busy wait
    S--;
}
2. signal( S )
{
    S++;
}
From the definition of wait, it is clear that if the value of S <= 0, the process keeps looping in the
busy wait (because of the semicolon after the while loop) until some other process increments S. The
job of signal is to increment the value of S.
The structure of the chopstick is an array of a semaphore which is represented as shown below -
semaphore C[5];
Initially, each element of the semaphore C0, C1, C2, C3, and C4 are initialized to 1 as the chopsticks
are on the table and not picked up by any of the philosophers.
Let's modify the above code of the Dining Philosopher Problem by using semaphore operations wait
and signal, the desired code looks like
void philosopher(int i)
{
    while (1)
    {
        wait(C[i]);                // pick up the left chopstick
        wait(C[(i + 1) % 5]);      // pick up the right chopstick
        // EATING THE NOODLES
        signal(C[i]);              // put down the left chopstick
        signal(C[(i + 1) % 5]);    // put down the right chopstick
        // THINKING
    }
}
In the above code, the wait operation is first performed on C[i] and C[(i + 1) % 5], showing that
philosopher i has picked up the chopsticks on the left and right. The eating function is performed
after that.
When philosopher i completes eating, the signal operation is performed on C[i] and
C[(i + 1) % 5], showing that philosopher i has finished eating and put down both the left and right
chopsticks. Finally, the philosopher starts thinking again.
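Note that this simple scheme can deadlock if all five philosophers pick up their left chopstick at the same moment. One standard remedy (not shown in the text above) is to acquire the two chopsticks in a fixed global order; a Python sketch with threading.Semaphore standing in for the chopstick semaphores, and illustrative round counts:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # C[0..4], all initially 1
meals = [0] * N
ROUNDS = 50

def philosopher(i):
    left, right = i, (i + 1) % N
    # Always pick up the lower-numbered chopstick first.
    # This breaks the circular wait, so deadlock is impossible.
    first, second = min(left, right), max(left, right)
    for _ in range(ROUNDS):
        chopstick[first].acquire()    # wait()
        chopstick[second].acquire()   # wait()
        meals[i] += 1                 # EATING
        chopstick[second].release()   # signal()
        chopstick[first].release()    # signal()
        # THINKING

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # every philosopher ate ROUNDS times; no thread got stuck
```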
3. Readers-Writers Problem:
The readers-writers problem is about managing synchronization among multiple reader and writer
processes so that no inconsistency arises in the shared data.
Let's understand this with an example. If two or more readers access the file at the same point in
time, there is no problem. However, if two writers, or one reader and one writer, access the file at
the same point in time, problems may occur. The task is therefore to design the code so that: if one
reader is reading, no writer is allowed to update the file at the same time; if one writer is writing,
no reader is allowed to read the file at that time; and if one writer is updating the file, no other
writer is allowed to update it at the same time. However, multiple readers can access the object at
the same time.
Let us understand the possibility of reading and writing with the table given below:
TABLE 1
Process 1    Process 2    Allowed?
Reading      Reading      Yes
Reading      Writing      No
Writing      Reading      No
Writing      Writing      No
The solution of readers and writers can be implemented using binary semaphores.
We use two binary semaphores "write" and "mutex", where binary semaphore can be defined as:
1. wait( S )
{
    while (S <= 0)
        ;        // busy wait
    S--;
}
2. signal( S )
{
    S++;
}
The code below provides the solution of the reader-writer problem; the reader and writer process
codes are given as follows.
Code for Reader Process
wait(mutex);
readcount++;             // on every entry of a reader, increment readcount
if (readcount == 1)
{
    wait(write);         // the first reader locks out the writers
}
signal(mutex);

READ FROM THE FILE

wait(mutex);
readcount--;             // on every exit of a reader, decrement readcount
if (readcount == 0)
{
    signal(write);       // the last reader lets the writers back in
}
signal(mutex);
In the reader code above, mutex and write are semaphores that have an initial value of 1,
whereas the readcount variable has an initial value of 0. Both mutex and write are shared between the
reader and writer process code; semaphore mutex ensures mutual exclusion over readcount, and
semaphore write handles the writing mechanism.
The readcount variable denotes the number of readers accessing the file concurrently. The moment
readcount becomes 1, the wait operation is performed on the write semaphore, which decreases its
value by one. This means that a writer is no longer allowed to access the file. On completion of the
read operation, readcount is decremented by one. When readcount becomes 0, the signal operation on
write permits a writer to access the file.
Code for Writer Process
The code that defines the writer process is given below:
wait(write);
WRITE INTO THE FILE
signal(write);
If a writer wishes to access the file, wait operation is performed on write semaphore, which
decrements write to 0 and no other writer can access the file. On completion of the writing job by the
writer who was accessing the file, the signal operation is performed on write.
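The readcount scheme above can be run in Python, with threading.Semaphore standing in for mutex and write (the workloads and names are illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
write = threading.Semaphore(1)   # held by a writer, or by the group of readers
readcount = 0
shared = {"value": 0}
reads = []

def reader():
    global readcount
    for _ in range(100):
        mutex.acquire()
        readcount += 1
        if readcount == 1:
            write.acquire()      # first reader locks out writers
        mutex.release()
        reads.append(shared["value"])   # READ FROM THE FILE
        mutex.acquire()
        readcount -= 1
        if readcount == 0:
            write.release()      # last reader lets writers back in
        mutex.release()

def writer():
    for _ in range(100):
        write.acquire()
        shared["value"] += 1     # WRITE INTO THE FILE
        write.release()

threads = [threading.Thread(target=f) for f in (reader, reader, writer)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"], len(reads))   # all 100 writes and 200 reads completed
```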
Monitors:
A monitor is a synchronization construct that gives threads both mutual exclusion and the ability to
wait() for a given condition to become true. It is an abstract data type: it has shared variables and
a collection of procedures executing on those shared variables. A process may not directly access the
shared data variables; it must go through the monitor's procedures, which ensure the shared data is
accessed safely.
At any particular time, only one process may be active in a monitor. Other processes that
require access to the shared variables must queue and are only granted access after the previous
process releases the shared variables.
In other words, monitors are defined as the construct of programming language, which helps in
controlling shared data access. The Monitor is a module or package which encapsulates shared data
structure, procedures, and the synchronization between the concurrent procedure invocations.
Characteristics of Monitors.
1. Inside the monitors, we can only execute one process at a time.
2. Monitors are the group of procedures, and condition variables that are merged together in a
special type of module.
3. If the process is running outside the monitor, then it cannot access the monitor’s internal
variable. But a process can call the procedures of the monitor.
4. Monitors offer a high level of synchronization.
5. Monitors were derived to simplify the complexity of synchronization problems.
6. There is only one process that can be active at a time inside the monitor.
Components of Monitor
There are four main components of the monitor:
1. Initialization
2. Private data
3. Monitor procedure
4. Monitor entry queue
Initialization: - Initialization comprises the code, and when the monitors are created, we use this code
exactly once.
Private Data: - Private data is another component of the monitor. It comprises all the private data, and
the private data contains private procedures that can only be used within the monitor. So, outside the
monitor, private data is not visible.
Monitor Procedure: - Monitors Procedures are those procedures that can be called from outside the
monitor.
Monitor Entry Queue: - The monitor entry queue is another essential component of the monitor. It
holds all the threads that have called a monitor procedure and are waiting to become active inside
the monitor.
Syntax of monitor
Condition Variables
There are two types of operations that we can perform on the condition variables of the monitor:
1. Wait
2. Signal
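Python has no monitor keyword, but a class whose public methods all run under one lock, plus a threading.Condition for wait/signal, behaves like one. A sketch of a small bounded-counter monitor (the class and names are illustrative):

```python
import threading

class CounterMonitor:
    """Monitor: private data plus procedures; one thread active inside at a time."""
    def __init__(self, limit):
        self._lock = threading.Lock()                      # the monitor's entry lock
        self._not_full = threading.Condition(self._lock)   # condition variable
        self._count = 0                                    # private data
        self._limit = limit

    def increment(self):
        with self._lock:                 # enter the monitor
            while self._count >= self._limit:
                self._not_full.wait()    # wait: release the monitor and sleep
            self._count += 1

    def decrement(self):
        with self._lock:
            if self._count > 0:
                self._count -= 1
                self._not_full.notify()  # signal: wake one waiting thread

    def value(self):
        with self._lock:
            return self._count

m = CounterMonitor(limit=3)
for _ in range(3):
    m.increment()
m.decrement()        # frees one slot and signals any waiter
m.increment()
print(m.value())     # 3
```

Here wait releases the monitor lock while sleeping and reacquires it on wake-up, which is exactly the behavior that distinguishes a condition variable from a semaphore.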
Condition variables are present in a monitor; they are not present in a semaphore.
Atomic Transaction:
The mutual exclusion of critical sections ensures that the critical sections are executed atomically. That
is, if two critical sections are executed concurrently, the result is equivalent to their sequential
execution in some unknown order. Although this property is useful in many application domains, in
many cases we would like to make sure that a critical section forms a single logical unit of work that
either is performed in its entirety or is not performed at all. An example is funds transfer, in which one
account is debited and another is credited. Clearly, it is essential for data consistency either that both
the credit and debit occur or that neither occurs.
Consistency of data, along with storage and retrieval of data, is a concern often associated with
database systems. Recently, there has been an upsurge of interest in using database-systems techniques
in operating systems. Operating systems can be viewed as manipulators of data; as such, they can
benefit from the advanced techniques and models available from database research. For instance, many
of the ad hoc techniques used in operating systems to manage files could be more flexible and
powerful if more formal database methods were used in their place.
System Model: - A collection of instructions (or operations) that performs a single logical function is
called a transaction. A major issue in processing transactions is the preservation of atomicity despite
the possibility of failures within the computer system.
A commit operation signifies that the transaction has terminated its execution successfully, whereas an
abort operation signifies that the transaction has ended its normal execution due to some logical error
or a system failure. If a terminated transaction has completed its execution successfully, it is
committed; otherwise, it is aborted
Since an aborted transaction may already have modified the data that it has accessed, the state of these
data may not be the same as it would have been if the transaction had executed atomically. So that
atomicity is ensured, an aborted transaction must have no effect on the state of the data that it has
already modified. Thus, the state of the data accessed by an aborted transaction must be restored to
what it was just before the transaction started executing. We say that such a transaction has been rolled
back. It is part of the responsibility of the system to ensure this property.
To determine how the system should ensure atomicity, we need first to identify the properties of
devices used for storing the various data accessed by the transactions. Various types of storage media
are distinguished by their relative speed, capacity, and resilience to failure.
• Volatile storage. Information residing in volatile storage does not usually survive system crashes.
Examples of such storage are main and cache memory. Access to volatile storage is extremely fast,
both because of the speed of the memory access itself and because it is possible to access directly any
data item in volatile storage.
• Non-volatile storage. Information residing in non-volatile storage usually survives system crashes.
Examples of media for such storage are disks and magnetic tapes. Disks are more reliable than main
memory but less reliable than magnetic tapes. Both disks and tapes, however, are subject to failure,
which may result in loss of information. Currently, non volatile storage is slower than volatile storage
by several orders of magnitude, because disk and tape devices are electromechanical and require
physical motion to access data.
• Stable storage. Information residing in stable storage is never lost (never should be taken with a
grain of salt, since theoretically such absolutes cannot be guaranteed). To implement an approximation
of such storage, we need to replicate information in several non volatile storage caches (usually disk)
with independent failure modes and to update the information in a controlled manner.
Deadlock:
A deadlock occurs when a set of processes is stalled because each process is holding a resource
while waiting to acquire a resource held by another process. For example, Process 1 may hold
Resource 1 while waiting for Resource 2, and Process 2 may hold Resource 2 while waiting for
Resource 1.
System Model :
For the purposes of deadlock discussion, a system can be modeled as a collection of
limited resources that can be divided into different categories and allocated to a variety of
processes, each with different requirements.
Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources are
examples of resource categories.
By definition, all resources within a category are equivalent, and any of the resources
within that category can equally satisfy a request from that category. If this is not the case
(i.e. if there is some difference between the resources within a category), then that
category must be subdivided further. For example, the term “printers” may need to be
subdivided into “laser printers” and “color inkjet printers.”
Some categories may only have one resource.
The kernel keeps track of which resources are free and which are allocated, to which
process they are allocated, and a queue of processes waiting for each resource to become
available, for all kernel-managed resources. Application-managed resources (i.e., binary or
counting semaphores) can be controlled with mutexes or with wait() and signal() calls.
When every process in a set is waiting for a resource that is currently assigned to another
process in the set, the set is said to be deadlocked.
Operations :
In normal operation, a process must request a resource before using it and release it when finished,
as shown below.
1. Request –
If the request cannot be granted immediately, the process must wait until the resource(s)
required to become available. The system, for example, uses the functions open(),
malloc(), new(), and request ().
2. Use –
The process makes use of the resource, such as printing to a printer or reading from a file.
3. Release –
The process relinquishes the resource, allowing it to be used by other processes.
Necessary Conditions / Deadlock characterization :
There are four conditions that must be met in order to achieve deadlock as follows.
1. Mutual Exclusion –
At least one resource must be held in a non-shareable mode; if another process requests it,
that process must wait until the resource is released.
2. Hold and Wait –
A process must be holding at least one resource while waiting to acquire additional
resources that are currently held by other processes.
3. No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource cannot be
taken away from that process until the process voluntarily releases it.
4. Circular Wait –
There must be a set of processes P0, P1, P2, …, PN such that every P[i] is waiting for
P[(i + 1) % (N + 1)]. (It is important to note that this condition implies the hold-and-wait
condition, but dealing with the four conditions is easier if they are considered separately.)
Deadlock Avoidance:
When a process requests a resource, the deadlock-avoidance algorithm examines the resource-
allocation state. If allocating that resource would send the system into an unsafe state, the request
is not granted.
Therefore, it requires additional information such as how many resources of each type is required by a
process. If the system enters into an unsafe state, it has to take a step back to avoid deadlock.
Deadlock Prevention:
We can prevent Deadlock by eliminating any of the above four conditions.
1. Eliminate Mutual Exclusion:
It is not possible to violate mutual exclusion, because some resources, such as the tape drive
and printer, are inherently non-shareable.
2. Eliminate Hold and wait:
1. Allocate all required resources to the process before it starts executing; this eliminates
the hold-and-wait condition but leads to low device utilization. For example, if a process
needs a printer only at a later time but the printer is allocated before execution starts,
the printer remains blocked until the process has completed its execution.
2. Alternatively, a process must release its current set of resources before making a new
request for resources. This solution may lead to starvation.
3. Eliminate No Preemption:
Preempt resources from a process when those resources are required by other, higher-priority
processes.
4. Eliminate Circular Wait:
Assign each resource a numerical order, and require every process to request resources only in
increasing order of numbering.
For example, if process P1 has been allocated resource R5, a later request by P1 for R4 or R3
(numbered lower than R5) will not be granted; only requests for resources numbered higher than R5
will be granted.
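The numbering rule can be sketched with lock objects standing in for resources: every thread acquires them in ascending index order, which makes a circular wait impossible (the task names and resource sets are illustrative):

```python
import threading

R = [threading.Lock() for _ in range(5)]   # resources R[0]..R[4], ordered by index
done = []

def task(name, needed):
    # Always lock in increasing index order, regardless of the order requested.
    for idx in sorted(needed):
        R[idx].acquire()
    done.append(name)                      # use the resources
    for idx in sorted(needed, reverse=True):
        R[idx].release()

t1 = threading.Thread(target=task, args=("P1", [4, 2]))
t2 = threading.Thread(target=task, args=("P2", [2, 4]))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))   # both tasks finish; without the ordering they could deadlock
```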
Deadlock Avoidance - Banker's Algorithm:
The Banker's algorithm is used to avoid deadlock and to allocate resources safely to each process in
the computer system. Before deciding whether an allocation should be allowed for a process, it
examines whether the resulting state would be safe, and it helps the operating system share the
resources among all the processes successfully. The algorithm is named after the way a banker decides
whether a loan can be safely sanctioned: the bank never allocates its available cash in such a way
that it could no longer satisfy the needs of all its customers. In this section, we will learn the
Banker's Algorithm in detail and solve problems based on it.
Advantages:
1. It contains various resources that meet the requirements of each process.
2. Each process should provide information to the operating system for upcoming resource
requests, the number of resources, and how long the resources will be held.
3. It helps the operating system manage and control process requests for each type of resource in
the computer system.
4. The algorithm has a Max attribute that indicates the maximum number of resources each
process can hold in the system.
Disadvantages
1. It requires a fixed number of processes, and no additional processes can be started in the
system while executing the process.
2. The algorithm does not allow a process to change its maximum need while it is
processing its tasks.
3. Each process has to know and state their maximum resource requirement in advance for the
system.
4. Resource requests must be granted within a finite time, and the maximum period for which
resources are allocated must be known in advance.
When working with the banker's algorithm, it needs to know three things:
1. How much each process can request for each resource in the system. It is denoted by the
[MAX] request.
2. How much each process is currently holding each resource in a system. It is denoted by the
[ALLOCATED] resource.
3. It represents the number of each resource currently available in the system. It is denoted by the
[AVAILABLE] resource.
Following are the important data structures used in the banker's algorithm.
Suppose n is the number of processes and m is the number of resource types in the computer system.
1. Available: A vector of length 'm' that indicates the number of available instances of each resource type. Available[j] = K means that K instances of resource type R[j] are available in the system.
2. Max: An [n x m] matrix that defines the maximum demand of each process. Max[i][j] = K means that process P[i] may request at most K instances of resource type R[j].
3. Allocation: An [n x m] matrix that indicates the resources currently allocated to each process. Allocation[i][j] = K means that process P[i] is currently allocated K instances of resource type R[j].
4. Need: An [n x m] matrix that indicates the remaining resource need of each process. Need[i][j] = K means that process P[i] may require K more instances of resource type R[j] to complete its assigned work.
Need[i][j] = Max[i][j] - Allocation[i][j]
5. Finish: A vector of length n containing Boolean values (true/false) indicating whether each process has been allocated its requested resources and has released all of them after finishing its task.
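These data structures can be sketched directly in Python. This is a minimal illustration (not an OS implementation); the values are taken from the worked example that follows:

```python
# Banker's Algorithm data structures for n = 5 processes and m = 3 resource
# types (A, B, C), using the values from the worked example below.
Available = [3, 3, 2]          # Available[j]: free instances of resource R[j]

Max = [[7, 5, 3],              # Max[i][j]: maximum demand of P[i] for R[j]
       [3, 2, 2],
       [9, 0, 2],
       [2, 2, 2],
       [4, 3, 3]]

Allocation = [[0, 1, 0],       # Allocation[i][j]: instances of R[j] held by P[i]
              [2, 0, 0],
              [3, 0, 2],
              [2, 1, 1],
              [0, 0, 2]]

# Need[i][j] = Max[i][j] - Allocation[i][j]
Need = [[Max[i][j] - Allocation[i][j] for j in range(3)] for i in range(5)]
print(Need)  # [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
```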
The Banker's Algorithm is the combination of the safety algorithm and the resource request algorithm
to control the processes and avoid deadlock in a system:
Safety Algorithm:
The safety algorithm checks whether or not the system is in a safe state, i.e. whether a safe sequence exists:
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize: Work = Available
Finish[i] = false for i = 0, 1, 2, ..., n - 1.
2. Find an index i such that both:
Finish[i] == false
Need[i] <= Work // otherwise choose another process
If no such i exists, go to step 4.
3. Work = Work + Allocation[i] // the finished process releases its resources
Finish[i] = true
Go to step 2 to check the next process.
4. If Finish[i] == true for all i, the system is in a safe state.
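The steps above can be sketched in Python. This is a minimal illustration under the assumptions stated in the comments; the function name is_safe and the scan order are my own choices, not part of the standard presentation:

```python
# Minimal sketch of the safety algorithm: repeatedly pick a process whose
# Need can be met from Work, let it run to completion, and reclaim its
# Allocation. The system is safe iff every process can finish this way.
def is_safe(available, max_demand, allocation):
    n, m = len(max_demand), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)              # Step 1: Work = Available
    finish = [False] * n                #         Finish[i] = false
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):              # Step 2: Finish[i] == false, Need[i] <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):      # Step 3: Work = Work + Allocation[i]
                    work[j] += allocation[i][j]
                finish[i] = progressed = True
                sequence.append(i)
    return all(finish), sequence        # Step 4: safe iff all Finish[i] true

# Data from the worked example below (P1..P5 are indices 0..4 here):
safe, seq = is_safe([3, 3, 2],
                    [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
                    [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
print(safe, ["P%d" % (i + 1) for i in seq])  # True ['P2', 'P4', 'P5', 'P1', 'P3']
```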
Example: Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B and C. Resource type A has 10 instances, B has 5 instances, and C has 7 instances.

Process   Allocation   Max        Available
          A  B  C      A  B  C    A  B  C
P1        0  1  0      7  5  3    3  3  2
P2        2  0  0      3  2  2
P3        3  0  2      9  0  2
P4        2  1  1      2  2  2
P5        0  0  2      4  3  3

Process   Need
          A  B  C
P1        7  4  3
P2        1  2  2
P3        6  0  0
P4        0  1  1
P5        4  3  1
Now we check whether each process's Need can be satisfied from the currently available resources; each time a process can finish, it releases its Allocation back into the pool:
(3, 3, 2) + (2, 0, 0) => (5, 3, 2)  // P2 finishes
(5, 3, 2) + (2, 1, 1) => (7, 4, 3)  // P4 finishes
(7, 4, 3) + (0, 0, 2) => (7, 4, 5)  // P5 finishes
(7, 4, 5) + (0, 1, 0) => (7, 5, 5)  // P1 finishes
(7, 5, 5) + (3, 0, 2) => (10, 5, 7) // P3 finishes
Hence, executing the banker's algorithm shows that the system is in a safe state, with the safe sequence P2, P4, P5, P1, P3.
Resource-Request Example: Starting from the same state, suppose process P2 now requests (1, 0, 2).

Process   Allocation   Max        Available
          A  B  C      A  B  C    A  B  C
P1        0  1  0      7  5  3    3  3  2
P2        2  0  0      3  2  2
P3        3  0  2      9  0  2
P4        2  1  1      2  2  2
P5        0  0  2      4  3  3

Process   Need
          A  B  C
P1        7  4  3
P2        1  2  2
P3        6  0  0
P4        0  1  1
P5        4  3  1
Solution:
First find the Need of P2:
Need(P2) = Max(P2) - Allocation(P2)
= (3, 2, 2) - (2, 0, 0)
Need(P2) = (1, 2, 2)
1. Check Request <= Need:
(1, 0, 2) <= (1, 2, 2)  True
2. Check Request <= Available:
(1, 0, 2) <= (3, 3, 2)  True
So the request is tentatively granted: Allocation(P2) becomes (3, 0, 2), Available becomes (2, 3, 0), and Need(P2) becomes (0, 2, 0), giving the updated table below.
Process   Allocation   Max        Available
          A  B  C      A  B  C    A  B  C
P1        0  1  0      7  5  3    2  3  0
P2        3  0  2      3  2  2
P3        3  0  2      9  0  2
P4        2  1  1      2  2  2
P5        0  0  2      4  3  3

Process   Need
          A  B  C
P1        7  4  3
P2        0  2  0
P3        6  0  0
P4        0  1  1
P5        4  3  1
Now apply Banker’s Algorithm for the above updated table to check whether the system is in safe state
or not.
The available resources of A, B and C are now 2, 3 and 0.
Now we check whether each process's Need can be satisfied, releasing each finished process's Allocation in turn:
(2, 3, 0) + (3, 0, 2) => (5, 3, 2)  // P2 finishes
(5, 3, 2) + (2, 1, 1) => (7, 4, 3)  // P4 finishes
(7, 4, 3) + (0, 0, 2) => (7, 4, 5)  // P5 finishes
(7, 4, 5) + (0, 1, 0) => (7, 5, 5)  // P1 finishes
(7, 5, 5) + (3, 0, 2) => (10, 5, 7) // P3 finishes
Hence the system is still in a safe state, with the safe sequence P2, P4, P5, P1, P3, so P2's request can be granted.
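The resource-request procedure above can be sketched in Python. This is a hedged illustration, not a definitive implementation; the function names is_state_safe and request_resources are my own:

```python
# Sketch of the resource-request algorithm: check Request <= Need and
# Request <= Available, tentatively allocate, then keep the allocation only
# if the resulting state is still safe (otherwise roll back).
def is_state_safe(available, max_d, alloc):
    n, m = len(max_d), len(available)
    need = [[max_d[i][j] - alloc[i][j] for j in range(m)] for i in range(n)]
    work, finish = list(available), [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += alloc[i][j]
                finish[i] = progressed = True
    return all(finish)

def request_resources(pid, request, available, max_d, alloc):
    m = len(available)
    need = [max_d[pid][j] - alloc[pid][j] for j in range(m)]
    if any(request[j] > need[j] for j in range(m)):
        raise ValueError("request exceeds declared maximum claim")
    if any(request[j] > available[j] for j in range(m)):
        return False                      # resources not free: P[pid] waits
    for j in range(m):                    # tentatively allocate
        available[j] -= request[j]
        alloc[pid][j] += request[j]
    if is_state_safe(available, max_d, alloc):
        return True                       # grant: the new state is still safe
    for j in range(m):                    # unsafe: roll back, P[pid] waits
        available[j] += request[j]
        alloc[pid][j] -= request[j]
    return False

# P2 (index 1) requests (1, 0, 2) from the example's initial state:
available  = [3, 3, 2]
max_d      = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
granted = request_resources(1, [1, 0, 2], available, max_d, allocation)
print(granted, available, allocation[1])  # True [2, 3, 0] [3, 0, 2]
```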
Deadlock Detection
If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock may occur.
In this case, the system must provide two things:
An algorithm that examines the state of the system in order to determine whether a deadlock has occurred.
An algorithm that is used to recover from the deadlock.
Thus, in order to get rid of deadlocks, the operating system periodically checks the system for deadlock. After finding a deadlock, the operating system recovers from it using recovery techniques.
Now, the main task of the operating system is to detect the deadlocks and this is done with the help of
Resource Allocation Graph.
Single Instance of Each Resource Type:
If every resource has only a single instance, then a deadlock-detection algorithm can be defined using a variant of the resource-allocation graph known as a wait-for graph. The wait-for graph is obtained from the resource-allocation graph by removing its resource nodes and collapsing the appropriate edges.
An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a resource that Pi needs. An edge Pi -> Pj exists in the wait-for graph if and only if the corresponding resource-allocation graph contains the two edges Pi -> Rq and Rq -> Pj for some resource Rq.
A deadlock exists in the system if and only if there is a cycle in the wait-for graph. To detect deadlock, the system maintains the wait-for graph and periodically invokes an algorithm that searches it for a cycle. An algorithm that detects a cycle in the graph requires O(n²) operations, where n is the number of vertices in the graph.
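Cycle detection on a wait-for graph can be sketched with a depth-first search. This is an illustrative sketch; the edges below encode the worked example that follows (P1 -> P2 -> P3 -> P4 -> P1), not a general API:

```python
# Detect deadlock on a wait-for graph: a deadlock exists iff the graph
# contains a cycle, which DFS finds as a "back edge" to a node still on
# the recursion stack.
def has_cycle(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q) == GRAY:      # back edge: cycle found
                return True
            if color.get(q) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2 (via R1), P2 for P3 (via R4), P3 for P4 (via R3),
# and P4 for P1 (via R2) -- the closed loop built in steps 1-6 below.
graph = {"P1": ["P2"], "P2": ["P3"], "P3": ["P4"], "P4": ["P1"]}
print(has_cycle(graph))  # True -> the system is in a deadlock state
```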
Now we will see the working of this Algorithm with an Example. Consider a Resource Allocation
Graph with 4 Processes P1, P2, P3, P4, and 4 Resources R1, R2, R3, R4. Find if there is a deadlock
in the Graph using the Wait for Graph-based deadlock detection algorithm.
Step 1: First take process P1, which is waiting for resource R1; resource R1 is acquired by process P2. Start a wait-for graph for the above resource-allocation graph.
Step 2: Now we can observe that there is a path from P1 to P2, as P1 is waiting for R1, which has been acquired by P2. After removing resource R1, the graph has an edge P1 -> P2.
Step 3: From P2 there is a path to P3, as P2 is waiting for R4, which is acquired by P3. So draw an edge from P2 to P3 after removing resource R4.
Step 4: From P3 there is a path to P4, as P3 is waiting for R3, which is acquired by P4. After removing R3, the graph has an edge P3 -> P4.
Step 5: Here we find that process P4 is waiting for R2, which is acquired by P1. So finally the wait-for graph has the edge P4 -> P1.
Step 6: In this graph we have found a cycle, as process P4 leads back to the starting process P1 (i.e., a closed loop). According to the algorithm, if a closed loop is found, the system is in a deadlock state. So here we can say the system is in a deadlock state.
Example:
Available
A B C
0 0 0
Work = (0, 0, 0)
P0: (0, 0, 0) <= (0, 0, 0)  True
    Work = (0, 0, 0) + (0, 1, 0) = (0, 1, 0)   <P0>
P1: (2, 0, 2) <= (0, 1, 0)  False
P2: (0, 0, 0) <= (0, 1, 0)  True
    Work = (0, 1, 0) + (3, 0, 3) = (3, 1, 3)   <P0, P2>
P3: (1, 0, 0) <= (3, 1, 3)  True
    Work = (3, 1, 3) + (2, 1, 1) = (5, 2, 4)   <P0, P2, P3>
P4: (0, 0, 2) <= (5, 2, 4)  True
    Work = (5, 2, 4) + (0, 0, 2) = (5, 2, 6)   <P0, P2, P3, P4>
P1: (2, 0, 2) <= (5, 2, 6)  True
    Work = (5, 2, 6) + (2, 0, 0) = (7, 2, 6)   <P0, P2, P3, P4, P1>
Hence the system has no deadlock.
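The detection algorithm traced above can be sketched in Python. It is like the safety algorithm, but it compares each process's outstanding Request (rather than Need) against Work; the matrices below encode the example's state, and the function name detect_deadlock is my own:

```python
# Deadlock-detection sketch: optimistically let any process whose current
# Request fits in Work finish and release its Allocation. Any process that
# can never finish is deadlocked.
def detect_deadlock(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):        # P[i] finishes and releases resources
                    work[j] += allocation[i][j]
                finish[i] = progressed = True
    return ["P%d" % i for i in range(n) if not finish[i]]  # deadlocked set

# State from the example above (Available = [0, 0, 0]):
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(detect_deadlock([0, 0, 0], allocation, request))  # [] -> no deadlock
```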
When a deadlock-detection algorithm determines that a deadlock has occurred in the system, the system must recover from that deadlock. There are two approaches to breaking a deadlock:
1. Process Termination:
To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods:
(a) Abort all the deadlocked processes:
Aborting all the processes will certainly break the deadlock, but at great expense. The deadlocked processes may have computed for a long time, and the results of those partial computations must be discarded and will probably have to be recomputed later.