Os Unit 3 Digital Notes

Please read this disclaimer before proceeding:

This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only for the respective group /
learning community. If you are not the addressee, you should not disseminate,
distribute or copy it through e-mail. Please notify the sender immediately by
e-mail if you have received this document by mistake and delete it from your
system. If you are not the intended recipient, you are notified that disclosing,
copying, distributing or taking any action in reliance on the contents of this
information is strictly prohibited.
22CS304
OPERATING SYSTEMS
(Lab Integrated)

UNIT III

Department : CSE, IT & AIML

Batch/Year : 2023 - 2027 /II

Created by : Dr.R.Sasikumar, Prof/CSE


Dr.C.S.Anita, Prof/AIML
Dr.K.Saravanan, ASP/IT
Dr.M.Vedaraj, ASP/CSE
Mrs.K.Balasaranya, AP/CSE
Date : 12.07.2024
Table of Contents

1. Contents
2. Course Objectives
3. Pre Requisites (Course Names with Code)
4. Syllabus (With Subject Code, Name, LTPC details)
5. Course Outcomes
6. CO-PO/PSO Mapping
7. Lecture Plan
8. Activity Based Learning
9. Lecture Notes
10. Lecture Slides
11. Lecture Videos
12. Assignments
13. Part A (Q & A)
14. Part B Qs
15. Supportive Online Certification Courses
16. Real time Applications in day to day life and to Industry
17. Contents Beyond the Syllabus
18. Assessment Schedule
19. Prescribed Text Books & Reference Books
20. Mini Project Suggestions


Course Objectives
22CS304 OPERATING SYSTEMS

COURSE OBJECTIVES
The Course will enable learners to:
Explain the basic concepts of operating systems and process.
Discuss threads and implement various CPU scheduling algorithms.
Describe the concept of process synchronization and implement deadlocks
Analyze various memory management schemes.
Describe I/O management and file systems.
Prerequisite
22CS304 OPERATING SYSTEMS

PREREQUISITE
Nil
Syllabus
22CS304 - OPERATING SYSTEMS

SYLLABUS 2023

UNIT I INTRODUCTION TO OPERATING SYSTEMS AND


PROCESSES

Introduction to OS –Computer system organization - architecture – Resource


management - Protection and Security – Virtualization - Operating System Structures -
Services - User and Operating-System Interface - System Calls - System Services -
Design and Implementation - Building and Booting an Operating System - Process
Concept - Process Scheduling - Operations on Processes – Inter process
Communication - IPC in Shared-Memory Systems - IPC in Message-Passing Systems
List of Exercise/Experiments:
1. Basic Unix file system commands such as ls, cd, mkdir, rmdir, cp, rm, mv, more,
lpr, man, grep, sed, etc..
2. Programs using Shell Programming.
3. Implementation of Unix System Calls.
4. Implementation of IPC using message queue
a. Get the input data (integer value) from a process called sender
b. Use Message Queue to transfer this data from sender to receiver process
c. The receiver does the prime number checking on the received data
d. Communicate the verified/status result from receiver to sender process, this
status should be displayed in the Sender process.
Note: Simultaneously execute two or more processes. Don’t do it as a single process

UNIT II THREADS AND CPU SCHEDULING


Threads & Concurrency: Overview - Multicore Programming - Multithreading Models -
Thread Libraries - Implicit Threading - Threading Issues - CPU Scheduling: Basic
Concepts - Scheduling Criteria - Scheduling Algorithms - Thread Scheduling - Multi-
Processor Scheduling - Real-Time CPU Scheduling
List of Exercise/Experiments:
1. Write a program to implement the following actions using pthreads
a. Create a thread in a program and called Parent thread, this parent thread
creates another thread (Child thread) to print out the numbers from 1 to
20. The Parent thread waits till the child thread finishes
b. Create a thread in the main program, this program passes the 'count' as
arguments to that thread function and this created thread function has to
print your name 'count' times.
2. Write C programs to implement the various CPU Scheduling Algorithms.
22CS304 - OPERATING SYSTEMS

UNIT III PROCESS SYNCHRONISATION AND DEADLOCKS


Process Synchronization - The critical-section problem, Peterson’s Solution -
Synchronization hardware, Mutex locks, Semaphores, monitors, Liveness - Classic
problems of synchronization – Bounded Buffer Problem - Readers & Writers Problem,
Dining Philosophers Problem, Barber’s shop problem. Deadlock - System model -
Deadlock characterization, Methods for handling deadlocks - Deadlock prevention -
Deadlock avoidance - Deadlock detection - Recovery from deadlock
List of Exercise/Experiments:
1. Process Synchronization using Semaphores. A shared data has to be accessed by
two categories of processes namely A and B. Satisfy the following constraints to
access the data without any data loss.
a. When a process A1 is accessing the database another process of the
same category is permitted.
b. When a process B1 is accessing the database neither process A1 nor
another process B2 is permitted.
c. When a process A1 is accessing the database process B1 should not be
allowed to access the database. Write appropriate code for both A and B
satisfying all the above constraints using semaphores.
Note: The time-stamp for accessing is approximately 10 sec.
2. Bankers Algorithm for Deadlock Avoidance

UNIT IV MEMORY MANAGEMENT


Memory Management: Contiguous Memory Allocation - Paging - Structure of the Page
Table – Swapping - Virtual Memory: Demand Paging – Copy-on write – Page
Replacement – Allocation of frames – Thrashing - Memory Compression
List of Exercise/Experiments:
1. Analysis and Simulation of Memory Allocation and Management Techniques
i. First Fit ii. Best Fit iii. Worst Fit
2. Implementation of Page Replacement Techniques
i. FIFO ii. LRU iii. Optimal page replacement

UNIT V STORAGE MANAGEMENT


Mass Storage Structure: Overview of Mass Storage Structure- HDD scheduling – Swap
Space Management, I/O systems: I/O Hardware, Application I/O interface, Kernel I/O
Subsystem, File System Interface: File Concept – Access Methods – Directory
Structure – Protection, File-System Implementation: File-System Structure- File-
System Operations - Directory Implementation - Allocation Methods - Free-Space
Management, Case Study-Linux
List of Exercise/Experiments:
1. Simulation of File Allocation Techniques
i. Sequential ii. Linked list iii. indexed
2. Implementation of File Organization Strategies
i.Single level directory ii. Two level directory iii. Hierarchical level directory
Course Outcomes
COURSE OUTCOMES

CO1: Implement the operating system concepts and process.


CO2: Analyse various CPU scheduling algorithms and thread
mechanism.

CO3: Implement process synchronization and deadlock problems


CO4: Design various memory management schemes to given situation

CO5: Implement various I/O and file management techniques


CO – PO/ PSO
Mapping
CO-PO MAPPING

COs   PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
CO1    3   2   2   2   2   -   -   -   2   -    -    2    2    -    -
CO2    3   3   2   2   2   -   -   -   2   -    -    2    2    -    -
CO3    2   2   1   1   1   -   -   -   1   -    -    1    2    -    -
CO4    3   3   1   1   1   -   -   -   1   -    -    1    2    -    -
CO5    3   3   1   1   1   -   -   -   1   -    -    1    3    1    -

1 – Low, 2 – Medium, 3 – Strong


Lecture Plan
LECTURE PLAN

S No | Topics | No of periods | Pertaining CO | Taxonomy level | Mode of delivery
1 | Process Synchronization - The critical-section problem | 1 | CO3 | K2 | ICT Tools
2 | Peterson’s Solution - Synchronization hardware | 1 | CO3 | K2 | ICT Tools
3 | Mutex locks, Semaphores | 1 | CO3 | K3 | ICT Tools
4 | Monitors, Liveness | 1 | CO3 | K2 | ICT Tools
5 | Classic problems of synchronization – Bounded Buffer Problem, Readers & Writers Problem, Dining Philosophers Problem, Barber’s shop problem | 1 | CO3 | K2 | ICT Tools
6 | Deadlock - System model - Deadlock characterization, Methods for handling deadlocks | 1 | CO3 | K2 | ICT Tools
7 | Deadlock prevention - Deadlock avoidance | 1 | CO3 | K3 | ICT Tools
8 | Deadlock detection - Recovery from deadlock | 1 | CO3 | K3 | ICT Tools
9 | Revision – Quiz Activity | 1 | CO3 | K2 | ICT Tools
Activity Based Learning
ACTIVITY BASED LEARNING

1. Role Play for Process synchronization


Lecture Notes
3. PROCESS SYNCHRONISATION AND DEADLOCKS

3.1 PROCESS SYNCHRONIZATION


Consider the bounded buffer problem. The original solution allowed at most
BUFFER_SIZE − 1 items in the buffer at the same time. Suppose we want to modify the
algorithm to remedy this deficiency. One possibility is to add an integer variable counter,
initialized to 0. The variable counter is incremented every time we add a new item to the
buffer and is decremented every time we remove one item from the buffer. The code for
the producer process can be modified as follows:

while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

The code for the consumer process can be modified as follows:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Although the producer and consumer routines shown above are correct
separately, they may not function correctly when executed concurrently. As an illustration,
suppose that the value of the variable counter is currently 5 and that the producer and
consumer processes concurrently execute the statements “counter++” and “counter--”.

Following the execution of these two statements, the value of the variable
counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is
generated correctly if the producer and consumer execute separately.

We can show that the value of counter may be incorrect as follows. Note that the
statement “counter++” may be implemented in machine language as follows:

register1 = counter

register1 = register1 + 1

counter = register1

where register1 is one of the local CPU registers. Similarly, the statement “counter--” is
implemented as follows:

register2 = counter

register2 = register2 − 1

counter = register2

where again register2 is one of the local CPU registers.


Even though register1 and register2 may be the same physical register,
remember that the contents of this register will be saved and restored by the interrupt handler.

The concurrent execution of “counter++” and “counter--” is equivalent to a sequential execution in
which the lower-level statements presented previously are interleaved in some arbitrary order
(but the order within each high-level statement is preserved). One such interleaving is the
following:

T0: producer executes register1 = counter       {register1 = 5}
T1: producer executes register1 = register1 + 1 {register1 = 6}
T2: consumer executes register2 = counter       {register2 = 5}
T3: consumer executes register2 = register2 − 1 {register2 = 4}
T4: producer executes counter = register1       {counter = 6}
T5: consumer executes counter = register2       {counter = 4}

Notice that we have arrived at the incorrect state “counter == 4”, indicating that
four buffers are full, when, in fact, five buffers are full. If we reversed the order of the statements
at T4 and T5, we would arrive at the incorrect state “counter == 6”.
We would arrive at this incorrect state because we allowed both processes to
manipulate the variable counter concurrently. A situation like this, where several processes
access and manipulate the same data concurrently and the outcome of the
execution depends on the particular order in which the access takes place, is called a
race condition. To guard against the race condition above, we need to ensure that only
one process at a time can be manipulating the variable counter. To make such a
guarantee, we require that the processes be synchronized in some way.

Reference Video

Introduction to Threads
https://youtu.be/LOfGJcVnvAk
3.2 THE CRITICAL-SECTION PROBLEM

Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has
a segment of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. The important feature of the system
is that, when one process is executing in its critical section, no other process is allowed to execute
in its critical section. Each process must request permission to enter its critical section.

The section of code implementing this request is the entry section. The critical
section may be followed by an exit section. The remaining code is the remainder section.
The general structure of a typical process Pi is shown in Figure 3.1.

Figure 3.1 - General structure of a typical process


A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections.

2. Progress. If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their remainder sections
can participate in deciding which will enter its critical section next, and this selection cannot be
postponed indefinitely.

3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
Two general approaches are used to handle critical sections in operating systems:
preemptive kernels and non-preemptive kernels. A preemptive kernel allows a process to be
preempted while it is running in kernel mode. A non-preemptive kernel does not allow a
process running in kernel mode to be preempted; a kernel-mode process will run until it exits
kernel mode, blocks, or voluntarily yields control of the CPU.

3.3 PETERSON’S SOLUTION


Peterson’s solution is restricted to two processes that alternate execution
between their critical sections and remainder sections. The processes are numbered P0 and P1.
For convenience, Pi and Pj are used to denote the two processes; that is, j equals 1 − i.

Peterson’s solution requires the two processes to share two data items:

int turn;

boolean flag[2];

The variable turn indicates whose turn it is to enter its critical section. That is, if turn
== i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate
if a process is ready to enter its critical section. For example, if flag[i] is true, Pi is ready
to enter its critical section.

To enter the critical section, process Pi first sets flag[i] to be true and then sets
turn to the value j, thereby asserting that if the other process wishes to enter the
critical section, it can do so. If both processes try to enter at the same time, turn will be set
to both i and j at roughly the same time. Only one of these assignments will last; the other will
occur but will be overwritten immediately. The eventual value of turn determines which of the
two processes is allowed to enter its critical section first.

We now prove that this solution is correct. We need to show that:

1. Mutual exclusion is preserved.

2. The progress requirement is satisfied.

3. The bounded-waiting requirement is met.


To prove property 1, we note that each Pi enters its critical section only if either
flag[j] == false or turn == i. Also note that, if both processes can be executing in their critical
sections at the same time, then flag[0] == flag[1] == true. These two observations imply that
P0 and P1 could not have successfully executed their while statements at about the same time,
since the value of turn can be either 0 or 1 but cannot be both. Hence, one of the processes—
say, Pj—must have successfully executed the while statement, whereas Pi had to execute at
least one additional statement (“turn = j”). However, at that time, flag[j] == true and turn
== j, and this condition will persist as long as Pj is in its critical section; as a result, mutual
exclusion is preserved.

To prove properties 2 and 3, we note that a process Pi can be prevented from


entering the critical section only if it is stuck in the while loop with the condition flag[j] == true
and turn == j; this loop is the only one possible. If Pj is not ready to enter the critical section,
then flag[j] == false, and Pi can enter its critical section. If Pj has set flag[j] to true and is also
executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will
enter the critical section.

If turn == j, then Pj will enter the critical section. However, once Pj exits its critical
section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to
true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while
executing the while statement, Pi will enter the critical section (progress) after at most one entry
by Pj (bounded waiting).
3.4 Hardware Support for Synchronization
Software-based solutions are not guaranteed to work on modern computer
architectures. Three forms of hardware support help in solving the
critical-section problem. These primitive operations can be used directly as synchronization
tools, or they can be used to form the foundation of more abstract synchronization
mechanisms.

Memory Barriers

Hardware Instructions

Atomic variables

3.4.1 Memory Barriers
A system may reorder instructions, a policy that can lead to unreliable data
states. How a computer architecture determines what memory guarantees it will provide to
an application program is known as its memory model. In general, a memory model falls
into one of two categories:

1. Strongly ordered, where a memory modification on one processor is immediately visible
to all other processors.

2. Weakly ordered, where modifications to memory on one processor may not be
immediately visible to other processors.

Memory models vary by processor type, so kernel developers cannot make any
assumptions regarding the visibility of modifications to memory on a shared-memory
multiprocessor. To address this issue, computer architectures provide instructions that can
force any changes in memory to be propagated to all other processors, thereby ensuring that
memory modifications are visible to threads running on other processors.

Such instructions are known as memory barriers or memory fences. When


a memory barrier instruction is performed, the system ensures that all loads and stores are
completed before any subsequent load or store operations are performed. Therefore, even if
instructions were reordered, the memory barrier ensures that the store operations are
completed in memory and visible to other processors before future load or store operations
are performed.
If we add a memory barrier operation to Thread 1

while (!flag)

    memory_barrier();

print x;

we guarantee that the value of flag is loaded before the value of x.

Similarly, if we place a memory barrier between the assignments performed by Thread 2

x = 100;

memory_barrier();

flag = true;

we ensure that the assignment to x occurs before the assignment to flag.

3.4.2 Hardware Instructions


Many modern computer systems provide special hardware instructions that
allow us either to test and modify the content of a word or to swap the contents of two
words atomically — that is, as one uninterruptible unit. This is accomplished by the
test_and_set() and compare_and_swap() instructions.

The important characteristic of the test_and_set() instruction is that it is executed atomically.
Thus, if two test_and_set() instructions are executed simultaneously (each on a different core),
they will be executed sequentially in some arbitrary order. If the machine supports the
test_and_set() instruction, then we can implement mutual exclusion by declaring a boolean
variable lock, initialized to false, which each process tests and sets on entry to its critical section.
The compare_and_swap() instruction (CAS), just like the test_and_set() instruction, operates
on two words atomically, but uses a different mechanism that is based on swapping the
content of two words. The CAS instruction operates on three operands: value, expected,
and new_value.

The operand value is set to new_value only if the expression (*value ==
expected) is true. Regardless, CAS always returns the original value of the variable value.
The important characteristic of this instruction is that it is executed atomically. Thus, if two
CAS instructions are executed simultaneously (each on a different core), they will be executed
sequentially in some arbitrary order.
Mutual exclusion using CAS can be provided as follows: A global variable
(lock) is declared and is initialized to 0. The first process that invokes compare_and_swap()
will set lock to 1. It will then enter its critical section, because the original value of lock
was equal to the expected value of 0. Subsequent calls to compare_and_swap()
will not succeed, because lock now is not equal to the expected value of 0. When a
process exits its critical section, it sets lock back to 0, which allows another process to
enter its critical section.

Although this algorithm satisfies the mutual-exclusion requirement, it does


not satisfy the bounded-waiting requirement.

3.4.3 Atomic Variables


The compare_and_swap() instruction is not used directly to provide mutual
exclusion. Rather, it is used as a basic building block for constructing other tools that solve
the critical-section problem. One such tool is an atomic variable, which provides atomic
operations on basic data types such as integers and booleans.

Incrementing or decrementing an integer value may produce a race
condition. Atomic variables can be used to ensure mutual exclusion in situations where
there may be a data race on a single variable while it is being updated, as when a counter
is incremented. Most systems that support atomic variables provide special atomic data
types as well as functions for accessing and manipulating atomic variables.

These functions are often implemented using compare_and_swap() operations.
As an example, the following increments the atomic integer sequence:

increment(&sequence);

where the increment() function is implemented using the CAS instruction:


void increment(atomic_int *v)
{
    int temp;

    do {
        temp = *v;
    }
    while (temp != compare_and_swap(v, temp, temp + 1));
}
It is important to note that although atomic variables provide atomic
updates, they do not entirely solve race conditions in all circumstances.

3.5 Mutex Locks


The hardware-based solutions to the critical-section problem are
complicated as well as generally inaccessible to application programmers. Instead,
operating-system designers build higher-level software tools to solve the critical-section
problem. The simplest of these tools is the mutex lock. (In fact, the term mutex is short
for mutual exclusion.) We use the mutex lock to protect critical sections and thus prevent
race conditions. That is, a process must acquire the lock before entering a critical
section; it releases the lock when it exits the critical section. The acquire() function
acquires the lock, and the release() function releases the lock.

Figure 3.2 - Solution to the critical-section problem using mutex locks


A mutex lock has a boolean variable available whose value indicates if the
lock is available or not. If the lock is available, a call to acquire() succeeds, and the lock
is then considered unavailable. A process that attempts to acquire an unavailable lock is
blocked until the lock is released.

The definition of acquire() is as follows:

acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

The definition of release() is as follows:

release() {
    available = true;
}
The main disadvantage of the implementation given here is that it
requires busy waiting. While a process is in its critical section, any other process that
tries to enter its critical section must loop continuously in the call to acquire(). This
continual looping is clearly a problem in a real multiprogramming system, where a single
CPU core is shared among many processes. Busy waiting also wastes CPU cycles that
some other process might be able to use productively.

The type of mutex lock we have been describing is also called a spinlock
because the process “spins” while waiting for the lock to become available. Spinlocks do
have an advantage, however, in that no context switch is required when a process
must wait on a lock, and a context switch may take considerable time. In certain
circumstances on multicore systems, spinlocks are in fact the preferable choice for
locking. If a lock is to be held for a short duration, one thread can “spin” on one
processing core while another thread performs its critical section on another core.
3.6 Semaphores
Mutex locks are generally considered the simplest of synchronization
tools. A semaphore S is an integer variable that, apart from initialization, is accessed
only through two standard atomic operations: wait() and signal(). Semaphores were
introduced by the Dutch computer scientist Edsger Dijkstra, and as such, the wait()
operation was originally termed P (from the Dutch proberen, “to test”); signal() was
originally called V (from verhogen, “to increment”). The definition of wait() is as follows:

wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}

The definition of signal() is as follows:

signal(S) {

S++;

}
All modifications to the integer value of the semaphore in the wait() and signal()
operations must be executed atomically. That is, when one process modifies the
semaphore value, no other process can simultaneously modify that same semaphore
value. In addition, in the case of wait(S), the testing of the integer value of S (S ≤ 0), as
well as its possible modification (S--), must be executed without interruption. We shall
see how these operations can be implemented in Section 3.6.2. First, let’s see how
semaphores can be used.
3.6.1 Semaphore Usage
Operating systems often distinguish between counting and binary
semaphores. The value of a counting semaphore can range over an unrestricted domain.
The value of a binary semaphore can range only between 0 and 1. Thus, binary
semaphores behave similarly to mutex locks. On systems that do not provide mutex
locks, binary semaphores can be used instead for providing mutual exclusion.

Counting semaphores can be used to control access to a given resource


consisting of a finite number of instances. The semaphore is initialized to the number of
resources available. Each process that wishes to use a resource performs a wait()
operation on the semaphore (thereby decrementing the count). When a process
releases a resource, it performs a signal() operation (incrementing the count). When
the count for the semaphore goes to 0, all resources are being used. After
that, processes that wish to use a resource will block until the count becomes
greater than 0.

Semaphores can be used to solve various synchronization problems.


For example, consider two concurrently running processes: P1 with a statement S1 and
P2 with a statement S2. Suppose we require that S2 be executed only after S1 has
completed. We can implement this scheme readily by letting P1 and P2 share a common
semaphore synch, initialized to 0. In process P1, we insert the statements

S1;

signal(synch);

In process P2, we insert the statements

wait(synch);

S2;

Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked


signal(synch), which is after statement S1 has been executed.
3.6.2 Semaphore Implementation
Recall that the implementation of mutex locks suffers from busy waiting.
The definitions of the wait() and signal() semaphore operations just described present the
same problem. To overcome this problem, we can modify the definition of the wait() and
signal() operations as follows: When a process executes the wait() operation and
finds that the semaphore value is not positive, it must wait.

However, rather than engaging in busy waiting, the process can suspend
itself. The suspend operation places a process into a waiting queue associated with the
semaphore, and the state of the process is switched to the waiting state. Then control is
transferred to the CPU scheduler, which selects another process to execute.

A process that is suspended, waiting on a semaphore S, should be


restarted when some other process executes a signal() operation. The process is restarted
by a wakeup() operation, which changes the process from the waiting state to the ready
state. The process is then placed in the ready queue. (The CPU may or may not be
switched from the running process to the newly ready process, depending on the CPU-
scheduling algorithm.)

To implement semaphores under this definition, we define a semaphore as

follows:

typedef struct {

int value;

struct process *list;

} semaphore;
Each semaphore has an integer value and a list of processes list. When a process must
wait on a semaphore, it is added to the list of processes. A signal() operation removes
one process from the list of waiting processes and awakens that process.
Now, the wait() semaphore operation can be defined as

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        sleep();
    }
}

and the signal() semaphore operation can be defined as

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
The sleep() operation suspends the process that invokes it. The
wakeup(P) operation resumes the execution of a suspended process P. These two
operations are provided by the operating system as basic system calls.
3.7 Monitors
Although semaphores provide a convenient and effective mechanism for
process synchronization, using them incorrectly can result in timing errors that are difficult
to detect, since these errors happen only if particular execution sequences take place, and
these sequences do not always occur.

All processes share a binary semaphore variable mutex, which is initialized


to 1. Each process must execute wait(mutex) before entering the critical section and
signal(mutex) afterward. If this sequence is not observed, two processes may be in
their critical sections simultaneously. Next, we list several difficulties that may result. Note
that these difficulties will arise even if a single process is not well behaved. This situation
may be caused by an honest programming error or an uncooperative programmer.

Suppose that a program interchanges the order in which the wait() and signal()
operations on the semaphore mutex are executed, resulting in the following
execution:

signal(mutex);

...

critical section

...

wait(mutex);
In this situation, several processes may be executing in their critical sections
simultaneously, violating the mutual-exclusion requirement. This error may be
discovered only if several processes are simultaneously active in their critical sections.
Note that this situation may not always be reproducible.
Suppose that a program replaces signal(mutex) with wait(mutex). That is, it executes

wait(mutex);

...

critical section

...

wait(mutex);
In this case, the process will permanently block on the second call to wait(), as the
semaphore is now unavailable.

Suppose that a process omits the wait(mutex), or the signal(mutex), or both. In this
case, either mutual exclusion is violated or the process will permanently block.

These examples illustrate that various types of errors can be generated easily when
programmers use semaphores or mutex locks incorrectly to solve the critical-section
problem. One strategy for dealing with such errors is to incorporate simple
synchronization tools as high-level language constructs. This leads to one
fundamental high-level synchronization construct—the monitor type.
3.7.1 Monitor Usage
An abstract data type—or ADT—encapsulates data with a set of functions to
operate on that data that are independent of any specific implementation of the ADT. A
monitor type is an ADT that includes a set of programmer-defined operations
that are provided with mutual exclusion within the monitor. The monitor type
also declares the variables whose values define the state of an instance of that type, along
with the bodies of functions that operate on those variables.

The representation of a monitor type cannot be used directly by the various


processes. Thus, a function defined within a monitor can access only those variables
declared locally within the monitor and its formal parameters. Similarly, the local variables
of a monitor can be accessed by only the local functions.

The monitor construct ensures that only one process at a time is active within the
monitor. Consequently, the programmer does not need to code this synchronization
constraint explicitly.
However, the monitor construct as described so far is not powerful enough to
model some synchronization schemes; additional mechanisms are provided by the
condition construct. A programmer who needs to write a tailor-made synchronization
scheme can define one or more variables of type condition:

condition x, y;

The only operations that can be invoked on a condition variable are wait() and signal().
The operation

x.wait();
means that the process invoking this operation is suspended until another process
invokes

x.signal();
The x.signal() operation resumes exactly one suspended process. If no process is
suspended, then the signal() operation has no effect; that is, the state of x is the same
as if the operation had never been executed. Contrast this operation with the signal()
operation associated with semaphores, which always affects the state of the semaphore.

Fig 3.1 Schematic view of a monitor.


Now suppose that, when the x.signal() operation is invoked by a process P,
there exists a suspended process Q associated with condition x. Clearly, if the
suspended process Q is allowed to resume its execution, the signaling process P must
wait. Otherwise, both P and Q would be active simultaneously within the monitor. Note,
however, that conceptually both processes can continue with their execution. Two
possibilities exist:

1.Signal and wait. P either waits until Q leaves the monitor or waits for another
condition.

2. Signal and continue. Q either waits until P leaves the monitor or waits for another
condition.

There are reasonable arguments in favor of adopting either option. On the one hand,
since P was already executing in the monitor, the signal-and-continue method seems more
reasonable. On the other hand, if we allow thread P to continue, then by the time Q is resumed,
the logical condition for which Q was waiting may no longer hold. A compromise
between these two choices exists as well: when thread P executes the signal operation,
it immediately leaves the monitor. Hence, Q is immediately resumed.

Fig 3.2 Monitor with condition variable


3.7.2 Implementing a Monitor Using Semaphores
Consider a possible implementation of the monitor mechanism using
semaphores. For each monitor, a binary semaphore mutex (initialized to 1) is provided
to ensure mutual exclusion. A process must execute wait(mutex) before entering the
monitor and must execute signal(mutex) after leaving the monitor.

Since a signaling process must wait until the resumed process either leaves or waits,
an additional binary semaphore, next, is introduced, initialized to 0. The signaling processes
can use next to suspend themselves. An integer variable next_count is also provided to count
the number of processes suspended on next. Thus, each external function F is
replaced by
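A sketch of this wrapper, following the classic semaphore-based construction (here next_count counts the signaling processes suspended on next):

```
wait(mutex);
    ...
    body of F
    ...
if (next_count > 0)
    signal(next);
else
    signal(mutex);
```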

Mutual exclusion within a monitor is ensured.


For each condition x, we introduce a binary semaphore x_sem and an integer variable
x_count, both initialized to 0. The operation x.wait() can now be implemented as
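Using the mutex and next semaphores introduced above, the standard construction is:

```
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;
```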

The operation x.signal() can be implemented as
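Again following the standard construction:

```
if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}
```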


3.7.3 Resuming Processes within a Monitor
If several processes are suspended on condition x, and an x.signal()
operation is executed by some process, then how do we determine which of the
suspended processes should be resumed next? One simple solution is to use a
first-come, first-served (FCFS) ordering, so that the process that has been waiting
the longest is resumed first.

For this purpose, the conditional-wait construct can be used. This construct has the form


x.wait(c);
where c is an integer expression that is evaluated when the wait() operation is executed.
The value of c, which is called a priority number, is then stored with the name of
the process that is suspended. When x.signal() is executed, the process with
the smallest priority number is resumed next.
Consider the Resource Allocator monitor, which controls the allocation of a
single resource among competing processes. Each process, when requesting an
allocation of this resource, specifies the maximum time it plans to use the resource. The
monitor allocates the resource to the process that has the shortest time-allocation request.
A process that needs to access the resource in question must observe the following
sequence:

R.acquire(t);

...

access the resource;

...

R.release();

where R is an instance of type ResourceAllocator.
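A sketch of the ResourceAllocator monitor, following the classic formulation (the member names acquire() and release() and the busy flag match the access sequence above; time is the requested usage time passed to the conditional wait):

```
monitor ResourceAllocator
{
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);   /* conditional wait: smallest time resumed first */
        busy = true;
    }

    void release() {
        busy = false;
        x.signal();
    }

    initialization code() {
        busy = false;
    }
}
```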


Unfortunately, the monitor concept cannot guarantee that the preceding
access sequence will be observed. In particular, the following problems can occur:

 A process might access a resource without first gaining access permission to the
resource.

 A process might never release a resource once it has been granted access to the
resource.

 A process might attempt to release a resource that it never requested.


 A process might request the same resource twice (without first releasing the
resource)
The same difficulties are encountered with the use of semaphores, and
these difficulties are similar in nature to those that encouraged us to develop the monitor
constructs in the first place. Previously, we had to worry about the correct use of
semaphores. Now, we have to worry about the correct use of higher-level
programmer-defined operations, with which the compiler can no longer assist us.

One possible solution to the current problem is to include the resource


access operations within the Resource Allocator monitor. However, using this solution will
mean that scheduling is done according to the built-in monitor-scheduling algorithm rather
than the one we have coded.

To ensure that the processes observe the appropriate sequences, we must inspect all the
programs that make use of the Resource Allocator monitor and its managed resource.
We must check two conditions to establish the correctness of this system.

 First, user processes must always make their calls on the monitor in a correct
sequence.

 Second, we must be sure that an uncooperative process does not simply ignore the
mutual-exclusion gateway provided by the monitor and try to access the shared
resource directly, without using the access protocols.

 Only if these two conditions can be ensured can we guarantee that no time-
dependent errors will occur and that the scheduling algorithm will not be defeated.

Although this inspection may be possible for a small, static system, it is not
reasonable for a large system or a dynamic system. This access-control problem can be
solved only through the use of additional mechanisms.
3.8 Liveness
One consequence of using synchronization tools to coordinate access to
critical sections is the possibility that a process attempting to enter its critical
section will wait indefinitely. Recall the three criteria that solutions to the critical-section
problem must satisfy; indefinite waiting violates two of these—the progress and bounded-
waiting criteria.

Liveness refers to a set of properties that a system must satisfy to ensure


that processes make progress during their execution life cycle. A process waiting
indefinitely under the circumstances just described is an example of a “liveness failure.”

There are many different forms of liveness failure; however, all are
generally characterized by poor performance and responsiveness. A very simple example
of a liveness failure is an infinite loop. A busy wait loop presents the possibility of a liveness
failure, especially if a process may loop an arbitrarily long period of time. Efforts at
providing mutual exclusion using tools such as mutex locks and semaphores can often
lead to such failures in concurrent programming.

There are two situations that can lead to liveness failures: deadlock and
priority inversion.

3.8.1 Deadlock
The implementation of a semaphore with a waiting queue may result in a
situation where two or more processes are waiting indefinitely for an event that
can be caused only by one of the waiting processes. The event in question is the
execution of a signal() operation. When such a state is reached, these processes are said to be
deadlocked.
To illustrate this, consider a system consisting of two processes, P0 and P1,

each accessing two semaphores, S and Q, set to the value 1:
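The structure of the two processes can be laid out side by side:

```
P0                  P1
--------            --------
wait(S);            wait(Q);
wait(Q);            wait(S);
  ...                 ...
signal(S);          signal(Q);
signal(Q);          signal(S);
```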

Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0


executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes
wait(S), it must wait until P0 executes signal(S). Since these signal() operations cannot
be executed, P0 and P1 are deadlocked.

We say that a set of processes is in a deadlocked state when every process


in the set is waiting for an event that can be caused only by another process in the set.
The “events” with which we are mainly concerned here are the acquisition and release of
resources such as mutex locks and semaphores.

3.8.2 Priority Inversion


A scheduling challenge arises when a higher-priority process needs to
read or modify kernel data that are currently being accessed by a lower-
priority process—or a chain of lower-priority processes. Since kernel data are
typically protected with a lock, the higher-priority process will have to wait for a lower-
priority one to finish with the resource. The situation becomes more complicated if the
lower-priority process is preempted in favor of another process with a higher priority.
This liveness problem is known as priority inversion, and it can occur only in
systems with more than two priorities. As an example, assume three processes—L, M, and
H—whose priorities follow the order L < M < H, and assume that process H requires a
semaphore S that is currently being accessed by the lower-priority process L. Ordinarily,
H would simply wait for L to finish with S. Now suppose that process M becomes runnable,
preempting L; indirectly, a process with a lower priority (M) has affected how long H
must wait. Typically, priority inversion is avoided by implementing a priority-inheritance
protocol. According to this protocol, all processes that are accessing resources needed by
a higher-priority process inherit the higher priority until they are finished with the
resources in question. When they are finished, their priorities revert to their original values.

In the example above, a priority-inheritance protocol would allow process


L to temporarily inherit the priority of process H, thereby preventing process M from
preempting its execution. When process L had finished using resource S, it would
relinquish its inherited priority from H and assume its original priority. Because resource
S would now be available, process H—not M—would run next.
3.9 Classic Problems of Synchronization
There are a number of synchronization problems as examples of a large
class of concurrency-control problems. These problems are used for testing nearly
every newly proposed synchronization scheme.

3.9.1 The Bounded-Buffer Problem


In our problem, the producer and consumer processes share the following

data structures:

int n;
semaphore mutex = 1;

semaphore empty = n;

semaphore full = 0;

We assume that the pool consists of n buffers, each capable of holding one
item. The mutex binary semaphore provides mutual exclusion for accesses to the buffer
pool and is initialized to the value 1. The empty and full semaphores count the number of
empty and full buffers. The semaphore empty is initialized to the value n; the semaphore
full is initialized to the value 0. Note the symmetry between the producer and the
consumer. We can interpret this code as the producer producing full buffers for the
consumer or as the consumer producing empty buffers for the producer.
3.9.2 The Readers–Writers Problem
Suppose that a database is to be shared among several concurrent processes. Some of
these processes may want only to read the database, whereas others may want to update
(that is, read and write) the database. We distinguish between these two types of
processes by referring to the former as readers and to the latter as writers. Obviously, if
two readers access the shared data simultaneously, no adverse effects will result.
However, if a writer and some other process (either a reader or a writer) access the
database simultaneously, chaos may ensue.

To ensure that these difficulties do not arise, we require that the writers
have exclusive access to the shared database while writing to the database. This
synchronization problem is referred to as the readers–writers problem. Since it was
originally stated, it has been used to test nearly every new synchronization primitive.
The readers–writers problem has several variations, all involving priorities.
The simplest one, referred to as the first readers–writers problem, requires that no
reader be kept waiting unless a writer has already obtained permission to use the
shared object.

In other words, no reader should wait for other readers to finish simply
because a writer is waiting. The second readers–writers problem requires that, once a
writer is ready, that writer perform its write as soon as possible.

In other words, if a writer is waiting to access the object, no new readers


may start reading. A solution to either problem may result in starvation. In the first case,
writers may starve; in the second case, readers may starve.

For this reason, other variants of the problem have been proposed, including
starvation-free solutions to the second readers–writers problem. Next, we present a
solution to the first readers–writers problem.

In the solution to the first readers–writers problem, the reader processes share the
following data structures:

semaphore rw_mutex = 1;

semaphore mutex = 1;

int read_count = 0;

The binary semaphores mutex and rw_mutex are initialized to 1; read_count is an integer
initialized to 0. The semaphore rw_mutex is common to both reader and writer
processes. The mutex semaphore is used to ensure mutual exclusion when the variable
read_count is updated.

The read_count variable keeps track of how many processes are currently
reading the object. The semaphore rw_mutex functions as a mutual-exclusion
semaphore for the writers. It is also used by the first or last reader that enters or exits
the critical section. It is not used by readers that enter or exit while other readers are in
their critical sections.
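The structures of the writer and reader processes in this classic first readers–writers solution are:

```
/* writer */
while (true) {
    wait(rw_mutex);
        ...
        /* writing is performed */
        ...
    signal(rw_mutex);
}

/* reader */
while (true) {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);     /* first reader locks out writers */
    signal(mutex);
        ...
        /* reading is performed */
        ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);   /* last reader lets writers in */
    signal(mutex);
}
```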
Note that, if a writer is in the critical section and n readers are waiting, then
one reader is queued on rw_mutex, and n − 1 readers are queued on mutex. Also observe
that, when a writer executes signal(rw_mutex), we may resume the execution of either
the waiting readers or a single waiting writer. The selection is made by the scheduler. The
readers–writers problem and its solutions have been generalized to provide reader–writer
locks on some systems.

Acquiring a reader–writer lock requires specifying the mode of the lock:


either read or write access. When a process wishes only to read shared data, it requests
the reader–writer lock in read mode. A process wishing to modify the shared data must
request the lock in write mode. Multiple processes are permitted to concurrently acquire
a reader–writer lock in read mode, but only one process may acquire the lock for writing,
as exclusive access is required for writers.
Reader–writer locks are most useful in the following situations:
•In applications where it is easy to identify which processes only read shared data and

which processes only write shared data.

•In applications that have more readers than writers. This is because reader–writer locks

generally require more overhead to establish than semaphores or mutual-exclusion locks.


The increased concurrency of allowing multiple readers compensates for the overhead
involved in setting up the reader–writer lock.

3.9.3 The Dining-Philosophers Problem


Consider five philosophers who spend their lives thinking and eating.
The philosophers share a circular table surrounded by five chairs, each belonging to
one philosopher. In the center of the table is a bowl of rice, and the table is laid with
five single chopsticks. When a philosopher thinks, she does not interact with her
colleagues. From time to time, a philosopher gets hungry and tries to pick up the two
chopsticks that are closest to her (the chopsticks that are between her and her left and
right neighbors). A philosopher may pick up only one chopstick at a time. Obviously,
she cannot pick up a chopstick that is already in the hand of a neighbor. When a hungry
philosopher has both her chopsticks at the same time, she eats without releasing the
chopsticks. When she is finished eating, she puts down both chopsticks and starts
thinking again.

Fig 3.3 The situation of the dining philosophers


The dining-philosophers problem is considered a classic synchronization problem neither
because of its practical importance nor because computer scientists dislike philosophers
but because it is an example of a large class of concurrency-control problems. It is a
simple representation of the need to allocate several resources among several processes
in a deadlock-free and starvation-free manner.

Semaphore Solution

• One simple solution is to represent each chopstick with a semaphore.


• A philosopher tries to grab a chopstick by executing a wait() operation on that
semaphore. She releases her chopsticks by executing the signal() operation on the
appropriate semaphores. Thus, the shared data are semaphore chopstick[5]; where
all the elements of chopstick are initialized to 1.

• The structure of philosopher i consists of these wait() and signal() operations on the two chopstick semaphores adjacent to her.


• Although this solution guarantees that no two neighbors are eating simultaneously, it
nevertheless must be rejected because it could create a deadlock.

• Suppose that all five philosophers become hungry at the same time and each grabs her
left chopstick. All the elements of chopstick will now be equal to 0. When each
philosopher tries to grab her right chopstick, she will be delayed forever.
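The structure of philosopher i referred to above is:

```
while (true) {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
        ...
        /* eat for a while */
        ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
        ...
        /* think for a while */
        ...
}
```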
Several possible remedies to the deadlock problem are the following:

• Allow at most four philosophers to be sitting simultaneously at the table.


•Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do

this, she must pick them up in a critical section).

•Use an asymmetric solution— that is, an odd-numbered philosopher picks up first her

left chopstick and then her right chopstick, whereas an even numbered philosopher picks
up her right chopstick and then her left chopstick.

Monitor Solution
Let’s see a deadlock-free solution to the dining-philosophers problem. This
solution imposes the restriction that a philosopher may pick up her chopsticks only if both
of them are available. To code this solution, we need to distinguish among three states
in which we may find a philosopher. For this purpose, we introduce the following data
structure:

enum {THINKING, HUNGRY, EATING} state[5];


Philosopher i can set the variable state[i] = EATING only if her two neighbors are not
eating: (state[(i+4) % 5] != EATING) and (state[(i+1) % 5] != EATING).

We also need to declare

condition self[5];
This allows philosopher i to delay herself when she is hungry but is unable to obtain the
chopsticks she needs.

We are now in a position to describe our solution to the
dining-philosophers problem. The distribution of the chopsticks is controlled by the
monitor DiningPhilosophers.
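A sketch of the monitor, following the classic formulation (the helper test() encodes the neighbor check given above):

```
monitor DiningPhilosophers
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);      /* may allow left neighbor to eat  */
        test((i + 1) % 5);      /* may allow right neighbor to eat */
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
```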
Each philosopher, before starting to eat, must invoke the operation pickup(). This act may
result in the suspension of the philosopher process. After the successful completion of the
operation, the philosopher may eat. Following this, the philosopher invokes the putdown()
operation. Thus, philosopher i must invoke the operations pickup() and putdown() in the
following sequence:

DiningPhilosophers.pickup(i);

...

eat

...

DiningPhilosophers.putdown(i);
It is easy to show that this solution ensures that no two neighbors are eating
simultaneously and that no deadlocks will occur. As we already noted, however, it is possible
for a philosopher to starve to death.
3.10 DEADLOCKS
3.10.1 System Model

A system consists of a finite number of resources to be distributed among a number of


competing threads. The resources may be partitioned into several types (or classes), each
consisting of some number of identical instances. CPU cycles, files, and I/O devices (such
as network interfaces and DVD drives) are examples of resource types. If a system has
four CPUs, then the resource type CPU has four instances. Similarly, the resource type
network may have two instances. If a thread requests an instance of a resource type, the
allocation of any instance of the type should satisfy the request. If it does not, then the
instances are not identical, and the resource type classes have not been defined properly.

A thread must request a resource before using it and must release the
resource after using it. A thread may request as many resources as it requires to carry
out its designated task. Obviously, the number of resources requested may not exceed
the total number of resources available in the system. In other words, a thread cannot
request two network interfaces if the system has only one.
Under the normal mode of operation, a thread may utilize a resource in only
the following sequence:
1.Request. The thread requests the resource. If the request cannot be granted

immediately (for example, if a mutex lock is currently held by another thread), then the
requesting thread must wait until it can acquire the resource.
2. Use. The thread can operate on the resource (for example, if the resource is a mutex

lock, the thread can access its critical section).


3. Release. The thread releases the resource

A set of threads is in a deadlocked state when every thread in the set is waiting for an
event that can be caused only by another thread in the set. The events with which we are
mainly concerned here are resource acquisition and release. The resources are typically
logical (for example, mutex locks, semaphores, and files); however, other types of events
may result in deadlocks, including reading from a network interface or the IPC
(interprocess communication) facilities.
3.10.2 Deadlock Characterization
The following four conditions characterize deadlock.

Necessary Conditions
A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
1. Mutual exclusion. At least one resource must be held in a non-sharable mode; that
is, only one thread at a time can use the resource. If another thread requests that
resource, the requesting thread must be delayed until the resource has been released.
2.Hold and wait. A thread must be holding at least one resource and waiting to acquire

additional resources that are currently being held by other threads.


3. No preemption. Resources cannot be preempted; that is, a resource can be released
only voluntarily by the thread holding it, after that thread has completed its task.
4.Circular wait. A set {T0, T1, ..., Tn} of waiting threads must exist such that T0 is
waiting for a resource held by T1, T1 is waiting for a resource held by T2, ..., Tn−1 is
waiting for a resource held by Tn, and Tn is waiting for a resource held by T0.

Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph
called a system resource-allocation graph. This graph consists of a set of vertices V
and a set of edges E. The set of vertices V is partitioned into two different types of nodes:
T = {T1, T2, ..., Tn}, the set consisting of all the active threads in the system, and R =
{R1, R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from thread Ti to resource type Rj is denoted by Ti →
Rj; it signifies that thread Ti has requested an instance of resource type Rj and
is currently waiting for that resource. A directed edge from resource type Rj to thread Ti
is denoted by Rj → Ti ; it signifies that an instance of resource type Rj has been
allocated to thread Ti. A directed edge Ti → Rj is called a request edge; a directed
edge Rj → Ti is called an assignment edge. We represent each thread Ti as a circle
and each resource type Rj as a rectangle.
When thread Ti requests an instance of resource type Rj, a request edge is
inserted in the resource-allocation graph. When this request can be fulfilled, the request
edge is instantaneously transformed to an assignment edge. When the thread no
longer needs access to the resource, it releases the resource. As a result, the assignment
edge is deleted.

Fig 3.4 Resource allocation graph

The resource-allocation graph shown above depicts the following situation.


• The sets T, R, and E:
◦ T = {T1, T2, T3}
◦ R = {R1, R2, R3, R4}
◦ E = {T1 → R1, T2 → R3, R1 → T2, R2 → T2, R2 → T1, R3 → T3}
• Resource instances:
◦ One instance of resource type R1
◦ Two instances of resource type R2
◦ One instance of resource type R3
◦ Three instances of resource type R4
• Thread states:
◦ Thread T1 is holding an instance of resource type R2 and is waiting for an instance of
resource type R1.
◦Thread T2 is holding an instance of R1 and an instance of R2 and is waiting for an
instance of R3.

◦ Thread T3 is holding an instance of R3.


Given the definition of a resource-allocation graph, it can be shown that, if
the graph contains no cycles, then no thread in the system is deadlocked. If the graph
does contain a cycle, then a deadlock may exist.

If each resource type has exactly one instance, then a cycle implies that
a deadlock has occurred. If the cycle involves only a set of resource types, each of
which has only a single instance, then a deadlock has occurred. Each thread involved in
the cycle is deadlocked. In this case, a cycle in the graph is both a necessary and
a sufficient condition for the existence of deadlock.

If each resource type has several instances, then a cycle does not
necessarily imply that a deadlock has occurred. In this case, a cycle in the graph

is a necessary but not a sufficient condition for the existence of deadlock.

Fig 3.5 Resource allocation graph


Suppose that thread T3 requests an instance of resource type R2. Since no resource
instance is currently available, we add a request edge T3 → R2 to the graph. At this
point, two minimal cycles exist in the system:

T1 → R1 → T2 → R3 → T3 → R2 → T1
T2 → R3 → T3 → R2 → T2
Threads T1, T2, and T3 are deadlocked. Thread T2 is waiting for the
resource R3, which is held by thread T3. Thread T3 is waiting for either thread T1 or
thread T2 to release resource R2. In addition, thread T1 is waiting for thread T2 to
release resource R1.

We also have a cycle:


T1 → R1 → T3 → R2 → T1
However, there is no deadlock. Observe that thread T4 may release its
instance of resource type R2. That resource can then be allocated to T3, breaking the
cycle.

In summary, if a resource-allocation graph does not have a cycle, then


the system is not in a deadlocked state. If there is a cycle, then the system may
or may not be in a deadlocked state.

3.11 Methods for Handling Deadlocks


Generally speaking, we can deal with the deadlock problem in one of three
ways:
• We can ignore the problem altogether and pretend that deadlocks never occur in the

system.
• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will

never enter a deadlocked state.

• We can allow the system to enter a deadlocked state, detect it, and recover.
The first solution is the one used by most operating systems, including Linux
and Windows. It is then up to kernel and application developers to write programs that
handle deadlocks, typically using approaches outlined in the second solution. Some
systems—such as databases—adopt the third solution, allowing deadlocks to occur and
then managing the recovery.

To ensure that deadlocks never occur, the system can use either a
deadlock prevention or a deadlock-avoidance scheme. Deadlock prevention provides
a set of methods to ensure that at least one of the necessary conditions cannot
hold. These methods prevent deadlocks by constraining how requests for resources can
be made.

Deadlock avoidance requires that the operating system be given


additional information in advance concerning which resources a thread will request and
use during its lifetime. With this additional knowledge, the operating system can decide
for each request whether or not the thread should wait.
To decide whether the current request can be satisfied or must be delayed, the system
must consider the resources currently available, the resources currently allocated to each
thread, and the future requests and releases of each thread.

3.12 Deadlock Prevention


For a deadlock to occur, each of the four necessary conditions must
hold. By ensuring that at least one of these conditions cannot hold, we can prevent the
occurrence of a deadlock.

1. Mutual Exclusion
The mutual-exclusion condition must hold. That is, at least one resource must be non-
sharable. Sharable resources do not require mutually exclusive access and thus
cannot be involved in a deadlock. Read-only files are a good example of a sharable
resource. If several threads attempt to open a read-only file at the same time, they can
be granted simultaneous access to the file. A thread never needs to wait for a sharable
resource. In general, however, we cannot prevent deadlocks by denying the
mutual-exclusion condition, because some resources are intrinsically
non-sharable. For example, a mutex lock cannot be simultaneously shared by several
threads.

2. Hold and Wait


To ensure that the hold-and-wait condition never occurs in the system, we
must guarantee that, whenever a thread requests a resource, it does not hold
any other resources. One protocol that we can use requires each thread to request and
be allocated all its resources before it begins execution. This is, of course, impractical
for most applications due to the dynamic nature of requesting resources.
An alternative protocol allows a thread to request resources only when
it has none. A thread may request some resources and use them. Before it can request
any additional resources, it must release all the resources that it is currently allocated.
Both these protocols have two main disadvantages. First, resource
utilization may be low, since resources may be allocated but unused for a long period.
For example, a thread may be allocated a mutex lock for its entire execution, yet only
require it for a short duration. Second, starvation is possible. A thread that needs several
popular resources may have to wait indefinitely, because at least one of the resources
that it needs is always allocated to some other thread.
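The "request everything before execution" protocol can be approximated in code with an all-or-nothing acquisition. The sketch below is illustrative, not from the text: the `acquire_all` helper is a hypothetical name, and Python threading locks stand in for resources.

```python
import threading

def acquire_all(locks):
    """Try to acquire every lock; if any is unavailable, release those already
    taken and report failure. A thread therefore never holds some resources
    while waiting for others, which breaks the hold-and-wait condition.
    On success the caller holds every lock and must release them all later."""
    held = []
    for lock in locks:
        if lock.acquire(blocking=False):   # non-blocking attempt
            held.append(lock)
        else:
            for h in held:                 # back out: give everything up
                h.release()
            return False
    return True

r1, r2 = threading.Lock(), threading.Lock()
print(acquire_all([r1, r2]))      # True: both free, both now held
r1.release(); r2.release()

busy = threading.Lock()
busy.acquire()                    # simulate another thread holding a resource
print(acquire_all([r1, busy]))    # False: r1 was released again, nothing held
```

Note the disadvantages described above still apply: a thread that repeatedly fails `acquire_all` may starve, and resources claimed early sit idle.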

3. No Preemption
The third necessary condition for deadlocks is that there be no preemption
of resources that have already been allocated. To ensure that this condition does not
hold, we can use the following protocol. If a thread is holding some resources and
requests another resource that cannot be immediately allocated to it (that is, the thread
must wait), then all resources the thread is currently holding are preempted. In other
words, these resources are implicitly released.

The preempted resources are added to the list of resources for which the
thread is waiting. The thread will be restarted only when it can regain its old resources,
as well as the new ones that it is requesting. Alternatively, if a thread requests some
resources, we first check whether they are available. If they are, we allocate them. If
they are not, we check whether they are allocated to some other thread that is waiting
for additional resources.

If so, we preempt the desired resources from the waiting thread and
allocate them to the requesting thread. If the resources are neither available nor held by
a waiting thread, the requesting thread must wait. While it is waiting, some of its resources
may be preempted, but only if another thread requests them. A thread can be restarted
only when it is allocated the new resources it is requesting and recovers any resources
that were preempted while it was waiting.

This protocol is often applied to resources whose state can be easily saved
and restored later, such as CPU registers and database transactions. It cannot generally
be applied to such resources as mutex locks and semaphores, precisely the type of
resources where deadlock occurs most commonly.


4. Circular Wait
The three options presented thus far for deadlock prevention are generally
impractical in most situations. However, the fourth and final condition for deadlocks —
the circular-wait condition — presents an opportunity for a practical solution by
invalidating one of the necessary conditions. One way to ensure that this condition never
holds is to impose a total ordering of all resource types and to require that each thread
requests resources in an increasing order of enumeration.

To illustrate, we let R = {R1, R2, ..., Rm} be the set of resource types. We
assign to each resource type a unique integer number, which allows us to compare two
resources and to determine whether one precedes another in our ordering. Formally, we
define a one-to-one function F: R → N, where N is the set of natural numbers. We can
accomplish this scheme in an application program by developing an ordering among all
synchronization objects in the system.
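One common realization of such an ordering in application code is to sort locks by an assigned rank before acquiring them. The sketch below is illustrative: the `RANK` table (our stand-in for the function F) and the helper names are assumptions, not part of any standard API.

```python
import threading

first_mutex  = threading.Lock()   # F(first_mutex)  = 1
second_mutex = threading.Lock()   # F(second_mutex) = 5
RANK = {id(first_mutex): 1, id(second_mutex): 5}

def acquire_in_order(*locks):
    """Acquire locks strictly in increasing rank order; since every thread
    obeys the same total order, no circular wait can ever form."""
    for lock in sorted(locks, key=lambda l: RANK[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def worker(results, label):
    # Even though the locks are listed "backwards", they are taken in order.
    acquire_in_order(second_mutex, first_mutex)
    results.append(label)
    release_all(first_mutex, second_mutex)

results = []
t1 = threading.Thread(target=worker, args=(results, "T1"))
t2 = threading.Thread(target=worker, args=(results, "T2"))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results))   # both threads completed without deadlock
```

Without the ordering, one thread taking first_mutex then second_mutex while the other takes them in the opposite order could deadlock; the total order rules that interleaving out.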
3.13 Deadlock Avoidance
Deadlock-prevention algorithms prevent deadlocks by limiting how
requests can be made. The limits ensure that at least one of the necessary
conditions for deadlock cannot occur. Possible side effects of preventing deadlocks
by this method, however, are low device utilization and reduced system
throughput.
An alternative method for avoiding deadlocks is to require additional
information about how resources are to be requested. For example, in a system with
resources R1 and R2, the system might need to know that thread T will request first R1
and then R2 before releasing both resources, whereas thread Q will request R2 and then
R1. With this knowledge of the complete sequence of requests and releases for each
thread, the system can decide for each request whether or not the thread should wait in
order to avoid a possible future deadlock.

Each request requires that in making this decision the system consider the
resources currently available, the resources currently allocated to each thread, and the
future requests and releases of each thread. The various algorithms that use this
approach differ in the amount and type of information required. The simplest and most
useful model requires that each thread declare the maximum number of resources of each
type that it may need. Given this a priori information, it is possible to construct an
algorithm that ensures that the system will never enter a deadlocked state.

A deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that a circular-wait condition can never exist. The resource-allocation
state is defined by the number of available and allocated resources and the maximum
demands of the threads.
3.13.1 Safe State
A state is safe if the system can allocate resources to each thread (up
to its maximum) in some order and still avoid a deadlock.
A system is in a safe state only if there exists a safe sequence. A
sequence of threads <T1, T2, ..., Tn> is a safe sequence for the current allocation state
if, for each Ti, the resource requests that Ti can still make can be satisfied by the currently
available resources plus the resources held by all Tj , with j < i. In this situation, if the
resources that Ti needs are not immediately available, then Ti can wait until all Tj have
finished. When they have finished, Ti can obtain all of its needed resources, complete its
designated task, return its allocated resources, and terminate.

When Ti terminates, Ti+1 can obtain its needed resources, and so on. If no
such sequence exists, then the system state is said to be unsafe. A safe state is
not a deadlocked state. Conversely, a deadlocked state is an unsafe state. Not

all unsafe states are deadlocks, however. An unsafe state may lead to a
deadlock. As long as the state is safe, the operating system can avoid unsafe (and
deadlocked) states. In an unsafe state, the operating system cannot prevent threads from
requesting resources in such a way that a deadlock occurs. The behavior of the threads
controls unsafe states.

To illustrate, consider a system with twelve resources and three threads:
T0, T1, and T2. Thread T0 requires ten resources, thread T1 may need as many as four,
and thread T2 may need up to nine resources. Suppose that, at time t0, thread T0 is
holding five resources, thread T1 is holding two resources, and thread T2 is holding two
resources. (Thus, there are three free resources.)

At time t0, the system is in a safe state. The sequence <T1, T0, T2>
satisfies the safety condition. Thread T1 can immediately be allocated all its
resources and then return them (the system will then have five available resources);
then thread T0 can get all its resources and return them (the system will then have ten
available resources); and finally thread T2 can get all its resources and return them (the
system will then have all twelve resources available).
A system can go from a safe state to an unsafe state. Suppose that, at time
t1, thread T2 requests and is allocated one more resource. The system is no longer in a
safe state. At this point, only thread T1 can be allocated all its resources. When it returns
them, the system will have only four available resources.

Since thread T0 is allocated five resources but has a maximum of ten, it may
request five more resources. If it does so, it will have to wait, because they are
unavailable. Similarly, thread T2 may request six additional resources and have to wait,
resulting in a deadlock. Our mistake was in granting the request from thread T2 for one
more resource. If we had made T2 wait until either of the other threads had finished and
released its resources, then we could have avoided the deadlock.
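The twelve-resource example can be checked mechanically. The sketch below is illustrative: `safe_sequence` is a hypothetical helper for a single resource type, reproducing the reasoning above.

```python
def safe_sequence(available, alloc, maxima):
    """Return a safe sequence of thread indices, or None if the state is
    unsafe. Single resource type: available is an int, alloc[i] and
    maxima[i] are the units held by / maximally needed by thread Ti."""
    n = len(alloc)
    work, finish, order = available, [False] * n, []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            # Ti can run to completion if its remaining need fits in work
            if not finish[i] and maxima[i] - alloc[i] <= work:
                work += alloc[i]     # Ti terminates, returns its resources
                finish[i] = True
                order.append(i)
                progressed = True
    return order if all(finish) else None

alloc, maxima = [5, 2, 2], [10, 4, 9]      # holdings at time t0
print(safe_sequence(12 - sum(alloc), alloc, maxima))   # [1, 0, 2]: safe

# Granting T2 one more resource at time t1 makes the state unsafe:
print(safe_sequence(12 - 10, [5, 2, 3], maxima))       # None
```

The first call finds exactly the sequence <T1, T0, T2> described above; the second shows that after T2's extra allocation no complete sequence exists.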
Given the concept of a safe state, we can define avoidance algorithms that ensure that
the system will never deadlock. The idea is simply to ensure that the system will always
remain in a safe state. Initially, the system is in a safe state.
Whenever a thread requests a resource that is currently available, the system must decide
whether the resource can be allocated immediately or the thread must wait. The request
is granted only if the allocation leaves the system in a safe state.
3.13.2 Resource-Allocation-Graph Algorithm
If we have a resource-allocation system with only one instance of each
resource type, we can use a variant of the resource-allocation graph defined earlier for
deadlock avoidance.

In addition to the request and assignment edges already described, we


introduce a new type of edge, called a claim edge. A claim edge Ti → Rj indicates that
thread Ti may request resource Rj at some time in the future. This edge resembles a
request edge in direction but is represented in the graph by a dashed line.

When thread Ti requests resource Rj, the claim edge Ti → Rj is converted


to a request edge. Similarly, when a resource Rj is released by Ti, the assignment edge
Rj → Ti is reconverted to a claim edge Ti → Rj. Note that the resources must be claimed
a priori in the system. That is, before thread Ti starts executing, all its claim edges must
already appear in the resource-allocation graph. We can relax this condition by allowing
a claim edge Ti → Rj to be added to the graph only if all the edges associated with thread
Ti are claim edges.

Now suppose that thread Ti requests resource Rj. The request can be
granted only if converting the request edge Ti → Rj to an assignment edge Rj → Ti does
not result in the formation of a cycle in the resource-allocation graph. We check for safety
by using a cycle-detection algorithm. An algorithm for detecting a cycle in this graph
requires on the order of n² operations, where n is the number of threads in the system.
If no cycle exists, then the allocation of the resource will leave the system
in a safe state. If a cycle is found, then the allocation will put the system in an unsafe
state. In that case, thread Ti will have to wait for its requests to be satisfied.
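A minimal sketch of this scheme: tentatively convert the request edge to an assignment edge, run a cycle check, and undo the conversion if a cycle appears. The graph encoding and helper names below are assumptions for illustration.

```python
def has_cycle(edges, nodes):
    """DFS cycle detection in a directed graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in nodes}
    def dfs(v):
        color[v] = GRAY
        for u in edges.get(v, []):
            if color[u] == GRAY or (color[u] == WHITE and dfs(u)):
                return True
        color[v] = BLACK
        return False
    return any(color[v] == WHITE and dfs(v) for v in nodes)

def try_grant(edges, nodes, thread, resource):
    """Tentatively convert the request edge thread -> resource into the
    assignment edge resource -> thread; grant only if no cycle results.
    Assumes the request edge is currently present in the graph."""
    edges[thread].remove(resource)
    edges.setdefault(resource, []).append(thread)
    if has_cycle(edges, nodes):
        edges[resource].remove(thread)     # unsafe: undo, thread must wait
        edges[thread].append(resource)
        return False
    return True

# T1 holds R1 and requests R2; T2 requests R2 and has a claim on R1
# (the claim is modelled here like a request edge for the cycle check).
nodes = ["T1", "T2", "R1", "R2"]
edges = {"R1": ["T1"], "T1": ["R2"], "T2": ["R2", "R1"]}
print(try_grant(edges, nodes, "T2", "R2"))   # False: would close a cycle
print(try_grant(edges, nodes, "T1", "R2"))   # True: no cycle results
```

Granting R2 to T2 would create the cycle T1 → R2 → T2 → R1 → T1, so T2 must wait; granting R2 to T1 instead leaves the graph acyclic.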
3.13. 3 Banker’s Algorithm
The resource-allocation-graph algorithm is not applicable to a resource-
allocation system with multiple instances of each resource type. The deadlock-avoidance
algorithm described next is applicable to such a system but is less efficient than the
resource-allocation-graph scheme. This algorithm is commonly known as the banker's algorithm. The
name was chosen because the algorithm could be used in a banking system to ensure
that the bank never allocated its available cash in such a way that it could no longer satisfy
the needs of all its customers.
When a new thread enters the system, it must declare the maximum number of
instances of each resource type that it may need. This number may not exceed
the total number of resources in the system. When a user requests a set of
resources, the system must determine whether the allocation of these resources
will leave the system in a safe state. If it will, the resources are allocated; otherwise,
the thread must wait until some other thread releases enough resources. Several
data structures must be maintained to implement the banker’s algorithm.
These data structures encode the state of the resource- allocation
system, where n is the number of threads in the system and m is the number of
resource types:

 Available. A vector of length m indicates the number of available resources of
each type. If Available[j] equals k, then k instances of resource type Rj are available.


 Max. An n × m matrix defines the maximum demand of each thread. If Max[i][j]
equals k, then thread Ti may request at most k instances of resource type Rj.

 Allocation. An n × m matrix defines the number of resources of each type
currently allocated to each thread. If Allocation[i][j] equals k, then thread Ti is
currently allocated k instances of resource type Rj.


 Need. An n × m matrix indicates the remaining resource need of each thread.
If Need[i][j] equals k, then thread Ti may need k more instances of resource type Rj
to complete its task. Need[i][j] = Max[i][j] − Allocation[i][j].


These data structures vary over time in both size and value. To simplify the
presentation of the banker's algorithm, we next establish some notation. Let X and Y be
vectors of length n. We say that X ≤ Y if and only if X[i] ≤ Y[i] for all i = 1, 2, ..., n. For
example, if X = (1,7,3,2) and Y = (0,3,2,1), then Y ≤ X. In addition, Y < X if Y ≤ X and Y
≠ X. We can treat each row in the matrices Allocation and Need as vectors and refer to
them as Allocationi and Needi.

The vector Allocationi specifies the resources currently allocated to thread Ti;
the vector Needi specifies the additional resources that thread Ti may still request to
complete its task.


SAFETY ALGORITHM
This algorithm is used for finding out whether or not a system is in a safe state.
It can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for i = 0, 1, ..., n − 1.
2. Find an index i such that both Finish[i] == false and Needi ≤ Work. If no such i
exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true. Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
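This safety check translates directly into code. The sketch below is illustrative (the function name is ours); the snapshot values mirror the classic five-thread, three-resource textbook example.

```python
def is_safe(available, allocation, need):
    """Safety algorithm: is there an order in which every thread can finish?
    available: length-m vector; allocation and need: n x m matrices."""
    n, m = len(allocation), len(available)
    work = list(available)        # Work = Available
    finish = [False] * n
    while True:
        for i in range(n):        # find Ti with Finish[i] false, Needi <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):        # Ti finishes, returns its resources
                    work[j] += allocation[i][j]
                finish[i] = True
                break
        else:
            return all(finish)    # safe iff every thread could finish

# Classic five-thread, three-resource snapshot: the state is safe.
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maxm  = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need  = [[maxm[i][j] - alloc[i][j] for j in range(3)] for i in range(5)]
print(is_safe([3,3,2], alloc, need))   # True
```

For this snapshot one safe sequence is <T1, T3, T4, T2, T0>, which the loop discovers by repeatedly picking any thread whose remaining need fits in Work.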

RESOURCE REQUEST ALGORITHM


This algorithm is used for determining whether requests can be safely
granted. Let Requesti be the request vector for thread Ti. If Requesti [j] == k, then
thread Ti wants k instances of resource type Rj.

When a request for resources is made by thread Ti, the following actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the
thread has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Ti must wait, since the resources
are not available.
3. Have the system pretend to have allocated the requested resources to thread Ti by
modifying the state as follows:
Available = Available − Requesti
Allocationi = Allocationi + Requesti
Needi = Needi − Requesti
If the resulting resource-allocation state is safe, the transaction is completed and
thread Ti is allocated its resources. If the new state is unsafe, then Ti must wait for
Requesti, and the old resource-allocation state is restored.
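These actions can be sketched in code. The following is illustrative (the function names are ours); the compact safety helper implements the same check as the safety algorithm above, and the snapshot is the classic textbook example.

```python
def bankers_safe(available, allocation, need):
    """Compact safety check (same algorithm as the safety algorithm above)."""
    work, finish = list(available), [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = changed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    """Resource-request handling for thread Ti. Returns True if granted
    (state updated in place), False if Ti must wait (state unchanged)."""
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):     # exceeded max claim
        raise ValueError("thread exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):   # not available: wait
        return False
    for j in range(m):                                     # pretend to allocate
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if bankers_safe(available, allocation, need):
        return True
    for j in range(m):                                     # unsafe: roll back
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False

avail = [3, 3, 2]
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maxm  = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need  = [[maxm[i][j] - alloc[i][j] for j in range(3)] for i in range(5)]
print(request_resources(1, [1,0,2], avail, alloc, need))  # True: state stays safe
print(request_resources(0, [0,2,0], avail, alloc, need))  # False: would be unsafe
```

Note the second request is denied even though the resources are physically available, because granting it would leave the system in an unsafe state.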
3.14 Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock
avoidance algorithm, then a deadlock situation may occur. In this environment, the system
may provide:
•An algorithm that examines the state of the system to determine whethera deadlock
has occurred

• An algorithm to recover from the deadlock

3.14.1 Single Instance of Each Resource Type


If all resources have only a single instance, then we can define a deadlock
detection algorithm that uses a variant of the resource-allocation graph, called
a wait-for graph. We obtain this graph from the resource-allocation graph by
removing the resource nodes and collapsing the appropriate edges.
More precisely, an edge from Ti to Tj in a wait-for graph implies that thread
Ti is waiting for thread Tj to release a resource that Ti needs. An edge Ti →
Tj exists in a wait-for graph if and only if the corresponding resource-allocation
graph contains two edges Ti → Rq and Rq → Tj for some resource Rq. The figure below
represents a resource-allocation graph and the corresponding wait-for graph.
As before, a deadlock exists in the system if and only if the wait-for
graph contains a cycle. To detect deadlocks, the system needs to maintain the wait-for
graph and periodically invoke an algorithm that searches for a cycle in the graph. An
algorithm to detect a cycle in a graph requires O(n²) operations, where n is the number
of vertices in the graph.
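Both steps (collapsing the resource nodes and searching for a cycle) are short in code. A sketch, with hypothetical encodings of who holds and who waits for each resource:

```python
def wait_for_graph(holder, waiting_for):
    """Collapse resource nodes: an edge Ti -> Tj exists iff Ti waits for a
    resource Rq (Ti -> Rq) that Tj currently holds (Rq -> Tj)."""
    edges = {}
    for t, resources in waiting_for.items():
        for r in resources:
            if r in holder:
                edges.setdefault(t, set()).add(holder[r])
    return edges

def deadlocked(edges):
    """A cycle in the wait-for graph means the threads on it are deadlocked."""
    state = {}
    def dfs(v):
        state[v] = "gray"
        for u in edges.get(v, ()):
            if state.get(u) == "gray" or (u not in state and dfs(u)):
                return True
        state[v] = "black"
        return False
    return any(v not in state and dfs(v) for v in list(edges))

holder      = {"R1": "T1", "R2": "T2"}       # who holds each resource
waiting_for = {"T1": ["R2"], "T2": ["R1"]}   # who waits for which resource
print(deadlocked(wait_for_graph(holder, waiting_for)))   # True: T1 <-> T2
```

Here T1 waits for R2 (held by T2) while T2 waits for R1 (held by T1), so the collapsed graph contains the cycle T1 → T2 → T1.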
3.14.2 Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation
system with multiple instances of each resource type. We turn now to a deadlock-
detection algorithm that is applicable to such a system. The algorithm employs several
time-varying data structures that are similar to those used in the banker's algorithm.

 Available. A vector of length m indicates the number of available resources
of each type.
 Allocation. An n × m matrix defines the number of resources of each type
currently allocated to each thread.
 Request. An n × m matrix indicates the current request of each thread. If
Request[i][j] equals k, then thread Ti is requesting k more instances of
resource type Rj.

The ≤ relation between two vectors was defined earlier. To simplify notation, we again
treat the rows in the matrices Allocation and Request as vectors;
we refer to them as Allocationi and Requesti. The detection algorithm described here
simply investigates every possible allocation sequence for the threads that remain to be
completed.
You may wonder why we reclaim the resources of thread Ti (in step 3) as soon
as we determine that Requesti ≤ Work (in step 2b). We know that Ti is currently not involved
in a deadlock (since Requesti ≤ Work). Thus, we take an optimistic attitude and assume
that Ti will require no more resources to complete its task; it will thus soon return all currently
allocated resources to the system. If our assumption is incorrect, a deadlock may occur later.

That deadlock will be detected the next time the deadlock-detection algorithm is invoked.
3.14.3 Detection-Algorithm Usage
When should we invoke the detection algorithm? The answer depends on
two factors:
1. How often is a deadlock likely to occur?
2. How many threads will be affected by deadlock when it happens?
If deadlocks occur frequently, then the detection algorithm should be
invoked frequently. Resources allocated to deadlocked threads
will be idle until the deadlock can be broken. In addition, the number of threads involved
in the deadlock cycle may grow.

Deadlocks occur only when some thread makes a request that cannot be
granted immediately. This request may be the final request that completes a chain of
waiting threads. In the extreme, then, we can invoke the deadlock detection algorithm
every time a request for allocation cannot be granted immediately. In this case, we can
identify not only the deadlocked set of threads but also the specific thread that
“caused” the deadlock. (In reality, each of the deadlocked threads is a link in the cycle
in the resource graph, so all of them, jointly, caused the deadlock.) If there are many
different resource types, one request may create many cycles in the resource graph, each
cycle completed by the most recent request and “caused” by the one identifiable thread.
3.15 Recovery from Deadlock
When a detection algorithm determines that a deadlock exists, several
alternatives are available. One possibility is to inform the operator that a deadlock has
occurred and to let the operator deal with the deadlock manually. Another possibility is
to let the system recover from the deadlock automatically. There are two
options for breaking a deadlock. One is simply to abort one or more threads to
break the circular wait. The other is to preempt some resources from one or more of the
deadlocked threads.

3.15.1 Process and Thread Termination


To eliminate deadlocks by aborting a process or thread, we use one of two
methods. In both methods, the system reclaims all resources
allocated to the terminated processes.
• Abort all deadlocked processes. This method clearly will break the deadlock cycle,
but at great expense. The deadlocked processes may have computed for a long
time, and the results of these partial computations must be discarded and probably
will have to be recomputed later.

• Abort one process at a time until the deadlock cycle is eliminated. This
method incurs considerable overhead, since after each process is aborted, a deadlock-
detection algorithm must be invoked to determine whether any processes are still
deadlocked.
Aborting a process may not be easy. If the process was in the midst of
updating a file, terminating it may leave that file in an incorrect state. Similarly,
if the process was in the midst of updating shared data while holding a mutex lock, the
system must restore the status of the lock as being available, although no guarantees
can be made regarding the integrity of the shared data.

If the partial termination method is used, then we must determine
which deadlocked process (or processes) should be terminated. This
determination is a policy decision, similar to CPU-scheduling decisions. The question is
basically an economic one; we should abort those processes whose termination will incur
the minimum cost. Unfortunately, the term minimum cost is not a precise one. Many
factors may affect which process is chosen, including:

1. What the priority of the process is


2. How long the process has computed and how much longer the process will
compute before completing its designated task
3.How many and what types of resources the process has used (for
example, whether the resources are simple to preempt)

4. How many more resources the process needs in order to complete


5. How many processes will need to be terminated
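As a small illustration, victim selection can be framed as minimizing a cost function over factors like those listed above. The weights and field names below are entirely hypothetical; a real policy would tune them.

```python
def choose_victim(candidates):
    """Pick the deadlocked process whose termination incurs the minimum cost.
    The weighting here is a made-up example, not a standard formula."""
    def cost(p):
        return (p["priority"] * 10        # losing high-priority work costs more
                + p["cpu_time"]           # computation that must be redone
                + p["resources_held"] * 2
                + p["rollbacks"] * 5)     # make repeat victims more expensive
    return min(candidates, key=cost)["name"]

candidates = [
    {"name": "P1", "priority": 3, "cpu_time": 40, "resources_held": 2, "rollbacks": 0},
    {"name": "P2", "priority": 1, "cpu_time": 5,  "resources_held": 1, "rollbacks": 0},
]
print(choose_victim(candidates))   # P2: the cheapest process to abort
```

Charging extra for past rollbacks is one way to address the starvation issue discussed under resource preemption below.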
3.15.2 Resource Preemption
To eliminate deadlocks using resource preemption, we successively
preempt some resources from processes and give these resources to other
processes until the deadlock cycle is broken. If preemption is required to deal with
deadlocks, then three issues need to be addressed:
1. Selecting a victim. Which resources and which processes are to be preempted? As in
process termination, we must determine the order of preemption to minimize cost. Cost
factors may include such parameters as the number of resources a deadlocked process is
holding and the amount of time the process has thus far consumed.
2. Rollback. If we preempt a resource from a process, what should be done with that
process? Clearly, it cannot continue with its normal execution; it is missing some needed
resource. We must roll back the process to some safe state and restart it from that
state. Since, in general, it is difficult to determine what a safe state is, the simplest
solution is a total rollback: abort the process and then restart it. Although it is more
effective to roll back the process only as far as necessary to break the deadlock, this
method requires the system to keep more information about the state of all running
processes.
3. Starvation. How do we ensure that starvation will not occur? That is, how can we
guarantee that resources will not always be preempted from the same process? In a system
where victim selection is based primarily on cost factors, it may happen that the same
process is always picked as a victim. As a result, this process never completes its
designated task, a starvation situation that any practical system must address. Clearly,
we must ensure that a process can be picked as a victim only a (small) finite number of
times. The most common solution is to include the number of rollbacks in the cost factor.


Lecture Slides

https://drive.google.com/drive/folders/1JfKycT1w27UIf8QudCcuI4Wdfgzd3pYV?usp=sharing
Lecture Videos

https://drive.google.com/drive/folders/1LW0_bSIJD2AzyHeKvpgK9PpIY87fFprN?usp=sharing
Assignment

S.No  Question  Category

1. a. Race conditions are possible in many computer systems. Consider a banking
system that maintains an account balance with two functions: deposit(amount) and
withdraw(amount). These two functions are passed the amount that is to be deposited
to or withdrawn from the bank account balance. Assume that a husband and wife share
a bank account. Concurrently, the husband calls the withdraw() function, and the wife
calls deposit(). Describe how a race condition is possible and what might be done to
prevent the race condition from occurring.
b. Consider the following snapshot of a system.
Answer the following questions using the banker's algorithm:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from thread T1 arrives for (0,4,2,0), can the request be granted
immediately?
(Category 1)

2. a. Illustrate how a binary semaphore can be used to implement mutual exclusion
among n processes.
b. Describe how deadlock is possible with the dining-philosophers problem.
(Category 2)
3. a. Show that, if the wait() and signal() semaphore operations are not executed
atomically, then mutual exclusion may be violated.
b. Explain deadlock detection (banker's algorithm) with an example.
(Category 3)

4. a. Explain the solution for the dining-philosophers problem.
b. Write about deadlock-detection methods with examples.
(Category 4)

5. a. What is the critical-section problem? Explain with an example.
b. Write about deadlock-prevention methods.
(Category 5)
Part A Q & A
1. What is the critical-section problem? (K1)(CO3)
The important feature of the system is that, when one process is executing in its
critical section, no other process is allowed to execute in its critical section. That is,
no two processes are executing in their critical sections at the same time.

2. What are the requirements of the critical-section problem? (K1)(CO3)


Mutual exclusion.

Progress

Bounded waiting

3. Define bounded vs unbounded buffer.(K1)(CO3)


The unbounded buffer places no practical limit on the size of the buffer. The
consumer may have to wait for new items, but the producer can always produce
new items. The bounded buffer assumes a fixed buffer size. In this case, the
consumer must wait if the buffer is empty, and the producer must wait if the
buffer is full.

4. What is a semaphore? (K1)(CO3)


A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal().The wait() operation
was originally termed P and signal() was originally called V.

5. What is a monitor? (K1)(CO3)


Monitor is the construct which provides the solution to the critical section problem
by ensuring that only one process at a time is active within the monitor.

6. Define DeadLocks.(K1)(CO3)
A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able
to change state, because the resources it has requested are held by other waiting
processes. This situation is called a deadlock.
7. Define Starvation in deadlock? (K1)(CO3)
A problem related to deadlock is indefinite blocking or starvation, a situation where
processes wait indefinitely within a semaphore. Indefinite blocking may occur if we
add and remove processes from the list associated with a semaphore in LIFO order.

8. Name some classic problems of synchronization. (K1)(CO3)

• The Bounded-Buffer Problem

• The Reader–Writer Problem

• The Dining-Philosophers Problem


9. What is the sequence of operations by which a process utilizes a
resource? (K1)(CO3)

Under the normal mode of operation, a process may utilize a resource in only the
following sequence:

• Request: If the request cannot be granted immediately, then the requesting
process must wait until it can acquire the resource.

• Use: The process can operate on the resource.

• Release: The process releases the resource.

10. Give the conditions necessary for a deadlock situation to arise. (K1)(CO3)
A deadlock situation can arise if the following four conditions hold simultaneously in a
system:

• Mutual Exclusion

• Hold and Wait

• No Preemption

• Circular Wait

11. Define "safe state". (K1)(CO3)

A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock.
12. Define deadlock-avoidance algorithm. (K1)(CO3)
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that a circular-wait condition can never exist. The resource-allocation state is
defined by the number of available and allocated resources and the maximum demands of
the processes.

13. Define race condition. (K1)(CO3)

When several processes access and manipulate the same data concurrently, and the
outcome of the execution depends on the particular order in which the accesses take
place, this is called a race condition. To avoid race conditions, only one process at a
time should manipulate the shared variable.

14. Define busy waiting and spinlock. (K1)(CO3)

When a process is in its critical section, any other process that tries to enter its critical
section must loop continuously in the entry code. This is called busy waiting, and this
type of semaphore is also called a spinlock, because the process "spins" while waiting
for the lock.

15. Define entry section and exit section. (K1)(CO3)


The critical section problem is to design a protocol that the processes can use to cooperate.
Each process must request permission to enter its critical section. The section of the code
implementing this request is the entry section. The critical section is followed by an exit
section. The remaining code is the remainder section.

16. What is a resource-allocation graph? (K1)(CO3)


Deadlocks can be described more precisely in terms of a directed graph called a system
resource allocation graph. This graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two different types of nodes; P the set consisting of
all active processes in the system and R the set consisting of all resource types in the
system.
PART-B
1. Consider the following snapshot of a system (K3) (CO3)
     Allocation   Max     Available
     A B C        A B C   A B C
P0   0 1 0        7 5 3   0 2 1
P1   2 0 0        3 2 2
P2   3 0 2        9 0 2
P3   2 1 1        2 2 2
P4   0 0 2        4 3 3

a) Is the system in a safe state? (6)
b) What is the content of the matrix Need? (2)
c) What is the total number of resources available for each resource type? (2)
d) If a request from process P1 arrives for (1,2,1), can the request be granted
immediately? (4)

2. Consider the following snapshot of a system (K3) (CO3)

     Allocation   Max        Need       Available
     A B C D      A B C D    A B C D    A B C D
P0   0 0 1 2      0 0 1 2               1 5 2 0
P1   1 0 0 0      1 7 5 0
P2   1 3 5 4      2 3 5 6
P3   0 6 3 2      0 6 5 2
P4   0 0 1 4      0 6 5 6

Answer the following question using banker’s algorithm


a) What is the content of the matrix Need? (2)
b) Is the system in a safe state? (6)
c) If a request from process P1 arrives for (0,4,2,0), can the request be granted
immediately? (6)

3. Illustrate with examples the Dining-Philosophers problem. (13) (K2)(CO2)


4. Illustrate with examples the Reader–Writer problem. (13) (K2)(CO2)
5. Illustrate with examples the Producer–Consumer problem. (13) (K2)(CO2)
6. Consider the following snapshot of a system (K3,CO3)

(i) What is the content of matrix Need? (3)


(ii) Is the system in a safe state? (6)
(iii) If a request from process P1 arrives for (1,2,1), can the request be granted
immediately? (6)

SUPPORTIVE ONLINE COURSES

S No  Course provider  Course title  Link

1  Udemy  Operating Systems from Scratch - Part 1
   https://www.udemy.com/course/operating-systems-from-scratch-part1/

2  Udacity  Introduction to Operating Systems
   https://www.udacity.com/course/introduction-to-operating-systems

3  Coursera  Operating Systems and You: Becoming a Power User
   https://www.coursera.org/learn/os-power-user

4  edX  Computer Hardware and Operating Systems
   https://www.edx.org/course/computer-hardware-and-operating-systems
REAL TIME APPLICATIONS IN DAY TO DAY LIFE AND TO INDUSTRY

1. Apply the functionality of an operating system in an air-traffic-control system.
(K4, CO3)
Contents beyond the Syllabus

Intertask Communication in RTOS


Reference Video - Content Beyond Syllabus

https://www.freertos.org/Inter-Task-Communication.html

ASSESSMENT SCHEDULE

Assessment        Proposed Date
Cycle Test - II   18.09.2024
MCQ Test 1 & 2    08.10.2024
MCQ Test 3 & 4    17.09.2024
SIAT              07.10.2024
PRESCRIBED TEXT BOOKS AND REFERENCE BOOKS

TEXT BOOKS
1. Abraham Silberschatz, Peter Baer Galvin and Greg Gagne, "Operating
System Concepts", 10th Edition, John Wiley and Sons Inc., 2018. [EBOOK]
2. Andrew S Tanenbaum, "Modern Operating Systems", Pearson, 5th
Edition, 2022 New Delhi.
REFERENCE BOOKS
1. William Stallings, "Operating Systems: Internals and Design Principles",
7th Edition, Prentice Hall, 2018.
2. Achyut S.Godbole, Atul Kahate, “Operating Systems”, McGraw Hill
Education, 2016.
MINI PROJECT SUGGESTIONS

Category Mini Project Title

1 Producer-Consumer Problem:
Implement a simulation of the classic producer-consumer problem using
multiple threads or processes. Ensure proper synchronization to avoid
issues like race conditions and deadlocks.

Resource Allocation Graph Simulator:


Develop a program that simulates the resource allocation graph and its
evolution over time in a system with multiple processes and resources.
Introduce scenarios that can lead to deadlocks and implement detection
and recovery mechanisms.

2 Dining Philosophers Problem:


Model and solve the dining philosophers problem. Create a program where
a group of philosophers is sitting at a dining table, and they must share
forks. Implement proper synchronization techniques to avoid conflicts and
ensure fair access to resources.
Deadlock Prevention in a File System:
Create a simplified file system where multiple processes can request access
to files. Implement a deadlock prevention strategy, such as the Banker's
algorithm, to ensure that the system remains deadlock-free. Demonstrate
how your system handles requests and avoids deadlock situations.
3 Readers-Writers Problem:
Develop a solution for the readers-writers problem. Create a system with
multiple reader and writer threads where readers can access a shared
resource concurrently, but writers must have exclusive access. Ensure
proper synchronization to avoid data corruption and maintain consistency.

Traffic Intersection Simulation:


Design a simulation of a traffic intersection where multiple roads intersect.
Model the traffic flow with vehicles representing processes. Introduce
deadlock scenarios, such as conflicting resource requests at the
intersection, and implement a deadlock prevention or avoidance strategy.
4  Create a project on the banker's algorithm and its real-time application.

Implement synchronization mechanisms to control the state transitions of
the traffic lights (Green, Yellow, Red).
5  Generate a report on the dining-philosophers problem.

Model an intersection with multiple traffic lights (North-South, East-West).


Thank you

Disclaimer :

This document is confidential and intended solely for the educational purpose of RMK Group of
Educational Institutions. If you have received this document through email in error, please notify the
system manager. This document contains proprietary information and is intended only to the respective
group / learning community as intended. If you are not the addressee you should not disseminate,
distribute or copy through e-mail. Please notify the sender immediately by e-mail if you have received
this document by mistake and delete this document from your system. If you are not the intended
recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the
contentsof this information is strictly prohibited.
