OS Unit 3
Objective:
• To discuss the communication mechanisms among processes.
• To explore different approaches to achieve interprocess communication
Interprocess Communication
• Interprocess Communication (IPC) refers to the mechanisms provided
by the operating system that allow processes to communicate with
each other and synchronize their actions.
• IPC is essential for the development of complex software systems
where multiple processes need to work together and share data.
• Here are some common IPC mechanisms:
i) Pipes
ii) Message Passing
iii) Shared Memory
iv) Socket
v) Semaphores
vi) Remote Procedure Call (RPC)
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other processes, including
sharing data
• Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
• Cooperating processes need interprocess communication (IPC)
• Two models of IPC
• Shared memory
• Message passing
Communications Models
Objective:
• To understand the concepts of Interprocess Communication through Message passing
• To understand the concepts of Interprocess Communication through Shared memory
• To understand the concepts of Interprocess Communication through Pipes
Interprocess Communication – Ordinary Pipes
• The producer writes to one end of the pipe (the write end) and the consumer reads from the other end (the read end of the pipe).
• File descriptor fd[0] is the read end of the pipe and fd[1] is the write end.
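A minimal POSIX sketch of an ordinary pipe between a parent (acting as producer) and a child (acting as consumer). The roles, the message text, and the variable names are illustrative assumptions, not part of the slide.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                /* child: consumer */
        close(fd[1]);                 /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("consumer read: %s\n", buf); }
        close(fd[0]);
    } else {                          /* parent: producer */
        close(fd[0]);                 /* close the unused read end */
        const char *msg = "hello via pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}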
Named Pipes
• Named Pipes are more powerful than ordinary pipes
• Communication is bidirectional
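A small writer-side sketch for a named pipe on a POSIX system; the FIFO path /tmp/demo_fifo and the message are illustrative assumptions. A separate reader process (for example, cat /tmp/demo_fifo) would open the same path for reading.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    const char *fifo_path = "/tmp/demo_fifo";   /* assumed path */

    mkfifo(fifo_path, 0666);              /* create the FIFO if it does not already exist */

    int fd = open(fifo_path, O_WRONLY);   /* blocks until a reader opens the FIFO */
    if (fd == -1) { perror("open"); return 1; }

    const char *msg = "hello via named pipe\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}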
Objectives:
Introduction
• Concurrent access to shared data may result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes
• Example: The producer-consumer problem.
• Suppose that we want to provide a solution to the producer-consumer problem
that fills all the buffer slots. We can do so by having an integer count that keeps track of
the number of full buffer slots. Initially, count is set to 0. It is incremented by the
producer after it produces a new item and places it in the buffer, and is decremented by
the consumer after it consumes an item from the buffer.
Race Condition:
• A race condition occurs when multiple processes access and manipulate shared data concurrently and the final result depends on the order of execution, leading to unexpected results.
• Example: Incrementing and decrementing a shared counter.
Race Condition
• count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
• count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
• Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
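The lost-update effect in this interleaving can be observed with a small pthread program; the iteration count and names below are illustrative, and the missing synchronization is deliberate.

#include <stdio.h>
#include <pthread.h>

#define ITERATIONS 1000000

static volatile long count = 0;       /* shared counter, intentionally unprotected */

static void *producer(void *arg) {
    for (long i = 0; i < ITERATIONS; i++)
        count++;                      /* load, add, store: not atomic */
    return NULL;
}

static void *consumer(void *arg) {
    for (long i = 0; i < ITERATIONS; i++)
        count--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("final count = %ld (expected 0)\n", count);   /* usually not 0 */
    return 0;
}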
Producer-Consumer problem
• The Producer-Consumer problem is a classical multi-process
synchronization problem; that is, we are trying to achieve
synchronization between more than one process.
Producer-Consumer Problem Cont.
• The producer should produce data only when the buffer is not full. If the buffer is full, the producer is not allowed to store any more data into the memory buffer.
• The consumer can consume data only when the memory buffer is not empty. If the buffer is empty, the consumer is not allowed to take any data from the memory buffer.
• The producer and the consumer should not access the memory buffer at the same time.
Producer
count = 0;                      // shared variable
buffer[BUFFER_SIZE];            // shared buffer

while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;                       // do nothing: buffer full
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Note: Initially in = 0
Consumer
while (true) {
    while (count == 0)
        ;                       // do nothing: buffer empty
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Critical Section
• The Critical Section refers to a section of code in a concurrent program where shared
resources (like variables, data structures, or files) are accessed.
• Only one process or thread should be allowed to execute in the critical section at a time to
prevent race conditions, where multiple processes or threads attempt to modify the shared
resource simultaneously, leading to unpredictable results.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter
the critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
L4 Solution to Critical-Section Problem: Hardware and Software Solutions
Objectives:
• To understand the classical problems of Critical Section
• To device and develop the solution related to classical problems of Critical Section
Peterson’s Solution (Software Solution 1)
• Classical software based two process solution
• Assume that the LOAD and STORE instructions are atomic; that is,
cannot be interrupted.
• The two processes share two variables:
• int turn;
• Boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical
section.
• The flag array is used to indicate if a process is ready to enter the
critical section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi & Pj
Initially, both flags are false.

Process Pi:
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] == TRUE && turn == j)
        ;                      // busy wait
    // critical section
    flag[i] = FALSE;
    // remainder section
} while (TRUE);

Process Pj:
do {
    flag[j] = TRUE;
    turn = i;
    while (flag[i] == TRUE && turn == i)
        ;                      // busy wait
    // critical section
    flag[j] = FALSE;
    // remainder section
} while (TRUE);
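As a rough, compilable rendering of the algorithm above, the sketch below runs Pi and Pj as two POSIX threads. C11 sequentially consistent atomics stand in for the slide's assumption that LOAD and STORE are atomic; the shared counter, iteration count, and thread driver are illustrative additions.

#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];           /* flag[i] = true: Pi wants to enter */
static atomic_int  turn;
static long shared_counter = 0;       /* protected by the critical section */

static void *worker(void *arg) {
    int i = *(int *)arg;              /* this thread's id: 0 or 1 */
    int j = 1 - i;                    /* the other thread */

    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);             /* I am ready */
        atomic_store(&turn, j);                   /* but you may go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                     /* busy wait */

        shared_counter++;                         /* critical section */

        atomic_store(&flag[i], false);            /* exit section */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_counter = %ld (expected 200000)\n", shared_counter);
    return 0;
}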
Peterson’s Solution
It preserves all three conditions of C.S. :
• Mutual Exclusion
• Progress
• Bounded Waiting
Disadvantages:
• Waiting for the other processes to exit the critical region may take a long
time (busy waiting).
• It is limited to 2 processes
• On modern systems with multiple CPUs, where memory accesses may be reordered, this algorithm might not function correctly without additional memory barriers.
Solution to Critical-section Problem Using Locks
do {
    acquire lock
        // critical section
    release lock
        // remainder section
} while (TRUE);

A simple lock based on a shared variable LOCK (initially 0):
1. while (LOCK == 1);   // wait until the lock is free
2. LOCK = 1;            // take the lock
3. Critical Section
4. LOCK = 0;            // release the lock

Properties of this simple lock:
• It executes in user mode.
• Multiple processes can be synchronized.
• It does not guarantee mutual exclusion if preemption takes place between step 1 and step 2.
Hardware Solutions
These solutions rely on machine-level features that support synchronization.
a) Atomic Operations
• Many modern processors offer atomic operations (like test-and-set, compare-and-swap)
that can execute without interference from other processes.
• Test-and-Set Lock: A simple locking mechanism that checks and sets a lock in one atomic
operation. If the lock is already set, the process must wait.
b) Interrupt Disabling
• A hardware method that temporarily disables interrupts to prevent context switching and
ensure exclusive access to shared resources.
Disadvantages: This approach is generally only used for short periods because disabling
interrupts affects the entire system and could lead to performance issues.
TestAndSet Instruction (Hardware Solution)
• Shared boolean variable lock, initialized to FALSE.

Definition of TestAndSet:
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Solution using TestAndSet:
do {
    while (TestAndSet(&lock))
        ;                       // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
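For reference, C11 exposes the same atomic test-and-set behaviour through atomic_flag_test_and_set; the spinlock sketch below (thread driver and counter included) is an illustration, not part of the slide.

#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked (FALSE) */
static long counter = 0;

static void *worker(void *arg) {
    for (int k = 0; k < 100000; k++) {
        while (atomic_flag_test_and_set(&lock))
            ;                        /* spin: returns the old value; true means already held */

        counter++;                   /* critical section */

        atomic_flag_clear(&lock);    /* lock = FALSE */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}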
Swap Instruction
• Definition: Swap(a, b) atomically exchanges the contents of the two boolean variables a and b.
• Solution: shared boolean variable lock initialized to FALSE; each process has a local boolean variable key.
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Bounded-waiting Mutual Exclusion with TestandSet()
do {
waiting[i] = TRUE;
key = TRUE;
while (waiting[i] && key)
key = TestAndSet(&lock);
waiting[i] = FALSE;
// critical section
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
// remainder section
} while (TRUE);
L5 Software Solutions: Semaphores – Counting Semaphore, Binary Semaphore
Objective
• To understand the concept of semaphores and their types
• To implement the different types of Semaphores
Semaphore
• Semaphores are a synchronization mechanism used to control access
to a common resource in concurrent programming and operating
systems.
• They are particularly useful for managing access to shared resources
and ensuring that multiple processes or threads do not
simultaneously access a critical section, leading to race conditions or
inconsistent data.
Basic Operations: Semaphores
Support two atomic operations:
• wait (P or down): Decreases the semaphore value. If the value is already 0, the process or
thread is blocked until the semaphore value becomes positive.
• signal (V or up): Increases the semaphore value. If there are any processes or threads
waiting for the semaphore, one of them is unblocked.
wait(S):
    while S <= 0:
        // busy wait (or block the process/thread)
    S = S - 1

signal(S):
    S = S + 1
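In POSIX, sem_wait() and sem_post() play the roles of wait(S) and signal(S). A minimal usage sketch, with a binary semaphore guarding a shared counter (the thread count and names are illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t S;                      /* binary semaphore, initialized to 1 */
static int shared = 0;

static void *worker(void *arg) {
    for (int k = 0; k < 100000; k++) {
        sem_wait(&S);                /* wait(S): blocks while S == 0, then decrements */
        shared++;                    /* critical section */
        sem_post(&S);                /* signal(S): increments, wakes a waiter if any */
    }
    return NULL;
}

int main(void) {
    sem_init(&S, 0, 1);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&S);
    return 0;
}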
Semaphore as General Synchronization Tool
• Counting semaphore – its integer value can range over an unrestricted domain (conceptually from –infinity up to n).
• The maximum value n is equal to the number of available instances of the resource.
• Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement
• Also known as mutex locks
• Can implement a counting semaphore S as a binary semaphore
• Provides mutual exclusion
• Two operations:
• block – place the process invoking the operation on the appropriate waiting
queue.
• wakeup – remove one of processes in the waiting queue and place it in the
ready queue.
Counting Semaphore
• An integer variable, initialized to the number of instances (value) of resource type R.
• Assume each process needs one instance of type R (initially value > 0).
• If the value of the counting semaphore is:
• positive -> the number of instances free for allocation
• negative -> its magnitude is the number of processes in the waiting queue
• 0 -> no instance is free and no process is in the waiting queue
Semaphore Implementation with no Busy waiting (Cont.)
• Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended
• Priority Inversion - Scheduling problem when lower-priority process holds a lock needed by higher-priority process
Problem 1
• Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T.
The code for the processes P and Q is shown below.
Process P:
while (1) {
W:
    print('0');
    print('0');
X:
}

Process Q:
while (1) {
Y:
    print('1');
    print('1');
Z:
}
Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following
will always lead to an output starting with '001100110011' ?
A. P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
B. P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S initially 1, and T initially 0
C. P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1
D. P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S initially 1, and T initially 0
Problem 2
Three processes, P1, P2, and P3, want to access a shared resource. They use semaphores S1 and S2 for
synchronization, both initialized to 1. The processes follow the following sequence:
• P1: wait(S1) → wait(S2) → critical section → signal(S2) → signal(S1)
• P2: wait(S1) → critical section → signal(S1)
• P3: wait(S2) → wait(S1) → critical section → signal(S1) → signal(S2)
Question:
1. When P1 enters its critical section, what will be the values of S1 and S2?
2. If P2 attempts to enter its critical section while P1 is inside, what will be the state of P2?
3. If P3 attempts to enter its critical section while P1 is inside, what will be the state of P3?
4. Can deadlock occur in this situation? If yes, explain when it would happen.
Solution of Producer-Consumer Problem Using Semaphores

Key semaphores:
• empty: counts the number of empty slots in the buffer (initialized to the buffer size).
• full: counts the number of full slots in the buffer (initialized to 0, as the buffer is initially empty).
• mutex: a binary semaphore that ensures mutual exclusion, allowing only one process (either producer or consumer) to access the buffer at a time (initialized to 1).

Producer:
while (true) {
    int item = produce_item();             // produce an item to be added to the buffer
    wait(empty);                           // check if there is space in the buffer
    wait(mutex);                           // enter the critical section
    add_item_to_buffer(item);              // add the item to the buffer
    signal(mutex);                         // exit the critical section
    signal(full);                          // signal that the buffer has one more item
}

Consumer:
while (true) {
    wait(full);                            // check if there are items in the buffer
    wait(mutex);                           // enter the critical section
    int item = remove_item_from_buffer();  // remove an item from the buffer
    signal(mutex);                         // exit the critical section
    signal(empty);                         // signal that the buffer has one more empty slot
    consume_item(item);                    // consume the item
}
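A compile-ready rendering of this pseudocode, assuming POSIX semaphores and a pthread mutex in place of the binary semaphore mutex; the buffer size, item values, and thread driver are illustrative.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 5
#define ITEMS       20

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;

static sem_t empty_slots;             /* empty: counts empty slots, starts at BUFFER_SIZE */
static sem_t full_slots;              /* full: counts full slots, starts at 0 */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 1; item <= ITEMS; item++) {
        sem_wait(&empty_slots);           /* wait(empty) */
        pthread_mutex_lock(&mutex);       /* wait(mutex) */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);     /* signal(mutex) */
        sem_post(&full_slots);            /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int n = 0; n < ITEMS; n++) {
        sem_wait(&full_slots);            /* wait(full) */
        pthread_mutex_lock(&mutex);       /* wait(mutex) */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);     /* signal(mutex) */
        sem_post(&empty_slots);           /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}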
Example: Semaphore
A bounded buffer of size 5 is used between a producer and a consumer. Semaphores empty (initialized to 5) and
full (initialized to 0) control access to the buffer. The binary semaphore mutex is initialized to 1.
Question:
1. If the producer produces 4 items, what will be the values of empty, full, and mutex after these operations?
2. After the consumer consumes 2 items, what will be the values of empty, full, and mutex?
Readers-Writers Problem
• Shared Data
• Data set
• Semaphore mutex initialized to 1
• Semaphore wrt initialized to 1
• Integer readcount initialized to 0
Readers-Writers Problem (Cont.)
• The structure of a writer process
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (TRUE);
Readers-Writers Problem (Cont.)
• The structure of a reader process
do {
wait (mutex) ;
readcount ++ ;
if (readcount == 1)
wait (wrt) ;
signal (mutex) ;
// reading is performed
wait (mutex) ;
readcount - - ;
if (readcount == 0)
signal (wrt) ;
signal (mutex) ;
} while (TRUE);
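A rough POSIX rendering of the reader and writer structures above; the number of threads and the shared_data variable are illustrative assumptions.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;                  /* protects readcount, initialized to 1 */
static sem_t wrt;                    /* writer exclusion, initialized to 1   */
static int readcount = 0;
static int shared_data = 0;

static void *writer(void *arg) {
    sem_wait(&wrt);
    shared_data++;                                /* writing is performed */
    printf("writer wrote %d\n", shared_data);
    sem_post(&wrt);
    return NULL;
}

static void *reader(void *arg) {
    sem_wait(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);                           /* first reader locks out writers */
    sem_post(&mutex);

    printf("reader saw %d\n", shared_data);       /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);                           /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_t r[3], w;
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}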
Dining-Philosophers Problem
• Classic synchronization problem
• Five philosophers sitting around a circular table
• Each philosopher has a plate of rice in front of
them and a (chopstick/fork) to the left and
right.
• To eat, a philosopher must use both the fork
on their left and the fork on their right. The
challenge is to create a protocol that ensures:
• No two philosophers are using the same fork simultaneously (i.e., avoiding a conflict).
• No philosopher is starved (i.e., every philosopher eventually gets a chance to eat).

Shared data:
• Bowl of rice (data set)
• Semaphore chopstick[5], each element initialized to 1
Dining-Philosophers Problem (Cont.)
• Semaphores are used to manage the availability of chopsticks (forks).
• Each fork is represented by a semaphore initialized to 1, meaning it can be held by only one philosopher at a time.
• The challenge is ensuring that philosophers take forks in a way that prevents deadlock.

The structure of philosopher i:
semaphore chopstick[5] = {1, 1, 1, 1, 1};   // chopsticks initialized to 1
do {
    wait(chopstick[i]);                 // pick up left chopstick
    wait(chopstick[(i + 1) % 5]);       // pick up right chopstick
    // eat
    signal(chopstick[i]);               // put down left chopstick
    signal(chopstick[(i + 1) % 5]);     // put down right chopstick
    // think
} while (TRUE);
Deadlock Scenario
• Consider five philosophers and five chopsticks arranged in a circular
manner:
• Each philosopher first picks up the chopstick on their left (chopstick[i]) and
then tries to pick up the chopstick on their right (chopstick[(i + 1) % 5]).
• If all philosophers pick up their left chopstick simultaneously, they will all
wait indefinitely for the right one, resulting in a deadlock.
Deadlock Avoidance Strategies: Dining Philosophers
Asymmetric Fork-Picking:
1. Philosophers pick up the forks in a different order to prevent circular waiting.
2. For example, the last philosopher (Philosopher 4) picks up the right fork first, while the others
pick up the left fork first.
do {
if (i == 4) { // Last philosopher picks right fork first
wait(chopstick[(i + 1) % 5]);
wait(chopstick[i]);
} else { // Other philosophers pick left fork first
wait(chopstick[i]);
wait(chopstick[(i + 1) % 5]);
}
// Eat
signal(chopstick[i]);
signal(chopstick[(i + 1) % 5]);
// Think
} while (TRUE);
Deadlock Avoidance Strategies: Dining Philosophers
Limit Concurrent Philosophers (Using a Counting Semaphore):
1. Use an additional semaphore, initialized to 4, to ensure that only 4 philosophers are allowed to pick up forks at the same time.
2. By keeping one philosopher out, we prevent a situation where all are waiting on resources, thus avoiding deadlock.

semaphore mutex = 4;                         // only 4 philosophers can pick chopsticks at once
semaphore chopstick[5] = {1, 1, 1, 1, 1};    // chopsticks initialized to 1
do {
    wait(mutex);                      // enter the dining room
    wait(chopstick[i]);               // pick up left chopstick
    wait(chopstick[(i + 1) % 5]);     // pick up right chopstick
    // eat
    signal(chopstick[i]);             // put down left chopstick
    signal(chopstick[(i + 1) % 5]);   // put down right chopstick
    signal(mutex);                    // leave the dining room
    // think
} while (TRUE);
L6 Software Solutions: Monitors
Objective:
• To explore the software solutions using Monitors
Monitors
• A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
• Monitors are provided by programming languages such as Java (the OS itself may not implement monitors directly).
• A procedure defined within a monitor can access only those variables declared locally within the monitor and its formal parameters.
• Similarly, the local variables of a monitor can be accessed only by the local procedures.
• Only one process at a time can be active within the monitor.

Syntax of a monitor:
monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }

    initialization code (….) { … }
}
Schematic view of a Monitor
Shared Variables:
• state[5]: An array to keep track of each philosopher’s state (THINKING, HUNGRY, EATING).
• self[i]: A condition variable for each philosopher i to manage waiting when forks are not available.
Monitor Functions:
• pickup(i): This function is called by philosopher i when they want to pick up forks. It checks if both the left
and right forks are available and updates the state to EATING if so. If not, the philosopher waits on their
condition variable.
• putdown(i): After the philosopher has finished eating, they release the forks, change their state to
THINKING, and check if either neighbor is hungry and can start eating.
• test(i): This function checks if philosopher i can eat by ensuring both their left and right forks are
available. If true, the philosopher is woken up from their waiting state.
• init(): Initializes the state of all philosophers as THINKING and the monitor.
Solution to Dining Philosophers using a Monitor:
monitor DiningPhilosophers {
    enum {THINKING, HUNGRY, EATING} state[5];   // state of each philosopher
    condition self[5];                          // one condition variable per philosopher
    …
}
Advantages:
• No Deadlock: Philosophers only wait if they can’t access forks, and the monitor ensures that
neighboring philosophers get a chance to eat when possible.
• Reduced Starvation: Condition variables give waiting philosophers a chance to eat when a neighbour finishes, although the basic monitor solution does not, by itself, guarantee that no philosopher ever starves.
Solution to Dining Philosophers (cont)
Each philosopher i invokes the monitor operations in the following sequence:
DiningPhilosophers.pickup(i);
    // eat
DiningPhilosophers.putdown(i);
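Since C has no built-in monitors, the sketch below approximates the DiningPhilosophers monitor with a pthread mutex acting as the monitor lock and one condition variable per philosopher. The pickup/putdown/test structure follows the description above; the thread driver and round count are illustrative additions.

#include <pthread.h>

#define N 5
enum { THINKING, HUNGRY, EATING };

static int state[N];
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* the monitor lock */
static pthread_cond_t  self[N];                                   /* condition self[i] */

#define LEFT(i)  (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)

/* test(i): let philosopher i eat if hungry and neither neighbour is eating */
static void test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);            /* self[i].signal() */
    }
}

static void pickup(int i) {
    pthread_mutex_lock(&monitor_lock);            /* enter the monitor */
    state[i] = HUNGRY;
    test(i);
    while (state[i] != EATING)
        pthread_cond_wait(&self[i], &monitor_lock);   /* self[i].wait() */
    pthread_mutex_unlock(&monitor_lock);          /* leave the monitor */
}

static void putdown(int i) {
    pthread_mutex_lock(&monitor_lock);
    state[i] = THINKING;
    test(LEFT(i));                                /* a hungry left neighbour may now eat  */
    test(RIGHT(i));                               /* a hungry right neighbour may now eat */
    pthread_mutex_unlock(&monitor_lock);
}

static void init_monitor(void) {
    for (int i = 0; i < N; i++) {
        state[i] = THINKING;
        pthread_cond_init(&self[i], NULL);
    }
}

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    for (int round = 0; round < 3; round++) {
        pickup(i);     /* DiningPhilosophers.pickup(i)  */
        /* eat */
        putdown(i);    /* DiningPhilosophers.putdown(i) */
        /* think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    init_monitor();
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}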
L7 & L8 Software Solutions: Peterson's Solution, Bakery Algorithm
Objective:
• To understand the software solutions using various algorithms
Bakery Algorithm
• The Bakery Algorithm, proposed by Leslie Lamport, is a mutual exclusion algorithm for
ensuring that multiple processes can access shared resources in a concurrent system
without conflicts, similar to customers waiting in line at a bakery.
• Key Idea:
• The Bakery Algorithm assigns each process a number, similar to a ticket at a bakery.
• The process with the smallest ticket gets to enter the critical section first.
• Shared variables:
• boolean choosing[i]; // A boolean array indicating if process i is in the process of
choosing a number.
• int number[i]; //An integer array representing the number (or ticket) assigned to
each process.
Steps of the Algorithm:
1. Entry Section:
• When process i wants to enter the critical section, it first sets choosing[i] to true to indicate that it is
choosing a number.
• It assigns itself a number that is one more than the maximum number assigned to any other process
(or 1 if no other process has a number).
• After getting its number, the process sets choosing[i] to false to indicate it is done choosing.
• Process i then waits until:
• Every other process j has either finished choosing (choosing[j] == false), or
• Process j has a number, but its number is larger than i's number, or if the numbers are equal, j's
index is greater than i.
2. Critical Section:
• Process i enters the critical section once it has the smallest number compared to others.
3. Exit Section:
• After finishing its work in the critical section, process i sets its number[i] back to 0, signaling that it no
longer wants to be in the critical section.
Bakery Algorithm Cont.
do {
    choosing[i] = true;
    number[i] = 1 + max(number[0], number[1], ..., number[N-1]);
    choosing[i] = false;
    for (j = 0; j < N; j++) {
        while (choosing[j]);             // wait while process j is choosing its number
        // wait if process j has a smaller number, or the same number but a smaller index
        while ((number[j] != 0) &&
               (number[j] < number[i] || (number[j] == number[i] && j < i)));
    }
    // Critical section
    number[i] = 0;
    // Remainder section
} while (true);

Note: the comparison (number[j], j) < (number[i], i) is lexicographic; if two processes hold the same number value, the process with the smaller id (i) is favoured.
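The lexicographic ticket comparison (number[j], j) < (number[i], i) can be written out as a small helper; the example values are illustrative.

#include <stdio.h>
#include <stdbool.h>

/* true if process j (with ticket num_j) goes before process i (with ticket num_i) */
static bool ticket_less(int num_j, int j, int num_i, int i) {
    return (num_j < num_i) || (num_j == num_i && j < i);
}

int main(void) {
    /* P1 and P3 both drew ticket 4: the tie goes to the smaller id */
    printf("%d\n", ticket_less(4, 1, 4, 3));   /* prints 1: P1 enters before P3 */
    printf("%d\n", ticket_less(5, 0, 4, 3));   /* prints 0: ticket 4 beats ticket 5 */
    return 0;
}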
Pros and Cons of Bakery Algorithm
• Pros:
• Simplicity: The algorithm is relatively simple to understand and implement.
• Works for Multiple Processes: Unlike simpler algorithms like Peterson’s
algorithm, the Bakery Algorithm can handle more than two processes.
• Cons:
• Unbounded Ticket Numbers: Ticket numbers can grow indefinitely, though this is
usually manageable in practice.
• Busy Waiting: Processes may continuously check the states of other processes,
leading to inefficiency (busy-wait loops).
L9 Classic process synchronization problems (case studies)
Objective:
• To understand the classic Synchronization examples and case studies
Classical Problems of Synchronization
• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem