
Unit 3: Inter-Process Communication and Synchronization

School of Computer Science
UPES, Dehradun, India
Unit 3: Inter-Process Communication

L1 Inter-Process Communication (IPC); IPC mechanisms: Shared Memory and Message Passing
L2 Message Passing: Shared Memory, Pipes and Named Pipes in Linux
L3 Critical Section Problem, Race Condition
L4 Producer-Consumer Problem; Solution to the Critical Section Problem: Hardware and Software Solutions
L5 Software Solutions: Semaphores: Counting Semaphore, Binary Semaphore
L6 Software Solutions: Monitors
L7 Software Solutions: Algorithm 1, Algorithm 2
L8 Software Solutions: Algorithm 3 / Peterson's Solution, Bakery Algorithm
L9 Classic process synchronization problems (case studies)
Unit 3: Inter-Process Communication

Lecture 1: Inter-Process Communication (IPC); IPC mechanisms: Shared Memory and Message Passing

Objective:
• To discuss the communication mechanisms among processes
• To explore the different approaches to achieving inter-process communication
Interprocess Communication
• Interprocess Communication (IPC) refers to the mechanisms provided
by the operating system that allow processes to communicate with
each other and synchronize their actions.
• IPC is essential for the development of complex software systems
where multiple processes need to work together and share data.
• Here are some common IPC mechanisms:
i) Pipes
ii) Message Passing
iii) Shared Memory
iv) Sockets
v) Semaphores
vi) Remote Procedure Call (RPC)
Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other processes, including
sharing data
• Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
• Cooperating processes need interprocess communication (IPC)
• Two models of IPC
• Shared memory
• Message passing
Communications Models

Fig 3.1: Interprocess communication models: message passing and shared memory
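To make the shared-memory model concrete, here is a minimal POSIX sketch (not from the original slides; the segment name /demo_shm and the 4 KB size are illustrative assumptions). One process creates and maps a named segment and writes a string into it; any other process that maps the same name sees the same bytes.

#include <fcntl.h>     /* O_CREAT, O_RDWR */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>  /* shm_open, mmap */
#include <unistd.h>    /* ftruncate, close */

int main(void) {
    const char *name = "/demo_shm";   /* illustrative segment name */
    const size_t size = 4096;

    /* Create (or open) the shared-memory object and set its size. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) == -1) { perror("ftruncate"); return 1; }

    /* Map the object into this process's address space. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Any process that maps "/demo_shm" sees this data. */
    strcpy(region, "hello from shared memory");
    printf("wrote: %s\n", region);

    munmap(region, size);
    close(fd);
    shm_unlink(name);   /* remove the name once done */
    return 0;
}

On Linux this typically compiles with: gcc shm_demo.c -lrt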
Cooperating Processes
• Independent process cannot affect or be affected by the execution of
another process
• Cooperating process can affect or be affected by the execution of
another process
• Advantages of process cooperation
• Information sharing
• Computation speed-up
• Modularity
• Convenience
Examples of Cooperating Processes
• Web Servers: Multiple processes handle different client requests,
sharing resources like memory and network connections.
• Database Systems: Processes collaborate to manage transactions,
maintain data consistency, and ensure concurrency control.
• Distributed Systems: Processes running on different machines
communicate and cooperate to perform distributed computations,
share resources, and synchronize actions.
• Operating Systems: System processes cooperate to manage hardware
resources, perform I/O operations, and execute user programs.
Lecture 2: Message Passing, Shared Memory, Pipes and Named Pipes in Linux

Objectives:
• To understand the concepts of interprocess communication through message passing
• To understand the concepts of interprocess communication through shared memory
• To understand the concepts of interprocess communication through pipes
Interprocess Communication – Message Passing

• A mechanism for processes to communicate and synchronize their actions
• Message system – processes communicate with each other without resorting to shared variables
• The IPC facility provides two operations:
  • send(message) – message size fixed or variable
  • receive(message)
• If P and Q wish to communicate, they need to:
  • establish a communication link between them
  • exchange messages via send/receive
• Implementation of the communication link:
  • physical (e.g., shared memory, hardware bus)
  • logical (e.g., logical properties)
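To make the send()/receive() pair concrete, POSIX message queues expose essentially these two operations as mq_send() and mq_receive(). A minimal sketch, assuming an illustrative queue name /demo_mq (link with -lrt on older Linux systems):

#include <fcntl.h>
#include <mqueue.h>   /* mq_open, mq_send, mq_receive */
#include <stdio.h>
#include <string.h>

int main(void) {
    /* At most 10 pending messages of up to 64 bytes each. */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* Create/open the queue; "/demo_mq" is an illustrative name. */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0644, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(message) */
    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);

    /* receive(message): the buffer must be at least mq_msgsize bytes */
    char buf[64];
    if (mq_receive(mq, buf, sizeof buf, NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}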
Pipes

• A pipe acts as a conduit allowing two processes to communicate
• Design issues:
  • Is communication unidirectional or bidirectional?
  • In the case of two-way communication, is it half- or full-duplex?
  • Must there exist a relationship (i.e., parent-child) between the communicating processes?
  • Can the pipes be used over a network?
Ordinary Pipes
• Ordinary pipes allow communication in standard producer-consumer style
• The producer writes to one end (the write end of the pipe)
• The consumer reads from the other end (the read end of the pipe)
• Ordinary pipes are therefore unidirectional
• They require a parent-child relationship between the communicating processes

Fig: Ordinary pipes. File descriptor fd[0] is the read end of the pipe and fd[1] is the write end.
Named Pipes
• Named pipes are more powerful than ordinary pipes
• Communication is bidirectional
• No parent-child relationship is necessary between the communicating processes
• Several processes can use the named pipe for communication
• Provided on both UNIX and Windows systems

IPC Mechanism in Pipes (ordinary pipe example)

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int fd[2];
    pid_t pid;
    char buffer[30];

    pipe(fd);                /* fd[0] is the read end, fd[1] the write end */
    pid = fork();

    if (pid == 0) {          /* Child process */
        close(fd[0]);        /* Close unused read end */
        write(fd[1], "Hello, parent!", 15);  /* 14 chars + terminating '\0' */
        close(fd[1]);        /* Close write end */
    } else {                 /* Parent process */
        close(fd[1]);        /* Close unused write end */
        read(fd[0], buffer, 15);
        printf("%s\n", buffer);
        close(fd[0]);        /* Close read end */
        wait(NULL);          /* Reap the child */
    }
    return 0;
}
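A named pipe (FIFO), by contrast, lives in the filesystem, so even unrelated processes can open it by path. A minimal Linux sketch, assuming an illustrative path /tmp/demo_fifo; a parent writes and its child reads here, but any two processes could play these roles:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>   /* mkfifo */
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";   /* illustrative FIFO path */
    mkfifo(path, 0666);                    /* create the named pipe */

    if (fork() == 0) {                     /* child: reader */
        char buf[32];
        int fd = open(path, O_RDONLY);     /* blocks until a writer opens */
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd);
        _exit(0);
    } else {                               /* parent: writer */
        int fd = open(path, O_WRONLY);     /* blocks until a reader opens */
        write(fd, "hello via FIFO", 14);
        close(fd);
        wait(NULL);
        unlink(path);                      /* remove the FIFO when done */
    }
    return 0;
}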
L3: Critical Section Problem, Race Condition, Producer-Consumer Problem

Objectives:
• To understand the critical section problem
• To understand race conditions
Introduction
• Concurrent access to shared data may result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
• Example: the producer-consumer problem.
• Suppose that we want to provide a solution to the producer-consumer problem that fills all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it produces a new item and is decremented by the consumer after it consumes an item.
Race Condition
• A race condition occurs when multiple processes access and manipulate shared data concurrently and the outcome depends on the order in which the accesses happen, leading to unexpected results.
• Example: incrementing and decrementing a shared counter.
• count++ could be implemented as
    register1 = count
    register1 = register1 + 1
    count = register1
• count-- could be implemented as
    register2 = count
    register2 = register2 - 1
    count = register2
• Consider this execution interleaving with count = 5 initially:
    S0: producer executes register1 = count          {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = count          {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes count = register1          {count = 6}
    S5: consumer executes count = register2          {count = 4}
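This interleaving is easy to reproduce. In the sketch below (illustrative, not from the slides), two pthreads increment and decrement an unprotected shared counter one million times each; the expected final value is 0, but typical runs print a different value each time (compile with gcc -pthread):

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

long count = 0;   /* shared, unprotected */

void *producer(void *arg) {
    for (int i = 0; i < ITERS; i++)
        count++;          /* load, add, store: not atomic */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERS; i++)
        count--;          /* races against the producer */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("count = %ld (expected 0)\n", count);
    return 0;
}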
Producer-Consumer Problem
• The producer-consumer problem is a classical multi-process synchronization problem; that is, we are trying to achieve synchronization between more than one process.
• There is one producer, which produces items, and one consumer, which consumes the items produced by the producer. The same fixed-size memory buffer is shared by both the producer and the consumer.
Producer Consumer Problem Cont..
• The producer should produce data only when the buffer is not full. In
case it is found that the buffer is full, the producer is not allowed to
store any data into the memory buffer.
• Data can only be consumed by the consumer if and only if the memory
buffer is not empty. In case it is found that the buffer is empty, the
consumer is not allowed to use any data from the memory buffer.
• Accessing memory buffer should not be allowed to producer and
consumer at the same time.

Producer

count = 0;               // shared variable
buffer[BUFFER_SIZE];     // shared buffer

while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;                // do nothing: buffer is full
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Note: Initially in = 0
Consumer

while (true) {
    while (count == 0)
        ;                // do nothing: buffer is empty
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}

Note: Initially out = 0


Race Condition in the Producer-Consumer Problem

Recall the machine-level implementations:

count++ could be implemented as
    register1 = count
    register1 = register1 + 1
    count = register1

count-- could be implemented as
    register2 = count
    register2 = register2 - 1
    count = register2

Imagine a scenario where both the producer and the consumer try to access and modify count at almost the same time without proper synchronization:

1. Initial condition: assume count = 1, meaning there is exactly one item in the buffer.
2. Producer process (P): the producer finishes producing an item and is about to execute count++. It reads count = 1 into a register but hasn't updated it yet.
3. Consumer process (C): meanwhile, the consumer starts and executes count--. It reads count = 1 (before the producer updated it), decrements it by 1, and sets count = 0.
4. Producer process (P), continued: now the producer finishes its count++, thinking the original count was 1 (from step 2), and updates it to count = 2.
5. Final result: after both operations, count is now 2 when it should have been 1 (because one item was produced and one consumed). This inconsistency leads to a logical error, where count no longer accurately reflects the number of items in the buffer.
Need of Synchronization in Producer
Consumer Problem
We need synchronization mechanisms to avoid:
• The producer from overwriting a full buffer.
• The consumer from consuming from an empty buffer.
• Race conditions (where multiple processes access shared data
concurrently, causing inconsistent results).

Critical Section
• The Critical Section refers to a section of code in a concurrent program where shared
resources (like variables, data structures, or files) are accessed.
• Only one process or thread should be allowed to execute in the critical section at a time to
prevent race conditions, where multiple processes or threads attempt to modify the shared
resource simultaneously, leading to unpredictable results.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.

2. Progress - If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter
the critical section next cannot be postponed indefinitely

3. Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
L4: Solution to the Critical Section Problem: Hardware and Software Solutions

Objectives:
• To understand the classical problems of the critical section
• To devise and develop solutions to these classical problems
Peterson’s Solution (Software Solution 1)
• Classical software based two process solution
• Assume that the LOAD and STORE instructions are atomic; that is,
cannot be interrupted.
• The two processes share two variables:
• int turn;
• Boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical
section.
• The flag array is used to indicate if a process is ready to enter the
critical section. flag[i] = true implies that process Pi is ready!
Algorithm for Processes Pi and Pj
Initially, both flags are false.

Pi:
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] == TRUE && turn == j);
    // critical section
    flag[i] = FALSE;
    // remainder section
} while (TRUE);

Pj:
do {
    flag[j] = TRUE;
    turn = i;
    while (flag[i] == TRUE && turn == i);
    // critical section
    flag[j] = FALSE;
    // remainder section
} while (TRUE);
Peterson’s Solution
It preserves all three conditions of the critical section problem:
• Mutual exclusion
• Progress
• Bounded waiting
Disadvantages:
• Waiting for the other process to exit the critical region may take a long time (busy waiting).
• It is limited to 2 processes.
• On systems that have multiple CPUs, the algorithm may not work as written, because modern compilers and processors can reorder the memory operations it relies on.
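To see the algorithm in compilable form, here is a sketch using C11 sequentially consistent atomics; the atomics are an assumption added here, since with plain variables a compiler or CPU may reorder the flag and turn accesses, which is exactly why the textbook version can fail on multiprocessors (compile with gcc -pthread):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];     /* flag[i]: process i wants to enter */
atomic_int turn;         /* whose turn it is to enter */
long counter = 0;        /* the shared resource */

void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);   /* I am ready */
        atomic_store(&turn, j);         /* but you may go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           /* busy wait */
        counter++;                      /* critical section */
        atomic_store(&flag[i], false);  /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}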
Solution to the Critical-Section Problem Using Locks

do {
    acquire lock
        // critical section
    release lock
        // remainder section
} while (TRUE);

A naive software lock:
1. while (LOCK == 1);
2. LOCK = 1;
3. Critical Section
4. LOCK = 0;

Properties:
• It executes in user mode
• Multiple processes can be synchronized
• It does not guarantee mutual exclusion (if preemption takes place between step 1 and step 2)
Hardware Solutions
These solutions rely on machine-level features that support synchronization.
a) Atomic Operations
• Many modern processors offer atomic operations (like test-and-set, compare-and-swap)
that can execute without interference from other processes.
• Test-and-Set Lock: A simple locking mechanism that checks and sets a lock in one atomic
operation. If the lock is already set, the process must wait.
b) Interrupt Disabling
• A hardware method that temporarily disables interrupts to prevent context switching and
ensure exclusive access to shared resources.
Disadvantages: This approach is generally only used for short periods because disabling
interrupts affects the entire system and could lead to performance issues.

TestAndSet Instruction (Hardware Solution)
• Shared boolean variable lock (initialized to FALSE).

Definition:

boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Solution:

do {
    while (TestAndSet(&lock))
        ;               // do nothing

    // critical section

    lock = FALSE;

    // remainder section

} while (TRUE);
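C11 exposes a hardware test-and-set directly as atomic_flag. A minimal spinlock sketch built on it (the acquire/release function names are illustrative):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* initially clear (FALSE) */

void acquire(void) {
    /* Atomically set the flag and get its previous value;
       spin while another thread already holds the lock. */
    while (atomic_flag_test_and_set(&lock))
        ;   /* do nothing (busy wait) */
}

void release(void) {
    atomic_flag_clear(&lock);          /* lock = FALSE */
}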
Swap Instruction
• Definition:

void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Solution Using Swap
• Shared boolean variable lock, initialized to FALSE; each process has a local boolean variable key.
• Solution:

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);

    // critical section

    lock = FALSE;

    // remainder section

} while (TRUE);
Bounded-Waiting Mutual Exclusion with TestAndSet()

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;

    // critical section

    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;

    // remainder section

} while (TRUE);
L5: Software Solutions: Semaphores: Counting Semaphore, Binary Semaphore

Objective:
• To understand the concept of semaphores and their types
• To implement the different types of semaphores
Semaphore
• Semaphores are a synchronization mechanism used to control access
to a common resource in concurrent programming and operating
systems.
• They are particularly useful for managing access to shared resources
and ensuring that multiple processes or threads do not
simultaneously access a critical section, leading to race conditions or
inconsistent data.

Basic Operations: Semaphores
Semaphores support two atomic operations:
• wait (P or down): decreases the semaphore value. If the value is already 0, the process or thread is blocked until the semaphore value becomes positive.
• signal (V or up): increases the semaphore value. If any processes or threads are waiting on the semaphore, one of them is unblocked.

wait(S):
    while S <= 0:
        ;          // busy wait (or block the process/thread)
    S = S - 1

signal(S):
    S = S + 1
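POSIX provides these operations as sem_wait() and sem_post(). A minimal sketch in which two threads use a binary semaphore to protect a shared counter (compile with gcc -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;
int shared = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);    /* P / down: blocks while the value is 0 */
        shared++;        /* critical section */
        sem_post(&s);    /* V / up: wakes a waiter, if any */
    }
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);   /* binary semaphore: initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&s);
    return 0;
}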
Semaphore as a General Synchronization Tool

• Counting semaphore – the integer value can range over an unrestricted domain; with the blocking implementation, a negative value records how many processes are waiting.
• The maximum useful value (n) equals the number of available instances of the resource.
• Binary semaphore – the integer value can range only between 0 and 1; can be simpler to implement.
  • Also known as a mutex lock.
• A counting semaphore S can be implemented using binary semaphores.
• Provides mutual exclusion:

Semaphore mutex;   // initialized to 1

do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
Semaphore Implementation
• Must guarantee that no two processes can execute wait () and
signal() on the same semaphore at the same time
• Thus, implementation becomes the critical section problem where
the wait and signal code are placed in the critical section.
• Could now have busy waiting in critical section implementation
• But implementation code is short
• Little busy waiting if critical section rarely occupied
• Note that applications may spend lots of time in critical sections and
therefore this is not a good solution.
Semaphore Implementation with No Busy Waiting

• With each semaphore there is an associated waiting queue. Each semaphore has two data items:
  • value (of type integer)
  • a pointer to the list of waiting processes

• Two operations:
  • block – place the process invoking the operation on the appropriate waiting queue.
  • wakeup – remove one of the processes from the waiting queue and place it in the ready queue.
Counting Semaphore
• An integer variable, initialized to the number of instances (value) of resource type R.
• Assume each process needs one instance of type R (initially value > 0).
• If the value of the counting semaphore is:
  • positive: the number of instances free for allocation
  • negative: the number of processes in the waiting queue
  • 0: no instance is free and no process is in the waiting queue
Semaphore Implementation with No Busy Waiting (Cont.)

Implementation of wait:

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

Implementation of signal:

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}

Note: the waiting queue is S->list.
Use of Semaphores: Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
• Let S and Q be two semaphores initialized to 1:

P0:                 P1:
wait(S);            wait(Q);
wait(Q);            wait(S);
...                 ...
signal(S);          signal(Q);
signal(Q);          signal(S);

• Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
• Priority inversion – a scheduling problem that arises when a lower-priority process holds a lock needed by a higher-priority process.
Problem 1
• Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T. The code for the processes P and Q is shown below.

Process P:              Process Q:
while (1) {             while (1) {
    W:                      Y:
    print('0');             print('1');
    print('0');             print('1');
    X:                      Z:
}                       }

Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will always lead to an output starting with '001100110011'?
A. P(S) at W, V(S) at X, P(T) at Y, V(T) at Z; S and T initially 1
B. P(S) at W, V(T) at X, P(T) at Y, V(S) at Z; S initially 1, and T initially 0
C. P(S) at W, V(T) at X, P(T) at Y, V(S) at Z; S and T initially 1
D. P(S) at W, V(S) at X, P(T) at Y, V(T) at Z; S initially 1, and T initially 0
Problem 2
Three processes, P1, P2, and P3, want to access a shared resource. They use semaphores S1 and S2 for
synchronization, both initialized to 1. The processes follow the following sequence:
• P1: wait(S1) → wait(S2) → critical section → signal(S2) → signal(S1)
• P2: wait(S1) → critical section → signal(S1)
• P3: wait(S2) → wait(S1) → critical section → signal(S1) → signal(S2)
Question:
1. When P1 enters its critical section, what will be the values of S1 and S2?
2. If P2 attempts to enter its critical section while P1 is inside, what will be the state of P2?
3. If P3 attempts to enter its critical section while P1 is inside, what will be the state of P3?
4. Can deadlock occur in this situation? If yes, explain when it would happen.

Solution of the Producer-Consumer Problem Using Semaphores

Key semaphores:
• empty: counts the number of empty slots in the buffer (initialized to the buffer size).
• full: counts the number of full slots in the buffer (initialized to 0, as the buffer is initially empty).
• mutex: a binary semaphore that ensures mutual exclusion, allowing only one process (either producer or consumer) to access the buffer at a time (initialized to 1).

Producer:
while (true) {
    int item = produce_item();   // produce an item to be added to the buffer
    wait(empty);                 // wait for space in the buffer
    wait(mutex);                 // enter the critical section
    add_item_to_buffer(item);    // add the item to the buffer
    signal(mutex);               // exit the critical section
    signal(full);                // signal that the buffer has one more item
}

Consumer:
while (true) {
    wait(full);                  // wait for items in the buffer
    wait(mutex);                 // enter the critical section
    int item = remove_item_from_buffer();   // remove an item from the buffer
    signal(mutex);               // exit the critical section
    signal(empty);               // signal that the buffer has one more empty slot
    consume_item(item);          // consume the item
}
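This pseudocode maps directly onto POSIX semaphores. A compilable sketch with one producer and one consumer thread; the buffer size, item count, and helper names are illustrative (compile with gcc -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
#define N_ITEMS 10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t empty;   /* counts empty slots, initialized to BUFFER_SIZE */
sem_t full;    /* counts full slots, initialized to 0 */
sem_t mutex;   /* binary semaphore protecting the buffer */

void *producer(void *arg) {
    for (int item = 0; item < N_ITEMS; item++) {
        sem_wait(&empty);              /* wait for a free slot */
        sem_wait(&mutex);              /* enter the critical section */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);              /* exit the critical section */
        sem_post(&full);               /* one more full slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full);               /* wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);              /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}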
Example: Semaphore
A bounded buffer of size 5 is used between a producer and a consumer. Semaphores empty (initialized to 5) and full (initialized to 0) control access to the buffer. The binary semaphore mutex is initialized to 1.

Question:
1. If the producer produces 4 items, what will be the values of empty, full, and mutex after these operations?
2. After the consumer then consumes 2 items, what will be the values of empty, full, and mutex?

Solution:
Initially: empty = 5, full = 0, mutex = 1.

After the producer produces 4 items: each time an item is produced, empty decreases by 1 and full increases by 1. After 4 production operations:
    empty = 5 - 4 = 1
    full = 0 + 4 = 4
    mutex = 1 (unchanged except while the critical section is being accessed)

After the consumer consumes 2 items: each time an item is consumed, full decreases by 1 and empty increases by 1. After 2 consumption operations:
    empty = 1 + 2 = 3
    full = 4 - 2 = 2
    mutex = 1
Readers-Writers Problem
• A data set is shared among a number of concurrent processes
  • Readers – only read the data set; they do not perform any updates
  • Writers – can both read and write
• Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data at any time
• Shared data:
  • the data set
  • semaphore mutex, initialized to 1
  • semaphore wrt, initialized to 1
  • integer readcount, initialized to 0
Readers-Writers Problem (Cont.)
• The structure of a writer process:

do {
    wait(wrt);

    // writing is performed

    signal(wrt);
} while (TRUE);
Readers-Writers Problem (Cont.)
• The structure of a reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);        // the first reader locks out writers
    signal(mutex);

    // reading is performed

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);      // the last reader lets writers in
    signal(mutex);
} while (TRUE);
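The same reader and writer structures, written as a compilable sketch with POSIX semaphores and threads (the thread bodies and shared data are illustrative; compile with gcc -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;          /* protects readcount, initialized to 1 */
sem_t wrt;            /* writer exclusion, initialized to 1 */
int readcount = 0;
int shared_data = 0;

void *writer(void *arg) {
    sem_wait(&wrt);
    shared_data++;                    /* writing is performed */
    sem_post(&wrt);
    return NULL;
}

void *reader(void *arg) {
    sem_wait(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);               /* first reader locks out writers */
    sem_post(&mutex);

    printf("read %d\n", shared_data); /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);               /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_t r1, r2, w;
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL);
    pthread_join(w, NULL);
    pthread_join(r2, NULL);
    return 0;
}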
Dining-Philosophers Problem
• A classic synchronization problem
• Five philosophers sit around a circular table
• Each philosopher has a plate of rice in front of them and a chopstick (fork) to the left and right
• To eat, a philosopher must use both the fork on their left and the fork on their right. The challenge is to create a protocol that ensures:
  • No two philosophers are using the same fork simultaneously (i.e., avoiding a conflict)
  • No philosopher is starved (i.e., every philosopher eventually gets a chance to eat)
• Shared data:
  • Bowl of rice (data set)
  • Semaphore chopstick[5], each initialized to 1
Dining-Philosophers Problem (Cont.)
• Semaphores are used to manage the availability of chopsticks (forks). A philosopher must acquire two chopsticks using wait() and release them using signal().
• Each fork is represented by a semaphore initialized to 1, meaning it can be held by only one philosopher at a time.
• The challenge is ensuring that philosophers take forks in a way that prevents deadlock.

The structure of philosopher i:

semaphore chopstick[5] = {1, 1, 1, 1, 1};   // chopsticks initialized to 1

do {
    wait(chopstick[i]);              // pick up left chopstick
    wait(chopstick[(i + 1) % 5]);    // pick up right chopstick

    // eat

    signal(chopstick[i]);            // put down left chopstick
    signal(chopstick[(i + 1) % 5]);  // put down right chopstick

    // think

} while (TRUE);
Deadlock Scenario
• Consider five philosophers and five chopsticks arranged in a circular
manner:

• Each philosopher first picks up the chopstick on their left (chopstick[i]) and
then tries to pick up the chopstick on their right (chopstick[(i + 1) % 5]).

• If all philosophers pick up their left chopstick simultaneously, they will all
wait indefinitely for the right one, resulting in a deadlock.

Deadlock Avoidance Strategies: Dining Philosophers
Asymmetric fork-picking:
1. Philosophers pick up the forks in a different order to prevent circular waiting.
2. For example, the last philosopher (philosopher 4) picks up the right fork first, while the others pick up the left fork first.

do {
    if (i == 4) {                       // last philosopher picks up right fork first
        wait(chopstick[(i + 1) % 5]);
        wait(chopstick[i]);
    } else {                            // other philosophers pick up left fork first
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
    }

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think

} while (TRUE);
Deadlock Avoidance Strategies: Dining Philosophers
Limit concurrent philosophers (using a counting semaphore):
1. Use another semaphore to ensure that only 4 philosophers may try to pick up forks at the same time.
2. By keeping one philosopher out, we prevent a situation where all are waiting on resources, thus avoiding deadlock.

semaphore mutex = 4;                        // only 4 philosophers may pick up chopsticks at once
semaphore chopstick[5] = {1, 1, 1, 1, 1};   // chopsticks initialized to 1

do {
    wait(mutex);                     // enter the dining room
    wait(chopstick[i]);              // pick up left chopstick
    wait(chopstick[(i + 1) % 5]);    // pick up right chopstick

    // eat

    signal(chopstick[i]);            // put down left chopstick
    signal(chopstick[(i + 1) % 5]);  // put down right chopstick
    signal(mutex);                   // leave the dining room

    // think

} while (TRUE);
L6: Software Solutions: Monitors

Objective:
• To explore software solutions using monitors
Monitors
• A high-level abstraction that provides a convenient and effective mechanism for process synchronization
• Monitors are provided by programming languages such as Java (the OS may not implement monitors directly)
• A procedure defined within a monitor can access only those variables declared locally within the monitor and its formal parameters
• Similarly, the local variables of a monitor can be accessed only by the local procedures
• Only one process at a time can be active within the monitor

monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { … }
    …
    procedure Pn (…) { … }

    initialization code (…) { … }
}
Schematic View of a Monitor

Fig: Schematic view of a monitor
Condition Variables
• condition x, y;
• Two operations on a condition variable:
  • x.wait() – the process that invokes the operation is suspended.
  • x.signal() – resumes one of the processes (if any) that invoked x.wait().
Fig: Monitor with condition variables
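C has no built-in monitor construct, but the same discipline can be emulated with a pthread mutex playing the role of the implicit monitor lock and a pthread condition variable playing the role of condition x. A minimal sketch (illustrative, not from the slides):

#include <pthread.h>

/* The mutex is the implicit monitor lock; the condition
   variable below corresponds to "condition x". */
pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  x            = PTHREAD_COND_INITIALIZER;
int resource_ready = 0;

void monitor_consume(void) {
    pthread_mutex_lock(&monitor_lock);        /* enter the monitor */
    while (!resource_ready)
        pthread_cond_wait(&x, &monitor_lock); /* x.wait(): releases the lock
                                                 while suspended */
    resource_ready = 0;                       /* use the shared state */
    pthread_mutex_unlock(&monitor_lock);      /* leave the monitor */
}

void monitor_produce(void) {
    pthread_mutex_lock(&monitor_lock);
    resource_ready = 1;
    pthread_cond_signal(&x);                  /* x.signal(): resume one waiter */
    pthread_mutex_unlock(&monitor_lock);
}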
Solution to Dining Philosophers using Monitor

Shared Variables:
• state[5]: An array to keep track of each philosopher’s state (THINKING, HUNGRY, EATING).
• self[i]: A condition variable for each philosopher i to manage waiting when forks are not available.

Monitor Functions:
• pickup(i): This function is called by philosopher i when they want to pick up forks. It checks if both the left
and right forks are available and updates the state to EATING if so. If not, the philosopher waits on their
condition variable.
• putdown(i): After the philosopher has finished eating, they release the forks, change their state to
THINKING, and check if either neighbor is hungry and can start eating.
• test(i): This function checks if philosopher i can eat by ensuring both their left and right forks are
available. If true, the philosopher is woken up from their waiting state.
• init(): Initializes the state of all philosophers as THINKING and the monitor.
Solution to Dining Philosophers Using a Monitor

monitor DiningPhilosophers {
    enum {THINKING, HUNGRY, EATING} state[5];  // state of each philosopher
    condition self[5];                         // one condition variable per philosopher

    // Initialize all philosophers as THINKING
    DiningPhilosophers() {
        for (int i = 0; i < 5; i++) {
            state[i] = THINKING;
        }
    }

    // Pick up forks for philosopher i
    void pickup(int i) {
        state[i] = HUNGRY;            // philosopher is hungry
        test(i);                      // try to acquire forks
        if (state[i] != EATING) {
            self[i].wait();           // wait if forks are not available
        }
    }

    // Put down forks for philosopher i
    void putdown(int i) {
        state[i] = THINKING;          // philosopher has finished eating
        test((i + 4) % 5);            // check if left neighbor can eat
        test((i + 1) % 5);            // check if right neighbor can eat
    }

    // Check if philosopher i can eat
    void test(int i) {
        if (state[i] == HUNGRY &&
            state[(i + 4) % 5] != EATING &&   // left neighbor is not eating
            state[(i + 1) % 5] != EATING) {   // right neighbor is not eating
            state[i] = EATING;                // philosopher starts eating
            self[i].signal();                 // wake up philosopher i
        }
    }
}
Solution to Dining Philosophers Using a Monitor
Working:
• When a philosopher becomes hungry, they call pickup(). The function checks whether the left and right forks are available. If so, the philosopher starts eating; if not, they wait.
• Once the philosopher finishes eating, they call putdown(). This function releases the forks and checks whether the neighbors can now eat, waking them up if they were waiting.
• The test() function ensures that a philosopher can eat only if neither neighbor is eating.

Advantages:
• No deadlock: a philosopher waits only when both forks are not available, and the monitor wakes waiting neighbors as soon as their forks become free.
• Starvation, however, is not ruled out by this scheme alone: a philosopher could in principle wait indefinitely if their two neighbors keep alternating meals.
Solution to Dining Philosophers (Cont.)
• Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

DiningPhilosophers.pickup(i);
    // eat
DiningPhilosophers.putdown(i);
L7 & L8: Software Solutions: Peterson's Solution, Bakery Algorithm

Objective:
• To understand software solutions using various algorithms
Bakery Algorithm
• The Bakery Algorithm, proposed by Leslie Lamport, is a mutual exclusion algorithm for ensuring that multiple processes can access shared resources in a concurrent system without conflicts, similar to customers waiting in line at a bakery.
• Key idea:
  • The Bakery Algorithm assigns each process a number, similar to a ticket at a bakery.
  • The process with the smallest ticket gets to enter the critical section first.
• Shared variables:
  • boolean choosing[N]; // choosing[i] indicates whether process i is in the middle of choosing a number
  • int number[N]; // number[i] is the number (ticket) assigned to process i
Steps of the Algorithm:

1. Entry section:
   • When process i wants to enter the critical section, it first sets choosing[i] to true to indicate that it is choosing a number.
   • It assigns itself a number that is one more than the maximum number assigned to any other process (or 1 if no other process has a number).
   • After getting its number, the process sets choosing[i] to false to indicate it is done choosing.
   • Process i then waits until, for every other process j:
     • j has finished choosing (choosing[j] == false), and
     • either j has no number (number[j] == 0), or j's number is larger than i's number, or the numbers are equal and j's index is greater than i's.
2. Critical section:
   • Process i enters the critical section once it has the smallest number compared to the others.
3. Exit section:
   • After finishing its work in the critical section, process i sets number[i] back to 0, signaling that it no longer wants to enter the critical section.
Bakery Algorithm (Cont.)

do {
    choosing[i] = true;
    number[i] = 1 + max(number[0], number[1], ..., number[N-1]);
    choosing[i] = false;
    for (j = 0; j < N; j++) {
        while (choosing[j]);
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
    }
    // critical section
    number[i] = 0;
    // remainder section
} while (true);

If two processes end up with the same number, the process with the smaller id (i) is favoured.

Note: (a, b) < (c, d) is equivalent to: (a < c) or ((a == c) and (b < d))
The same algorithm, annotated step by step:

choosing[i] = true;   // Step 1: process i is choosing its number
number[i] = 1 + max(number[0], number[1], ..., number[N-1]);   // Step 2: take the highest number plus one
choosing[i] = false;  // Step 3: process i has finished choosing

// Step 4: wait for its turn
for (int j = 0; j < N; j++) {
    // Wait while process j is in the middle of choosing a number
    while (choosing[j]) { /* wait */ }

    // Wait while process j has a smaller number, or the same number but a smaller index
    while ((number[j] != 0) &&
           (number[j] < number[i] || (number[j] == number[i] && j < i))) { /* wait */ }
}

// Critical section (process i enters the critical section)

number[i] = 0;   // Step 5: process i exits the critical section and releases its number
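For completeness, a compilable sketch of the bakery algorithm using C11 sequentially consistent atomics; the atomics are an assumption added here so that the busy-wait loops are not broken by compiler or CPU reordering (compile with gcc -pthread):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define N 3   /* number of threads (illustrative) */

atomic_bool choosing[N];
atomic_int  number[N];
long counter = 0;   /* shared resource */

static int max_number(void) {
    int m = 0;
    for (int k = 0; k < N; k++) {
        int v = atomic_load(&number[k]);
        if (v > m) m = v;
    }
    return m;
}

void *worker(void *arg) {
    int i = *(int *)arg;
    for (int iter = 0; iter < 10000; iter++) {
        atomic_store(&choosing[i], true);
        atomic_store(&number[i], 1 + max_number());   /* take a ticket */
        atomic_store(&choosing[i], false);
        for (int j = 0; j < N; j++) {
            while (atomic_load(&choosing[j]))
                ;                                     /* j is mid-choice */
            int nj;
            while ((nj = atomic_load(&number[j])) != 0 &&
                   (nj < atomic_load(&number[i]) ||
                    (nj == atomic_load(&number[i]) && j < i)))
                ;                                     /* (nj, j) < (ni, i) */
        }
        counter++;                                    /* critical section */
        atomic_store(&number[i], 0);                  /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, worker, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, N * 10000);
    return 0;
}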
Pros and Cons of Bakery Algorithm

• Pros:
• Simplicity: The algorithm is relatively simple to understand and implement.
• Works for Multiple Processes: Unlike simpler algorithms like Peterson’s
algorithm, the Bakery Algorithm can handle more than two processes.
• Cons:
• Unbounded Ticket Numbers: Ticket numbers can grow indefinitely, though this is
usually manageable in practice.
• Busy Waiting: Processes may continuously check the states of other processes,
leading to inefficiency (busy-wait loops).

L9: Classic Process Synchronization Problems (Case Studies)

Objective:
• To understand classic synchronization examples and case studies
Classical Problems of Synchronization

• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem

Note: Problems already discussed in previous slides


Thank You
