Operating Systems
ETCS 304
UNIT 2
Chapter 3: Processes
Dept of CSE, Operating Systems BVCOE NEW DELHI
LECTURE 1: Processes
Process Concept
Process Scheduling
Operations on Processes
Interprocess Communication
Examples of IPC Systems
Communication in Client-Server Systems
Objectives
To introduce the notion of a process -- a program in
execution, which forms the basis of all computation
To describe the various features of processes,
including scheduling, creation and termination, and
communication
To explore interprocess communication using shared
memory and message passing
To describe communication in client-server systems
Process Concept
An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Textbook uses the terms job and process almost
interchangeably
Process – a program in execution; process execution must
progress in sequential fashion
Multiple parts
The program code, also called text section
Current activity including program counter, processor
registers
Stack containing temporary data
Function parameters, return addresses, local variables
Data section containing global variables
Heap containing memory dynamically allocated during run time
Process State
As a process executes, it changes state
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
Diagram of Process State
Process Control Block (PCB)
Information associated with each process
(also called task control block)
Process state – running, waiting, etc
Program counter – location of the next instruction to
execute
CPU registers – contents of all process-
centric registers
CPU scheduling information- priorities,
scheduling queue pointers
Memory-management information –
memory allocated to the process
Accounting information – CPU used, clock
time elapsed since start, time limits
I/O status information – I/O devices
allocated to process, list of open files
CPU Switch From Process to Process
Threads
So far, process has a single thread of execution
Consider having multiple program counters per
process
Multiple locations can execute at once
Multiple threads of control -> threads
Must then have storage for thread details, multiple
program counters in PCB
Process Scheduling
Maximize CPU use, quickly switch processes onto CPU for
time sharing
Process scheduler selects among available processes for
next execution on CPU
Maintains scheduling queues of processes
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main
memory, ready and waiting to execute
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues
Ready Queue And Various I/O Device Queues
LECTURE 2
Representation of Process Scheduling
Queueing diagram represents queues, resources, flows
Schedulers
Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU
Sometimes the only scheduler in a system
Short-term scheduler is invoked frequently (milliseconds) (must be fast)
Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
Long-term scheduler is invoked infrequently (seconds, minutes) (may be slow)
The long-term scheduler controls the degree of multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations, many short CPU
bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
Long-term scheduler strives for good process mix
Addition of Medium Term Scheduling
Medium-term scheduler can be added if the degree of
multiprogramming needs to decrease
Remove process from memory, store on disk, bring back in
from disk to continue execution: swapping
Context Switch
When CPU switches to another process, the system must
save the state of the old process and load the saved state
for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is overhead; the system does no useful
work while switching
The more complex the OS and the PCB, the longer the
context switch
Time dependent on hardware support
Some hardware provides multiple sets of registers per CPU
multiple contexts loaded at once
Operations on Processes
System must provide mechanisms for:
process creation,
process termination
Process Creation
Parent process creates child processes, which, in
turn, create other processes, forming a tree of
processes
Generally, process identified and managed via a
process identifier (pid)
Resource sharing options
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution options
Parent and children execute concurrently
Parent waits until children terminate
Process Termination
Process executes last statement and then asks the operating
system to delete it using the exit() system call.
Returns status data from child to parent (via wait())
Process’ resources are deallocated by operating system
Parent may terminate the execution of child processes
using the abort() system call. Some reasons for doing
so:
Child has exceeded allocated resources
Task assigned to child is no longer required
The parent is exiting and the operating system does not allow
a child to continue if its parent terminates
Process Termination
Some operating systems do not allow a child to exist if its parent
has terminated. If such a process terminates, then all its children
must also be terminated.
cascading termination. All children, grandchildren, etc. are
terminated.
The termination is initiated by the operating system.
The parent process may wait for termination of a child process by
using the wait()system call. The call returns status information
and the pid of the terminated process
pid = wait(&status);
If no parent is waiting (did not invoke wait()), the process is a
zombie
If the parent terminated without invoking wait(), the process is
an orphan
LECTURE 3
Interprocess Communication
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other
processes, including sharing data
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need interprocess communication
(IPC)
Two models of IPC
Shared memory
Message passing
Communications Models
(a) Message passing. (b) Shared memory.
Cooperating Processes
Independent process cannot affect or be affected by the
execution of another process
Cooperating process can affect or be affected by the
execution of another process
Advantages of process cooperation
Information sharing
Computation speed-up
Modularity
Convenience
Producer-Consumer Problem
Paradigm for cooperating processes, producer
process produces information that is consumed by a
consumer process
unbounded-buffer places no practical limit on the
size of the buffer
bounded-buffer assumes that there is a fixed buffer
size
Interprocess Communication – Shared Memory
An area of memory shared among the processes
that wish to communicate
The communication is under the control of the
user processes, not the operating system.
The major issue is to provide a mechanism that will
allow the user processes to synchronize their
actions when they access shared memory.
Synchronization is discussed in great detail in
Chapter 5.
Interprocess Communication – Message Passing
Mechanism for processes to communicate and to
synchronize their actions
Message system – processes communicate with each
other without resorting to shared variables
IPC facility provides two operations:
send(message)
receive(message)
The message size is either fixed or variable
Message Passing (Cont.)
If processes P and Q wish to communicate, they need to:
Establish a communication link between them
Exchange messages via send/receive
Synchronization
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
Blocking send -- the sender is blocked until the message is
received
Blocking receive -- the receiver is blocked until a message
is available
Non-blocking is considered asynchronous
Non-blocking send -- the sender sends the message and
continues
Non-blocking receive -- the receiver receives:
A valid message, or
Null message
Different combinations possible
If both send and receive are blocking, we have a rendezvous
Buffering
Queue of messages attached to the link,
implemented in one of three ways:
1. Zero capacity – no messages are queued on a link.
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
ONLINE RESOURCES:
https://www.youtube.com/watch?v=g_kLI2pkNC4&list=PL3-
wYxbt4yCjpcfUDz-TgD_ainZ2K3MUZ&index=12
GATE QUESTIONS
1. Which of the following need not necessarily be saved on a context switch between processes?
(a) General purpose registers
(b) Translation look-aside buffer
(c) Program counter
(d) All of the above
Answer: (b)
2. The maximum number of processes that can be in Ready state for a computer system
with n CPUs is
(A) n
(B) n2
(C) 2n
(D) Independent of n
3. The following C program is executed on a Unix/Linux system:
#include <unistd.h>
int main()
{
    int i;
    for (i = 0; i < 10; i++)
        if (i % 2 == 0) fork();
    return 0;
}
The total number of child processes created is ________.
4. Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a
maximum of K instances. Resource instances can be requested and released only one at a time. The largest value
of K that will always avoid deadlock is ____.
Chapter 4: Threads
Overview
Multicore Programming
Multithreading Models
Thread Libraries
Implicit Threading
Threading Issues
Operating System Examples
Objectives
To introduce the notion of a thread—a fundamental
unit of CPU utilization that forms the basis of
multithreaded computer systems
To discuss the APIs for the Pthreads, Windows, and
Java thread libraries
To explore several strategies that provide implicit
threading
To examine issues related to multithreaded
programming
To cover operating system support for threads in
Windows and Linux
Motivation
Most modern applications are multithreaded
Threads run within application
Multiple tasks within the application can be implemented
by separate threads
Update display
Fetch data
Spell checking
Answer a network request
Process creation is heavy-weight while thread creation is
light-weight
Can simplify code, increase efficiency
Kernels are generally multithreaded
Multithreaded Server Architecture
Benefits
Responsiveness – may allow continued execution if
part of process is blocked, especially important for user
interfaces
Resource Sharing – threads share resources of
process, easier than shared memory or message passing
Economy – cheaper than process creation, thread
switching lower overhead than context switching
Scalability – process can take advantage of
multiprocessor architectures
Single and Multithreaded Processes
User Threads and Kernel Threads
User threads - management done by user-level threads library
Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
Kernel threads - Supported by the Kernel
Examples – virtually all general purpose operating systems,
including:
Windows
Solaris
Linux
Tru64 UNIX
Mac OS X
Multithreading Models
Many-to-One
One-to-One
Many-to-Many
Many-to-One
Many user-level threads mapped to
single kernel thread
One thread blocking causes all to
block
Multiple threads may not run in
parallel on a multicore system because
only one may be in the kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel
thread
More concurrency than many-to-one
Number of threads per process sometimes
restricted due to overhead
Examples
Windows
Linux
Solaris 9 and later
Many-to-Many Model
Allows many user level threads
to be mapped to many kernel
threads
Allows the operating system to
create a sufficient number of
kernel threads
Solaris prior to version 9
Windows with the ThreadFiber
package
Two-level Model
Similar to M:M, except that it allows a user
thread to be bound to kernel thread
Examples
IRIX
HP-UX
Tru64 UNIX
Solaris 8 and earlier
GATE QUESTIONS
1. Which one of the following is FALSE?
(A) User level threads are not scheduled by the kernel.
(B) When a user level thread is blocked, all other threads of its process are blocked.
(C) Context switching between user level threads is faster than context switching between kernel
level threads.
(D) Kernel level threads cannot share the code segment.
2. A multithreaded program P executes with x number of threads and uses y number of locks for
ensuring mutual exclusion while operating on shared memory locations. All locks in the program
are non-reentrant, i.e., if a thread holds a lock l, then it cannot re-acquire lock l without releasing
it. If a thread is unable to acquire a lock, it blocks until the lock becomes available.
The minimum value of x and the minimum value of y together for which execution of P can result in a
deadlock are:
(A) x=1, y=2
(B) x=2, y=1
(C) x=2,y=2
(D) x=1, y=1
GATE QUESTIONS
3. A thread is usually defined as a "light weight process" because an operating system (OS) maintains smaller data
structures for a thread than for a process. In relation to this, which of the following is TRUE?
(A) On per-thread basis, the OS maintains only CPU register state
(B)The OS does not maintain a separate stack for each thread
(C) On per-thread basis, the OS does not maintain virtual memory state
(D) On per-thread basis, the OS maintains only scheduling and accounting information
Chapter 6: CPU Scheduling
LECTURE 1: CPU Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Objectives
To introduce CPU scheduling, which is the basis for
multiprogrammed operating systems
To describe various CPU-scheduling algorithms
To discuss evaluation criteria for selecting a CPU-
scheduling algorithm for a particular system
To examine the scheduling algorithms of several
operating systems
Basic Concepts
Maximum CPU utilization
obtained with
multiprogramming
CPU–I/O Burst Cycle –
Process execution consists of
a cycle of CPU execution
and I/O wait
CPU burst followed by I/O
burst
CPU burst distribution is of
main concern
Histogram of CPU-burst Times
CPU Scheduler
Short-term scheduler selects from among the processes
in ready queue, and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
Dispatcher
Dispatcher module gives control of the CPU to the
process selected by the short-term scheduler; this
involves:
switching context
switching to user mode
jumping to the proper location in the user program to
restart that program
Dispatch latency – time it takes for the
dispatcher to stop one process and start another
running
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their
execution per time unit
Turnaround time – amount of time to execute a
particular process
Waiting time – amount of time a process has been
waiting in the ready queue
Response time – amount of time it takes from when a
request was submitted until the first response is
produced, not output (for time-sharing environment)
LECTURE 2
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1: 0–24 | P2: 24–27 | P3: 27–30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
The Gantt chart for the schedule is:
P2: 0–3 | P3: 3–6 | P1: 6–30
Waiting time for P1 = 6;P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect - short process behind long process
Consider one CPU-bound and many I/O-bound processes
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU
burst
Use these lengths to schedule the process with the
shortest time
SJF is optimal – gives minimum average waiting time
for a given set of processes
The difficulty is knowing the length of the next CPU
request
Could ask the user
Example of SJF
Process    Arrival Time    Burst Time
P1         0.0             6
P2         2.0             8
P3         4.0             7
P4         5.0             3
SJF scheduling chart
P4: 0–3 | P1: 3–9 | P3: 9–16 | P2: 16–24
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
Shortest-remaining-time-first
Now we add the concepts of varying arrival times and preemption to
the analysis
Process    Arrival Time    Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5
Preemptive SJF Gantt Chart
P1: 0–1 | P2: 1–5 | P4: 5–10 | P1: 10–17 | P3: 17–26
Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 =
6.5 msec
LECTURE 3
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
(smallest integer = highest priority)
Preemptive
Nonpreemptive
SJF is priority scheduling where priority is the inverse of
predicted next CPU burst time
Problem: Starvation – low priority processes may never
execute
Solution: Aging – as time progresses, increase the priority of
the process
Example of Priority Scheduling
Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             4
P4         1             5
P5         5             2
Priority scheduling Gantt chart: P2: 0–1 | P5: 1–6 | P1: 6–16 | P3: 16–18 | P4: 18–19
Average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 8.2 msec
Round Robin (RR)
Each process gets a small unit of CPU time (time
quantum q), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to the end
of the ready queue.
If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once. No process waits
more than (n-1)q time units.
Timer interrupts every quantum to schedule next process
Performance
q large – behaves like FIFO
q small – q must be large with respect to context-switch
time, otherwise overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:
P1: 0–4 | P2: 4–7 | P3: 7–10 | P1: 10–14 | P1: 14–18 | P1: 18–22 | P1: 22–26 | P1: 26–30
Typically, higher average turnaround than SJF, but better
response
q should be large compared to context switch time
q usually 10ms to 100ms, context switch < 10 usec
Time Quantum and Context Switch Time
Real-Time CPU Scheduling
Can present obvious challenges
Soft real-time systems – no
guarantee as to when critical real-
time process will be scheduled
Hard real-time systems – task
must be serviced by its deadline
Two types of latencies affect
performance
1. Interrupt latency – time from
arrival of interrupt to start of
routine that services interrupt
2. Dispatch latency – time for the
scheduler to take the current process
off the CPU and switch to another
Algorithm Evaluation
How to select CPU-scheduling algorithm for an OS?
Determine criteria, then evaluate algorithms
Deterministic modeling
Type of analytic evaluation
Takes a particular predetermined workload and defines the
performance of each algorithm for that workload
Consider 5 processes arriving at time 0:
ONLINE RESOURCES
https://www.youtube.com/watch?v=4hCih9eLc7M&list=P
L3-wYxbt4yCjpcfUDz-TgD_ainZ2K3MUZ&index=19
GATE QUESTIONS
1. Consider a set of n tasks with known runtimes r1, r2, … rn to be run on a uniprocessor
machine. Which of the following processor scheduling algorithms will result in the maximum
throughput?
(a) Round-Robin
(b) Shortest-Job-First
(c) Highest-Response-Ratio-Next
(d) First-Come-First-Served
2. Which scheduling policy is most suitable for a time shared operating system?
(a) Shortest Job First
(b) Round Robin
(c) First Come First Serve
(d) Elevator
3. Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at
the same time to a computer system. Which one of the following process scheduling algorithms
would minimize the average waiting time in the ready queue?
(A) Shortest remaining time first
(B) Round-robin with time quantum less than the shortest CPU burst
(C) Uniform random
(D) Highest priority first with priority proportional to CPU burst length
GATE QUESTIONS
4. Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8
time units. All processes arrive at time zero. Consider the longest remaining time first (LRTF)
scheduling algorithm. In LRTF ties are broken by giving priority to the process with the lowest
process id. The average turn around time is:
(A) 13 units
(B) 14 units
(C) 15 units
(D) 16 units
5. Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at
times 0, 2 and 6, respectively. How many context switches are needed if the operating system
implements a shortest remaining time first scheduling algorithm? Do not count the context
switches at time zero and at the end.
(A) 1
(B) 2
(C) 3
(D) 4
Chapter 5: Process Synchronization
LECTURE 1: Process Synchronization
Background
The Critical-Section Problem
Peterson’s Solution
Synchronization Hardware
Mutex Locks
Semaphores
Classic Problems of Synchronization
Monitors
Synchronization Examples
Alternative Approaches
Objectives
To present the concept of process synchronization.
To introduce the critical-section problem, whose solutions
can be used to ensure the consistency of shared data
To present both software and hardware solutions of the
critical-section problem
To examine several classical process-synchronization
problems
To explore several tools that are used to solve process
synchronization problems
Background
Processes can execute concurrently
May be interrupted at any time, partially completing
execution
Concurrent access to shared data may result in data
inconsistency
Maintaining data consistency requires mechanisms to
ensure the orderly execution of cooperating processes
Illustration of the problem:
Suppose that we wanted to provide a solution to the
consumer-producer problem that fills all the buffers. We
can do so by having an integer counter that keeps track
of the number of full buffers. Initially, counter is set to
0. It is incremented by the producer after it produces a
new buffer and is decremented by the consumer after it
consumes a buffer.
Producer
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Race Condition
counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
Consider this execution interleaving with “counter = 5” initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
Critical Section Problem
Consider system of n processes {p0, p1, … pn-1}
Each process has critical section segment of code
Process may be changing common variables, updating
table, writing file, etc
When one process in critical section, no other may be in
its critical section
Critical section problem is to design protocol to
solve this
Each process must ask permission to enter critical
section in entry section, may follow critical section
with exit section, then remainder section
Critical Section
General structure of process Pi
Algorithm for Process Pi
do {
    while (turn == j)
        ;
    /* critical section */
    turn = j;
    /* remainder section */
} while (true);
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes
that will enter the critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes
Critical-Section Handling in OS
Two approaches depending on if kernel is preemptive or
non- preemptive
Preemptive – allows preemption of process when running
in kernel mode
Non-preemptive – runs until exits kernel mode, blocks, or
voluntarily yields CPU
Essentially free of race conditions in kernel mode
Peterson’s Solution
Good algorithmic description of solving the problem
Two process solution
Assume that the load and store machine-language instructions are
atomic; that is, cannot be interrupted
The two processes share two variables:
int turn;
Boolean flag[2]
The variable turn indicates whose turn it is to enter the critical section
The flag array is used to indicate if a process is ready to enter the critical
section. flag[i] = true implies that process Pi is ready!
LECTURE 2
Algorithm for Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Peterson’s Solution (Cont.)
Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] = false or turn = i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Solution to Critical-section Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
LECTURE 3
test_and_set Instruction
Definition:
boolean test_and_set (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
1. Executed atomically
2. Returns the original value of the passed parameter
3. Sets the new value of the passed parameter to “TRUE”.
Solution using test_and_set()
Shared Boolean variable lock, initialized to FALSE
Solution:
do {
while (test_and_set(&lock))
; /* do nothing */
/* critical section */
lock = false;
/* remainder section */
} while (true);
Bounded-waiting Mutual Exclusion with test_and_set
do {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = test_and_set(&lock);
waiting[i] = false;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false;
else
waiting[j] = false;
/* remainder section */
} while (true);
LECTURE 4
Semaphore
Synchronization tool that provides more sophisticated ways (than mutex locks) for processes to
synchronize their activities.
Semaphore S – integer variable
Can only be accessed via two indivisible (atomic) operations
wait() and signal()
Originally called P() and V()
Definition of the wait() operation
wait(S) {
while (S <= 0)
; // busy wait
S--;
}
Definition of the signal() operation
signal(S) {
S++;
}
Semaphore Usage
Counting semaphore – integer value can range over an unrestricted domain
Binary semaphore – integer value can range only between 0 and 1
Same as a mutex lock
Can solve various synchronization problems
Consider P1 and P2 that require S1 to happen before S2
Create a semaphore “synch” initialized to 0
P1:
S1;
signal(synch);
P2:
wait(synch);
S2;
Can implement a counting semaphore S as a binary semaphore
Semaphore Implementation
Must guarantee that no two processes can execute the
wait() and signal() on the same semaphore at the
same time
Thus, the implementation becomes the critical section
problem where the wait and signal code are placed in
the critical section
Could now have busy waiting in critical section
implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical
sections and therefore this is not a good solution
Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes
Starvation – indefinite blocking
A process may never be removed from the semaphore queue in which it is
suspended
Priority Inversion – Scheduling problem when lower-priority process holds a lock
needed by higher-priority process
Solved via priority-inheritance protocol
LECTURE 5
Classical Problems of
Synchronization
Classical problems used to test newly-proposed
synchronization schemes
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem
Bounded-Buffer Problem
n buffers, each can hold one item
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value n
Bounded Buffer Problem (Cont.)
The structure of the producer process
do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);
Bounded Buffer Problem (Cont.)
The structure of the consumer process
do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write
Problem – allow multiple readers to read at the same time
Only one single writer can access the shared data at the same time
Several variations of how readers and writers are considered – all involve some form of
priorities
Shared Data
Data set
Semaphore rw_mutex initialized to 1
Semaphore mutex initialized to 1
Integer read_count initialized to 0
Readers-Writers Problem (Cont.)
The structure of a writer process
do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);
Readers-Writers Problem (Cont.)
The structure of a reader process
do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex);
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex);
signal(mutex);
} while (true);
Readers-Writers Problem
Variations
First variation – no reader is kept waiting unless a
writer has already obtained permission to use the shared object
Second variation – once writer is ready, it
performs the write ASAP
Both may have starvation leading to even more
variations
Problem is solved on some systems by kernel
providing reader-writer locks
Dining-Philosophers Problem
Philosophers spend their lives alternating thinking and eating
Don’t interact with their neighbors; occasionally try to pick up 2 chopsticks (one at a
time) to eat from the bowl
Need both to eat, then release both when done
In the case of 5 philosophers
Shared data
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem Algorithm
The structure of Philosopher i:
do {
wait(chopstick[i]);
wait(chopstick[(i + 1) % 5]);
// eat
signal(chopstick[i]);
signal(chopstick[(i + 1) % 5]);
// think
} while (TRUE);
What is the problem with this algorithm?
Dining-Philosophers Problem Algorithm (Cont.)
Deadlock handling
Allow at most 4 philosophers to be sitting
simultaneously at the table.
Allow a philosopher to pick up the chopsticks only if
both are available (picking them up must be done in a
critical section).
Use an asymmetric solution -- an odd-numbered
philosopher picks up first the left chopstick and
then the right chopstick. Even-numbered
philosopher picks up first the right chopstick and
then the left chopstick.
LECTURE 6
Problems with Semaphores
Incorrect use of semaphore operations:
signal (mutex) …. wait (mutex)
wait (mutex) … wait (mutex)
Omitting wait (mutex) or signal (mutex) (or both)
Deadlock and starvation are possible.
Monitors
A high-level abstraction that provides a convenient and effective mechanism for process
synchronization
Abstract data type; internal variables are accessible only by code within the monitor’s procedures
Only one process may be active within the monitor at a time
But not powerful enough to model some synchronization schemes
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }
procedure Pn (…) {……}
Initialization code (…) { … }
}
Schematic view of a Monitor
Condition Variables
condition x, y;
Two operations are allowed on a condition variable:
x.wait() – a process that invokes the operation is
suspended until x.signal()
x.signal() – resumes one of processes (if any) that
invoked x.wait()
If no process has invoked x.wait() on the variable, then x.signal() has no effect on the
variable
ONLINE RESOURCES:
https://www.youtube.com/watch?v=qKPCfuZAAEc&list=PL3-wYxbt4yCjpcfUDz-TgD_ainZ2K3MUZ&index=25
https://www.youtube.com/watch?v=I-xQpuZ_UFM&list=PL3-wYxbt4yCjpcfUDz-TgD_ainZ2K3MUZ&index=30
GATE QUESTIONS
1. Let m[0] … m[4] be mutexes (binary semaphores) and P[0] … P[4] be processes.
Suppose each process P[i] executes the following:
wait (m[i]); wait (m[(i+1) mod 4]);
release (m[i]); release (m[(i+1) mod 4]);
This could cause (GATE CS 2000)
(a) Thrashing
(b) Deadlock
(c) Starvation, but not deadlock
(d) None of the above
Answer: (b)
GATE QUESTIONS
2. Consider Peterson’s algorithm for mutual exclusion between two concurrent processes i and j. The program
executed by process i is shown below.
repeat
flag [i] = true;
turn = j;
while ( P ) do no-op;
Enter critical section, perform actions, then exit critical
section
flag [ i ] = false;
Perform other non-critical section actions.
until false;
For the program to guarantee mutual exclusion, the predicate P in the while loop should be (GATE 2001)
a) flag [j] = true and turn = i
b) flag [j] = true and turn = j
c) flag [i] = true and turn = j
d) flag [i] = true and turn = i
Answer: (b)
GATE QUESTIONS
3. Three concurrent processes X, Y, and Z execute three different code segments that access and update certain
shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the
P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d, and a before
entering the respective code segments. After completing the execution of its code segment, each process invokes
the V operation (i.e., signal) on its three semaphores. All semaphores are binary semaphores initialized to one.
Which one of the following represents a deadlock-free order of invoking the P operations by the processes? (GATE
CS 2013)
(a) X: P(a)P(b)P(c)  Y: P(b)P(c)P(d)  Z: P(c)P(d)P(a)
(b) X: P(b)P(a)P(c)  Y: P(b)P(c)P(d)  Z: P(a)P(c)P(d)
(c) X: P(b)P(a)P(c)  Y: P(c)P(b)P(d)  Z: P(a)P(c)P(d)
(d) X: P(a)P(b)P(c)  Y: P(c)P(b)P(d)  Z: P(c)P(d)P(a)
GATE QUESTIONS
4. A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows. Each of the processes W
and X reads x from memory, increments by one, stores it to memory, and then terminates. Each of the processes Y and Z reads x
from memory, decrements by two, stores it to memory, and then terminates. Each process before reading x invokes the P operation
(i.e., wait) on a counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete execution?
(a) -2
(b) -1
(c) 1
(d) 2