OS Module 2
MODULE: 2
CONTENTS:
Process Management:
• Process concept
• Process scheduling
• Operations on processes
• Inter process communication
Multi-threaded Programming:
• Multithreading models
• Thread Libraries
• Threading issues.
Process Scheduling:
• Basic concepts
• Scheduling Criteria
• Scheduling Algorithms
• Multiple-processor scheduling
• Thread scheduling.
Process Synchronization:
• Synchronization: The critical section problem;
• Peterson’s solution;
• Synchronization hardware;
• Semaphores;
• Classical problems of synchronization;
• Monitors
PROCESS MANAGEMENT
Process Concept
The Process
Process memory is divided into four sections as shown in the figure below:
• The stack is used to store temporary data such as local variables, function parameters, function
return values, return address etc.
• The heap which is memory that is dynamically allocated during process run time
• The data section stores global variables.
• The text section comprises the compiled program code.
• Note that there is a free space (gap) between the stack and the heap: the stack grows downward into the gap, and the heap grows upward into it.
Shanthi K N, Dept of AIML, SUIET, Mukka
Module II, Operating Systems (23SAL052)
Process State
Q) Illustrate with a neat sketch, the process states and process control block.
A process has five states. Each process may be in one of the following states:
• New – the process is being created.
• Ready – the process is waiting to be assigned to a processor.
• Running – instructions are being executed.
• Waiting – the process is waiting for some event to occur (such as an I/O completion).
• Terminated – the process has finished execution.
For each process there is a Process Control Block (PCB), which stores the process-specific information
as shown below –
• Process State – The state of the process may be new, ready, running, waiting, and so on.
• Program counter – The counter indicates the address of the next instruction to be executed for
this process.
• CPU registers - The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-purpose
registers. Along with the program counter, this state information must be saved when an
interrupt occurs, to allow the process to be continued correctly afterward.
• CPU scheduling information- This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
• Memory-management information – This includes information such as the value of the base
and limit registers, the page tables, or the segment tables.
• Accounting information – This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
• I/O status information – This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
The PCB simply serves as the repository for any information that may vary from process to process.
Process Scheduling
Scheduling Queues
• As processes enter the system, they are put into a job queue, which consists of all processes in
the system.
• The processes that are residing in main memory and are ready and waiting to execute are kept
on a list called the ready queue. This queue is generally stored as a linked list.
• A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready queue.
Schedulers
Schedulers are software components that select a process to be assigned to the CPU.
• The long-term scheduler, or job scheduler – selects jobs from the job pool (on secondary storage, i.e. disk) and loads them into memory. If more processes are submitted than can be executed immediately, the excess processes wait in secondary storage. It runs infrequently, and can take time to select the next process.
• The short-term scheduler, or CPU scheduler – selects a process from the ready queue in memory and assigns the CPU to it. It must select a new process for the CPU frequently.
• The medium-term scheduler – swaps processes out of memory and later reintroduces (swaps in) them to the ready queue.
An efficient scheduling system will select a good mix of CPU-bound processes and I/O-bound processes.
• If the scheduler selects mostly I/O-bound processes, the I/O queues fill up while the ready queue sits empty.
• If the scheduler selects mostly CPU-bound processes, the ready queue fills up while the I/O queues sit empty.
Time-sharing systems employ a medium-term scheduler. It swaps processes out of the ready queue and later swaps them back in. When the system load gets high, this scheduler swaps one or more processes out of the ready queue for a few seconds, in order to allow smaller, faster jobs to finish up quickly and clear the system.
Context switching
• The task of switching a CPU from one process to another process is called context switching.
Context-switch times are highly dependent on hardware support (Number of CPU registers).
• Whenever an interrupt occurs (hardware or software interrupt), the state of the currently
running process is saved into the PCB and the state of another process is restored from the PCB
to the CPU.
• Context switch time is an overhead, as the system does not do useful work while switching.
Operations on Processes
Q) Demonstrate the operations of process creation and process termination in UNIX.
Process Creation
• A process may create several new processes. The creating process is called a parent process,
and the new processes are called the children of that process. Each of these new processes
may in turn create other processes. Every process has a unique process ID.
• On typical Solaris systems, the process at the top of the tree is the ‘sched’ process with PID
of 0. The ‘sched’ process creates several children processes – init, pageout and fsflush.
Pageout and fsflush are responsible for managing memory and file systems. The init process
with a PID of 1, serves as a parent process for all user processes. A
process will need certain resources (CPU time, memory, files, I/O devices) to accomplish its
task. When a process creates a subprocess, the subprocess may be able to obtain its resources in
two ways:
• directly from the operating system
• Subprocess may take the resources of the parent process. The resource can be taken from
parent in two ways –
The parent may have to partition its resources among its children, or
the parent may share some of its resources among several children.
There are two options for the parent process after creating the child:
• Wait for the child process to terminate and then continue execution. The parent makes a
wait() system call.
• Run concurrently with the child, continuing to execute without waiting.
Two possibilities for the address space of the child relative to the parent:
• The child may be an exact duplicate of the parent, sharing the same program and data
segments in memory. Each will have their own PCB, including program counter, registers,
and PID. This is the behaviour of the fork system call in UNIX.
• The child process may have a new program loaded into its address space, with all new code
and data segments. This is the behaviour of the spawn system calls in Windows.
In UNIX, a child process is created by the fork() system call. If successful, fork() returns the PID of the child process to the parent and returns zero to the child process; on failure, it returns -1 to the parent. The process IDs of the current process and of its direct parent can be obtained using the getpid() and getppid() system calls respectively.
The parent waits for the child process to complete with the wait() system call. When the child
process completes, the parent process resumes and completes its execution.
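The fork()/wait() pattern described above can be sketched in C as below (POSIX systems only; the helper name spawn_and_wait and the exit code passed to it are illustrative, not part of the text):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that immediately exits with `code`; the parent waits
   for it and returns the child's exit status (-1 on any failure). */
int spawn_and_wait(int code) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;            /* fork failed: -1 returned to the parent */
    if (pid == 0) {
        /* child: fork() returned 0 here */
        _exit(code);
    }
    /* parent: fork() returned the child's PID */
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The parent resumes only after waitpid() reports that the child has completed, matching the description above.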
In Windows, a child process is created using the CreateProcess() function. CreateProcess() returns a nonzero value if the child is created and zero if it is not.
Process Termination
• A process terminates when it finishes executing its last statement and asks the operating
system to delete it, by using the exit () system call. All of the resources assigned to the
process like memory, open files, and I/O buffers, are deallocated by the operating system.
• A process can cause the termination of another process by using an appropriate system call. The parent process can terminate its child processes if it knows their PIDs.
• A parent may terminate the execution of its children for a variety of reasons, such as:
• The child has exceeded its usage of the resources it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system terminates all the children. This is called
cascading termination.
Interprocess Communication
Q) What is interprocess communication? Explain types of IPC.
Processes executing concurrently may cooperate with one another. There are several reasons for providing an environment that allows process cooperation:
• Information Sharing – several processes may need to access the same file, so the information must be accessible at the same time to all of them.
• Computation speedup - Often a solution to a problem can be solved faster if the problem can
be broken down into sub-tasks, which are solved simultaneously (particularly when multiple
processors are involved.)
• Modularity - A system can be divided into cooperating modules and executed by sending
information among one another.
• Convenience - Even a single user can work on multiple tasks by information sharing.
Cooperating processes require some type of inter-process communication. This is allowed by two
models:
1. Shared Memory systems
2. Message passing systems.
• Message Passing requires system calls for every message transfer, and is therefore slower, but
it is simpler to set up and works well across multiple computers. Message passing is generally
preferable when the amount and/or frequency of data transfers is small.
Shared-Memory Systems
• A region of shared memory is created within the address space of the process that needs to communicate. Other processes that need to communicate attach this shared-memory region to their own address space.
• The form of the data and the location of the shared-memory area are decided by the processes. Generally, a few messages must be passed back and forth between the cooperating processes first, in order to set up and coordinate the shared-memory access.
• The processes must take care that they do not write to the shared memory at the same time.
• The producer-consumer problem is a classic example, in which one process produces data and another process consumes it.
• The data is passed via an intermediary buffer (shared memory). The producer puts the data to
the buffer and the consumer takes out the data from the buffer. A producer can produce one
item while the consumer is consuming another item. The producer and consumer must be
synchronized, so that the consumer does not try to consume an item that has not yet been
produced. In this situation, the consumer must wait until an item is produced. There are two
types of buffers into which information can be put –
• Unbounded buffer
• Bounded buffer
• With an unbounded buffer, there is no limit on the size of the buffer, and hence no limit on how much the producer can produce; but the consumer may still have to wait for new items.
• With a bounded buffer, the buffer size is fixed. The producer has to wait if the buffer is full, and the consumer has to wait if the buffer is empty.
This example uses the shared memory as a circular queue. The variables in and out are two indices into the array: only the producer changes in, and only the consumer changes out.
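The circular-queue scheme can be sketched in C as below (a single-process stand-in for real shared memory; BUFFER_SIZE and the function names are illustrative). One slot is kept empty so that in == out means empty and (in + 1) % BUFFER_SIZE == out means full:

```c
#define BUFFER_SIZE 8

/* Shared circular buffer: only the producer changes `in`,
   only the consumer changes `out`. */
static int buffer[BUFFER_SIZE];
static int in = 0;    /* next free slot, advanced by the producer */
static int out = 0;   /* next full slot, advanced by the consumer */

/* Producer: returns 0 on success, -1 when the buffer is full
   (a real producer would wait instead). */
int produce(int item) {
    if ((in + 1) % BUFFER_SIZE == out)
        return -1;                       /* buffer full */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return 0;
}

/* Consumer: returns 0 on success, -1 when the buffer is empty
   (a real consumer would wait instead). */
int consume(int *item) {
    if (in == out)
        return -1;                       /* buffer empty */
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 0;
}
```

With a bounded buffer of size 8, at most 7 items can be outstanding at once; the -1 returns mark the points where the real producer or consumer would block.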
Message-Passing Systems
A mechanism to allow process communication without sharing address space. It is used in distributed
systems.
• Message passing systems uses system calls for "send message" and "receive message".
• A communication link must be established between the cooperating processes before messages
can be sent.
• There are three design decisions for the link between the sender and the receiver:
o Direct or indirect communication (naming)
o Synchronous or asynchronous communication (synchronization)
o Automatic or explicit buffering.
1. Naming
a) Direct communication – the sender and receiver must explicitly name each other. The syntax of the send() and receive() functions is as follows:
• send(P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
Disadvantages of direct communication – any changes in the identifier of a process, may have to change
the identifier in the whole system (sender and receiver), where the messages are sent and received.
b) Indirect communication – a mailbox or port is used to send and receive messages. A mailbox is an object into which messages can be placed and from which messages can be removed. Each mailbox has a unique ID, and messages are sent and received using this identifier.
Two processes can communicate only if they have a shared mailbox. The send and receive functions are
–
• send (A, message) – send a message to mailbox A
• receive (A, message) – receive a message from mailbox A
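As an illustrative sketch of indirect communication, the C code below models mailboxes identified by an integer ID (the sizes and the names mbox_send/mbox_receive are assumptions for this sketch; in a real system these would be system calls operating on kernel-managed mailboxes):

```c
#include <string.h>

#define MAX_MAILBOXES 4
#define MAX_MSGS 16
#define MSG_LEN 64

/* A mailbox: a FIFO queue of fixed-length messages with a unique id
   (here, simply its index in the `boxes` array). */
typedef struct {
    char msgs[MAX_MSGS][MSG_LEN];
    int head, tail, count;
} mailbox_t;

static mailbox_t boxes[MAX_MAILBOXES];

/* send(A, message): deposit a message in mailbox A; -1 if full/bad id. */
int mbox_send(int A, const char *message) {
    if (A < 0 || A >= MAX_MAILBOXES) return -1;
    mailbox_t *mb = &boxes[A];
    if (mb->count == MAX_MSGS) return -1;
    strncpy(mb->msgs[mb->tail], message, MSG_LEN - 1);
    mb->msgs[mb->tail][MSG_LEN - 1] = '\0';
    mb->tail = (mb->tail + 1) % MAX_MSGS;
    mb->count++;
    return 0;
}

/* receive(A, message): remove the oldest message from mailbox A;
   -1 if the mailbox is empty or the id is invalid. */
int mbox_receive(int A, char *message) {
    if (A < 0 || A >= MAX_MAILBOXES) return -1;
    mailbox_t *mb = &boxes[A];
    if (mb->count == 0) return -1;
    strcpy(message, mb->msgs[mb->head]);
    mb->head = (mb->head + 1) % MAX_MSGS;
    mb->count--;
    return 0;
}
```

Note that the two communicating parties only ever name the shared mailbox A, never each other.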
2. Synchronization
The send and receive operations can be implemented as either blocking or non-blocking:
Blocking (synchronous) send – the sending process is blocked (waits) until the message is received by the receiving process or the mailbox.
Non-blocking (asynchronous) send – the sender sends the message and continues (does not wait).
Blocking (synchronous) receive – the receiver blocks until a message is available.
Non-blocking (asynchronous) receive – the receiver retrieves either a valid message or a null.
3. Buffering
When messages are passed, a temporary queue is created. Such queue can be of three capacities:
Zero capacity – The buffer size is zero (buffer does not exist). Messages are not stored in the
queue. The senders must block until receivers accept the messages.
Bounded capacity – The queue is of fixed size (n). The sender must block if the queue is full; after sending n messages without an intervening receive, the sender is blocked.
Unbounded capacity - The queue is of infinite capacity. The sender never blocks.
Questions:
1. What is Interprocess communication? Explain direct and indirect communication with respect to message passing system.
2. Describe a mechanism for enforcing memory protection in order to prevent a program from
modifying the memory associated with other programs.
3. What are the tradeoffs inherent in handheld computers?
4. Distinguish between the client-server and peer-to-peer models of distributed systems.
5. Some computer systems do not provide a privileged mode of operation in hardware. Is it possible to
construct a secure operating system for these computer systems? Give arguments both that it is and
that it is not possible.
6. What are the main differences between operating systems for mainframe computers and personal
computers?
7. Identify several advantages and several disadvantages of open-source operating systems. Include the types of
people who would find each aspect to be an advantage or a disadvantage.
8. How do clustered systems differ from multiprocessor systems? What is required for two machines belonging
to a cluster to cooperate to provide a highly available service?
9. What is the main difficulty that a programmer must overcome in writing an operating system for a real-time
environment?
MULTITHREADED PROGRAMMING
• A thread is a basic unit of CPU utilization. It consists of a
thread ID
program counter (PC)
register set and stack.
• It shares with other threads belonging to the same process its code section and data section.
• A traditional (or heavyweight) process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one task at a time. Such a process is called a multithreaded process.
Benefits of multithreading:
• Resource Sharing – By default, threads share the memory (and resources) of the process to which they belong. Thus, an application is allowed to have several different threads of activity within the same address space.
• Economy – Allocating memory and resources for process creation is costly. Thus, it is more economical to create and context-switch threads.
• Utilization of Multiprocessor Architectures – In a multiprocessor architecture, threads may be running in parallel on different processors. Thus, parallelism is increased.
MULTITHREADING MODELS
• Support for threads may be provided at either
1. the user level, for user threads, or
2. the kernel level, for kernel threads.
• User threads are supported above the kernel and are managed without kernel support. Kernel threads are supported and managed directly by the OS.
• There are three ways of establishing a relationship between user threads and kernel threads:
1. Many-to-one model
2. One-to-one model and
3. Many-to-many model.
Many-to-One Model
• Many user-level threads are mapped to one kernel thread.
Advantages:
Thread management is done by the thread library in user space, so it is efficient.
Disadvantages:
The entire process will block if a thread makes a blocking system call.
Multiple threads are unable to run in parallel on multiprocessors.
• For example:
Solaris green threads
GNU portable threads.
Fig: Many-to-one model
One-to-One Model
• Each user thread is mapped to a kernel thread.
Advantages:
It provides more concurrency by allowing another thread to run when a thread makes a blocking system call.
Multiple threads can run in parallel on multiprocessors.
Disadvantage:
Creating a user thread requires creating the corresponding kernel thread.
• For example:
Windows NT/XP/2000, Linux
Tru64 UNIX
Many-to-Many Model
• Many user-level threads are multiplexed onto a smaller or equal number of kernel threads.
Advantages:
Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
When a thread performs a blocking system call, the kernel can schedule another thread for execution.
• A variation, the two-level model, additionally allows a user thread to be bound to a kernel thread.
Fig: Many-to-many model Fig: Two-level model
Thread Libraries
• A thread library provides the programmer with an API for the creation and management of threads.
• Three main thread libraries are in use today:
1. Pthreads
2. Win32 and
3. Java.
Pthreads
• This is a POSIX standard API for thread creation and synchronization.
• This is a specification for thread behavior, not an implementation.
• OS designers may implement the specification in any way they wish.
• Commonly used in: UNIX and Solaris.
Win32 threads
• Implements the one-to-one mapping.
• Each thread contains
a thread id
a register set
separate user and kernel stacks
a private data storage area
• The register set, stacks, and private storage area are known as the context of the thread.
• The primary data structures of a thread include:
ETHREAD (executive thread block)
KTHREAD (kernel thread block)
TEB (thread environment block)
Java Threads
• Threads are the fundamental model of program execution in a Java program, and the Java language and its API provide a rich set of features for the creation and management of threads.
• All Java programs comprise at least a single thread of control.
• Two techniques for creating threads:
1. Create a new class that is derived from the Thread class and override its run() method.
2. Define a class that implements the Runnable interface. The Runnable interface is defined as follows:
public interface Runnable
{
public abstract void run();
}
THREADING ISSUES
The fork() and exec() System Calls
• If one thread in a program calls fork(), does the new process duplicate all threads, or is it single-threaded? Some UNIX systems provide two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked fork().
• If a thread invokes exec(), the program specified in the parameter to exec() will replace the entire process, including all threads.
Thread Cancellation
• This is the task of terminating a thread before it has completed.
• The target thread is the thread that is to be cancelled.
• Thread cancellation occurs in two different modes:
1. Asynchronous cancellation: One thread immediately terminates the target thread.
2. Deferred cancellation: The target thread periodically checks whether it should be terminated.
Signal Handling
• In UNIX, a signal is used to notify a process that a particular event has occurred.
• All signals follow this pattern:
1. A signal is generated by the occurrence of a certain event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
• A signal handler is used to process signals.
• A signal may be received either synchronously or asynchronously, depending on the source.
1. Synchronous signals
Delivered to the same process that performed the operation causing the signal.
E.g. illegal memory access and division by 0.
2. Asynchronous signals
Generated by an event external to a running process.
E.g. a user terminating a process with specific keystrokes such as <ctrl><c>.
Every signal can be handled by one of two possible handlers:
1. A default signal handler
Run by the kernel when handling the signal.
2. A user-defined signal handler
Overrides the default signal handler.
• In single-threaded programs, delivering signals is simple (since signals are always
delivered to a process).
• In multithreaded programs, delivering signals is more complex. Then, the following
options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
THREAD POOLS
The basic idea is to
create a number of threads at process startup and
place the threads into a pool (where they sit and wait for work).
Procedure:
1. When a server receives a request, it awakens a thread from the pool.
2. If any thread is available, the request is passed to it for service.
3. Once the service is completed, the thread returns to the pool.
Advantages:
Servicing a request with an existing thread is usually faster than waiting to create a
thread.
The pool limits the number of threads that exist at any one point. The number of threads in the pool can be based on factors such as
the number of CPUs
the amount of memory and
the expected number of concurrent client requests.
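The procedure above can be sketched with POSIX threads: worker threads sleep on a condition variable until a request is queued, service it, and return to the pool. POOL_SIZE, MAX_TASKS, and all function names here are illustrative choices for the sketch:

```c
#include <pthread.h>
#include <stdatomic.h>

#define POOL_SIZE 3
#define MAX_TASKS 64

typedef void (*task_fn)(void *);
typedef struct { task_fn fn; void *arg; } task_t;

static task_t queue[MAX_TASKS];
static int q_head, q_tail, q_count;
static int shutting_down;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_nonempty = PTHREAD_COND_INITIALIZER;
static pthread_t workers[POOL_SIZE];

/* Worker: wait in the pool until a request arrives, service it,
   then return to the pool for the next one. */
static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_count == 0 && !shutting_down)
            pthread_cond_wait(&q_nonempty, &q_lock);
        if (q_count == 0) {          /* shutting down and queue drained */
            pthread_mutex_unlock(&q_lock);
            return NULL;
        }
        task_t t = queue[q_head];
        q_head = (q_head + 1) % MAX_TASKS;
        q_count--;
        pthread_mutex_unlock(&q_lock);
        t.fn(t.arg);                 /* service the request */
    }
}

static int pool_submit(task_fn fn, void *arg) {
    pthread_mutex_lock(&q_lock);
    if (q_count == MAX_TASKS) { pthread_mutex_unlock(&q_lock); return -1; }
    queue[q_tail] = (task_t){ fn, arg };
    q_tail = (q_tail + 1) % MAX_TASKS;
    q_count++;
    pthread_cond_signal(&q_nonempty);   /* awaken one pooled thread */
    pthread_mutex_unlock(&q_lock);
    return 0;
}

static atomic_int tasks_done;
static void count_task(void *arg) { (void)arg; atomic_fetch_add(&tasks_done, 1); }

/* Demo: start the pool, submit n requests, drain the queue, join the
   workers, and report how many requests were serviced. */
int pool_demo(int n) {
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < n; i++)
        pool_submit(count_task, NULL);
    pthread_mutex_lock(&q_lock);
    shutting_down = 1;
    pthread_cond_broadcast(&q_nonempty);
    pthread_mutex_unlock(&q_lock);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(workers[i], NULL);
    return atomic_load(&tasks_done);
}
```

Because workers exit only once the queue is empty, every request submitted before shutdown is serviced by one of the three pooled threads.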
SCHEDULER ACTIVATIONS
• Both the many-to-many and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application.
• One scheme for communication between the user-thread library and the kernel is known as scheduler activation.
• Scheduler activations provide upcalls – a communication mechanism from the kernel to the thread library.
• This communication allows an application to maintain the correct number of kernel threads.
PROCESS SCHEDULING
Basic Concepts
• In a single-processor system,
Only one process may run at a time.
Other processes must wait until the CPU is rescheduled.
• Objective of multiprogramming:
To have some process running at all times, in order to maximize CPU utilization.
CPU Scheduler
This scheduler
selects a waiting process from the ready queue and
allocates the CPU to it.
• The ready queue could be implemented as a FIFO queue, a priority queue, a tree, or a list.
• The records in the queues are generally process control blocks (PCBs) of the processes.
CPU Scheduling
• Four situations under which CPU scheduling decisions take place:
1. When a process switches from the running state to the waiting state. For ex;
I/O request.
2. When a process switches from the running state to the ready state. For ex:
when an interrupt occurs.
3. When a process switches from the waiting state to the ready state. For ex:
completion of I/O.
4. When a process terminates.
• Scheduling under circumstances 1 and 4 is non-preemptive; scheduling under 2 and 3 is preemptive.
Preemptive Scheduling
This is driven by the idea of prioritized computation.
Processes that are runnable may be temporarily suspended.
Disadvantages:
1. Incurs a cost associated with access to shared data.
2. Affects the design of the OS kernel.
Dispatcher
• It gives control of the CPU to the process selected by the short-term scheduler.
• The function involves:
1. Switching context
2. Switching to user mode and
3. Jumping to the proper location in the user program to restart that program.
• The dispatcher should be as fast as possible, since it is invoked during every process switch.
• Dispatch latency means the time taken by the dispatcher to
stop one process and
start another running.
SCHEDULING CRITERIA:
The choice of scheduling algorithm for a particular situation depends upon the properties of the various algorithms. Many criteria have been suggested for comparing CPU-scheduling algorithms. The criteria include the following:
1. CPU utilization: We want to keep the CPU as busy as possible. Conceptually,
CPU utilization can range from 0 to 100 percent. In a real system, it should range
from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used
system).
2. Throughput: If the CPU is busy executing processes, then work is being done.
One measure of work is the number of processes that are completed per time unit,
called throughput. For long processes, this rate may be one process per hour; for
short transactions, it may be ten processes per second.
3. Turnaround time: This is an important criterion, which tells how long it takes to execute a process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time: The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response time: In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Following are some scheduling algorithms:
1. FCFS scheduling (First Come First Served)
2. Round Robin scheduling
3. SJF scheduling (Shortest Job First)
4. SRT scheduling
5. Priority scheduling
6. Multilevel Queue scheduling and
7. Multilevel Feedback Queue scheduling
FCFS Scheduling
• The process that requests the CPU first is allocated the CPU first.
• The implementation is easily done using a FIFO queue.
• Procedure:
1. When a process enters the ready queue, its PCB is linked onto the tail of the queue.
2. When the CPU is free, the CPU is allocated to the process at the queue's head.
3. The running process is then removed from the queue.
• Advantage:
1. Code is simple to write and understand.
• Disadvantages:
1. Convoy effect: all other processes wait for one big process to get off the CPU.
2. Non-preemptive (a process keeps the CPU until it releases it).
3. Not good for time-sharing systems.
4. The average waiting time is generally not minimal.
• Example: Suppose that the processes arrive in the order P1, P2, P3.
• The Gantt chart for the schedule is as follows:
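The original burst-time table is not reproduced here, so the sketch below assumes the classic values of 24, 3, and 3 ms for P1, P2, and P3. Under FCFS, each process simply waits for the sum of the bursts ahead of it:

```c
/* Average FCFS waiting time: processes run in arrival order,
   so process i waits until all earlier bursts finish. */
double fcfs_avg_wait(const int bursts[], int n) {
    double total_wait = 0.0;
    int clock = 0;                /* time at which the CPU frees up */
    for (int i = 0; i < n; i++) {
        total_wait += clock;      /* process i has waited `clock` ms */
        clock += bursts[i];
    }
    return total_wait / n;
}
```

With bursts {24, 3, 3}, the waits are 0, 24, and 27 ms, for an average of 17 ms; had the short jobs arrived first, the average would drop sharply, which is the convoy effect in action.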
SJF Scheduling
• The CPU is assigned to the process that has the smallest next CPU burst.
• If two processes have the same length next CPU burst, FCFS scheduling is used to break the tie.
• For long-term scheduling in a batch system, we can use the process time limit specified by the user as the 'length'.
• SJF can't be implemented at the level of short-term scheduling, because there is no way to know the length of the next CPU burst.
• Advantage:
1. SJF is optimal, i.e. it gives the minimum average waiting time for a given set of processes.
• Disadvantage:
1. Determining the length of the next CPU burst.
• SJF may be either preemptive or non-preemptive. With preemptive SJF, if a newly arriving process has a shorter next CPU burst than what is left of the currently executing process, that process is preempted. Preemptive SJF is also known as SRTF scheduling (Shortest-Remaining-Time-First).
• Example (non-preemptive SJF): Consider the following set of processes, with the length of the CPU-burst time given in milliseconds.
• Example (preemptive SJF/SRTF): Consider the following set of processes, with the length of the CPU-burst time given in milliseconds.
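Since the burst-time tables are not reproduced here, the sketch below assumes four processes all arriving at time 0 with bursts of 6, 8, 7, and 3 ms. With everything available at time 0, non-preemptive SJF simply runs the bursts in ascending order:

```c
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Average non-preemptive SJF waiting time when all processes are
   available at time 0: sort bursts ascending, then each process
   waits for the sum of the (shorter) bursts scheduled before it. */
double sjf_avg_wait(const int bursts_in[], int n) {
    int bursts[64];               /* assumes n <= 64 for this sketch */
    for (int i = 0; i < n; i++) bursts[i] = bursts_in[i];
    qsort(bursts, n, sizeof(int), cmp_int);
    double total_wait = 0.0;
    int clock = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;
        clock += bursts[i];
    }
    return total_wait / n;
}
```

For bursts {6, 8, 7, 3} the schedule is 3, 6, 7, 8, giving waits of 0, 3, 9, and 16 ms — an average of 7 ms, the minimum possible for this set.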
Priority Scheduling
• A priority is associated with each process.
• The CPU is allocated to the process with the highest priority.
• Equal-priority processes are scheduled in FCFS order.
• Priorities can be defined either internally or externally.
1. Internally-defined priorities
Use some measurable quantity to compute the priority of a process.
For example: time limits, memory requirements, no. of open files.
2. Externally-defined priorities
Set by criteria that are external to the OS.
For example: importance of the process, political factors.
• Priority scheduling can be either preemptive or non-preemptive.
1. Preemptive
The CPU is preempted if the priority of the newly arrived process is higher than the priority of the currently running process.
2. Non-preemptive
The newly arrived higher-priority process is simply put at the head of the ready queue; the running process is not interrupted.
• Advantage:
Higher priority processes can be executed first.
• Disadvantage:
Indefinite blocking (starvation), where low-priority processes are left waiting indefinitely for the CPU.
Solution: Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
Example: Consider the following set of processes, assumed to have arrived at time 0, in the order P1, P2, ..., P5, with the length of the CPU-burst time given in milliseconds.
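The burst/priority table is not reproduced here, so the sketch below assumes the classic values: bursts of 10, 1, 2, 1, and 5 ms for P1..P5 with priorities 3, 1, 4, 5, and 2 (a smaller number meaning a higher priority). It computes the total waiting time by repeatedly dispatching the highest-priority remaining process:

```c
/* Total waiting time under non-preemptive priority scheduling with
   all processes arriving at time 0; a lower prio[] value means a
   higher priority. Assumes n <= 16 for this sketch. */
int priority_total_wait(const int bursts[], const int prio[], int n) {
    int done[16] = {0};
    int total_wait = 0, clock = 0;
    for (int k = 0; k < n; k++) {
        int best = -1;            /* highest-priority process not yet run */
        for (int i = 0; i < n; i++)
            if (!done[i] && (best < 0 || prio[i] < prio[best]))
                best = i;
        done[best] = 1;
        total_wait += clock;      /* the chosen process waited `clock` ms */
        clock += bursts[best];
    }
    return total_wait;
}
```

With the assumed values the dispatch order is P2, P5, P1, P3, P4, giving waits of 0, 1, 6, 16, and 18 ms — a total of 41 ms, or an average of 8.2 ms.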
Multilevel Queue Scheduling
• Useful for situations in which processes are easily classified into different groups.
• For example, a common division is made between foreground (or interactive) processes and background (or batch) processes.
• The ready queue is partitioned into several separate queues (Figure 2.19).
• The processes are permanently assigned to one queue based on some property such as
memory size
process priority or
process type.
• Each queue has its own scheduling algorithm.
For example, separate queues might be used for foreground and background
processes.
Thread Scheduling
Contention Scope
Two approaches:
1. Process-Contention Scope (PCS)
On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP.
Competition for the CPU takes place among threads belonging to the same process.
2. System-Contention Scope (SCS)
The process of deciding which kernel thread to schedule on the CPU.
Competition for the CPU takes place among all threads in the system.
Systems using the one-to-one model schedule threads using only SCS.
Pthread Scheduling
• The Pthread API allows specifying either PCS or SCS during thread creation.
• Pthreads identifies the following contention scope values:
1. PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
2. PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.
• The Pthread API provides the following two functions for getting and setting the contention scope policy:
1. pthread_attr_setscope(pthread_attr_t *attr, int scope)
2. pthread_attr_getscope(pthread_attr_t *attr, int *scope)
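A minimal check of these two calls: request system scope (SCS) on a thread-attributes object and read the value back. PTHREAD_SCOPE_SYSTEM is used because it is the scope Linux supports; requesting process scope there typically fails with ENOTSUP:

```c
#include <pthread.h>

/* Set the contention scope on an attributes object to SCS, then
   return whatever scope pthread_attr_getscope() reports. */
int request_system_scope(void) {
    pthread_attr_t attr;
    int scope = -1;
    pthread_attr_init(&attr);
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    pthread_attr_getscope(&attr, &scope);
    pthread_attr_destroy(&attr);
    return scope;
}
```

Threads later created with this attributes object compete system-wide, i.e. they are scheduled using SCS.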
PROCESS SYNCHRONIZATION
• A cooperating process is one that can affect or be affected by other processes
executing in the system. Cooperating processes can either directly share a logical
address space (that is, both code and data) or be allowed to share data only through
files or messages.
• Concurrent access to shared data may result in data inconsistency. To maintain data consistency, various mechanisms are required to ensure the orderly execution of cooperating processes that share a logical address space.
THE CRITICAL-SECTION PROBLEM
• Consider a system of n processes, each with a segment of code called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. When one process is executing in its critical section, no other process is to be allowed to execute in its critical section.
• Each process must request permission to enter its critical section. The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section. The remaining code is the remainder section.
do {
entry section
critical section
exit section
remainder section
} while (TRUE);
Figure: General structure of a typical process Pi
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
PETERSON'S SOLUTION
Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1, or Pi and Pj, where j = 1 - i.
Peterson's solution requires the two processes to share two data items:
int turn;
boolean flag[2];
• turn: The variable turn indicates whose turn it is to enter its critical section. Ex: if
turn == i, then process Pi is allowed to execute in its critical section.
• flag: The flag array is used to indicate if a process is ready to enter its critical
section. Ex: if flag[i] is true, this value indicates that Pi is ready to enter its critical
section.
Figure: The structure of process Pi in Peterson's solution

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ; // do nothing
        critical section
    flag[i] = FALSE;
        remainder section
} while (TRUE);
• To enter the critical section, process Pi first sets flag[i] to be true and then sets turn to the
value j, thereby asserting that if the other process wishes to enter the critical
section, it can do so.
• If both processes try to enter at the same time, turn will be set to both i and j at
roughly the same time. Only one of these assignments will last; the other will
occur but will be overwritten immediately.
• The eventual value of turn determines which of the two processes is allowed to
enter its critical section first.
SYNCHRONIZATION HARDWARE
• The TestAndSet() instruction is executed atomically: if two TestAndSet() instructions
are executed simultaneously (each on a different CPU), they are executed sequentially
in some arbitrary order.
Definition:
boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
• The Swap() instruction operates on the contents of two words; it is defined as shown
below.
Definition:
void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
• Mutual exclusion can be implemented by declaring a boolean variable lock, initialized
to false, and spinning on TestAndSet():

do {
    while (TestAndSet(&lock))
        ; // do nothing
        // critical section
    lock = FALSE;
        // remainder section
} while (TRUE);

Figure: Mutual-exclusion implementation with TestAndSet()
• These algorithms satisfy the mutual-exclusion requirement, but they do not satisfy the
bounded-waiting requirement.
• The algorithm below uses the TestAndSet() instruction and satisfies all the critical-
section requirements. The common data structures are
boolean waiting[n];
boolean lock;
These data structures are initialized to false.

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
        // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
        // remainder section
} while (TRUE);

Figure: Bounded-waiting mutual exclusion with TestAndSet()
1. To prove the mutual exclusion requirement
• Note that process Pi can enter its critical section only if either waiting[i] == false or
key == false.
• The value of key can become false only if TestAndSet() is executed.
• The first process to execute TestAndSet() will find key == false; all others must
wait.
• The variable waiting[i] can become false only if another process leaves its critical
section; only one waiting[i] is set to false, maintaining the mutual-exclusion
requirement.
SEMAPHORE
• A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal().

wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}

signal(S) {
    S++;
}

Binary semaphore
• The value of a binary semaphore can range only between 0 and 1.
• Binary semaphores are known as mutex locks, as they are locks that provide mutual
exclusion. Binary semaphores can be used to deal with the critical-section problem for
multiple processes: the processes share a semaphore, mutex, initialized to 1.
Counting semaphore
• The value of a counting semaphore can range over an unrestricted domain.
• Counting semaphores can be used to control access to a given resource consisting
of a finite number of instances.
• The semaphore is initialized to the number of resources available. Each process
that wishes to use a resource performs a wait() operation on the semaphore. When
a process releases a resource, it performs a signal() operation.
• When the count for the semaphore goes to 0, all resources are being used. After
that, processes that wish to use a resource will block until the count becomes
greater than 0.
Implementation
• The main disadvantage of the semaphore definition given above is that it requires busy waiting.
• While a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code.
• This continual looping is clearly a problem in a real multiprogramming system,
where a single CPU is shared among many processes.
• Busy waiting wastes CPU cycles that some other process might be able to use
productively. This type of semaphore is also called a spinlock because the process
"spins" while waiting for the lock.
typedef struct {
    int value;
    struct process *list;
} semaphore;

Each semaphore has an integer value and a list of processes. When a process must wait on
a semaphore, it is added to the list of processes. A signal() operation removes one process
from the list of waiting processes and awakens that process.
• The block() operation suspends the process that invokes it. The wakeup(P)
operation resumes the execution of a blocked process P. These two operations are
provided by the operating system as basic system calls.
• In this implementation semaphore values may be negative. If a semaphore value is
negative, its magnitude is the number of processes waiting on that semaphore.
Deadlocks and starvation
• The implementation of a semaphore with a waiting queue may result in two or more
processes waiting indefinitely for an event that can be caused only by one of the
waiting processes. Consider two processes, P0 and P1, each accessing two semaphores,
S and Q, set to the value 1:

P0              P1
wait(S);        wait(Q);
wait(Q);        wait(S);
.               .
.               .
signal(S);      signal(Q);
signal(Q);      signal(S);

• Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes
wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S),
it must wait until P0 executes signal(S). Since these signal() operations cannot be
executed, P0 and P1 are deadlocked.
• Another problem related to deadlocks is indefinite blocking or starvation: A situation
in which processes wait indefinitely within the semaphore.
• Indefinite blocking may occur if we remove processes from the list associated with a
semaphore in LIFO (last-in, first-out) order.
CLASSICAL PROBLEMS OF SYNCHRONIZATION
• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem
• N buffers, each can hold one item
• Semaphore mutex initialized to the value 1
• Semaphore full initialized to the value 0
• Semaphore empty initialized to the value N
Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating. The philosophers
share a circular table surrounded by five chairs, each belonging to one philosopher. In the
center of the table is a bowl of rice, and the table is laid with five single chopsticks.
A philosopher gets hungry and tries to pick up the two chopsticks that are closest to her
(the chopsticks that are between her and her left and right neighbors). A philosopher
may pick up only one chopstick at a time. When a hungry philosopher has both her
chopsticks at the same time, she eats without releasing the chopsticks. When she is
finished eating, she puts down both chopsticks and starts thinking again.
It is a simple representation of the need to allocate several resources among several
processes in a deadlock-free and starvation-free manner.
Solution: One simple solution is to represent each chopstick with a semaphore. A
philosopher tries to grab a chopstick by executing a wait() operation on that semaphore.
She releases her chopsticks by executing the signal() operation on the appropriate
semaphores. Thus, the shared data are
semaphore chopstick[5];
where all the elements of chopstick are initialized to 1. The structure of philosopher i is
shown below:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
        // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
        // think
} while (TRUE);
Monitor
• An abstract data type—or ADT—encapsulates data with a set of functions to operate
on that data that are independent of any specific implementation of the ADT.
• A monitor type is an ADT that includes a set of programmer-defined operations that are
provided with mutual exclusion within the monitor. The monitor type also declares the
variables whose values define the state of an instance of that type, along with the
bodies of functions that operate on those variables.
• The monitor construct ensures that only one process at a time is active within the
monitor.
• To have more powerful synchronization schemes, a condition construct is added to the
monitor. A synchronization scheme can then be defined with one or more variables of
type condition:
condition x, y;
• The only operations that can be invoked on a condition variable are wait() and
signal().
x.wait() – a process that invokes the operation is suspended.
x.signal() – resumes one of the processes (if any) that invoked x.wait().
Fig: Monitor with Condition Variables
Solution to Dining Philosophers
Each philosopher i invokes the operations pickup() and putdown() in the following