Operating Systems
UNIT - 2 PROCESS MANAGEMENT
STRUCTURE
2.0 Learning Objectives
2.1 Introduction
2.2 Concept of Process and Process Synchronization
2.3 Inter-Process Communication
2.4 Process Management and Scheduling (Scheduling Algorithms)
2.5 Hardware Requirements
2.5.1 Protection
2.5.2 Non-Privileged and Privileged Modes
2.5.3 Context Switching
2.6 Threads and their Management
2.7 Tools and Constructs for Concurrency
2.8 Deadlocks
2.8.1 Detection
2.8.2 Prevention
2.8.3 Avoidance
2.9 Mutual Exclusion Algorithms
2.10 Semaphores – Concurrent Programming Using Semaphores
2.11 Let us sum up
2.12 Keywords
2.13 Some useful books
2.14 Answer to check your progress
2.15 Terminal Questions
2.0 LEARNING OBJECTIVES
After studying this unit, you will be able to:
● Explain the concept of process management and process
synchronization.
● Describe the inter process communication.
● Differentiate between different scheduling algorithms.
● Describe the concepts of threads and their management.
● Analyse deadlocks, its detection, prevention, and avoidance.
● Explain the concept of concurrency.
● Discuss semaphores and its usage in process management.
2.1 INTRODUCTION
A process is an important part of modern-day Operating Systems (OS). It
is the basic context within OS in which all user-requested activity is
serviced. A process is a program under execution. The tasks performed by
the process management module of the OS include creation, scheduling,
and termination of processes, and the handling of deadlocks. Processes
need resources of a
computer system to achieve the user requested activity. The OS allocates
resources to processes. Multiple processes share and exchange
information. While allowing synchronization among processes, OS also
protects the resources of each process.
2.2 CONCEPT OF PROCESS AND PROCESS
SYNCHRONIZATION
A process is a program under execution. An application may have many
processes that operate concurrently to jointly achieve a goal. These
processes use a set of resources and interact among themselves. Such an
application could provide a quicker response. At any time, an OS contains
many processes.
Some of these may be several executions of the same program. That
happens when users initiate multiple executions of the same program
independently, each with its own data.
Following are the main activities of process management module:
● Creating processes
● Allocating required resources to processes
● Scheduling CPU
● Process synchronization during concurrent process interactions
● Preventing deadlocks to overcome indefinite wait for each other
● Terminating processes when they complete their operation
The way OS schedules processes for use of a CPU determines the response
times of processes, resource efficiency, and system performance.
Kernel, the central component of OS, allocates resources and schedules
CPU to the waiting processes. It ensures the execution of sequential and
concurrent programs uniformly.
A thread is a lightweight process: it resembles a process in all respects
except that it does not have resources of its own but uses the resources of a
process. An OS incurs less overhead in managing threads than in
managing processes. A thread functions in the environment of a process, i.e.,
it uses the code, data, and resources of a process. It is possible for many
threads to function in the environment of the same process; they share its
code, data, and resources.
A program is a passive entity; by itself it does not perform any actions. If
the actions it calls for are to take place, the program must be executed.
A process is an execution of a program, performing the actions specified
in a program. An OS shares the CPU among processes enabling them to
execute the user programs.
Following are the sequence of operations that take place on initiation of
execution of a program.
A new process with a unique id is created by OS. Subsequently, resources
such as sufficient memory to accommodate the address space of the
program, and devices like keyboard and a monitor to facilitate interaction
with the user are allocated. During its operation, the process may make
system calls to request for additional resources such as files.
A program consists of a set of functions and procedures. According to the
logic of the program, control flows between the functions and procedures.
The OS does not know the internal logic of a program except for the
requests made by the program through system calls; the rest is under the
control of the program. Functions of a program may execute as separate
processes, or they may form the code of a single process. The kernel
creates the process and thereby initiates the execution of a program.
Process Synchronization
Process synchronization includes the techniques used to delay and resume
processes to accomplish process interactions. It is achieved through two
kinds of synchronization: data access synchronization and control
synchronization. Data access synchronization ensures that shared data do
not lose consistency when they are updated by interacting processes. It is
achieved by ensuring that processes access shared data only in a mutually
exclusive manner. Control synchronization is achieved by making
interacting processes perform their actions in a desired order.
The execution speed of a process, or the relative execution speeds of
interacting processes, cannot be known in advance because of factors such
as time-slicing, priorities of processes, and I/O activities in processes.
Hence a process synchronization technique must be designed so that it will
function correctly irrespective of the relative execution speeds of
processes.
Computer systems provide indivisible instructions called atomic
instructions to support process synchronization. Process synchronization
involves executing some sequences of instructions in a mutually exclusive
manner. On a uniprocessor system, this can be achieved by disabling
interrupts while a process executes such a sequence of instructions, so that
it will not be pre-empted.
However, this approach involves the overhead of system calls to disable
interrupts and enable them again, and delays processing of interrupts,
which can lead to undesirable consequences for system performance or
user service. It is also not applicable to multiprocessor systems.
Synchronization Problems
Two of the classic process synchronization problems that arise in various
application domains are the readers–writers problem and the dining
philosophers problem.
Consider an example of a race condition (a condition in which the value of
a shared data item may differ depending on the order in which interacting
processes execute their operations on it) in an airline reservation
application and its consequences.
A program containing a race condition may produce correct or incorrect
results depending on the order in which instructions of its processes are
executed. This feature complicates both testing and debugging of
concurrent programs, so race conditions should be prevented. Race
conditions are prevented if we ensure that various operations do not
execute concurrently i.e., only one of the operations can access shared data
at any time. This requirement is called mutual exclusion.
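To make the requirement concrete, here is a minimal Python sketch (illustrative, not part of the original text): two threads increment a shared counter, once without and once with a lock that enforces mutual exclusion.

import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1           # read-modify-write is not atomic: updates can be lost

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:             # mutual exclusion: one thread updates at a time
            counter += 1

def run(worker, n=100000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter

print("without mutual exclusion:", run(unsafe_increment))   # may be less than 200000
print("with mutual exclusion   :", run(safe_increment))     # always 200000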
Readers and Writers Synchronization Problem
A readers–writers system consists of shared data, an unspecified number
of reader processes that do only reading the data, and an unspecified
number of writer processes that modify or update the data. We use the
terms reading and writing for accesses to the shared data made by reader
and writer processes, respectively. A solution to the readers–writers
problem must satisfy the following conditions:
1. Many readers can perform reading concurrently.
2. Reading is prohibited while a writer is writing.
3. At any time, only one writer can perform writing.
Conditions 1–3 do not specify which process should be preferred if a
reader and a writer process wish to access the shared data at the same time.
The following additional condition is imposed if it is important to give a
higher priority to readers to meet some business goals:
A reader has a nonpreemptive priority over writers, meaning it has a higher
priority compared to a waiting writer, but it does not preempt an active
writer.
This system is called a readers-preferred readers–writers system. A
writers-preferred readers–writers system is defined analogously.
Figure 2.1 Readers and writers in a banking system
Figure 2.1 shows the readers and writers sharing a bank account. The
reader processes print statement and stat analysis merely read the data
from the bank account; hence they can execute concurrently. Credit and
debit modify the balance in the account. Clearly only one of them should
be active at any moment and none of the readers should be concurrent with
it. In an airline reservation system, processes that merely query the
availability of seats on a flight are reader processes, while processes that
make reservations are writer processes since they modify parts of the
reservation database.
We determine the synchronization requirements of a readers–writers
system as follows: Conditions 1–3 permit either one writer to perform
writing or many readers to perform concurrent reading. Hence writing
should be performed in a critical section for the shared data.
When a writer finishes writing, it should either activate all waiting readers
using a signalling arrangement and a count of waiting readers or enable
another writer to enter its critical section. If readers are reading, when the
last reader finishes reading, the waiting writer should be enabled to
perform writing. This action would require a count of concurrent readers
to be maintained.
Figure 2.1 is an outline for a readers–writers system. Writing is performed
in a critical section. A critical section is not used in a reader, because that
would prevent concurrency between readers. A signalling arrangement is
used to handle blocking and activation of readers and writers. For
simplicity, details of maintaining and using counts of waiting readers and
readers reading concurrently are not shown in the outline. The outline of
Figure 2.1 does not provide bounded waits for readers and writers;
however, it provides maximum concurrency. This outline does not prefer
either readers or writers.
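The outline can be fleshed out in many ways; the following Python sketch (an assumed design, not the book's outline) implements a readers-preferred scheme with a count of concurrent readers and a lock that serves as the writers' critical section.

import threading

class ReadersWriters:
    def __init__(self):
        self.reader_count = 0
        self.count_lock = threading.Lock()   # protects reader_count
        self.write_lock = threading.Lock()   # critical section for writers

    def start_read(self):
        with self.count_lock:
            self.reader_count += 1
            if self.reader_count == 1:       # first reader shuts out writers
                self.write_lock.acquire()

    def end_read(self):
        with self.count_lock:
            self.reader_count -= 1
            if self.reader_count == 0:       # last reader lets a waiting writer in
                self.write_lock.release()

    def start_write(self):
        self.write_lock.acquire()            # one writer at a time, and no readers

    def end_write(self):
        self.write_lock.release()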
Dining Philosophers Synchronization Problem
Five philosophers sit around a table pondering philosophical issues (Figure
2.2). A plate of spaghetti is kept in front of each philosopher, and a fork is
placed between each pair of philosophers. To eat, a philosopher must pick
up the two forks placed between him and the neighbours on either side,
one at a time. The problem is to design processes to represent the
philosophers such that each philosopher can eat when hungry and none
dies of hunger.
Figure 2.2 Dining philosophers
The correctness condition in the dining philosophers system is that a
hungry philosopher should not face indefinite waits when he decides to
eat. The challenge is to design a solution that suffers neither from
deadlocks, where processes become blocked waiting for each other, nor
from livelocks, where processes are not blocked but defer to each other
indefinitely.
A philosopher picks up the forks one at a time, say, first the left fork and
then the right fork. This solution is prone to deadlock, because if all
philosophers simultaneously lift their left forks, none will be able to lift
the right fork! It also contains race conditions because neighbours might
fight over a shared fork.
We can avoid deadlocks by modifying the philosopher process so that if
the right fork is not available, the philosopher would defer to his left
neighbour by putting down the left fork and repeating the attempt to take
the forks sometime later. However, this approach suffers from livelocks
because the same situation may recur.
One solution outline for the dining philosophers problem implements both
the checks on availability of forks and the picking up of forks by
philosophers in a critical section (CS). Hence race conditions cannot
arise. This arrangement ensures that at least some philosophers can eat
at any time and deadlocks cannot arise. A philosopher who cannot get both
forks at the same time blocks himself. He gets activated when any of his
neighbours puts down a shared fork; hence he must check the availability
of forks once again, which can be implemented using a while loop.
However, the loop also causes a busy wait. Some innovative solutions to
the dining philosophers problem prevent deadlocks without busy waits.
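A different, commonly used remedy is resource ordering: every philosopher picks up the lower-numbered fork first, so a circular wait cannot form. The Python sketch below (illustrative names and structure, not the book's outline) follows that approach.

import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one fork between each pair

def philosopher(i, meals=3):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)   # global fork ordering
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                pass   # eat: both forks held, so neighbours are excluded

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print("all philosophers finished eating")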
Solutions to Classic Process Synchronization Problem
The following three important criteria must be met in any solution to a
process synchronization problem:
● Correctness: Control synchronization and data access
synchronization should be performed in accordance with
synchronization requirements of the problem.
● Maximum concurrency: A process needs to wait only when other
processes perform synchronization actions; otherwise, it should be
able to operate freely.
● No busy waits: Synchronization should be performed through
blocking rather than through busy waits to avoid performance
degradation.
Check Your Progress-1
1. Why is process synchronization important?
__________________________________________________________
__________________________________________________________
________________________________________________________
2. What are some common synchronization problems?
__________________________________________________________
__________________________________________________________
________________________________________________________
3. What is a deadlock?
__________________________________________________________
__________________________________________________________
________________________________________________________
2.3 INTER-PROCESS COMMUNICATION
Processes of an application work toward a common goal and hence need
to interact with one another. Following are four different kinds of process
interactions (Table 2.1):
Data Sharing: Many processes updating a shared variable concurrently
may lead to inconsistent values. For example, consider a statement a := a + 1
being executed concurrently by two processes, where 'a' is a shared
variable. Depending on the way the kernel interleaves their execution, the
result may vary: the value of 'a' may be incremented by only 1 instead of 2.
To avoid this problem, only one process should access shared data at any
time, which may delay data access in other processes. This requirement is
called mutual exclusion. Thus, data sharing by concurrent processes incurs
the overhead of mutual exclusion.
Message Passing: A process may send some information to another
process in the form of a message. The other process can copy the
information into its own data structures and use it. Both the sender and the
receiver process must anticipate the information exchange, i.e., a process
must know when it is expected to send or receive a message, so the
information exchange becomes a part of the convention or protocol
between processes.
Synchronization: The logic of a program may require that an action a1
should be performed only after some action a2 has been performed.
Synchronization between processes is required if these actions are
performed in different processes: the process that wishes to perform action
a1 is made to wait until another process completes action a2.
Signals: A signal is used to convey an exceptional situation to a process
so that it may handle the situation through appropriate actions. A signal
handler is the code that a process wishes to execute on receiving a signal.
The signal mechanism is modelled along the lines of interrupts. Thus,
when a signal is sent to a process, the kernel interrupts operation of the
process and executes a signal handler, if one has been specified by the
process; otherwise, it may perform a default action. Operating systems
differ in the way they resume a process after executing a signal handler.
Kind of interaction   Description
Data sharing          If several processes modify the shared data at the same
                      time, data may become inconsistent. Hence processes
                      must interact to determine when it is safe to use or
                      modify shared data.
Message passing       Information exchange between processes takes place by
                      sending messages to one another.
Synchronization       Processes must coordinate their activities and perform
                      them in a desired order to achieve a common goal.
Signals               An occurrence of an exceptional situation is conveyed to
                      a process through a signal.

Table 2.1: Four kinds of process interaction
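As a concrete illustration of message passing (a sketch with assumed names, not part of the original text), the following Python program lets one process send messages to another through a queue; the receiver copies each message into its own variables before using it.

from multiprocessing import Process, Queue

def producer(q):
    for i in range(3):
        q.put({"seq": i, "payload": "message %d" % i})   # send a message
    q.put(None)                                          # agreed end-of-stream marker

def consumer(q):
    while True:
        msg = q.get()        # blocks until a message arrives
        if msg is None:
            break
        print("received:", msg["payload"])

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()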
2.4 PROCESS MANAGEMENT AND SCHEDULING
(SCHEDULING ALGORITHMS)
Scheduling is the act of selecting the next process to be serviced by a CPU.
A scheduling policy decides which process should be given the CPU now.
A scheduling policy influences both user service and system performance.
Assignment of priorities to processes can provide good system
performance, as in a multi-programming system; or provide favoured
treatment to important functions, as in a real-time system. Variation of
time slice permits the scheduler to adapt the time slice to the nature of a
process so that it can provide an appropriate response time to the process
and control its own overhead. Reordering of processes can improve both
system performance, measured as throughput, and user service, measured
as turnaround times or response times of processes. Performance analysis
of a scheduling policy is a study of its performance. It can be used for
comparing performance of two scheduling policies or for determining
values of key system parameters like the size of a process queue.
Figure 2.3 shows a schematic diagram of scheduling. One of the requests
from the pending list of requests is selected by the scheduler. The server
services the request selected by the scheduler. This request leaves the
server either when scheduler pre-empts it and puts it back into the list of
pending requests or when it completes. In either situation, the scheduler
selects the request that should be serviced next. From time to time, the
scheduler admits one of the arrived requests for active consideration and
enters it into the list of pending requests.
Figure 2.3 A schematic of scheduling
Actions of the scheduler are shown by the dashed arrows in Figure 2.3.
Events related to a request are its arrival, admission, scheduling, pre-
emption, and completion. Here the server represents the CPU, and a request
is the execution of a job, program, or process. A job or a process arrives
on submission by a user and is admitted when the scheduler starts
considering it for scheduling.
An admitted job or process either waits in the list of pending requests, uses
the CPU, or performs I/O operations. Eventually, it completes and leaves
the system. The scheduler’s action of admitting a request is important only
in an OS with limited resources; for simplicity, in most of our discussions
we assume that a request is admitted automatically on arrival.
To achieve better system performance and user service, modern operating
systems use more advanced and complex scheduling policies.
Service time of a process = CPU time + I/O time required by it to complete
its execution.
Deadline = Time by which process’s servicing should be completed (Only
in real time systems).
Both service time and deadline are an inherent property of a job or a
process. The completion time of a job or a process depends on its arrival
and service times, and on the kind of service it receives from the OS.
Fundamental design techniques used by schedulers to achieve good user
service and high performance:
Priority-based scheduling: It ensures that the process executing on the
CPU is always the highest-priority ready process. This is achieved by
scheduling the highest-priority ready process at any time and pre-empting
it when a process with a still higher priority becomes ready.
Reordering of requests: Reordering implies servicing of requests in some
order other than their arrival order. Reordering may be used by itself to
improve user service, e.g., servicing short requests before long ones
reduces the average turnaround time of requests.
Reordering of requests is implicit in pre-emption, which may be used to
enhance user service, as in a time-sharing system, or to enhance the system
throughput, as in a multiprogramming system.
Variation of time slice: Better response times are obtained when smaller
values of the time slice are used; however, a small time slice lowers CPU
efficiency because considerable process-switching overhead is incurred.
To balance CPU efficiency and response times, an OS could choose the
time slice according to the nature of a request: a small value for I/O-bound
requests and a large value for CPU-bound ones. It could also vary the time
slice when a process's behaviour changes from CPU-bound to I/O-bound,
or from I/O-bound to CPU-bound.
In non-pre-emptive scheduling, a server always completes a request before
switching to another request. So, pre-emption of a request never occurs.
Nonpreemptive scheduling is attractive because of its simplicity. The
scheduler does not have to distinguish between an unserved request and a
partially serviced one. Since a request is never pre-empted, the scheduler
just reorders the request to improve user service or system performance.
Following are three non pre-emptive scheduling policies:
● First-come, first-served (FCFS) scheduling
● Shortest request next (SRN) scheduling
● Highest response ratio next (HRN) scheduling
Scheduling Algorithms
There are six popular process scheduling algorithms which are either non-
preemptive or preemptive:
First-Come, First-Served (FCFS) Scheduling
● Jobs are executed on a first come, first serve basis.
● It is a non-preemptive scheduling algorithm.
● Easy to understand and implement.
● It is based on the FIFO queue.
● Performance is poor due to its high average wait time.
Let us take an example to understand FCFS scheduling. Consider four
processes P0, P1, P2 and P3 with arrival, execution, and service times as
shown in Table 2.2.

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              4                5
P2        2              8                9
P3        3              7                17

| P0 | P1 | P2 | P3 |
0    5    9    17   24

Table 2.2: First-Come, First-Served (FCFS) scheduling
Wait time of each process in case of FCFS scheduling is as follows:

Process   Wait Time = Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        9 - 2 = 7
P3        17 - 3 = 14

Table 2.3: Wait time of each process

Average Wait Time: (0 + 4 + 7 + 14) / 4 = 6.25
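The same calculation can be written as a small program. The Python sketch below (illustrative, not part of the original text) computes the FCFS service and wait times for the four processes above.

def fcfs(processes):
    """processes: list of (name, arrival_time, execution_time) in arrival order."""
    clock, waits = 0, []
    for name, arrival, burst in processes:
        start = max(clock, arrival)            # CPU may be idle until the job arrives
        waits.append((name, start - arrival))  # wait time = service time - arrival time
        clock = start + burst
    return waits

jobs = [("P0", 0, 5), ("P1", 1, 4), ("P2", 2, 8), ("P3", 3, 7)]
waits = fcfs(jobs)
for name, w in waits:
    print(name, "waits", w)
print("average wait:", sum(w for _, w in waits) / len(waits))   # 6.25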
Shortest-Job-Next (SJN) Scheduling
● Also known as shortest job first, or SJF.
● It can be implemented as non-preemptive or pre-emptive.
● Gives the least average waiting time among all scheduling algorithms.
● Ideal for batch systems, since the CPU requirement is known in advance.
● Not suitable for interactive systems where the CPU requirement is not
known.
● The execution time of the process must be known in advance.
Given: Table of processes with their arrival time and execution time.

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              4                5
P2        2              8                16
P3        3              7                9

Table 2.4: Shortest-Job-Next (SJN) scheduling

Table 2.4 shows process P3 being serviced before P2 due to its shorter
execution time.
Waiting time of each process is as follows:

Process   Wait Time = Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        16 - 2 = 14
P3        9 - 3 = 6

Table 2.5: Waiting time of each process

The wait time for P3 is reduced in SJN compared to FCFS scheduling.
Average Wait Time: (0 + 4 + 14 + 6) / 4 = 24 / 4 = 6
There is an improvement in the average waiting time too.
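The SJN order can also be computed programmatically. In the Python sketch below (illustrative, not from the text), the scheduler repeatedly picks, among the jobs that have already arrived, the one with the shortest execution time.

def sjn(processes):
    """Non-preemptive shortest-job-next; processes are (name, arrival, execution)."""
    pending = sorted(processes, key=lambda p: p[1])     # order by arrival time
    clock, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock] or [pending[0]]
        job = min(ready, key=lambda p: p[2])            # shortest execution time next
        name, arrival, burst = job
        start = max(clock, arrival)
        waits[name] = start - arrival
        clock = start + burst
        pending.remove(job)
    return waits

print(sjn([("P0", 0, 5), ("P1", 1, 4), ("P2", 2, 8), ("P3", 3, 7)]))
# {'P0': 0, 'P1': 4, 'P3': 6, 'P2': 14} -> average 6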
Priority Scheduling
● A non-preemptive algorithm.
● Recommended for batch systems.
● Highest priority process gets the CPU all the time, which requires
each process to be assigned with a priority.
● If there are multiple processes with the same priority, they are
scheduled on a first-come, first-served basis.
● Priority can be decided based on resource requirements such as
memory, time, or any other resource requirement.
Given: Table of processes with their arrival time, execution time, and
priority. Here 1 is the lowest priority.

Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              4                2          12
P2        2              8                1          16
P3        3              7                3          5

Table 2.6: Priority scheduling
Waiting time of each process is as follows:

Process   Waiting Time
P0        0 - 0 = 0
P1        12 - 1 = 11
P2        16 - 2 = 14
P3        5 - 3 = 2

Table 2.7: Waiting time of each process

Average Wait Time: (0 + 11 + 14 + 2) / 4 = 27 / 4 = 6.75
Shortest Remaining Time (SRT)
● It is the preemptive version of the SJN algorithm.
● The job closest to completion gets the processor, but a newly ready
job with a shorter time to completion can pre-empt the running
process.
● Not suitable for interactive systems where required CPU time is not
known.
● Suitable for batch environments where short jobs need to be given
preference.
Round Robin (RR) Scheduling
● A preemptive process scheduling algorithm.
● Allocates a quantum, a fixed time to execute, for each process.
● After executing for quantum time, the process is pre-empted, and the
next process is given CPU time.
● Context switching is used to save states of pre-empted processes.
Quantum = 4 (same four processes as above)

| P0 | P1 | P2 | P3 | P0 | P2 | P3 |
0    4    8    12   16   17   21   24

Table 2.8: Round Robin (RR) scheduling

Wait time of each process is as follows:

Process   Wait Time = Service Time - Arrival Time
P0        (0 - 0) + (16 - 4) = 12
P1        (4 - 1) = 3
P2        (8 - 2) + (17 - 12) = 11
P3        (12 - 3) + (21 - 16) = 14

Table 2.9: Wait time of each process

Average Wait Time: (12 + 3 + 11 + 14) / 4 = 10
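The round-robin timeline above can be reproduced with a short simulation. The Python sketch below is illustrative and assumes the ready queue never becomes empty while work remains, which holds for this example; it charges each process the time it spends off the CPU after arriving.

from collections import deque

def round_robin(processes, quantum=4):
    """processes: list of (name, arrival, execution); returns wait time per process."""
    remaining = {name: burst for name, _, burst in processes}
    last_left = {name: arr for name, arr, _ in processes}   # last time each left the CPU
    waits = {name: 0 for name in remaining}
    arrivals = sorted(processes, key=lambda p: p[1])
    queue, clock, i = deque(), 0, 0
    while remaining:
        while i < len(arrivals) and arrivals[i][1] <= clock:   # admit newly arrived jobs
            queue.append(arrivals[i][0]); i += 1
        name = queue.popleft()
        waits[name] += clock - last_left[name]                 # time spent waiting
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        while i < len(arrivals) and arrivals[i][1] <= clock:   # jobs that arrived meanwhile
            queue.append(arrivals[i][0]); i += 1
        last_left[name] = clock
        if remaining[name] == 0:
            del remaining[name]
        else:
            queue.append(name)                                 # pre-empted, back of the queue
    return waits

jobs = [("P0", 0, 5), ("P1", 1, 4), ("P2", 2, 8), ("P3", 3, 7)]
print(round_robin(jobs))   # {'P0': 12, 'P1': 3, 'P2': 11, 'P3': 14} -> average 10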
Multiple-Level Queues Scheduling
● It combines other existing algorithms to group and schedule jobs with
common characteristics by maintaining multiple queues.
● Each queue follows its own scheduling algorithms.
● For example, I/O-bound jobs and CPU-bound jobs can be placed in
two separate queues. Priorities are assigned to each queue. The
scheduler selects jobs from each queue alternately and, based on the
algorithm assigned to that queue, assigns them to the CPU.
Check Your Progress-2
1. What is preemptive scheduling?
__________________________________________________________
__________________________________________________________
________________________________________________________
2. How does Round Robin scheduling work?
__________________________________________________________
__________________________________________________________
________________________________________________________
2.5 HARDWARE REQUIREMENTS
Systems need protection from user programs that:
● Access memory of other programs or the OS.
● Access files of other programs.
● Go into an infinite loop.
2.5.1 Protection
1. Dual-Mode Operation: At least two modes of operation: user mode
and monitor (supervisor/system/privileged) mode, with some privileged
instructions executable only in monitor mode. For example, I/O
instructions are privileged, so a user program must request I/O through the OS.
2. Memory-Management Unit (MMU): provides some form of base-
limit registers for checking that memory access is within a program's
memory. Details vary with address mapping scheme (paging,
segmentation, paged segments, none).
3. CPU Timer: Before a user program is started, the OS sets a timer that
will expire and interrupt the program so that the OS regains control.
Instructions that modify the CPU timer are privileged.
Dual Mode Operations in OS
Consider a process stuck in an infinite loop. It could affect correct
operation of other processes or it might affect the OS. The following two
modes of operation ensure proper execution of the OS:
2.5.2 Non-Privileged and Privileged Modes
User mode or Non-Privileged mode:
Creating a text document or using any application program are examples
of the system operating in user mode. Instructions that can be executed in
user mode are called non-privileged instructions.
Various examples of non-privileged instructions include:
● Reading the status of the processor.
● Reading the system time.
● Generating a trap instruction.
● Sending the final printout to the printer.
Kernel Mode or Privileged mode:
When the system boots, the hardware starts in kernel mode, and when the
OS is loaded, it starts user applications in user mode. Privileged
instructions are instructions which can execute only in kernel mode.
Privileged Instructions have following characteristics:
• Trying to execute a privileged instruction in user mode is illegal; the
hardware traps it to the OS.
• The OS must ensure that a timer is set before transferring control to
any user program. On an interrupt from the timer, the OS regains
control. Thus, any instruction which can modify the contents of the
timer is a privileged instruction.
• Privileged instructions ensure smooth operation of OS.
• Various examples of privileged instructions include:
• I/O instructions and Halt instructions.
• Turn off all Interrupts.
• Set the Timer.
• Context Switching.
• Clear the Memory or Remove a process from the Memory.
• Modify entries in device-status table.
Figure 2.4 A schematic of dual mode operation
Figure 2.4 shows the schematic of Dual Mode Operation, which ensures
protection and security of the system from unauthorized or errant users.
To change the mode from privileged to non-privileged, a non-privileged
instruction that does not generate any interrupt is sufficient, since
switching to the less privileged mode is harmless.
It is the responsibility of the OS to prevent the application software from
accessing the hardware directly. This can be done by hiding the hardware.
Two modes of operation provide this protection. The OS runs in kernel
mode, also known as supervisor mode or privileged mode.
In kernel mode, the software has complete access to all the computer's
hardware and can control the switching between the CPU modes.
Interrupts are also received in the kernel mode software. The rest of the
software runs in user mode, where direct access to the hardware is
prohibited, and so is any arbitrary switching to kernel mode. Any attempts
to violate these restrictions are reported to the kernel mode software: i.e.,
to the OS itself. By having two modes of operation which are enforced by
the computer's own hardware, the OS can force application programs to
use the operating system's abstract services, instead of circumventing any
resource allocations by direct hardware access.
2.5.3 Context Switching
If a processor is running in kernel mode, it can set itself to user mode by
setting the status register that defines its operating mode (Figure 2.5).
A processor running in user mode cannot simply switch itself to kernel
mode, as that would be a privilege violation. Traps, interrupts, and
exceptions are used to switch from user to kernel mode.
Mostly seen on general-purpose systems, interrupts cause the OS to switch
a CPU core from its current task to run a kernel routine. When an interrupt
occurs, the system suspends the current process. To restore the suspended
process at a later point in time, the OS needs to save the current context of
the process. It includes the process state, the value of the CPU registers,
and memory-management information. Be it in any mode, the system
performs a state save of the current state of the CPU core, followed by a
state restore to resume operations. Essentially it requires a context switch,
where the OS must save state of the current process and a restore state of
a different process. The kernel saves the context of the old process in its
process control block (PCB) and loads the saved context of the new
process scheduled to run. Context switch time is pure overhead, as the
system does no useful work while switching.
Context-switch times are highly dependent on hardware support, such as
the number of registers to be copied, memory speed, and the instruction
set (for example, special instructions to load or store all registers).
Switching speed varies from machine to machine.
For instance, in processors which provide multiple sets of registers, a
context switch simply requires changing the pointer to the current register
set. A typical speed is several microseconds.
Figure 2.5 Diagram showing context switch from process to process
2.6 THREADS AND THEIR MANAGEMENT
To speed up their operation, applications use concurrent processes.
However, switching between processes incurs high process switching
overhead. So, OS designers developed an alternative model of execution
of a program, called a thread that could provide concurrency within an
application with less overhead.
A thread is an execution of a program that uses the resources of a process.
It does not have a resource context of its own; it operates within the
context of the process. A process creates a thread through a system call.
Use of threads effectively splits the process state into two parts. The
resource state remains with the process while an execution state, which is
the CPU state, is associated with a thread. The cost of concurrency within
the context of a process is now merely replication of the execution state
for each thread. The execution states need to be switched during switching
between threads.
The resource state is neither replicated nor switched during switching
between threads of the process. The only difference between the threads
and processes is that the threads do not have resources allocated to them.
Advantage                  Explanation
Lower overhead of          Thread state consists only of the state of a computation.
creation and switching     Resource allocation state and communication state are
                           not part of the thread state, so creation of threads and
                           switching between them incurs a lower overhead.
More efficient             Threads of a process can communicate with one another
communication              through shared data, thus avoiding the overhead of
                           system calls for communication.
Simplification of design   Use of threads can simplify design and coding of
                           applications that service requests concurrently.

Table 2.10: Advantages of threads over processes
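The sharing of a process's data by its threads can be seen in the following Python sketch (illustrative, not from the text): each thread computes its own value in its private execution state and records it in a list that belongs to the process and is therefore visible to every thread.

import threading

shared_results = []                    # process-wide data, shared by all threads
results_lock = threading.Lock()        # mutual exclusion over the shared list

def worker(thread_id):
    value = thread_id * thread_id      # per-thread computation, in its own execution state
    with results_lock:
        shared_results.append((thread_id, value))   # no system call needed to communicate

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_results)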
2.7 TOOLS AND CONSTRUCTS FOR CONCURRENCY
Concurrent programming constructs provide data abstraction and
encapsulation features specifically suited to the construction of concurrent
programs. They have well-defined semantics that are enforced by the
language compiler. Effectively, concurrent programming constructs
incorporate functions that are analogous to those provided by the
synchronization primitives, but they also include features to ensure that
these functions cannot be used in a haphazard or indiscriminate manner.
These properties help in ensuring correctness of programs, which makes
construction of large applications practical. Most modern programming
languages provide a concurrent programming construct called a monitor.
A concurrent system consists of three key components:
● Shared data
● Operations on shared data
● Interacting processes
Shared data include two kinds of data: application data used and
manipulated by processes, and synchronization data, i.e., data used for
synchronization between processes. An operation is a convenient unit of
code, typically a function or a procedure in a programming language,
which accesses and manipulates shared data. A synchronization
operation is an operation on synchronization data. A view of the system
at a specific time instant showing the relationship between shared data,
operations, and processes at that instant is called a snapshot of a
concurrent system. Figure 2.6 depicts a snapshot with pictorial
conventions. A circle represents a process. A circle with a cross in it
indicates a blocked process. A rectangular box denotes a data item, or a
set of data items. The value(s) of data, if known, are shown inside the
box.
Figure 2.6 Pictorial conventions for snapshots of concurrent systems
Connectors or sockets joined to the data show operations on the data. A
shared data item is indicated by an oval shape enclosing the data item.
A dashed line connects a process and an operation on data if the process
is currently engaged in executing the operation. A dashed rectangular box
encloses code executed as a critical section. Hence mutually exclusive
operations on data are enclosed in a dashed rectangular box. A queue of
blocked processes is associated with the dashed box to show the processes
waiting to perform one of the operations. A series of snapshots is required
to represent the execution of a concurrent system.
Snapshots of a Concurrent System
Consider the system where process Pi performs action ai only after process
Pj performs action aj. We assume that operations ai and aj operate on
shared data items X and Y, respectively. Let the system be implemented
using the operations check_aj and post_aj. This system comprises the
following components:
Shared data                      Boolean variables operation_aj_performed and
                                 pi_blocked, both initialized to false, and data
                                 items X and Y.
Operations on application data   Operations ai and aj.
Synchronization operations       Operations check_aj and post_aj.
Processes                        Processes Pi and Pj.

Table 2.11: Components of the example concurrent system
Figure 2.7 shows three snapshots of this system. T and F indicate values
true and false, respectively. Operations check_aj and post_aj both use the
boolean variables operation_aj_performed and pi_blocked. These
operations are indivisible operations, so they are mutually exclusive.
Accordingly, they are enclosed in a dashed box.
Figure 2.7 Snapshots of the system
Figure 2.7 shows the situation when process Pj is engaged in performing
operation aj and process Pi wishes to perform operation ai, so it invokes
operation check_aj. Operation check_aj finds that operation_aj_performed
is false, so it sets pi_blocked to true, blocks process Pi and exits. When Pj
finishes performing operation aj, it invokes operation post_aj. This
operation finds that pi_blocked is true, so it sets pi_blocked to false,
activates process Pi, and exits. Process Pi now performs operation ai.
Monitor
A monitor type resembles a class. It contains declarations of shared data.
It may also contain declarations of special synchronization data called
condition variables on which only the built-in operations wait and signal
can be performed; these operations provide convenient means of setting
up signalling arrangements for process synchronization. Procedures of the
monitor type encode operations that manipulate shared data and perform
process synchronization through condition variables. Thus, the monitor
type provides two of the three components that make up a concurrent
system.
A concurrent system is set up as follows: A concurrent program has a
monitor type. The program creates an object of the monitor type during its
execution. We refer to the object as a monitor variable, or simply as a
monitor. The monitor contains a copy of the shared and synchronization
data declared in the monitor type as its local data.
The procedures defined in the monitor type become operations of the
monitor; they operate on its local data. The concurrent program creates
processes through system calls. These processes invoke operations of the
monitor to perform data sharing and control synchronization; they become
blocked or activated when the monitor operations perform wait or signal
operations.
The data abstraction and encapsulation features of the monitor assist in
synchronization as follows: Only the operations of a monitor can access
its shared and synchronization data. To avoid race conditions, the compiler
of the programming language implements mutual exclusion over
operations of a monitor by ensuring that at most one process can be
executing a monitor operation at any time. Invocations of the operations
are serviced in a FIFO (First in First out) manner to satisfy the bounded
wait property.
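Python has no monitor construct, but the idea can be approximated by a class whose operations all acquire one lock and use condition variables for wait and signal. The sketch below (an assumed design, not the book's code) is a one-slot buffer monitor.

import threading

class OneSlotBufferMonitor:
    """produce() blocks while the slot is full; consume() blocks while it is empty."""
    def __init__(self):
        self._lock = threading.Lock()                  # mutual exclusion over operations
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)
        self._item, self._full = None, False

    def produce(self, item):
        with self._lock:                               # enter the monitor
            while self._full:
                self._not_full.wait()                  # monitor-style wait on a condition
            self._item, self._full = item, True
            self._not_empty.notify()                   # monitor-style signal

    def consume(self):
        with self._lock:
            while not self._full:
                self._not_empty.wait()
            item, self._item, self._full = self._item, None, False
            self._not_full.notify()
            return item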
2.8 DEADLOCKS
A deadlock in an operating system is a situation in which processes wait for
one another's actions indefinitely. Deadlocks occur in process synchronization,
either in resource sharing when they wait for other processes to release
resources that they need or when they wait for each other’s signals. Each
process is then waiting for an event that cannot occur. Deadlocks adversely
affect resource efficiency, user service, and throughput as deadlocked
processes remain blocked indefinitely.
Deadlocks arise in resource sharing when a set of conditions concerning
resource requests and resource allocations hold simultaneously. Operating
systems use several approaches to handle deadlocks. In the deadlock
detection and resolution approach, the kernel checks whether the
conditions contributing to a deadlock hold simultaneously and eliminates
a deadlock by judiciously aborting some processes so that the remaining
processes are no longer in a deadlock.
In the deadlock prevention approach, the kernel employs resource
allocation policies that ensure that the conditions for deadlocks do not hold
simultaneously; it makes deadlocks impossible. In the deadlock avoidance
approach, the kernel does not make resource allocations that may lead to
deadlocks, so deadlocks do not arise.
Handling Deadlocks
Approach                Description
Deadlock detection      The kernel analyses the resource state to check
and resolution          whether a deadlock exists. In case of a deadlock, it
                        releases the resources held by some process(es) by
                        aborting them, so that the deadlock ceases to exist.
Deadlock prevention     The kernel makes deadlocks impossible by using a
                        resource allocation policy that ensures that the four
                        conditions for resource deadlocks do not arise
                        simultaneously.
Deadlock avoidance      The kernel ensures that deadlocks do not arise by
                        analysing the allocation state. It checks whether
                        granting a resource request can lead to a deadlock in
                        the future. Only requests that cannot lead to a
                        deadlock are granted; others are kept pending until
                        they can be granted.

Table 2.12: Deadlock handling approaches
Table 2.12 lists the three fundamental approaches to deadlock handling.
Each approach has different consequences in terms of possible delays in
resource allocation, the kind of resource requests that user processes can
make, and the OS overhead.
Under the deadlock detection and resolution approach, the kernel aborts
some processes when it detects a deadlock on analysing the allocation
state. This action frees the resources held by the aborted process, which
are now allocated to other processes that had requested them. The aborted
processes must be re-executed. Thus, the cost of this approach includes the
cost of deadlock detection and the cost of re-executing the aborted
processes.
In deadlock prevention, the kernel uses a resource allocation policy that
makes deadlocks impossible and processes must abide by any restrictions
that the policy may impose. For example, a simple deadlock prevention
policy would be to allocate all resources required by a process at the same
time. This policy would require a process to make all its resource requests
together; for example, two processes that each need a printer and a tape
drive would both have to request the two devices at the same time. A
deadlock would not arise because one of the processes would get both the
resources it needed; however, the policy may force a process to obtain a
resource long before it is needed.
Under the deadlock avoidance approach, the kernel grants a resource
request only if it finds that granting the request will not lead to deadlocks
later; otherwise, it keeps the request pending until it can be granted. Hence
a process may face long delays in obtaining a resource. The kernel would
realize the possibility of a future deadlock while processing the second
request. Hence it would not grant the printer to process Process1 until
process Process2 is completed.
2.8.1 Detection
Consider a system that contains a process P1, which holds a printer; and a
process P2 that is blocked on its request for a printer. If process P1 is not
in the blocked state, there is a possibility that it might complete its
operation without requesting any more resources; on completion, it would
release the printer allocated to it, which could then be allocated to process
P2. Thus, if P1 is not in the blocked state, P2’s wait for the printer is not
indefinite because of the following sequence of events: process P1
completes, releases printer, printer is allocated to P2. If some other process
P3 waits for some other resource allocated to P2, its wait is also not
indefinite. Hence processes P1, P2, and P3 are not involved in a deadlock
at the current moment.
From this observation, we can formulate the following rule for deadlock
detection: A blocked process at the current moment is not in a deadlock if
the resource for which it is blocked can be released through a sequence of
process completion, resource release, and resource allocation events.
We check for the presence of a deadlock in a system by trying to construct
fictitious but feasible sequences of events whereby all blocked processes
can get the resources they have requested. Success in constructing such a
sequence implies the absence of a deadlock at the current moment, and a
failure to construct it implies presence of a deadlock.
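The detection rule can be expressed as a small procedure. The Python sketch below (assumed data layout, single resource class, not part of the original text) repeatedly finds a process whose outstanding request can be satisfied from the free units, assumes it completes and releases what it holds, and reports whichever processes remain as deadlocked.

def find_deadlocked(free, held, requested):
    """held/requested map each process to the units it holds / is blocked for."""
    remaining = set(held)
    progress = True
    while progress:
        progress = False
        for p in list(remaining):
            if requested.get(p, 0) <= free:   # its request could be granted
                free += held[p]               # assume it completes and releases its units
                remaining.remove(p)
                progress = True
    return remaining                          # empty set means no deadlock

# P1 holds the only printer; P2 is blocked requesting it.  P1 can complete,
# release the printer, and P2 can then be serviced, so there is no deadlock.
print(find_deadlocked(free=0, held={"P1": 1, "P2": 0}, requested={"P1": 0, "P2": 1}))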
2.8.2 Prevention
The following four conditions must hold simultaneously for a resource
deadlock to arise in a system. To prevent deadlocks, the kernel must use a
resource allocation policy that ensures that one of these conditions cannot
arise.
Non-Shareable Resources: Wait-for relations will not exist in the system
if all resources can be made shareable.
Pre-emption of Resources: If resources are made pre-emptible, the kernel
can ensure that some processes have all the resources they need.
Hold-and-Wait: To prevent the hold-and-wait condition, either a process
that holds resources should not be permitted to make resource requests, or
a process that gets blocked on a resource request should not be permitted
to hold any resources.
Circular Wait: A circular wait can result from the hold-and-wait
condition, which is a consequence of the non-shareability and non-
preemptibility conditions, so it does not arise if either of these conditions
does not arise. Circular waits can be separately prevented by not allowing
some processes to wait for some resources.
2.8.3 Avoidance
A deadlock avoidance policy grants a resource request only if it can
establish that granting the request cannot lead to a deadlock either
immediately or in the future. The kernel lacks detailed knowledge about
future behaviour of processes, so it cannot accurately predict deadlocks.
To facilitate deadlock avoidance under these conditions, it uses the
following conservative approach: Each process declares the maximum
number of resource units of each class that it may require.
The kernel permits a process to request these resource units in stages, i.e.,
a few resource units at a time, subject to the maximum number declared
by the process. It uses a worst-case analysis technique to check for the
possibility of future deadlocks. A request is granted only if there is no
possibility of deadlocks; otherwise, it remains pending until it can be
granted. This approach is conservative because a process may complete its
operation without requiring the maximum number of units declared by it.
Thus, the kernel may defer granting of some resource requests that it
would have granted immediately had it known about future behaviour of
processes. This effect and the overhead of making this check at every
resource request constitute the cost of deadlock avoidance.
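A worst-case check of this kind can be sketched as follows (Python, single resource class, illustrative names; not the kernel's actual code): a request is granted only if, after granting it, every process could still obtain its declared maximum in some completion order.

def is_safe(free, allocated, maximum):
    """True if every process can still reach its declared maximum in some order."""
    need = {p: maximum[p] - allocated[p] for p in allocated}
    remaining = set(allocated)
    while remaining:
        runnable = [p for p in remaining if need[p] <= free]
        if not runnable:
            return False            # in the worst case every remaining process blocks
        p = runnable[0]
        free += allocated[p]        # assume p finishes and releases its units
        remaining.remove(p)
    return True

def grant(units, process, free, allocated, maximum):
    """Grant the request only if the resulting allocation state would still be safe."""
    trial = dict(allocated)
    trial[process] += units
    return units <= free and is_safe(free - units, trial, maximum)

# 10 units in total; P1 and P2 declared maxima of 7 and 5 units respectively.
print(grant(2, "P1", free=3, allocated={"P1": 4, "P2": 3}, maximum={"P1": 7, "P2": 5}))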
Deadlock Handling in Practice
An operating system manages numerous and diverse resources, hardware
resources such as memory and I/O devices, software resources such as files
containing programs or data and interprocess messages, and kernel
resources such as data structures and control blocks used by the kernel.
The overhead of deadlock detection-and-resolution and deadlock
avoidance make them unattractive deadlock handling policies in practice.
Hence, an OS either uses the deadlock prevention approach, creates a
situation in which explicit deadlock handling actions are unnecessary, or
simply does not care about the possibility of deadlocks. Further, since
deadlock prevention constrains the order in which processes request their
resources, operating systems tend to handle deadlock issues separately for
each kind of resource like memory, I/O devices, files, and kernel
resources.
Memory: Memory is a preemptible resource, so its use by processes
cannot cause a deadlock. Explicit deadlock handling is therefore
unnecessary. The memory allocated to a process is freed by swapping out
the process whenever the memory is needed for another process.
I/O Devices: Among deadlock prevention policies, the “all resources
together” policy requires a process to make a single multiple-resource
request for all its resource requirements. This policy incurs the least CPU
overhead, but it has the drawback of underutilizing I/O devices, which are
allocated much before a process needs them.
Resource ranking, on the other hand, is not a feasible policy to control use
of I/O devices because any assignment of resource ranks causes
inconvenience to some group of users.
This difficulty is compounded by the fact that I/O devices are generally
non-preemptible. Operating systems overcome this difficulty by creating
virtual devices. For example, the system creates a virtual printer by using
some disk area to store a file that is to be printed. Actual printing takes
place when a printer becomes available. Since virtual devices are created
whenever needed, it is not necessary to preallocate them as in the “all
resources together” policy unless the system faces a shortage of disk space.
Files and Interprocess Messages: A file is a user-created resource. An
OS contains many files. Deadlock prevention policies such as resource
ranking could cause high overhead and inconvenience to users. Hence
operating systems do not extend deadlock handling actions to files;
processes accessing a common set of files are expected to make their own
arrangements to avoid deadlocks. For similar reasons, operating systems
do not handle deadlocks caused by interprocess messages.
Control Blocks: The kernel allocates control blocks such as process
control blocks (PCBs) and event control blocks (ECBs) to processes in a
specific order. A PCB is allocated when a process is created, and an ECB
is allocated when the process becomes blocked on an event. Hence
resource ranking can be a solution here. If a simpler policy is desired, all
control blocks for a job or process can be allocated together at its initiation.
2.9 MUTUAL EXCLUSION ALGORITHMS
Mutual exclusion is useful in implementing critical sections. Mutual
exclusion can be implemented by using a semaphore that is initialized to
1. A process performs a wait operation on the semaphore before entering
a CS and a signal operation on exiting from it.
Figure 2.8 CS implementation with semaphores
Figure 2.8 shows implementation of a critical section in processes Pi and
Pj by using a semaphore named sem_CS. sem_CS is initialized to 1. Each
process performs a wait operation on sem_CS before entering its CS, and
a signal operation after exiting from its critical section. The first process
which executes wait(sem_CS), on finding that sem_CS is > 0, decrements
sem_CS by 1 and goes on to enter its critical section. When the second
process performs wait(sem_CS), it is blocked because the value of sem_CS
is 0. It is
activated when the first process performs signal (sem_CS) after exiting
from its own critical section; the second process then enters its critical
section. If no process is blocked on sem_CS when a signal (sem_CS)
operation is performed, the value of sem_CS becomes 1. This value of
sem_CS permits a process that is performing a wait operation at some time
later to immediately enter its CS. More processes using similar code can
be added to the system without causing correctness problems.
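The protocol of Figure 2.8 maps directly onto a counting semaphore in code. In the Python sketch below (illustrative, not the book's code), acquire and release play the roles of wait and signal.

import threading

sem_CS = threading.Semaphore(1)    # initialized to 1: at most one process in the CS

balance = 0

def update(times):
    global balance
    for _ in range(times):
        sem_CS.acquire()           # wait(sem_CS)
        balance += 1               # critical section over the shared data
        sem_CS.release()           # signal(sem_CS)

threads = [threading.Thread(target=update, args=(10000,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                     # always 30000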
2.10 SEMAPHORES – CONCURRENT
PROGRAMMING USING SEMAPHORES
A semaphore is a special kind of synchronization data, which is a shared
integer with nonnegative values. It can be used only through specific
synchronization primitives. It can be subjected only to the following
operations:
1. Initialization (specified as part of its declaration)
2. The indivisible operations wait and signal
When a process performs a wait operation on a semaphore, the operation
checks whether the value of the semaphore is > 0. If so, it decrements the
value of the semaphore and lets the process continue its execution;
otherwise, it blocks the process on the semaphore. When a process
performs a signal operation on a semaphore, the operation activates a
process blocked on the semaphore, if any; otherwise, it increments the
value of the semaphore by 1. Due to these semantics, semaphores are also
called counting semaphores. The programming language or the OS that
implements the wait and signal operations ensures that race conditions
cannot arise over a semaphore.
Processes use wait and signal operations to synchronize their execution
with respect to one another. The number of processes which can get past
the wait operation in the beginning depends on the initial value of a
semaphore. If a process does not get past a wait operation, it is blocked on
the semaphore. This feature avoids busy waits. The producers–consumers
and readers–writers problems can also be implemented using semaphores.
Uses of Semaphores in Concurrent Systems
Use             Description
Mutual          Mutual exclusion can be implemented by using a semaphore that is
exclusion       initialized to 1. Before entering a CS, a process performs a wait
                operation on the semaphore, and on exit a signal operation. A binary
                semaphore, a special type of semaphore, simplifies CS
                implementation.
Bounded         Bounded concurrency implies that a function may be executed, or a
concurrency     resource may be accessed, by n processes concurrently, 1 ≤ n ≤ c,
                where c is a constant. A semaphore initialized to c can be used to
                implement bounded concurrency.
Signalling      Signalling is used when a process Pi wishes to perform an operation
                ai only after process Pj has performed an operation aj. To implement
                signalling, a semaphore initialized to 0 is used. Pi performs a wait
                on the semaphore before performing operation ai; Pj performs a
                signal on the semaphore after it performs operation aj.

Table 2.13: Uses of Semaphores in Implementing Concurrent Systems
Mutual exclusion is useful in implementing critical sections. Bounded
concurrency is important when a resource can be shared by up to c
processes, where c is a constant ≥ 1. Signalling is useful in control
synchronization.
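The other two uses listed in the table can be sketched in the same way (Python, illustrative names, not from the text): a semaphore initialized to c bounds concurrency, and a semaphore initialized to 0 makes Pi wait until Pj has performed aj.

import threading

c = 3
pool = threading.Semaphore(c)       # bounded concurrency: at most c users at a time

def use_resource():
    with pool:                       # wait ... signal around the shared resource
        pass                         # up to c threads may be here concurrently

aj_done = threading.Semaphore(0)     # signalling: initialized to 0

def process_Pj():
    print("Pj performs operation aj")
    aj_done.release()                # signal: aj has been performed

def process_Pi():
    aj_done.acquire()                # wait until Pj signals
    print("Pi performs operation ai")

pi = threading.Thread(target=process_Pi)
pj = threading.Thread(target=process_Pj)
pi.start(); pj.start()
pi.join(); pj.join()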
A readers–preferred readers–writers system using semaphores
Figure 2.9 shows a readers-preferred readers–writers system implemented
using semaphores:
Figure 2.9 A readers–preferred readers–writers system using
semaphores
Figure 2.10 shows a scheme for implementing semaphores. A semaphore
type is defined. It has fields for the value of the semaphore, a list that is
used to store the ids of processes blocked on the semaphore, and a lock
variable that ensures indivisibility of the wait and signal operations on the
semaphore. Wait and signal are procedures that implement these
operations; each takes a variable of the semaphore type as a parameter.
A concurrent program declares semaphores as variables of the semaphore
type, and its processes invoke the wait and signal procedures to operate on
them.
To avoid race conditions while accessing the value of the semaphore, the
wait and signal procedures first invoke the function Close_lock to set the
lock variable sem.lock. Close_lock uses an indivisible instruction and a
busy wait; however, the busy waits are short since the wait and signal
operations are themselves short. The procedures invoke the function
Open_lock to reset the lock after completing their execution. Recall that a
busy wait may lead to priority inversion in an OS using priority-based
scheduling; we assume that a priority inheritance protocol is used to avoid
this problem. In a time-sharing system, a busy wait can cause delays in
synchronization, but does not cause more serious problems.
Figure 2.10 A scheme for implementing wait and signal operations on
a semaphore
If any process is blocked on sem, signal procedure activates one such
process by making the system call activate. If no processes are waiting for
sem, it increments the value of sem by 1. It is convenient to maintain the
list of blocked processes as a queue and activate the first blocked process
at a signal operation. The semaphore implementation may also possess the
bounded wait property.
However, an implementation could choose any order as the semantics of
the signal operation do not specify any order for the processes to be
activated.
The wait operation has a very low failure rate in most systems using
semaphores, i.e., processes performing wait operations are seldom
blocked. This characteristic is exploited in some methods of implementing
semaphores to reduce the overhead. There are three methods of
implementing semaphores, each with its own overhead implications.
We shall use the term process as a generic term for both processes and
threads.
The wait procedure checks whether the value of sem is > 0. If so, it
decrements the value and returns. If the value is 0, the wait procedure adds
the id of the process to the list of processes blocked on sem and makes a
block_me system call with the lock variable as a parameter. This call
blocks the process that invoked the wait procedure and opens the lock
passed to it as a parameter. Note that the wait procedure could not have
performed these actions itself. Race conditions would arise if it opened the
lock before making a block_me call, and a deadlock would arise if it made
a block_me call before opening the lock!
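The scheme can be mimicked in user code. In the Python sketch below (an assumed design, not Figure 2.10 itself), a short-held lock makes each operation indivisible, a FIFO queue records blocked callers, and an Event per caller stands in for the kernel's block_me and activate calls; the blocked entry is recorded before the lock is released, so the wake-up cannot be lost.

import threading
from collections import deque

class CountingSemaphore:
    def __init__(self, value=0):
        self.value = value
        self.lock = threading.Lock()     # plays the role of sem.lock
        self.blocked = deque()           # FIFO queue of blocked callers

    def wait(self):
        with self.lock:
            if self.value > 0:
                self.value -= 1
                return
            ticket = threading.Event()
            self.blocked.append(ticket)  # registered as blocked before the lock opens
        ticket.wait()                    # block outside the lock, like block_me

    def signal(self):
        with self.lock:
            if self.blocked:
                self.blocked.popleft().set()   # activate the first blocked caller
            else:
                self.value += 1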
2.11 LET US SUM UP
• Process management involves various tasks like creation,
scheduling, and termination of processes, and deadlock handling.
• The kernel allocates resources to processes and schedules them
for use of the CPU.
• Process synchronization is a generic term for the techniques used
to delay and resume processes to implement process interactions.
• A program containing a race condition may produce correct or
incorrect results depending on the order in which instructions of
its processes are executed.
• Processes of an application need to interact with one another
because they work toward a common goal.
• Scheduling is the act of selecting the next process to be serviced
by a CPU.
• Two modes: Kernel and user modes assist OS to complete its
operations.
• Thread is an execution of a program that uses the resources of a
process.
• Concurrent programming constructs provide data abstraction and
encapsulation features specifically suited to the construction of
concurrent programs.
• A deadlock in an operating system is a situation in which processes
wait for one another's actions indefinitely.
• Mutual exclusion is useful in implementing critical sections.
• A semaphore is a special kind of synchronization data that can be
used only through specific synchronization primitives.
2.12 KEYWORDS
• Process Control Block (PCB): A data structure maintained by the
operating system for each process, containing information such as
process state, program counter, CPU registers, and memory
allocation.
• Process Scheduling: The activity of determining the order and
priority of process execution on the CPU, usually based on
scheduling algorithms, to optimize resource utilization and system
performance.
• Context Switching: The process of saving and restoring the state
of a process, typically during a context switch between different
processes or threads, to allow multitasking and efficient CPU
utilization.
• Process Forking: The creation of a new process by duplicating an
existing process, resulting in two identical processes (parent and
child) with separate memory spaces but shared code and data
segments.
• Process Group: A collection of one or more processes that share
certain attributes or properties, such as a common session or
terminal, and can be managed as a single entity by the operating
system.
2.13 SOME USEFUL BOOKS
References Books
● Charles Crowley.2017. Operating System: A Design-oriented
Approach. McGraw Hill Education.
● Gary J Nutt. 1997. Operating Systems: A Modern Perspective.
Pearson Education Inc.
● Maurice J. Bach. 1986. The Design of the UNIX Operating System.
Prentice Hall PTR.
● Archer J Harris. 2001. Schaum's Outline of Operating Systems
(Schaum's Outlines). McGraw-Hill Education.
Textbook References
● Silberschatz, Abraham and Galvin, Peter B., and Gagne, Greg,
2018. Operating System Concepts (10th edition), Hoboken, NJ:
John Wiley & Sons, Inc.
● Stallings, William.2017. Operating systems: Internals and design
principles (Ninth edition). New Jersey: Pearson Education, Inc.
● Dhamdhere, Dhananjay M. 2009. Operating Systems: A Concept-
Based Approach (First Edition), New York, McGraw-Hill.
Weblink/Research paper References
● https://www.tutorialspoint.com/operating_system/os_process_sch
eduling_algorithms.htm
● researchgate.net/publication/221329752_Operating_Systems_The
_Problems_Of_Performance_and_Reliability
● https://www.geeksforgeeks.org/monitors-in-process-
synchronization/
● https://stackoverflow.com/questions/34510/what-is-a-race-
condition
2.14 ANSWER TO CHECK YOUR PROGRESS
1. Answer to Check Your Progress-1, Q.1:
Process synchronization is important to prevent conflicts and race
conditions that may arise when multiple processes or threads attempt to
access shared resources simultaneously, which can lead to data corruption
or inconsistencies.
2. Answer to Check Your Progress-1, Q.2:
Common synchronization problems include deadlock, where two or more
processes are blocked waiting for each other to release resources, and
starvation, where a process is unable to gain access to resources due to
other processes continually acquiring them.
3. Answer to Check Your Progress-1, Q.3:
A deadlock is a situation in which two or more processes are unable to
proceed because each is waiting for the other to release a resource,
resulting in a cyclic dependency that prevents any of the processes from
completing.
4. Answer to Check Your Progress-2, Q.1:
Preemptive scheduling is a scheduling policy where the operating system
can interrupt a currently executing process and allocate the CPU to another
process of higher priority. This ensures fairness and responsiveness in the
system.
5. Answer to Check Your Progress-2, Q.2:
Round Robin (RR) scheduling is a preemptive scheduling algorithm where
each process is assigned a fixed time quantum (time slice) to execute on
the CPU. When a time quantum expires, the currently executing process
is preempted, and the CPU is allocated to the next process in the ready
queue.
2.15 TERMINAL QUESTIONS
1. What is the concept of a process in operating systems, and why is
process synchronization important? Provide examples of situations
where process synchronization is necessary.
2. Explain the concept of process scheduling in operating systems.
Discuss at least three scheduling algorithms commonly used in
modern operating systems.
3. What is inter-process communication (IPC), and why is it essential
in multitasking environments? Provide examples of IPC mechanisms
and their applications.
4. Describe the hardware requirements for process management in
operating systems, including protection mechanisms, context
switching, and privileged modes.
5. What are threads, and how do they differ from processes? Discuss
the advantages and challenges of using threads for concurrency in
operating systems.