Unit 2

Introduction of Process Management

• A process is a program in execution.


• For example, when we write a program in C or C++ and
compile it, the compiler creates binary code.
• The original code and binary code are both programs.
• When we actually run the binary code, it becomes a process.
• A process is an ‘active’ entity, whereas a program is considered a
‘passive’ entity.
• A single program can create many processes when run
multiple times; for example, when we open a .exe or binary file
multiple times, multiple instances begin (multiple processes
are created).
What is Process Management?

• Process management is a key part of an operating system.


• It controls how processes are carried out, and controls how
your computer runs by handling the active processes.
• This includes stopping processes, setting which processes
should get more attention, and many more.
• You can manage processes on your own computer too.
• The OS is responsible for managing the start, stop, and
scheduling of processes, which are programs running on the
system.
• The operating system uses a number of methods to prevent
deadlocks, facilitate inter-process communication, and
synchronize processes.
• Efficient resource allocation, conflict-free process execution,
and optimal system performance are all guaranteed by
competent process management.
• This essential component of an operating system enables the
execution of numerous applications at once, enhancing system
utilization and responsiveness.
How Does a Process Look Like in Memory?

• A process in memory is
divided into several distinct
sections, each serving a
different purpose.
• Here’s how a process
typically looks in memory:
• Text Section: Contains the compiled program code. Along with it, the
process includes the current activity, represented by the
value of the Program Counter.
• Stack: The stack contains temporary data, such as function
parameters, return addresses, and local variables.
• Data Section: Contains global and static variables.
• Heap Section: Memory dynamically allocated to the process
during its run time.
Characteristics of a Process
A process has the following attributes.
• Process Id: A unique identifier assigned by the operating system.
• Process State: Can be ready, running, etc.
• CPU Registers: Like the Program Counter (CPU registers must be
saved and restored when a process is swapped in and out of the
CPU)
• Accounting Information: Amount of CPU used for process execution,
time limits, execution ID, etc
• I/O Status Information: For example, devices allocated to the
process, open files, etc
• CPU Scheduling Information: For example, Priority (Different
processes may have different priorities, for example, a shorter
process assigned high priority in the shortest job first scheduling)
• All of the above attributes of a process are also known as
the context of the process.
• Every process has its own process control block(PCB), i.e. each
process will have a unique PCB.
• All of the above attributes are part of the PCB.
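• As a rough illustration only (the field names and types here are hypothetical
and not taken from any real kernel), the context kept in a PCB can be pictured
as a C structure:

/* Illustrative sketch of a Process Control Block (PCB).
   Real kernels (e.g., Linux's task_struct) are far more elaborate. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* Process Id                        */
    proc_state_t   state;            /* Process State                     */
    unsigned long  program_counter;  /* Saved Program Counter             */
    unsigned long  registers[16];    /* Saved CPU registers               */
    int            priority;         /* CPU scheduling information        */
    unsigned long  cpu_time_used;    /* Accounting information            */
    int            open_files[32];   /* I/O status information            */
    struct pcb    *next;             /* Link for the ready/waiting queues */
} pcb_t;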
States of Process
• New: Newly Created Process (or) being-created process.
• Ready: After the creation process moves to the Ready state, i.e.
the process is ready for execution.
• Running: Currently running process in CPU (only one process
at a time can be under execution in a single processor).
• Wait (or Block): When a process requests I/O access.
• Complete (or Terminated): The process completed its
execution.
• Suspended Ready: When the ready queue becomes full, some
processes are moved to a suspended ready state
• Suspended Block: When the waiting queue becomes full.
Process Operations

• Process operations in an
operating system refer to the
various activities the OS
performs to manage
processes.
• These operations include
process creation, process
scheduling, execution and
killing the process.
• Here are the key process
operations:
• Process Creation
• Process creation in an operating system (OS) is the act of
generating a new process.
• This new process is an instance of a program that can execute
independently.
• Scheduling
• Once a process is ready to run, it enters the “ready queue.”
• The scheduler’s job is to pick a process from this queue and
start its execution.
• Execution
• Execution means the CPU starts working on the process.
During this time, the process might:
• Move to a waiting queue if it needs to perform an I/O operation.
• Get blocked if a higher-priority process needs the CPU.
• Killing the Process
• After the process finishes its tasks, the operating system ends
it and removes its Process Control Block (PCB).
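• As a sketch of these operations on a Unix-like system (an assumption; the
exact calls differ by OS), a parent process can create a child, let it
execute, and reap it when it terminates:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* process creation                 */
    if (pid < 0) {
        perror("fork");                 /* creation failed                  */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* Child: scheduled by the OS, then executes its work. */
        printf("child %d executing\n", (int)getpid());
        _exit(EXIT_SUCCESS);            /* termination: OS reclaims the PCB */
    } else {
        waitpid(pid, NULL, 0);          /* parent waits for the child's end */
        printf("parent: child %d finished\n", (int)pid);
    }
    return 0;
}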
Context Switching of Process

• The process of saving the context of one process and loading


the context of another process is known as Context Switching.
• In simple terms, it is like loading and unloading the process
from the running state to the ready state.
When Does Context Switching Happen?
• Context Switching Happen:
• When a high-priority process comes to a ready state (i.e. with
higher priority than the running process).
• An Interrupt occurs.
• A user-to-kernel mode switch occurs (though a mode switch by itself does
not always require a context switch).
• Preemptive CPU scheduling is used.
Note:
• There are two main types of CPU scheduling, preemptive and
non-preemptive.
• Preemptive scheduling is when a process transitions from a
running state to a ready state or from a waiting state to a
ready state.
• Non-preemptive scheduling is employed when a process
terminates or transitions from running to waiting state.
Context Switch vs Mode Switch
• A mode switch occurs when the CPU privilege level is changed, for
example when a system call is made or a fault occurs.
• The kernel works in a more privileged mode than a standard user
task.
• If a user process wants to access things that are only accessible to
the kernel, a mode switch must occur.
• The currently executing process need not be changed during a
mode switch.
• A mode switch typically must occur before a process context switch can
occur.
• Only the kernel can cause a context switch.
CPU-Bound vs I/O-Bound Processes
• A CPU-bound process requires more CPU time or spends more
time in the running state.
• An I/O-bound process requires more I/O time and less CPU
time.
• An I/O-bound process spends more time in the waiting state.
• Process scheduling is an integral part of process
management in an operating system.
• It refers to the mechanism used by the operating system to
determine which process to run next.
• The goal of process scheduling is to improve overall system
performance by maximizing CPU utilization, minimizing
execution time, and improving system response time.
Process Scheduling Algorithms
• The operating system can use different scheduling algorithms
to schedule processes. Here are some commonly used scheduling
algorithms:
• First-Come, First-Served (FCFS): This is the simplest
scheduling algorithm, where the process is executed on a first-
come, first-served basis.
• FCFS is non-preemptive, which means that once a process
starts executing, it continues until it is finished or waiting for
I/O.
• Shortest Job First (SJF): SJF is a scheduling
algorithm that selects the process with the shortest burst time.
• The burst time is the time a process takes to complete its
execution.
• SJF minimizes the average waiting time of processes.
• Round Robin (RR): Round Robin is a preemptive scheduling
algorithm that reserves a fixed amount of time (a time quantum) in a round for
each process.
• If a process does not complete its execution within the
specified time, it is preempted and added to the end of the ready queue.
• RR ensures fair distribution of CPU time to all processes and
avoids starvation.
• Priority Scheduling: This scheduling algorithm assigns priority
to each process and the process with the highest priority is
executed first.
• Priority can be set based on process type, importance, or
resource requirements.
• Multilevel Queue: This scheduling algorithm divides the ready
queue into several separate queues,
• each queue having a different priority.
• Processes are queued based on their priority, and each queue
uses its own scheduling algorithm.
• This scheduling algorithm is useful in scenarios where different
types of processes have different priorities.
Advantages of Process Management
• Running Multiple Programs:
• Process management lets you run multiple applications at the
same time, for example, listen to music while browsing the
web.
• Process Isolation:
• It ensures that different programs don’t interfere with each
other, so a problem in one program won’t crash another.
• Fair Resource Use:
• It makes sure resources like CPU time and memory are shared
fairly among programs, so even lower-priority programs get a
chance to run.
• Smooth Switching:
• It efficiently handles switching between programs, saving and
loading their states quickly to keep the system responsive and
minimize delays.
Disadvantages of Process Management
• Overhead: Process management uses system resources
because the OS needs to keep track of various data structures
and scheduling queues.
• This requires CPU time and memory, which can affect the
system’s performance.
• Complexity: Designing and maintaining an OS is complicated
due to the need for complex scheduling algorithms and
resource allocation methods.
• Deadlocks: To keep processes running smoothly together, the
OS uses mechanisms like semaphores and mutex locks.
• However, these can lead to deadlocks, where processes get
stuck waiting for each other indefinitely.
• Increased Context Switching: In multitasking systems, the OS
frequently switches between processes.
• Storing and loading the state of each process (context
switching) takes time and computing power, which can slow
down the system.
• In conclusion, process management is an important function of
an operating system,
• ensuring that multiple programs can run smoothly and
efficiently. It involves creating, scheduling, and terminating
processes, as well as managing resources and handling
communication between processes.
• Effective process management optimizes the use of system
resources, maintains system stability, and enhances the overall
performance and responsiveness of the computer.
• Understanding and implementing robust process management
strategies are crucial for maintaining an efficient and reliable
computing environment.
States of a Process in Operating Systems
• In an operating system, a process is a program that is being
executed.
• During its execution, a process goes through different states.
• Understanding these states helps us see how the operating
system manages processes, ensuring that the computer runs
efficiently.
• There must be a minimum of five states.
• Even though the process could be in one of these states during
execution, the names of the states are not standardized.
• Each process goes through several stages throughout its life
cycle.
Process States in Operating System
• New State: In this step, the process is about to be created but
not yet created.
• It is the program that is present in secondary memory that will
be picked up by the OS to create the process.
• Ready State: New -> Ready to run.
• After the creation of a process, the process enters the ready
state i.e. the process is loaded into the main memory.
• The process here is ready to run and is waiting to get the CPU
time for its execution.
• Processes that are ready for execution by the CPU are
maintained in a queue called a ready queue for ready
processes.
• Run State:
• The process is chosen from the ready queue by the OS for
execution
• and the instructions within the process are executed by any
one of the available CPU cores.
• Blocked or Wait State:
• Whenever the process requests access to I/O or needs input
from the user or needs access to a critical region(the lock for
which is already acquired) it enters the blocked or waits state.
• The process continues to wait in the main memory and does
not require CPU.
• Once the I/O operation is completed the process goes to the
ready state.
• Terminated or Completed State:
• Process is killed as well as PCB is deleted.
• The resources allocated to the process will be released or
deallocated.
• Suspend Ready:
• Process that was initially in the ready state but was swapped
out of main memory(refer to Virtual Memory topic)
• and placed onto external storage by the scheduler is said to be
in suspend ready state.
• The process will transition back to a ready state whenever the
process is again brought onto the main memory.
• Suspend Wait or Suspend Blocked:
• Similar to suspend ready, but applies to a process that was
performing an I/O operation when a lack of main memory caused
it to be moved to secondary memory.
• When work is finished it may go to suspend ready.
• CPU and I/O Bound Processes:
• If the process is intensive in terms of CPU operations, then it is
called CPU bound process.
• Similarly, If the process is intensive in terms of I/O operations
then it is called I/O bound process.
How Does a Process Move From One State to
Other State?
• A process can move between different states in an operating
system based on its execution status and resource availability.
• Here are some examples of how a process can move between
different states:
• New to Ready:
• When a process is created, it is in a new state.
• It moves to the ready state when the operating system has
allocated resources to it and it is ready to be executed.
• Ready to Running:
• When the CPU becomes available, the operating system
selects a process from the ready queue depending on various
scheduling algorithms and moves it to the running state.
• Running to Blocked:
• When a process needs to wait for an event to occur (I/O
operation or system call), it moves to the blocked state.
• For example, if a process needs to wait for user input, it moves
to the blocked state until the user provides the input.
• Running to Ready:
• When a running process is preempted by the operating
system, it moves to the ready state.
• For example, if a higher-priority process becomes ready, the
operating system may preempt the running process and move
it to the ready state.
• Blocked to Ready:
• When the event a blocked process was waiting for occurs, the
process moves to the ready state.
• For example, if a process was waiting for user input and the
input is provided, it moves to the ready state.
• Running to Terminated: When a process completes its
execution or is terminated by the operating system, it moves to
the terminated state.
Types of Schedulers
• Long-Term Scheduler:
• Decides how many processes should be made to stay in the
ready state.
• This decides the degree of multiprogramming.
• Once a decision is taken it lasts for a long time which also
indicates that it runs infrequently.
• Hence it is called a long-term scheduler.
• Short-Term Scheduler:
• Short-term scheduler will decide which process is to be
executed next and then it will call the dispatcher.
• A dispatcher is the software that moves a process from the ready
state to the running state and vice versa.
• In other words, it is context switching.
• It runs frequently.
• Short-term scheduler is also called CPU scheduler.
• Medium Scheduler:
• Suspension decision is taken by the medium-term scheduler.
• The medium-term scheduler is used for swapping which is
moving the process from main memory to secondary and vice
versa.
• The swapping is done to reduce degree of multiprogramming.
Multiprogramming

• We have many processes ready to run.


• There are two types of multiprogramming:
• Preemption – Process is forcefully removed from CPU.
• Pre-emption is also called time sharing or multitasking.
• Non-Preemption – Processes are not removed until they
complete the execution.
• Once control is given to the CPU for a process execution, till
the CPU releases the control by itself, control cannot be taken
back forcibly from the CPU.
• Degree of Multiprogramming
• The number of processes that can reside in the ready state at
maximum decides the degree of multiprogramming,
• e.g., if the degree of multiprogramming = 100, this means 100
processes can reside in the ready state at maximum.
Operation on The Process
• Creation:
• Once the process has been created, it enters the
ready queue (main memory) and is prepared for execution.
• Scheduling:
• The operating system picks one process to begin executing
from among the numerous processes that are currently in the
ready queue.
• Scheduling is the process of choosing the next process to run.
• Execution:
• The processor begins running the process as soon as it is
scheduled to run.
• During execution, a process may become blocked or wait, at
which point the processor switches to executing the other
processes.
• Killing or Deletion:
• The OS will terminate the process once its purpose has been
fulfilled.
• The process’s context (its PCB) will then be removed.
• Blocking:
• When a process is waiting for an event or resource, it is
blocked.
• The operating system will place it in a blocked state, and it will
not be able to execute until the event or resource becomes
available.
• Resumption:
• When the event or resource that caused a process to block
becomes available,
• the process is removed from the blocked state and added back
to the ready queue.
• Context Switching:
• When the operating system switches from executing one
process to another,
• it must save the current process’s context and load the context
of the next process to execute.
• This is known as context switching.
• Inter-Process Communication:
• Processes may need to communicate with each other to share
data or coordinate actions.
• The operating system provides mechanisms for inter-process
communication, such as shared memory, message passing, and
synchronization primitives.
• Process Synchronization:
• Multiple processes may need to access a shared resource or
critical section of code simultaneously.
• The operating system provides synchronization mechanisms to
ensure that only one process can access the resource or critical
section at a time.
• Process States:
• Processes may be in one of several states, including ready,
running, waiting, and terminated.
• The operating system manages the process states and
transitions between them.
Features of The Process State

• A process can move from the running state to the waiting


state if it needs to wait for a resource to become available.
• A process can move from the waiting state to the ready state
when the resource it was waiting for becomes available.
• A process can move from the ready state to the running state
when it is selected by the operating system for execution.
• The scheduling algorithm used by the operating system
determines which process is selected to execute from the
ready state.
• The operating system may also move a process from the
running state to the ready state to allow other processes to
execute.
• A process can move from the running state to the terminated
state when it completes its execution.
• A process can move from the waiting state directly to the
terminated state if it is aborted or killed by the operating
system or another process.
• A process can go through the ready, running, and waiting states any
number of times in its lifecycle, but the new and terminated
states occur only once.
• The process state includes information about the program
counter, CPU registers, memory allocation, and other resources
used by the process.
• The operating system maintains a process control block (PCB)
for each process, which contains information about the process
state, priority, scheduling information, and other process-
related data.
• The process state diagram is used to represent the transitions
between different states of a process and is an essential
concept in process management in operating systems.
Introduction of Process Synchronization
• Process Synchronization is the coordination of execution of
multiple processes in a multi-process system to ensure that
they access shared resources in a controlled and predictable
manner.
• It aims to resolve the problem of race conditions and other
synchronization issues in a concurrent system.
• The main objective of process synchronization is to ensure that
multiple processes access shared resources without interfering
with each other and to prevent the possibility of inconsistent
data due to concurrent access.
• To achieve this, various synchronization techniques such as
semaphores, monitors, and critical sections are used
• In a multi-process system, synchronization is necessary to
ensure data consistency and integrity, and to avoid the risk of
deadlocks and other synchronization problems.
• Process synchronization is an important aspect of modern
operating systems, and it plays a crucial role in ensuring the
correct and efficient functioning of multi-process systems.
Types of Process

• On the basis of synchronization, processes are categorized as


one of the following two types:
• Independent Process: The execution of one process does not
affect the execution of other processes.
• Cooperative Process: A process that can affect or be affected
by other processes executing in the system.
• The process synchronization problem arises in the case of
cooperative processes because resources are shared among
cooperative processes.
What is Race Condition?

• When more than one process executes the same code or
accesses the same memory or any shared variable at the same time, there
is a possibility that the output or the value of the shared variable is
wrong. All the processes are, in effect, racing, each claiming that its
output is correct; this condition is known as a race condition.
• Several processes access and process the manipulations over
the same data concurrently, and then the outcome depends on
the particular order in which the access takes place.
• A race condition is a situation that may occur inside a critical
section.
• This happens when the result of multiple thread execution in
the critical section differs according to the order in which the
threads execute.
• Race conditions in critical sections can be avoided if the critical
section is treated as an atomic instruction.
• Also, proper thread synchronization using locks or atomic
variables can prevent race conditions.
Example:

• Let’s say there are two processes P1 and P2 which share a
common variable (shared=10);
• both processes are present in the ready queue and waiting for their turn
to be executed.
• Suppose Process P1 comes under execution first; the CPU copies the
common variable (shared=10) into a local variable (X=10) and
increments it by 1 (X=11).
• Then, when the CPU reads the line sleep(1), it switches from the
current process P1 to process P2, which is present in the ready queue.
• The process P1 goes in a waiting state for 1 second.
• Now the CPU executes Process P2 line by line, stores the common
variable (shared=10) in its local variable (Y=10), and decrements Y by
1 (Y=9).
• Then, when the CPU reads sleep(1), the current process P2 goes into the
waiting state, and the CPU remains idle for some time as there is no
process in the ready queue. After 1 second, process P1 comes back
into the ready queue;
• the CPU takes process P1 under execution and executes its
remaining line of code (storing the local variable (X=11) into the common
variable (shared=11)).
• The CPU then remains idle for some time, waiting for any process in the
ready queue.
• After 1 second of Process P2 has elapsed and process P2 comes back
into the ready queue, the CPU executes the remaining line of
Process P2 (storing the local variable (Y=9) into the common variable
(shared=9)).
• Note: We expect the final value of the common
variable (shared) after execution of Process P1 and Process P2
to be 10 (as Process P1 increments the variable (shared=10) by 1 and
Process P2 then decrements the variable (shared=11) by 1, so it finally
becomes shared=10).
• But we are getting undesired value due to a lack of proper
synchronization.

Actual meaning of Race-Condition

• If P1's write to the shared variable happens first and P2's write happens
last, then we will get the value of the common variable (shared) = 9.
• If P2's write happens first and P1's write happens last,
then we will get the final value of the common variable (shared) = 11.
• Here the two outcomes (value1 = 9) and (value2 = 11) are racing; if we
execute these two processes on our computer system, then sometimes
we will get 9 and sometimes we will get 11 as the final value of
the common variable (shared).
• This phenomenon is called a race condition.
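• A minimal C sketch of this scenario, assuming POSIX threads (threads stand
in here for processes P1 and P2) and the same shared = 10 starting value:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int shared = 10;                  /* common variable used by both "processes" */

void *p1(void *arg) {
    int x = shared;               /* copy shared into local X (X = 10)        */
    x = x + 1;                    /* X = 11                                   */
    sleep(1);                     /* switched away before writing back        */
    shared = x;                   /* store possibly stale value               */
    return NULL;
}

void *p2(void *arg) {
    int y = shared;               /* copy shared into local Y (Y = 10)        */
    y = y - 1;                    /* Y = 9                                    */
    sleep(1);
    shared = y;                   /* whichever write happens last "wins"      */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 10, but the result is 9 or 11 depending on the write order. */
    printf("shared = %d\n", shared);
    return 0;
}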
Critical Section Problem
• A critical section is a code segment that can be accessed by
only one process at a time.
• The critical section contains shared variables that need to be
synchronized to maintain the consistency of data variables.
• So the critical section problem means designing a way for
cooperative processes to access shared resources without
creating data inconsistencies.
In the entry section, the process requests entry into the Critical Section.
• Any solution to the critical section problem must satisfy three
requirements:
• Mutual Exclusion: If a process is executing in its critical section,
then no other process is allowed to execute in the critical
section.
• Progress: If no process is executing in the critical section and
other processes are waiting outside the critical section,
• then only those processes that are not executing in their
remainder section can participate in deciding which will enter
the critical section next,
• and the selection can not be postponed indefinitely.
• Bounded Waiting:
• A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and
before that request is granted.
Peterson’s Solution

• Peterson’s Solution is a classical software-based solution to


the critical section problem.
• In Peterson’s solution, we have two shared variables:
• boolean flag[i]: Initialized to FALSE, initially no one is
interested in entering the critical section
• int turn: The process whose turn is to enter the critical section.
• Peterson’s Solution preserves all three conditions
• Mutual Exclusion is assured as only one process can access
the critical section at any time.
• Progress is also assured, as a process outside the critical
section does not block other processes from entering the
critical section.
• Bounded Waiting is preserved as every process gets a fair
chance.
• Disadvantages of Peterson’s Solution
• It involves busy waiting. (In the Peterson’s solution, the code
statement- “while(flag[j] && turn == j);” is responsible for this.
Busy waiting is not favored because it wastes CPU cycles that
could be used to perform other tasks.)
• It is limited to 2 processes.
• Peterson’s solution cannot be used in modern CPU
architectures.
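• A compact C-style sketch of Peterson's solution for two processes i and j
(the busy-wait line is the one quoted above; on modern CPUs it would
additionally need memory barriers, which is why the solution cannot be used
there directly):

/* Shared between the two processes (i = 0, j = 1) */
volatile int flag[2] = {0, 0};    /* flag[i]: process i wants to enter     */
volatile int turn = 0;            /* whose turn it is to enter             */

void enter_critical_section(int i) {
    int j = 1 - i;
    flag[i] = 1;                  /* I am interested                       */
    turn = j;                     /* but let the other process go first    */
    while (flag[j] && turn == j)  /* busy wait while the other process is  */
        ;                         /* interested and it is its turn         */
}

void exit_critical_section(int i) {
    flag[i] = 0;                  /* no longer interested                  */
}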
Semaphores

• A semaphore is a signaling mechanism and a thread that is


waiting on a semaphore can be signaled by another thread.
• This is different from a mutex, as a mutex can be signaled (released)
only by the thread that called the wait (lock) function.
• A semaphore uses two atomic operations, wait and signal for
process synchronization.
A Semaphore is an integer variable, which can be accessed
only through two operations wait() and signal().
There are two types of semaphores: Binary
Semaphores and Counting Semaphores.
• Binary Semaphores: They can only be either 0 or 1.
• They are also known as mutex locks, as the locks can
provide mutual exclusion.
• All the processes can share the same mutex semaphore that is
initialized to 1.
• A process has to wait until the semaphore value is 1 (the lock is free).
• Then, the process sets the mutex semaphore to 0 and starts
its critical section.
• When it completes its critical section, it resets the value of
the mutex semaphore to 1 so some other process can enter
its critical section.
• Counting Semaphores: They can have any value and are not
restricted to a certain domain.
• They can be used to control access to a resource that has a
limitation on the number of simultaneous accesses.
• The semaphore can be initialized to the number of instances of
the resource.
• Whenever a process wants to use that resource, it checks if the
number of remaining instances is more than zero, i.e., the
process has an instance available.
• Then, the process can enter its critical section thereby
decreasing the value of the counting semaphore by 1.
• After the process is over with the use of the instance of the
resource, it can leave the critical section thereby adding 1 to
the number of available instances of the resource.
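• A small C sketch using POSIX counting semaphores (sem_wait acts as wait()
and sem_post as signal(); the resource count of 3 and the thread count of 5
are arbitrary example values):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define INSTANCES 3               /* number of identical resource instances */
sem_t resources;                  /* counting semaphore                     */

void *worker(void *arg) {
    sem_wait(&resources);         /* wait(): acquire one instance (count--) */
    printf("thread %ld is using a resource instance\n", (long)arg);
    /* ... use of the resource instance (critical section) ...              */
    sem_post(&resources);         /* signal(): release it (count++)         */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&resources, 0, INSTANCES);   /* initialize to instance count   */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resources);
    return 0;
}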
• Advantages of Process Synchronization
• Ensures data consistency and integrity
• Avoids race conditions
• Prevents inconsistent data due to concurrent access
• Supports efficient and effective use of shared resources
• Disadvantages of Process Synchronization
• Adds overhead to the system
• This can lead to performance degradation
• Increases the complexity of the system
• Can cause deadlock if not implemented properly.
Concurrency in Operating System
• It refers to the execution of multiple instruction sequences
at the same time.
• It occurs in an operating system when multiple process
threads are executing concurrently.
• These threads can interact with one another via shared
memory or message passing.
• Concurrency results in resource sharing, which causes
issues like deadlocks and resource scarcity.
• It aids with techniques such as process coordination,
memory allocation, and execution schedule to maximize
throughput.
Principles of Concurrency

• Today's technology, like multi-core processors and


parallel processing, allows multiple processes and threads
to be executed simultaneously.
• Multiple processes and threads can access the same
memory space, the same declared variable in code,
• or even read or write to the same file.
• The amount of time it takes a process to execute cannot
be simply estimated,
• and you cannot predict which process will complete first,
so you must build techniques to deal with the problems
that concurrency creates.
• Interleaved and overlapping processes are two types of
concurrent processes with the same problems.
• It is impossible to predict the relative speed of execution,
and the following factors determine it:
1.The way operating system handles interrupts
2.Other processes' activities
3.The operating system's scheduling policies
Problems in Concurrency

• There are various problems in concurrency. Some of them


are as follows:
• 1. Locating the programming errors
• It is difficult to spot a programming error because failures
are usually not repeatable, due to the varying states of shared
components each time the code is executed.
• 2. Sharing Global Resources
• Sharing global resources is difficult.
• If two processes utilize a global variable and both alter the
variable's value, the order in which the many changes are
executed is critical.
• 3. Locking the channel
• It could be inefficient for the OS to lock the resource and
prevent other processes from using it.
• 4. Optimal Allocation of Resources
• It is challenging for the OS to handle resource allocation
properly.
• Issues of Concurrency
• Various issues of concurrency are as follows:
1. Non-atomic
• Operations that are non-atomic but interruptible by several
processes can cause problems.
• A non-atomic operation depends on other processes, and an
atomic operation runs independently of other processes.
2. Deadlock
• In concurrent computing, it occurs when one group member
waits for another member, including itself, to send a message
• and release a lock.
• Software and hardware locks are commonly used to arbitrate
shared resources and implement process synchronization in
parallel computing, distributed systems, and multiprocessing.
• 3. Blocking
• A blocked process is waiting for some event, like the
availability of a resource or completing an I/O operation.
• Processes may block waiting for resources, and a process
may be blocked for a long time waiting for terminal input.
• If the process needs to update some data periodically,
this is very undesirable.
4. Race Conditions
• A race problem occurs when the output of a software
application is determined by the timing or sequencing of
other uncontrollable events.
• Race conditions can also happen in multithreaded
software that runs in a distributed environment or is
interdependent on shared resources.
• 5. Starvation
• A problem in concurrent computing is where a process is
continuously denied the resources it needs to complete its
work.
• It could be caused by errors in scheduling or mutual
exclusion algorithm, but resource leaks may also cause it.
• Concurrent system design frequently requires developing
dependable strategies for coordinating process execution,
data interchange, memory allocation, and execution
schedule to decrease response time and maximize throughput.
Advantages of Concurrency in Operating System
1. Better Performance
• It improves the operating system's performance.
• When one application only utilizes the processor, and
another only uses the disk drive, the time it takes to
perform both apps simultaneously is less than the time it
takes to run them sequentially.
2. Better Resource Utilization
• It enables resources that are not being used by one
application to be used by another.
3. Running Multiple Applications
• It enables you to execute multiple applications
simultaneously.
Disadvantages of Concurrency in Operating
System

1.It is necessary to protect multiple applications from each


other.
2.It is necessary to use extra techniques to coordinate
several applications.
3.Additional performance overheads and complexities in OS
are needed for switching between applications.
CPU Scheduling in Operating Systems
• Scheduling of processes/work is done to finish the work on
time.
• CPU Scheduling is a process that allows one process to use
the CPU while another process is delayed (in standby) due to
unavailability of any resources such as I / O etc,
• thus making full use of the CPU.
• The purpose of CPU Scheduling is to make the system more
efficient, faster, and fairer.
• CPU scheduling is a key part of how an operating system
works.
• It decides which task (or process) the CPU should work on at
any given time.
• This is important because a CPU can only handle one task at a
time, but there are usually many tasks that need to be
processed.
• Whenever the CPU becomes idle, the operating system must
select one of the processes in the ready queue.
• The selection process is done by the short-term (CPU) scheduler.
• The scheduler selects from among the in-memory processes ready to
run and assigns the CPU to one of them.
CPU Scheduling

• CPU Scheduling is the process by which one process is
selected to execute using the resources of the CPU.
• A process can also wait due to the absence or
unavailability of resources.
• Scheduling lets processes make complete use of the Central
Processing Unit.
• The operating system must choose one of the processes
in the list of ready-to-launch processes whenever the CPU
gets idle.
• A short-term (CPU) scheduler does the selection.
• The scheduler chooses one of the ready-to-start in-memory
processes to get the CPU.
• Before going to the types of CPU Scheduling Algorithms,
we will learn about the basic terminology
used in CPU Scheduling Algorithms.
1. Process ID
• The Process ID is the first thing to be written while
solving a scheduling problem.
• The Process ID acts like the name of the process.
• It is usually represented with plain numbers or with the letter P
followed by a number.
• Example:
1. 0, 1, 2, 3, . . .
2. P0, P1, P2, P3, . . .
• Usually, we start the naming of the processes from zero.
• We can also start the numbering from 1; it is simply a matter of
convention.
2. Arrival Time
• The time at which the Process enters the
ready queue, i.e., the time when the Process becomes ready to be
executed by the CPU.
• This Arrival Time can be represented as AT in short form.
• The Arrival Time is always zero or positive.
3. Burst Time
• The amount of CPU time the Process requires to complete its
execution is known as the Burst Time.
• The Burst Time can be represented as BT in short form.
• The Burst Time of a process is always greater than zero.
4. Completion Time
• The time at which the CPU finishes executing the
process is known as the Completion Time.
• The Completion Time can be represented as CT in short
form.
• The Completion Time will always be greater than zero.
5. Turn Around Time
• The total time a process spends in the system, from the moment it
becomes ready to execute (enters the Ready Queue) until it completes, is
known as Turn Around Time.
• The Turn Around Time can be calculated with the help of
Completion Time and Arrival Time.
• The Turn Around Time can be represented as TAT in short
form.
• The Turn Around Time is the difference of Completion
Time and Arrival Time.
• Formula
TAT = CT - AT
Here, CT is Completion Time and AT is Arrival Time.
6 . Waiting Time
• The time the Process spends waiting in the ready queue, i.e., the part
of its turnaround time during which it is not executing, is
known as Waiting Time.
• The Waiting Time can be represented as WT in short
form.
• The Waiting Time can be calculated with the help of Turn
Around Time and Burst Time.
• The Waiting Time is the difference between Turn Around
Time and Burst Time
• Formula
• WT = TAT - BT
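• For example, a process with AT = 0, BT = 5, and CT = 12 has
TAT = CT - AT = 12 and WT = TAT - BT = 12 - 5 = 7.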
7. Ready Queue
• The queue where all the ready processes wait for the CPU until the
currently executing process finishes or is preempted.
• This ready queue is very important because the CPU would have
no defined order when several processes of the same kind are
ready at the same time.
• In these kinds of conditions the ready queue comes
into play and keeps the processes in a well-defined order for the scheduler.
8. Gantt Chart
• It is a chart that shows the order in which processes execute
on the CPU and for how long.
• This is very useful for calculating Waiting Time,
Completion Time, and Turn Around Time.
Modes in CPU Scheduling Algorithms
• There are two modes in CPU Scheduling Algorithms. They
are:
1.Pre-emptive Approach
2.Non Pre-emptive Approach
• In the Pre-emptive Approach, once a process starts its
execution, the CPU is not necessarily kept by that same
process until the completion of the process.
• There can be a shift of processes by the Central Processing
Unit: CPU allocation changes when the required conditions are
broken, for example when a higher-priority process arrives or
the time quantum expires.
• In the Non-Pre-emptive Approach, once a process starts its
execution, the CPU is allotted to that same process
until the completion of the process.
• There is no forcible shift of processes by the Central
Processing Unit.
• The complete CPU is allocated to the process, and there
is no change of CPU allocation until the process is
complete (or it voluntarily blocks).
What Are The Different Types of CPU Scheduling
Algorithms?

• There are mainly two types of scheduling methods:


• Preemptive Scheduling: Preemptive scheduling is used when
a process switches from running state to ready state or from
the waiting state to the ready state.
• Non-Preemptive Scheduling: Non-Preemptive scheduling is
used when a process terminates , or when a process switches
from running state to waiting state.
First Come First Serve – CPU Scheduling (Non-
Preemptive)
• Simplest CPU scheduling algorithm that schedules according
to arrival times of processes.
• The first come first serve scheduling algorithm states that the
process that requests the CPU first is allocated the CPU first.
• It is implemented by using the FIFO queue. When a process
enters the ready queue, its PCB is linked to the tail of the
queue.
• When the CPU is free, it is allocated to the process at the head
of the queue.
• The running process is then removed from the queue.
• FCFS is a non-preemptive scheduling algorithm.
Characteristics of FCFS
• FCFS is a non-preemptive CPU
scheduling algorithm.
• Tasks are always executed on a First-come, First-serve
concept.
• FCFS is easy to implement and use.
• This algorithm is not very efficient in performance, and the wait
time is quite high.
• Algorithm for FCFS Scheduling
• The waiting time for the first process is 0 as it is executed first.
• The waiting time for each subsequent process can be calculated
by subtracting its arrival time from the time it first gets the CPU
(equivalently, WT = TAT - BT).
• The Average waiting time can be calculated by:

Average Waiting Time = (sum of all waiting time)/(Number of processes)


Example
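As a sketch (the three processes and their times below are made-up example
values, and the processes are assumed to be already sorted by arrival time),
FCFS completion, turnaround, and waiting times can be computed like this in C:

#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 2};          /* arrival times (example values)   */
    int bt[] = {5, 3, 8};          /* burst times   (example values)   */
    int n = 3, time = 0;
    float total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < at[i]) time = at[i];   /* CPU idles until arrival   */
        int ct  = time + bt[i];           /* completion time           */
        int tat = ct - at[i];             /* turnaround = CT - AT      */
        int wt  = tat - bt[i];            /* waiting    = TAT - BT     */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i, ct, tat, wt);
        total_wt += wt;
        total_tat += tat;
        time = ct;                        /* next process starts here  */
    }
    printf("Average WT = %.2f, Average TAT = %.2f\n",
           total_wt / n, total_tat / n);
    return 0;
}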
Priority Scheduling Algorithm
• In Priority scheduling, there is a priority number assigned
to each process.
• In some systems, the lower the number, the higher the
priority.
• While, in the others, the higher the number, the higher will
be the priority.
• The Process with the higher priority among the available
processes is given the CPU.
• There are two types of priority scheduling algorithm
exists.
• One is Preemptive priority scheduling while the other
is Non Preemptive Priority scheduling.
• The priority number assigned to each of the process may
or may not vary.
• If the priority number doesn't change itself throughout the
process, it is called static priority,
• while if it keeps changing itself at the regular intervals, it
is called dynamic priority.

Non Preemptive Priority Scheduling

• In the Non Preemptive Priority scheduling, The Processes


are scheduled according to the priority number assigned
to them.
• Once the process gets scheduled, it will run till the
completion.
• Generally, the lower the priority number, the higher is the
priority of the process.
• People might get confused with the priority numbers;
hence, in exams such as GATE, the question clearly mentions which one is the
highest priority and which one is the lowest.
• Response Time = Time at which the process first gets the CPU - Arrival Time
• Response time is the amount of time it takes for the CPU to
respond to a request made by a process.
• It is the duration between the arrival of a process and the first
time it runs.
Preemptive Priority Scheduling
• In Preemptive Priority Scheduling,
• at the time of arrival of a process in the ready queue,
• its Priority is compared with the priority of the other
processes present in the ready queue as well as with the
one which is being executed by the CPU at that point of
time.
• The One with the highest priority among all the available
processes will be given the CPU next.
• The difference between preemptive priority scheduling
and non preemptive priority scheduling is that,
• in the preemptive priority scheduling, the job which is
being executed can be stopped at the arrival of a higher
priority job.
• Once all the jobs get available in the ready queue, the
algorithm will behave as non-preemptive priority
scheduling, which means the job scheduled will run till the
completion and no preemption will be done.
Shortest Job First (SJF) Scheduling
• The shortest job first (SJF) or shortest job next, is a scheduling
policy that selects the waiting process with the smallest
execution time to execute next.
• SJF, also known as Shortest Job Next (SJN), can be preemptive
or non-preemptive.
• Characteristics of SJF Scheduling:
• Shortest Job first has the advantage of having a minimum
average waiting time among all scheduling algorithms.
• It is a Greedy Algorithm.
• It may cause starvation if shorter processes keep coming.
• This problem can be solved using the concept of ageing.
• It is practically infeasible as the Operating System may not know
burst times and therefore may not be able to sort processes by them.
• While it is not possible to predict execution time, several
methods can be used to estimate the execution time for a job,
such as a weighted average of previous execution times.
• SJF can be used in specialized environments where accurate
estimates of running time are available.
• Algorithm:
• Sort all the processes according to the arrival time.
• Then select that process that has minimum arrival time and
minimum Burst time.
• After completion of the process make a pool of processes that
arrives afterward till the completion of the previous process
and select that process among the pool which is having
minimum Burst time.
Shortest Remaining time first
• Shortest Remaining Time First (SRTF) is the preemptive version
of Shortest Job Next (SJN) algorithm,
• where the processor is allocated to the job closest to
completion.
• This algorithm requires advance knowledge of the
CPU time required to process a job, which is not available in an interactive
system, and hence can’t be implemented there.
• But, in a batch system where it is desirable to give preference
to short jobs, SRT algorithm is used.
• However, SRT involves more overheads than SJN( or SJF),
• as the OS is required to frequently monitor the CPU time of the
jobs in the READY queue and perform context switching.
• For the same set of jobs, the SRT algorithm generally
completes execution faster than the SJN algorithm.
• But, here the overhead charges, i.e., time required for context
switching has been ignored.
• When a job is preempted, all of its processing information
must be saved in its PCB for later, when it is to be continued,
• and the contents of the PCB of the other job to which the OS is
switching are loaded into the registers in the memory.
• This is known as Context Switching.
Advantages:
• SRTF algorithm makes the processing of the jobs faster than
the SJN algorithm, given its overhead charges are not counted.
Disadvantages
• The context switch is done many more times in SRTF than in
SJN, and consumes the CPU’s valuable time for processing.
• This adds to its processing time and diminishes its
advantage of fast processing.
Longest Job First (LJF)
• Longest Job First (LJF) is a non-preemptive scheduling
algorithm.
• This algorithm is based on the burst time of the processes.
• The processes are put into the ready queue based on their
burst times i.e., in descending order of the burst times.
• As the name suggests this algorithm is based on the fact that
the process with the largest burst time is processed first.
• The burst time of only those processes is considered that have
arrived in the system until that time.
• Its preemptive version is called Longest Remaining Time First
(LRTF) algorithm.
• Characteristics of Longest Job First(Non-Preemptive)
• Among all the processes waiting in the ready queue, the CPU is
always assigned to the process having the largest burst time.
• If two processes have the same burst time then the tie is
broken using FCFS i.e. the process that arrived first is
processed first.
• LJF CPU Scheduling can be of both preemptive and non-
preemptive types.
• Advantages of Longest Job First(LJF)
• No other process can execute until the longest job or process
executes completely.
• All the jobs or processes finish at the same time approximately.
• Disadvantages of Longest Job First CPU Scheduling
Algorithm
• This algorithm gives a very high average waiting
time and average turn-around time for a given set of processes.
• This may lead to a convoy effect.
• It may happen that a short process may never get executed
and the system keeps on executing the longer processes.
• It reduces the processing speed and thus reduces the efficiency
and utilization of the system.
Longest Job First CPU Scheduling Algorithm
• Step-1: First, sort the processes in increasing order of their
Arrival Time.
• Step 2: Choose the process having the highest Burst Time
among all the processes that have arrived till that time.
• Step 3: Then process it for its burst time. Check if any other
process arrives until this process completes execution.
• Step 4: Repeat the above three steps until all the processes
are executed.
Highest Response Ratio Next (HRRN) CPU
Scheduling
• CPU scheduling is the process of deciding which process will
own the CPU to use while another process is suspended.
• The main function of CPU scheduling is to ensure that
whenever the CPU becomes idle, the OS selects one of the
processes available in the ready queue.
• Highest Response Ratio Next (HRRN) Scheduling is a part of
nonpreemptive CPU scheduling.
What is the Highest Response Ratio Next
(HRRN) Scheduling?
• One of the most optimal scheduling algorithms is the Highest
Response Ratio Next (HRRN).
• This algorithm is a non-preemptive algorithm in which, HRRN
scheduling is done based on an extra parameter, which is
called Response Ratio.
• Given N processes with their Arrival times and Burst times, the
task is to find the average waiting time and an average
turnaround time using the HRRN scheduling algorithm.
• The name itself states that we need to find the response ratio
of all available processes and select the one with the highest
Response Ratio.
• A process once selected will run till completion.
• below is the formula to calculate the Response Ratio.
• Response Ratio = (W + S)/S
• Here, W is the waiting time of the process so far and S is the
Burst time of the process.
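• A small C sketch of the selection step (a hypothetical helper; the arrays
and the current time are assumed to be maintained by a surrounding scheduler
loop):

/* Pick the arrived, unfinished process with the highest response ratio. */
int pick_hrrn(int n, const int at[], const int bt[], const int done[], int now) {
    int best = -1;
    double best_rr = -1.0;
    for (int i = 0; i < n; i++) {
        if (done[i] || at[i] > now)       /* not eligible yet             */
            continue;
        double w  = now - at[i];          /* waiting time so far (W)      */
        double rr = (w + bt[i]) / bt[i];  /* Response Ratio = (W + S) / S */
        if (rr > best_rr) {
            best_rr = rr;
            best = i;
        }
    }
    return best;                          /* -1 if no process is ready    */
}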
Characteristics of HRRN Scheduling
• Highest Response Ratio Next is a non-preemptive CPU
Scheduling algorithm and it is considered as one of the most
optimal scheduling algorithms.
• The criteria for HRRN is Response Ratio, and the mode is Non-
Preemptive.
• HRRN is considered as the modification of the Shortest Job
First to reduce the problem of starvation.
• In comparison with SJF, during the HRRN scheduling
algorithm, the CPU is allotted to the next process
which has the highest response ratio, and not to the
process having less burst time.
• Performance of HRRN Scheduling
1. This algorithm not only favors shorter jobs but also
considers the waiting time of the longer jobs.
2. Its mode is non-preemptive, hence context switching is
minimal in this algorithm.
Round Robin Scheduling Algorithm

• Round Robin is a CPU scheduling algorithm where each


process is cyclically assigned a fixed time slot.
• It is the preemptive version of the First come First Serve CPU
Scheduling algorithm.
• Round Robin CPU Algorithm generally focuses on Time
Sharing technique.
• The period of time for which a process or job is allowed to run
in a pre-emptive method is called time quantum.
• Each process or job present in the ready queue is assigned the
CPU for that time quantum,
• if the execution of the process is completed during that time
then the process will end
• else the process will go back to the ready queue and wait for
its next turn to complete the execution.
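• A compact C sketch of the mechanism (assuming, for simplicity, that all
processes arrive at time 0 and using an example time quantum of 2):

#include <stdio.h>

int main(void) {
    int bt[]  = {5, 3, 8};           /* burst times (all arrive at t = 0) */
    int rem[] = {5, 3, 8};           /* remaining time per process        */
    int n = 3, quantum = 2, time = 0, remaining = n;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {          /* cycle through the queue */
            if (rem[i] == 0) continue;
            int slice = rem[i] < quantum ? rem[i] : quantum;
            time   += slice;                   /* run for one time slice  */
            rem[i] -= slice;
            if (rem[i] == 0) {                 /* process has finished    */
                int tat = time;                /* turnaround (arrival 0)  */
                int wt  = tat - bt[i];         /* waiting = TAT - BT      */
                printf("P%d: CT=%d TAT=%d WT=%d\n", i, time, tat, wt);
                remaining--;
            }
        }
    }
    return 0;
}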
Characteristics of Round Robin CPU Scheduling
Algorithm

• It is simple, easy to implement, and starvation-free as all


processes get a fair share of CPU.
• It is one of the most commonly used techniques in CPU
scheduling.
• It is preemptive as processes are assigned CPU only for a fixed
slice of time at most.
• The disadvantage of it is more overhead of context switching.
Advantages of Round Robin CPU Scheduling
Algorithm

• There is fairness since every process gets an equal share of the


CPU.
• The newly created process is added to the end of the ready
queue.
• A round-robin scheduler generally employs time-sharing,
giving each job a time slot or quantum.
• While performing a round-robin scheduling, a particular time
quantum is allotted to different jobs.
• Each process get a chance to reschedule after a particular
quantum time in this scheduling.
Disadvantages of Round Robin CPU Scheduling
Algorithm

• There is a larger waiting time and response time.
• There is low throughput.
• There are frequent context switches.
• The Gantt chart becomes very large if the time quantum is small
(for example, 1 ms) for a long schedule.
• Scheduling is time-consuming for a small quantum.
Inter Process Communication (IPC)

• Processes can coordinate and interact with one another using a


method called inter-process communication (IPC) .
• Through facilitating process collaboration, it significantly
contributes to improving the effectiveness, modularity, and
ease of software systems.
• Types of Process
• Independent process
• Co-operating process
• An independent process is not affected by the execution of
other processes
• while a co-operating process can be affected by other
executing processes.
• Though one can think that those processes, which are running
independently, will execute very efficiently, in reality, there are
many situations when cooperative nature can be utilized for
increasing computational speed, convenience, and modularity.
• Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize
their actions.
• The communication between these processes can be seen as a
method of cooperation between them.
• Processes can communicate
with each other through both:
Methods of IPC
• Shared Memory
• Message Passing
• Figure shows a basic
structure of communication
between processes via the
shared memory method and
via the message passing
method.
IPC through Shared Memory
• Shared memory is a
memory shared between
two or more processes.
• Each process has its own
address space;
• if any process wants to
communicate with some
information from its own
address space to other
processes,
• then it is only possible with
IPC (inter-process
communication)
techniques.
• Shared memory is the fastest inter-process communication
mechanism.
• The operating system maps a memory segment into the
address space of several processes, which can then read and write in that
memory segment without calling operating system
functions.
• For applications that exchange large amounts of data,
shared memory is far superior to message passing
techniques like message queues, which require system calls
for every data exchange.
• To use shared memory, we have to perform two basic steps:
1. Request a memory segment that can be shared between
processes to the operating system.
2.Associate a part of that memory or the whole memory with
the address space of the calling process.
• A shared memory segment is a portion of physical
memory that is shared by multiple processes.
• In this region, processes can set up structures, and
others may read/write on them.
• When a shared memory region is established in two or
more processes, there is no guarantee that the regions
will be placed at the same base address.
• Semaphores can be used when synchronization is
required.
• A semaphore is a variable or abstract data type in an operating
system (OS) that controls access to a shared resource by multiple
processes. Semaphores are a type of synchronization primitive
that help prevent critical section problems in concurrent
systems, such as multitasking operating systems.
• For example, one process
might have the shared region
starting at address 0x60000
while the other process uses
0x70000.
• It is critical to understand
that these two addresses refer
to the exact same piece of
data.
• So storing the number 1 in the
first process's address
0x60000 means the second
process has the value of 1 at
0x70000.
• The two different addresses
refer to the exact same
location.
Why Choose IPC through Shared Memory?

• Usually, communication between related processes is performed using pipes or named pipes.
• Communication between unrelated processes can be performed using named pipes or through the popular IPC techniques of shared memory and message queues.
• But the problem with pipes, FIFO, and message queue
is that the information exchange between two
processes goes through the kernel, and it works as
follows.
• The server reads from the input file into its own buffer.
• The server writes this data into a message using a pipe, FIFO, or message queue, which copies it from the server's buffer into the kernel's IPC buffer.
• The client reads the data from the IPC channel, again requiring the data to be copied from the kernel's IPC buffer to the client's buffer.
• Finally, the data is copied from the client's buffer to the output file.
• A total of four copies of data are required (two reads and two writes).
• So, shared memory provides a way by letting two or more
processes share a memory segment.
• With Shared Memory, the data is only copied twice, from
the input file into shared memory and from shared memory
to the output file.
Functions of IPC Using Shared Memory

• Two functions, shmget() and shmat(), are used for IPC using shared memory.
• The shmget() function is used to create the shared memory segment,
• while the shmat() function is used to attach the shared segment to the process's address space.
1. shmget() Function
• The first parameter specifies the unique number (called
key) identifying the shared segment.
• The second parameter is the size of the shared segment,
e.g., 1024 bytes or 2048 bytes.
• The third parameter specifies the permissions on the
shared segment.
• On success, the shmget() function returns a valid
identifier, while on failure, it returns -1.
Syntax

#include <sys/ipc.h>
#include <sys/shm.h>
int shmget (key_t key, size_t size, int shmflg);
2. shmat() Function
• shmat() function is used to attach the created shared
memory segment associated with the shared memory
identifier specified by shmid to the calling process's
address space.
• The first parameter here is the identifier which the
shmget() function returns on success.
• The second parameter is the address where to attach it
to the calling process.
• A NULL value of the second parameter means that the
system will automatically choose a suitable address.
• The third parameter specifies flags; it is usually 0 when the second parameter is NULL. If a specific address is supplied, the SHM_RND flag can be used to round it to a suitable boundary.
Syntax
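The shmat() prototype, as declared in <sys/shm.h>, is sketched below; on success it returns the address at which the segment is attached, and on failure it returns (void *) -1.

#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);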
How does IPC Using Shared Memory work?

• A process creates a shared memory segment using shmget().
• The original owner of a shared memory segment can
assign ownership to another user with shmctl().
• It can also revoke this assignment.
• Other processes with proper permission can perform
various control functions on the shared memory
segment using shmctl().
• Once created, a shared segment can be attached to a
process address space using shmat().
• It can be detached using shmdt().
• The attaching process must have the appropriate
permissions for shmat().
• Once attached, the process can read or write to the
segment, as the permission requested in the attach
operation allows.
• A shared segment can be attached multiple times by
the same process.
• A shared memory segment is described by a control
structure with a unique ID that points to an area of
physical memory.
• The identifier of the segment is called the shmid.
• The structure definition for the shared memory
segment control structures and prototypes can be
found in <sys/shm.h>.
Examples

• Program 1: This program creates a shared memory segment, attaches itself to it, and then writes some content into the shared memory segment.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main()
{
    void *shared_memory;
    char buff[100];
    int shmid;
    int n;

    /* Create a shared memory segment with key 2345 and size 1024 bytes.
       IPC_CREAT creates the segment if it does not exist; 0666 sets the
       permissions on the shared segment. */
    shmid = shmget((key_t)2345, 1024, 0666 | IPC_CREAT);
    printf("Key of shared memory is %d\n", shmid);

    /* Attach the segment to this process's address space. */
    shared_memory = shmat(shmid, NULL, 0);
    printf("Process attached at %p\n", shared_memory);
    /* This prints the address where the segment is attached to this process. */

    printf("Enter some data to write to shared memory\n");
    n = read(0, buff, sizeof(buff) - 1);    /* get some input from the user */
    if (n < 0) n = 0;
    buff[n] = '\0';                         /* null-terminate the input     */
    strcpy(shared_memory, buff);            /* data written to shared memory */
    printf("You wrote : %s\n", (char *)shared_memory);
    return 0;
}
Output
How does it work?
• In the above program, the shmget() function creates a
segment with key 2345, size 1024 bytes, and reads and
writes permissions for all users.
• It returns the identifier of the segment, which gets
stored in shmid.
• This identifier is used in shmat() to attach the shared
segment to the process's address space.
• NULL in shmat() means that the OS will itself attach the
shared segment at a suitable address of this process.
• Then some data is read from the user using
the read() system call, and it is finally written to the
shared segment using the strcpy() function.
• Program 2: This program attaches itself to the shared
memory segment created in Program 1, and it reads the
content of the shared memory.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main()
{
    void *shared_memory;
    int shmid;

    /* Open the existing segment with key 2345 (no IPC_CREAT this time). */
    shmid = shmget((key_t)2345, 1024, 0666);
    printf("Key of shared memory is %d\n", shmid);

    /* Attach the segment to this process's address space. */
    shared_memory = shmat(shmid, NULL, 0);
    printf("Process attached at %p\n", shared_memory);

    printf("Data read from shared memory is : %s\n", (char *)shared_memory);
    return 0;
}
Output
How does it work?
• In this program, shmget() here generates the identifier
of the same segment as created in Program 1.
• Remember to give the same key value.
• The only change is, do not write IPC_CREAT as the
shared memory segment is already created.
• Next, shmat() attaches the shared segment to the
current process.
• After that, the data is printed from the shared segment.
In the output, you will see the same data that you have
written while executing Program 1.
Messaging Passing Method
• In this method, processes communicate with each other
without using any kind of shared memory.
• If two processes p1 and p2 want to communicate with each
other, they proceed as follows:
• Establish a communication link (if a link already exists, no need
to establish it again.)
• Start exchanging messages using basic primitives.
We need at least two primitives:
– send (message, destination) or send (message)
– receive (message, host) or receive (message)
• The message size can be of fixed size or of variable size.
• If it is of fixed size, it is easy for an OS designer but
complicated for a programmer
• and if it is of variable size then it is easy for a programmer but
complicated for the OS designer.
• A standard message can have two parts: header and body.
• The header part is used for storing message type, destination
id, source id, message length, and control information.
• The control information contains information like what to do if
runs out of buffer space, sequence number, priority.
• Generally, messages are sent in FIFO order.
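• As a minimal sketch of the send/receive primitives (assuming a System V environment; the key value 1234, the message type 1, and the message text are illustrative, and send and receive are shown in one program only for brevity), a message can be exchanged through a message queue:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* A message has a header (here just the type) and a body. */
struct msgbuf {
    long mtype;
    char mtext[100];
};

int main(void)
{
    /* Create (or open) a queue identified by the key 1234. */
    int qid = msgget((key_t)1234, 0666 | IPC_CREAT);

    struct msgbuf out = { .mtype = 1 };
    strcpy(out.mtext, "hello");
    msgsnd(qid, &out, sizeof(out.mtext), 0);     /* send(message)                */

    struct msgbuf in;
    msgrcv(qid, &in, sizeof(in.mtext), 1, 0);    /* receive(message), FIFO order */
    printf("received: %s\n", in.mtext);
    return 0;
}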
Message Passing Through Communication Link
• Direct and Indirect Communication link
• Now, We will start our discussion about the methods of
implementing communication links.
• While implementing the link, there are some questions that need to
be kept in mind like :

• How are links established?
• Can a link be associated with more than two processes?
• How many links can there be between every pair of communicating
processes?
• What is the capacity of a link? Is the size of a message that the link
can accommodate fixed or variable?
• Is a link unidirectional or bi-directional?
• A link has some capacity that determines the number of
messages that can reside in it temporarily,
• for which every link has a queue associated with it which can
be of zero capacity, bounded capacity, or unbounded capacity.
• In zero capacity, the sender waits until the receiver informs the
sender that it has received the message.
• In non-zero capacity cases, a process does not know whether
a message has been received or not after the send operation.
• For this, the sender must communicate with the receiver
explicitly.
• Implementation of the link depends on the situation, it can be
either a direct communication link or an in-directed
communication link.
• Direct communication links are implemented when the processes use a specific process identifier for the communication,
• but it is hard to identify the sender ahead of time, for example in a print server.

• Indirect communication is done via a shared mailbox (port), which consists of a queue of messages.
• The sender keeps messages in the mailbox and the receiver picks them up.
Synchronous and Asynchronous Message Passing

• A process that is blocked is one that is waiting for some event, such as a resource becoming available or the completion of an I/O operation.
• IPC is possible between the processes on same computer as
well as on the processes running on different computer i.e. in
networked/distributed system.
• In both cases, the process may or may not be blocked while
sending a message or attempting to receive a message so
message passing may be blocking or non-blocking.
• Blocking is considered synchronous and blocking send means
the sender will be blocked until the message is received by
receiver.
• Similarly, blocking receive has the receiver block until a
message is available.
• Non-blocking is considered asynchronous; a non-blocking send has the sender send the message and continue.
• Similarly, a non-blocking receive has the receiver retrieve either a valid message or a null.
• After a careful analysis, we can come to a conclusion that for a
sender it is more natural to be non-blocking after message
passing as there may be a need to send the message to
different processes.
• However, the sender expects acknowledgment from the
receiver in case the send fails.
• Similarly, it is more natural for a receiver to be blocking after
issuing the receive as the information from the received
message may be used for further execution.
• At the same time, if the message sends keep failing, the receiver will have to wait indefinitely.
• That is why we also consider the other possible combinations of message passing.
• There are basically three preferred combinations:

• Blocking send and blocking receive
• Non-blocking send and non-blocking receive
• Non-blocking send and blocking receive (mostly used)
• In Direct message passing , The process which wants to
communicate must explicitly name the recipient or sender of
the communication.
e.g. send(p1, message) means send the message to p1.
Similarly, receive(p2, message) means to receive the message
from p2.
• In this method of communication, the communication link gets
established automatically, which can be either unidirectional or
bidirectional,
• but a link is used between exactly one pair of sender and receiver, and a pair of sender and receiver should not have more than one link between them.
• Symmetry and asymmetry between sending and receiving can
also be implemented
• i.e. either both processes will name each other for sending and
receiving the messages or only the sender will name the
receiver for sending the message and there is no need for the
receiver for naming the sender for receiving the message.
• The problem with this method of communication is that if the
name of one process changes, this method will not work.
• In Indirect message passing , processes use mailboxes (also
referred to as ports) for sending and receiving messages.
• Each mailbox has a unique id and processes can communicate
only if they share a mailbox.
• Link established only if processes share a common mailbox
and a single link can be associated with many processes.
• Each pair of processes can share several communication links
and these links may be unidirectional or bi-directional.
• Suppose two processes want to communicate through Indirect
message passing, the required operations are:
• create a mailbox, use this mailbox for sending and receiving
messages, then destroy the mailbox.
• The standard primitives used are:
• send(A, message) which means send the message to mailbox
A.
• The primitive for receiving a message works in the same way, e.g. receive(A, message).
• There is a problem with this mailbox implementation.
• Suppose there are more than two processes sharing the same
mailbox and suppose the process p1 sends a message to the
mailbox, which process will be the receiver?
• This can be solved by either enforcing that only two processes
can share a single mailbox or enforcing that only one process is
allowed to execute the receive at a given time
• or select any process randomly and notify the sender about
the receiver.
• A mailbox can be made private to a single sender/receiver pair
and can also be shared between multiple sender/receiver pairs.
• Port is an implementation of such mailbox that can have
multiple senders and a single receiver.
• It is used in client/server applications (in this case the server is
the receiver).
• The port is owned by the receiving process and created by the OS at the request of the receiver process, and it can be destroyed either at the request of the same receiver process or when the receiver terminates.
• Enforcing that only one process is allowed to execute the
receive can be done using the concept of mutual exclusion.
Synchronization in Interprocess
Communication
• Synchronization is a necessary part of interprocess
communication.
• It is either provided by the interprocess control
mechanism or handled by the communicating
processes.
• Some of the methods to provide synchronization are as
follows −
• Semaphore
• A semaphore is a variable that controls the access to a
common resource by multiple processes.
• The two types of semaphores are binary semaphores and
counting semaphores.
• Mutual Exclusion
• Mutual exclusion requires that only one process thread can
enter the critical section at a time.
• This is useful for synchronization and also prevents race
conditions.
• Barrier
• A barrier does not allow individual processes to proceed until all
the processes reach it.
• Many parallel languages and collective routines impose
barriers.
• Spinlock
• This is a type of lock.
• The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not.
• This is known as busy waiting because the process is not doing
any useful operation even though it is active
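• A minimal sketch of a spinlock using C11 atomics is shown below (the variable and function names are illustrative); the loop in spin_lock() is exactly the busy waiting described above:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;      /* clear means "unlocked" */

void spin_lock(void)
{
    /* Busy-wait: keep testing and setting the flag until it was previously clear. */
    while (atomic_flag_test_and_set(&lock))
        ;  /* spinning: the process does no useful work here */
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock);
}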
Approaches to Interprocess Communication
• The different approaches to implement interprocess
communication are given as follows −
• Pipe
• A pipe is a data channel that is unidirectional.
• Two pipes can be used to create a two-way data channel
between two processes.
• This uses standard input and output methods.
• Pipes are used in all POSIX systems as well as Windows
operating systems.
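• A minimal sketch of a one-way pipe between a parent and a child process (POSIX pipe() and fork(); the message text "hello" is an illustrative assumption):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                     /* fd[0] = read end, fd[1] = write end */
    pipe(fd);

    if (fork() == 0) {             /* child: reads from the pipe */
        char buf[32] = {0};
        close(fd[1]);
        read(fd[0], buf, sizeof(buf) - 1);
        printf("child read: %s\n", buf);
        return 0;
    }
    close(fd[0]);                  /* parent: writes to the pipe */
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(NULL);
    return 0;
}

• Because the pipe is unidirectional, a second pipe would be needed for the child to reply to the parent.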
• Socket
• The socket is the endpoint for sending or receiving data in a
network.
• This is true for data sent between processes on the same
computer or data sent between different computers on the
same network.
• Most of the operating systems use sockets for interprocess
communication.
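• As a minimal sketch of socket-based IPC between related processes on the same machine (socketpair() is POSIX; the message text "ping" is an illustrative assumption):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
    int sv[2];                                   /* two connected endpoints */
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                           /* child uses sv[1] */
        char buf[32] = {0};
        close(sv[0]);
        read(sv[1], buf, sizeof(buf) - 1);
        printf("child received: %s\n", buf);
        return 0;
    }
    close(sv[1]);                                /* parent uses sv[0] */
    write(sv[0], "ping", 4);
    close(sv[0]);
    wait(NULL);
    return 0;
}

• Communication between different machines would instead use socket(), bind(), listen(), and connect() over a network protocol such as TCP.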
• File
• A file is a data record that may be stored on a disk or acquired
on demand by a file server.
• Multiple processes can access a file as required.
• All operating systems use files for data storage.
• Signal
• Signals are useful in interprocess communication in a limited
way.
• They are system messages that are sent from one process to
another.
• Normally, signals are not used to transfer data but are used for
remote commands between processes.
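• A minimal sketch of this "command, not data" style (assuming POSIX signals; SIGUSR1 is a standard user-defined signal, and the one-second sleep is only a crude way to make sure the child is already waiting):

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int sig) { (void)sig; got_signal = 1; }

int main(void)
{
    signal(SIGUSR1, handler);       /* install handler; inherited by the child */

    pid_t pid = fork();
    if (pid == 0) {                 /* child: wait for the parent's "command" */
        while (!got_signal)
            pause();                /* sleep until a signal arrives */
        printf("child: received SIGUSR1\n");
        return 0;
    }
    sleep(1);                       /* crude delay so the child is waiting first */
    kill(pid, SIGUSR1);             /* send the signal: a command, not data */
    wait(NULL);
    return 0;
}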
• Shared Memory
• Shared memory is the memory that can be simultaneously
accessed by multiple processes.
• This is done so that the processes can communicate with each
other.
• All POSIX systems, as well as Windows operating systems use
shared memory.
• Message Queue
• Multiple processes can read and write data to the message
queue without being connected to each other.
• Messages are stored in the queue until their recipient retrieves
them.
• Message queues are quite useful for interprocess
communication and are used by most operating systems.
Introduction of Deadlock in Operating System

• A deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process.
• Deadlock is a situation in computing where two or more
processes are unable to proceed because each is waiting for
the other to release resources.
• Key concepts include mutual exclusion, resource holding,
circular wait, and no preemption.
• Consider an example when two trains are coming toward each
other on the same track and there is only one track,
• none of the trains can move once they are in front of each
other.
• This is a practical example of deadlock.
How Does Deadlock occur in the Operating
System?

• Before going into detail about how deadlock occurs in the Operating System, let's first discuss how the Operating System uses the resources present.
• A process in an operating system uses resources in the
following way.
• Requests a resource
• Use the resource
• Releases the resource
• In the above diagram, process 1 holds resource 1 and needs to acquire resource 2.
• Similarly, process 2 holds resource 2 and needs to acquire resource 1.
• Process 1 and process 2 are in deadlock as each of
them needs the other’s resource to complete their
execution
• but neither of them is willing to relinquish their
resources.
Coffman Conditions

• A deadlock occurs if the four Coffman conditions hold true.
• But these conditions are not mutually exclusive.
• The Coffman conditions are given as follows −
• Mutual Exclusion
• There should be a
resource that can only be
held by one process at a
time.
• In the diagram , there is a
single instance of
Resource 1 and it is held
by Process 1 only.
• Hold and Wait
• A process can hold multiple
resources and still request
more resources from other
processes which are holding
them.
• In the diagram given,
Process 2 holds Resource 2
and Resource 3 and is
requesting the Resource 1
which is held by Process 1.
• No Preemption
• A resource cannot be
preempted from a
process by force.
• A process can only
release a resource
voluntarily.
• In the diagram,
Process 2 cannot
preempt Resource 1
from Process 1.
• It will only be released
when Process 1
relinquishes it
voluntarily after its
execution is complete.
• Circular Wait
• A process is waiting for the resource held by the second
process,
• which is waiting for the resource held by the third
process and so on,
• till the last process is waiting for a resource held by the
first process.
• This forms a circular chain.
• For example: Process 1 is allocated Resource2 and it is
requesting Resource 1.
• Similarly, Process 2 is allocated Resource 1 and it is
requesting Resource 2.
• This forms a circular wait loop.
Deadlock Detection
• Deadlock detection is a process in computing where the
system checks if there are any sets of processes that are stuck
waiting for each other indefinitely, preventing them from
moving forward.
• In simple words, deadlock detection is the process of finding out whether any processes are stuck waiting for each other in a loop.
• There are several algorithms like:
• Resource Allocation Graph
• Banker’s Algorithm
• These algorithms help in the detection of deadlock in an Operating System.
Deadlock Prevention or Avoidance

• Deadlock Prevention and Avoidance is one of the methods for handling deadlock.
• First, we will discuss Deadlock Prevention, then Deadlock Avoidance.
• What is Deadlock Prevention?
• In deadlock prevention, the aim is to ensure that at least one of the conditions required for deadlock can never be fulfilled.
• This can be done by the following methods:
i) Mutual Exclusion
• We use a lock only for non-shareable resources;
• if the resource is shareable (like a read-only file), then we do not use locks here.
• This ensures that, in the case of a shareable resource, multiple processes can access it at the same time.
• Problem: we can only do this for shareable resources; for non-shareable resources like a printer, we still have to use mutual exclusion.
• (ii) Hold and Wait
• To ensure that hold and wait never occurs in the system, we must guarantee that whenever a process requests a resource, it does not hold any other resources.
• We can provide all the resources a process requires for its execution before it starts executing.
• Problem: for example, if a process requires three resources and we give it all of them before execution starts, there might be a situation where initially only two resources are needed and the third is needed only after an hour.
• This causes starvation for another process that wants that resource; during that waiting time, the resource could have been allocated to the other process, allowing it to complete its execution.
• iii) No Preemption
• If a process is holding some resources and requests other resources that are already acquired and not immediately available,
• then the resources that the current process is holding are preempted.
• After some time, the process again requests the old resources together with the other required resources in order to restart.
• For example, process P1 holds resource R1 and requests R2, which is held by process P2.
• Then P1 gives up R1, and after some time it tries to restart by requesting both R1 and R2.
• Problem: this can cause the livelock problem.
• Livelock: livelock is a situation where two or more processes continuously change their state in response to each other without making any real progress.
• Example:
• Suppose there are two processes P1 and P2 and two resources R1 and R2.
• Now, P1 has acquired R1 and needs R2, while P2 has acquired R2 and needs R1.
• According to the above method, both P1 and P2 detect that they cannot acquire the second resource, so they release the resource they are holding and then try again.
• Continuous cycle: P1 again acquires R1 and requests R2; P2 again acquires R2 and requests R1.
• There is no overall progress, yet the processes keep changing state as they release resources and then hold them again.
• This is the situation of livelock.
• (iv) Circular Wait:
• To remove circular wait from the system, we can impose an ordering on the resources in which a process must acquire them.
• Example: if there are processes P1 and P2 and resources R1 and R2, we can fix the acquisition order so that a process must first acquire R1 and then R2.
• So the process that has acquired R1 is allowed to acquire R2; the other process must wait until R1 is free. A small sketch of this lock-ordering idea is given below.
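• A minimal sketch of resource ordering using two pthread mutexes (threads are used instead of processes only for brevity, and the names r1 and r2 are illustrative): every thread acquires r1 before r2, so a circular wait can never form:

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;   /* rank 1: always locked first  */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;   /* rank 2: always locked second */

void *worker(void *arg)
{
    (void)arg;
    /* Both workers follow the same order r1 -> r2, so no cycle can occur. */
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}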
• These are the deadlock prevention methods, but in practice mainly the fourth method is used, as the removal of the other three conditions has significant disadvantages.
What is Deadlock Avoidance?
• Avoidance is kind of futuristic.
• By using the strategy of “Avoidance”, we have to make an
assumption.
• We need to ensure that all information about resources that
the process will need is known to us before the execution of
the process.
• We use Banker’s algorithm (Which is in turn a gift from
Dijkstra) to avoid deadlock.
• In prevention and avoidance, we get the correctness of data
but performance decreases.
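• A minimal sketch of the safety check at the heart of the Banker’s algorithm (the 3-process, 2-resource data in main() is an illustrative assumption): the state is safe if the processes can be ordered so that each one's remaining need fits in the currently available resources.

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* number of processes (illustrative)      */
#define R 2   /* number of resource types (illustrative) */

bool is_safe(int avail[R], int alloc[P][R], int need[P][R])
{
    int work[R];
    bool finished[P] = { false };
    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int count = 0; count < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                       /* pretend p runs to completion   */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progressed = true;
                count++;
            }
        }
        if (!progressed) return false;           /* no safe sequence exists */
    }
    return true;
}

int main(void)
{
    int avail[R]    = { 1, 1 };
    int alloc[P][R] = { {1, 0}, {0, 1}, {1, 1} };
    int need[P][R]  = { {1, 1}, {1, 0}, {0, 0} };
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}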
What is Deadlock Recovery?
• If Deadlock prevention or avoidance is not applied to the
software then we can handle this by deadlock detection and
recovery.
• which consist of two phases:
1. In the first phase, we examine the state of the process and
check whether there is a deadlock or not in the system.
2. If a deadlock is found in the first phase, then we apply an algorithm for recovery from the deadlock.
In Deadlock detection and recovery, we get the correctness of
data but performance decreases.
Methods of Deadlock Recovery

• There are several deadlock recovery techniques:
• Manual Intervention
• Automatic Recovery
• Process Termination
• Resource Preemption
• 1. Manual Intervention
• When a deadlock is detected, one option is to inform the
operator and let them handle the situation manually.
• While this approach allows for human judgment and decision-
making,
• it can be time-consuming and may not be feasible in large-
scale systems.
• 2. Automatic Recovery
• An alternative approach is to enable the system to recover
from deadlock automatically.
• This method involves breaking the deadlock cycle by either
aborting processes or preempting resources.
• 3. Process Termination
• Abort all Deadlocked Processes
• This approach breaks the deadlock cycle, but it comes at a
significant cost.
• The processes that were aborted may have executed for a
considerable amount of time, resulting in the loss of partial
computations.
• These computations may need to be recomputed later.
• Abort one process at a time
• Instead of aborting all deadlocked processes simultaneously,
• this strategy involves selectively aborting one process at a
time until the deadlock cycle is eliminated.
• However, this incurs overhead as a deadlock-detection
algorithm must be invoked after each process termination to
determine if any processes are still deadlocked.
• Factors for choosing the termination order:
• The process’s priority
• Completion time and the progress made so far
• Resources consumed by the process
• Resources required to complete the process
• Number of processes to be terminated
• Process type (interactive or batch)
• 4. Resource Preemption
• Selecting a Victim
• Resource preemption involves choosing which resources
and processes should be preempted to break the deadlock.
• The selection order aims to minimize the overall cost of
recovery.
• Factors considered for victim selection may include the
number of resources held by a deadlocked process and the
amount of time the process has consumed.
• Rollback
• If a resource is preempted from a process, the process
cannot continue its normal execution as it lacks the required
resource.
• Rolling back the process to a safe state and restarting it is a
common approach.
• Determining a safe state can be challenging, leading to the
use of total rollback, where the process is aborted and
restarted from scratch.
• Starvation Prevention
• To prevent resource starvation, it is essential to ensure that
the same process is not always chosen as a victim.
• If victim selection is solely based on cost factors, one
process might repeatedly lose its resources and never
complete its designated task.
• To address this, it is advisable to limit the number of times a process can be chosen as a victim, for example by including the number of rollbacks in the cost factor.
What is Deadlock Ignorance?

• If a deadlock is very rare, then let it happen and reboot the system.
• This is the approach that both Windows and UNIX take.
• We use the ostrich algorithm for deadlock ignorance.
• With deadlock ignorance, performance is better than with the above two methods, but the correctness of data is not guaranteed.
Safe State
• A safe state can be defined as a state in which there is no deadlock.
• It is achievable if there exists a sequence in which all processes can obtain their requested resources and finish:
• If a process needs an unavailable resource, it may wait until the resource has been released by a process to which it was already allocated; eventually, all the requested resources can be allocated to the process.
• If such a sequence does not exist, the state is an unsafe state.
