Mod-2
Process- A currently executing program is known as a process.
Sections of a process-
Text Section-Contains the executable code of the program. It is read-only to prevent
accidental modification.
Data Section - Stores global and static variables initialized before execution
Heap/Memory Section- Used for dynamic memory allocation (malloc, calloc, new in
C/C++). Grows upwards (towards higher memory addresses). Requires manual
deallocation (free, delete).
Stack Section - Stores local variables, function parameters, and return addresses.
Works in LIFO (Last In, First Out) order.
Types of process- 2
Independent process
Co-operating process
Process state
A process state is the current activity of a program
that is being executed. The state of a process can be new,
ready, running, waiting, or terminated.
New- The process is just being created & has not
started execution yet.
Ready- The process is ready to run and is waiting to be
assigned to a CPU. It has everything it needs, just waiting for
its turn.
Running- The process is currently being executed by the CPU & Instructions are
actively being carried out.
Waiting- The process is waiting for some event (like input/output) to complete & It
cannot continue until that event finishes.
Suspended Ready- The process is not currently in main memory but is ready to run. It
was paused and moved to secondary storage (like disk) to free up memory.
Suspended Wait- The process was waiting for an event but has also been moved to
secondary storage. It will be moved back to "Waiting" once it's brought back to main
memory.
Terminated- The process has finished execution & All resources are released and it's
no longer active.
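The state transitions above can be sketched as a small state machine. This is a minimal Python sketch (the state names follow the notes; the transition rules are the usual textbook ones), not an actual OS implementation:

```python
# Allowed process-state transitions, as described in the notes above.
ALLOWED = {
    "new": {"ready"},
    "ready": {"running", "suspended_ready"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready", "suspended_wait"},
    "suspended_ready": {"ready"},
    "suspended_wait": {"waiting"},
    "terminated": set(),
}

class Process:
    def __init__(self):
        self.state = "new"

    def move(self, new_state):
        # Reject transitions the lifecycle does not permit.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    p.move(s)
```

For example, moving straight from "new" to "running" raises an error, mirroring the rule that a process must pass through the ready state first.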
Process/Task Control Block- A Process Control Block (PCB) is a data structure that is
used by an Operating System to manage and regulate how processes are carried out.
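A PCB can be pictured as a record of per-process bookkeeping fields. The fields below are a hypothetical, simplified selection; a real PCB (for example, Linux's `task_struct`) holds far more:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of typical PCB fields (names chosen for illustration).
@dataclass
class PCB:
    pid: int                                        # process identifier
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # scheduling priority
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=1)
```

On a context switch, the OS saves the running process's registers and program counter into its PCB and restores them from the PCB of the process being resumed.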
Process Scheduling- Process scheduling is the activity of the process manager that handles the removal
the running process from the CPU and the selection of another process based on a
particular strategy.
Categories of Scheduling- 2
Preemptive
Non-Preemptive
Types of Schedulers- 3
Short Term Scheduler / CPU Scheduler- The CPU Scheduler is responsible for
selecting one process from the ready state and assigning the CPU to it.
Long Term Scheduler- The Long Term Scheduler loads a process from disk into main
memory for execution, bringing the new process to the 'Ready' state.
Medium Term Scheduler- The Medium Term Scheduler (MTS) is responsible for
moving a process from memory to disk (swapping).
Context Switching- Context switching is the process of saving the current state of a running
process and loading the state of another process. It allows the CPU to switch from one
process to another efficiently.
Thread- A thread is the smallest unit of execution within a process. It runs
independently but shares the same resources (memory, files, etc.) as other threads in
the same process.
Need of threads- Threads improve responsiveness (other threads continue while one blocks),
allow resource sharing within a process, are cheaper to create and switch than processes,
and make use of multiprocessor hardware.
Types of thread/ Thread Models- 2
User-Level Threads (ULT): Managed by user-space libraries, OS is unaware of them.
Kernel-Level Threads (KLT): Managed by the OS, providing better control and multitasking.
Thread programming/mapping models- Many-to-One (many user threads mapped to one kernel
thread), One-to-One (each user thread has its own kernel thread), and Many-to-Many (many
user threads multiplexed over a smaller or equal number of kernel threads).
Dispatching - The process of assignment of the CPU to the first process on the ready list
is called dispatching
Dispatcher- The dispatcher is a module that hands over control of the CPU to the process chosen by
the scheduler, performing context switching and ensuring the correct execution
environment for the selected process
Function of Dispatcher-
Context Switching: The dispatcher saves the state of the currently running process
and restores the state of the new process that is about to run.
Switching to User Mode: It switches the CPU from kernel mode (where the OS
operates) to user mode (where the process executes).
Jumping to the Correct Location: It ensures the CPU starts executing the new
process at the correct instruction within its program.
Dispatch latency- The time it takes for the dispatcher to complete its task and hand
over control of the CPU is called dispatch latency, and it's an important factor in system
performance.
Scheduling Criteria- CPU scheduling criteria, such
as turnaround time, waiting time, and throughput,
are essential metrics used to evaluate the
efficiency of scheduling algorithms.
i. CPU Utilization
ii. Throughput- The number of processes completed per unit time (e.g., per second or per hour)
iii.Turnaround Time
iv.Waiting Time
v. Response Time
vi.Completion Time
vii. Priority
CPU Scheduling Algorithms-
FCFS (First Come First Serve)
SRTF (Shortest Remaining Time First)
SJF (Shortest Job First)
Round Robin
Priority Scheduling
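As a concrete illustration of the criteria above, FCFS (the simplest of these algorithms) can be sketched in a few lines of Python. This is a minimal sketch assuming each process is given as an (arrival time, burst time) pair; it computes completion, turnaround (completion minus arrival), and waiting (turnaround minus burst) times:

```python
# Minimal FCFS sketch: serve processes in arrival order, non-preemptively.
def fcfs(procs):
    procs = sorted(procs, key=lambda p: p[0])   # arrival order
    time, results = 0, []
    for arrival, burst in procs:
        time = max(time, arrival)   # CPU may idle until the process arrives
        time += burst               # run the whole burst to completion
        turnaround = time - arrival
        results.append({"completion": time,
                        "turnaround": turnaround,
                        "waiting": turnaround - burst})
    return results

# Example: P1 arrives at 0 (burst 4), P2 at 1 (burst 3), P3 at 2 (burst 1).
metrics = fcfs([(0, 4), (1, 3), (2, 1)])
```

Here P1 finishes at time 4 with zero waiting, P2 waits 3 units, and P3 (despite its tiny burst) waits 5 units, which is exactly the "convoy" weakness SJF and SRTF address.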
Multi-level Feedback Scheduling- Multilevel Feedback
Queue (MLFQ) Scheduling dynamically moves processes
between multiple priority queues based on their
execution behavior to optimize CPU usage and prevent
starvation
i. It allows processes to move between queues; the idea is to separate processes
according to the characteristics of their CPU bursts.
ii. If a process uses too much CPU time, it is moved to a lower-priority queue; under
this scheme, I/O-bound and interactive processes remain in the higher-priority queues.
iii. In addition to this, a process which waits too long in a lower priority queue may be
moved to a higher priority queue and this form of aging prevents starvation.
iv. Thus, multilevel feedback queues work similarly to the RR algorithm but
discriminate favorably towards short jobs.
Multi-level Queue Scheduling- Multilevel Queue (MLQ) scheduling is a CPU scheduling
algorithm that divides processes into multiple queues based on their characteristics and
assigns different scheduling algorithms to each queue.
i. Processes are permanently assigned to a specific queue based on their type (e.g.,
system processes, interactive processes, batch jobs).
ii. Each queue has its own scheduling algorithm (e.g., FCFS for one queue, RR for
another).
iii. No movement between queues—once a process is assigned to a queue, it stays
there.
iv. Higher priority queues execute first, and lower-priority queues get CPU only if
higher-priority queues are empty.
Parameter of Multilevel Feedback Scheduling
i. Number of Queues – Defines how many priority levels exist.
ii. Scheduling Algorithm per Queue – Each queue may have a different scheduling
algorithm (e.g., Round Robin for lower priority, FCFS for higher).
iii. Time Quantum for Each Queue – Determines how long a process can execute
before moving to a lower queue.
iv. Promotion and Demotion Criteria – Rules for moving processes between queues
based on CPU burst behavior.
v. Aging Mechanism – Ensures long-waiting processes in lower queues get a chance to
move up and execute.
Multiprocessor Scheduling- is the process of managing and distributing tasks among
multiple processors in a system to ensure efficient CPU utilization, load balancing, and
faster execution of processes.
Types of Multiprocessor Scheduling-
Asymmetric Multiprocessing (AMP):Only one processor (Master) controls all
scheduling and I/O operations. Other processors (Slaves) execute only assigned
tasks. Simple but may lead to bottlenecks.
Symmetric Multiprocessing (SMP):Each processor has equal rights to schedule and
execute tasks. They share memory and communicate efficiently. Used in modern
multi-core systems.
Process Synchronization- Process Synchronization is used in a computer system to
ensure that multiple processes or threads can run concurrently without interfering with
each other.The way synchronization is needed depends on whether a process is
independent or cooperating.
Race condition- A race condition occurs in a multi-threaded or multi-process
environment when multiple threads or processes access shared data concurrently, and
the final outcome depends on the order of execution.
When processes are not synchronized, a race condition can occur.
Mutual Exclusion- Mutual Exclusion ensures that only one process at a time can access
a shared resource (like a file, variable, or memory) to prevent conflicts in a multi-
processing environment.
Starvation- It means only high-priority processes are executing on the CPU while
low-priority processes wait for the CPU for a long time.
Priority inversion- Priority inversion occurs when a higher-priority process is waiting for
a lower-priority process to release a resource, but the lower-priority process cannot
proceed because it is being preempted by medium-priority processes. This leads to an
unexpected situation where a low-priority task indirectly blocks a high-priority task.
Peterson’s Solution for Race Condition
Peterson’s Solution is one of the solutions to the critical section problem involving two
processes. This solution states that when one process is executing its critical section,
the other process executes the rest of the code, and vice versa.
The algorithm uses two main components: a turn variable and a flag array.
The turn variable is an integer that indicates whose turn it is to enter the critical
section.
The flag array contains Boolean values for each process, indicating whether a
process wants to enter the critical section
(Algorithm for process Pi; the algorithm for process Pj is symmetric, with i and j swapped.)
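The two symmetric algorithms can be sketched in Python using two threads (the notes describe C-style pseudocode; this is only a demonstration, since CPython's bytecode-level interleaving is sequentially consistent enough for it to work, whereas real hardware with weak memory ordering would need memory barriers):

```python
import threading

# Peterson's solution for two threads (i = 0 and 1), using the turn variable
# and flag array described above. A shared counter stands in for shared data.
flag = [False, False]
turn = 0
counter = 0

def worker(i, n):
    global turn, counter
    j = 1 - i                        # index of the other process
    for _ in range(n):
        flag[i] = True               # entry section: announce intent
        turn = j                     # give the other process priority
        while flag[j] and turn == j:
            pass                     # busy wait (spin lock)
        counter += 1                 # critical section: update shared data
        flag[i] = False              # exit section

t0 = threading.Thread(target=worker, args=(0, 2000))
t1 = threading.Thread(target=worker, args=(1, 2000))
t0.start(); t1.start(); t0.join(); t1.join()
```

Because at most one thread is ever inside the critical section, no increment of `counter` is lost, so the final value equals the total number of iterations.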
Peterson's solution satisfies all three requirements of the critical section problem:
mutual exclusion, progress, and bounded waiting.
Limitations of Peterson’s Algorithm
Busy Waiting/Spin Lock-
In the entry section, a process continuously checks the condition in the
while loop (while(flag[j] && turn == j);).
If the other process is in the critical section, the waiting process keeps
looping indefinitely until it gets access.
This wastes CPU cycles and increases power consumption instead of
allowing the CPU to perform other tasks.
Imagine a process P0 waiting for P1 to finish. Instead of sleeping or yielding
CPU time, P0 continuously checks flag[1], leading to inefficiency
Limited to two Process
Peterson’s Algorithm only works for two processes (P0 and P1).
It does not scale to multiple processes efficiently because handling multiple
flag[] variables and ensuring correct execution would become complex.
Critical Section Problem- In a multi-process system, many processes may need to access
shared resources like variables, files, or printers. If two or more processes access these
resources at the same time, it may lead to incorrect results. This situation is known as
the Critical Section Problem.
A critical section is that part of the program where the shared resource is accessed. To
avoid problems, we must ensure that only one process enters the critical section at a
time.
Solution to Critical Section Problem-
Mutual Exclusion- Only one process should be allowed inside the critical section at a
time.
Progress- If no process is in the critical section, the system must allow some waiting
process to enter without unnecessary delay.
Bounded Waiting- There should be a limit on how long a process will wait before
entering its critical section.
Solution Approaches- Different ways the OS kernel handles process execution,
multitasking, and synchronization to avoid issues like race conditions, deadlocks, or
resource conflicts
Preemptive Kernel- In a preemptive kernel, the running process can be interrupted
by the operating system if a higher priority process becomes ready. This allows
better CPU utilization and quick response to important tasks.
It supports multitasking.
Context switching is allowed anytime.
Useful in real-time systems.
Example: Windows, Linux.
Non-Preemptive Kernel- In a non-preemptive kernel, once a process starts executing
in kernel mode, it cannot be interrupted until it finishes or voluntarily gives up the
CPU.
Simpler to implement.
No risk of data inconsistency.
Less overhead but not suitable for real-time tasks.
Example: Older versions of UNIX.
Bakery Algorithm- It is a software-based solution to the critical section problem for N
processes.
It works like taking a token in a bakery shop:
o Each process takes a number (token) before entering the critical section.
o The process with the smallest number is allowed to enter the critical section
first.
Properties Ensured:
Mutual Exclusion – Only one process in critical section.
Bounded Waiting – No process waits indefinitely.
Progress – If no process is in critical section, one will enter soon.
Hardware support for Synchronization
Memory Barrier- Prevents reordering of read and write operations by the compiler
or processor.
Hardware Instruction- Low-level atomic instructions like Test-and-Set, Compare-
and-Swap, and Exchange used to implement synchronization.
Atomic Variables- Variables that are accessed or modified using atomic operations,
ensuring no interruption during execution.
Mutex Lock- A mutex lock is used to protect the critical section and thus prevent race
conditions. Two key operations control the critical section in a multithreaded situation:
acquire()- Used to request the lock before entering the critical section.
release()- Used to release the lock after exiting the critical section. Only one thread
can acquire the mutex at a time.
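The acquire()/release() pattern looks like this in practice. This is a minimal Python sketch using the standard `threading.Lock` (the bank-balance variable is just an illustrative stand-in for shared data):

```python
import threading

# A mutex protecting a shared balance updated by several threads.
lock = threading.Lock()
balance = 0

def deposit(n):
    global balance
    for _ in range(n):
        lock.acquire()   # request the lock before entering the critical section
        balance += 1     # critical section: update shared data
        lock.release()   # release the lock after exiting

threads = [threading.Thread(target=deposit, args=(5000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Only one thread holds the mutex at a time, so no update to `balance` is lost; without the lock, concurrent `balance += 1` operations could interleave and overwrite each other.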
Semaphore- is a synchronization tool used to control access to a shared resource.
wait()- Decreases the semaphore value. If the value is 0 or less, the process waits (busy waits in this version).
wait(S)
{
while(S <= 0); // busy wait
S = S - 1;
}
signal()- Increases the semaphore value, allowing a waiting process to proceed.
signal(S)
{
S = S + 1;
}
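The wait()/signal() pseudocode above can be turned into a working counting semaphore. This is a sketch that mirrors the same logic but blocks on a condition variable instead of busy waiting (a design choice that avoids wasting CPU cycles, as discussed under busy waiting earlier):

```python
import threading

# A counting semaphore built from a condition variable, mirroring the
# wait()/signal() pseudocode above without the busy wait.
class Semaphore:
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            while self.value <= 0:   # same test as while(S <= 0)
                self.cond.wait()     # block (sleep) instead of spinning
            self.value -= 1          # S = S - 1

    def signal(self):
        with self.cond:
            self.value += 1          # S = S + 1
            self.cond.notify()       # wake one waiting process

s = Semaphore(2)      # e.g., a pool of 2 identical resources
s.wait()
s.wait()              # both resources taken; a third wait() would block
s.signal()            # one resource returned
```

Python's standard library already provides `threading.Semaphore` with the same behavior (`acquire`/`release` instead of `wait`/`signal`); the hand-rolled version here is only to show the mechanism.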
Types of Semaphore- 2
i. Binary- It can hold only two values: 0 or 1. It works similarly to a mutex lock. It is
used when only one process can access the critical section at a time.
🔸 Example Use: Used in mutual exclusion where only one thread can execute at a
time.
🔸 Behavior:
o wait() sets value from 1 → 0.
o signal() sets value from 0 → 1.
ii. Counting- Can hold non-negative integer values (0 or more). It is used when
multiple instances of a resource are available. It controls access to a pool of
identical resources.
🔸 Example Use: Used in situations like managing a pool of printers, database
connections,
or threads.
🔸 Behavior:
o wait() decreases the count (S = S - 1).
o signal() increases the count (S = S + 1).
Readers-Writers Problem- The Readers-Writers Problem is a classic synchronization
problem involving shared data that can be read by multiple processes or written by one
process.
Here, Readers can read the data simultaneously without interfering & Writers need
exclusive access to the data to modify it. If a writer is writing, no reader or writer can
access the data and If readers are reading, writers must wait.
Significance:
Ensures data consistency by preventing simultaneous writing and reading/writing.
Allows multiple readers to read at the same time for efficiency.
Prevents race conditions during writing.
Possible Solutions:
Use semaphores or mutex locks to control access.
Implement policies like:
o Reader priority: Readers get preference over writers.
o Writer priority: Writers get preference to avoid starvation.
o Fair solution: Balances between readers and writers.
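The reader-priority policy above is commonly implemented with a `read_count` guarded by one lock, while a second lock gives writers (and the first reader) exclusive access. This is a minimal sketch of that scheme, assuming a simple shared integer as the data:

```python
import threading

# Reader-priority readers-writers: the first reader locks out writers,
# the last reader lets them back in.
resource = threading.Lock()   # held by a writer, or by the group of readers
rc_lock = threading.Lock()    # protects read_count
read_count = 0
data, reads = 0, []

def reader():
    global read_count
    with rc_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()   # first reader blocks writers
    reads.append(data)           # many readers may read concurrently
    with rc_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()   # last reader unblocks writers

def writer():
    global data
    with resource:               # exclusive access while modifying
        data += 1

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note the starvation risk of this policy: as long as readers keep arriving, `read_count` never drops to zero and a writer can wait indefinitely, which is why writer-priority and fair variants exist.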
Dining Philosophers Problem- The Dining Philosophers Problem is a classic synchronization
problem that illustrates challenges of resource sharing and deadlock in concurrent
systems.
Here, suppose Five philosophers sit around a round table. There is a fork placed
between each pair of philosophers (total 5 forks). Each philosopher needs two forks
(left and right) to eat. Philosophers alternate between thinking and eating. A
philosopher can only eat when they have both forks.
So, Problem is- If every philosopher picks up their left fork at the same time, all will wait
indefinitely for the right fork → Deadlock.
Possible solutions for the Dining Philosophers problem
i. Resource hierarchy solution: Assign an order to forks and require philosophers
to pick up forks in that order to avoid circular wait.
ii. Limit number of philosophers: Allow at most 4 philosophers to sit at the table
simultaneously, preventing deadlock.
iii. Arbitrator solution: Use a waiter or arbitrator to control who can pick forks.
iv. Pick forks only if both are available: Philosophers pick up forks only if both forks
are free; otherwise, they put down the fork.
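Solution (i), the resource hierarchy, can be sketched directly with threads and locks. The assumption here is that fork i sits between philosophers i and (i+1) mod 5; because every philosopher acquires the lower-numbered fork first, no circular wait can form:

```python
import threading

# Resource-hierarchy solution: always pick up the lower-numbered fork first,
# which breaks the circular-wait condition and so prevents deadlock.
N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    # Philosopher i's forks are i (left) and (i + 1) % N (right).
    first, second = sorted((i, (i + 1) % N))   # ordered acquisition
    for _ in range(10):
        with forks[first]:
            with forks[second]:
                meals[i] += 1                  # eating with both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If instead every philosopher grabbed the left fork first, all five could hold one fork and wait forever on the second, which is exactly the deadlock the problem illustrates.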
Deadlock- Deadlock is a situation in an operating system where a group of processes
are blocked because each process is waiting for a resource held by another process in
the group.
In deadlock situation, none of the processes can proceed because they are waiting
indefinitely for resources & deadlock causes the system to freeze or halt progress of
involved processes.
Example-
o Process A holds Resource 1 and waits for Resource 2.
o Process B holds Resource 2 and waits for Resource 1.
o Both are waiting forever → deadlock.
Necessary conditions for Deadlock- Deadlock can occur only if all the following four
conditions hold simultaneously
Mutual Exclusion- At least one resource must be non-shareable. Only one process
can use the resource at a time.
No preemption- Resources cannot be forcibly taken away from a process. A process
must release the resource voluntarily.
Hold & Wait- A process is holding at least one resource and waiting to acquire
additional resources held by other processes.
Circular Wait- There exists a circular chain of processes, each waiting for a resource
held by the next process in the chain.
RAG(Resource Allocation Graph)- A Resource Allocation Graph (RAG) is used to
represent the allocation of resources to processes and the requests made by processes.
In a RAG, a circle represents a process, a rectangle represents a resource, and the
dots drawn inside a rectangle represent that resource's instances (e.g., three dots
mean the resource has 3 instances).
Edges- Edges in a Resource Allocation Graph (RAG) represent the connections or
relationships between processes and resources. Edges show which processes are
waiting for which resources, and which resources are currently assigned to which
processes. This helps in understanding resource allocation and detecting deadlocks.
Types of Edges- 2
Request Edges- This edge goes from a Process (P) to a Resource (R). It means the
process is requesting or waiting for that resource.
Assignment Edges- This edge goes from a Resource (R) to a Process (P). It means the
resource has been allocated or assigned to that process.
How to handle Deadlock-
Deadlock Prevention- The system prevents deadlock by ensuring that at least one of
the necessary conditions for deadlock does not occur. For example, it may avoid
allowing processes to hold and wait for resources simultaneously or eliminate
circular wait.
Deadlock Avoidance- The system avoids deadlock by allocating resources only when
it is safe to do so. It uses algorithms like the Banker’s Algorithm to keep the system
in a safe state and deny resource requests that could lead to deadlock.
Safe State- A system is in a safe state if there is at least one sequence of process
execution where every process can get the required resources and complete without
causing a deadlock.
In a safe state, the system can allocate resources to each process in some order and
avoid deadlock.
Unsafe State- An unsafe state is a state where the system may lead to deadlock if
resources are allocated without caution. It is not guaranteed that all processes can
finish, but deadlock may or may not occur.
Condition for Deadlock Prevention- Deadlock prevention works by breaking at least
one of the four necessary conditions for deadlock:
Mutual Exclusion- Make resources sharable whenever possible, so no process is
forced to wait exclusively.
Hold and Wait- Prevent processes from holding resources while waiting for others.
Require processes to request all resources at once or release held resources before
requesting new ones.
No Preemption- Allow resources to be forcibly taken away from processes if
needed, to avoid deadlock.
Circular Wait- Impose an ordering on resource requests, so circular waiting cannot
happen.
Types of Deadlock Avoidance- Deadlock avoidance is a technique where the system
dynamically checks resource allocation requests to ensure the system never enters an
unsafe state that can lead to deadlock.
RAG-
o A directed graph showing processes, resources, and their current allocations and
requests.
o Cycles in the graph indicate potential deadlocks.
o By analyzing the graph, the system can decide whether to grant resource
requests or not to avoid deadlock.
Banker’s Algorithm
o A resource allocation and deadlock avoidance algorithm.
o It simulates resource allocation for each request and checks if the system will
remain in a safe state (where all processes can complete).
o If the system will become unsafe, the request is denied to avoid deadlock.
Ways of Deadlock Detection-
Wait-for Graph- It is a simplified version of the Resource Allocation Graph (RAG).
Nodes represent processes only. An edge from process Pi to Pj means Pi is waiting for a
resource held by Pj. If the graph contains a cycle, it indicates a deadlock.
Detection Algorithm- The operating system runs algorithms periodically to check for
cycles in the wait-for graph or analyze resource allocation states. When a deadlock is
detected, the system identifies involved processes and resources.
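Cycle detection on a wait-for graph is a small depth-first search. This sketch represents the graph as adjacency lists (process names are illustrative) and reports whether any cycle, and hence a deadlock, exists:

```python
# Deadlock detection sketch: DFS over the wait-for graph with three-colour
# marking. A "grey" node reached again means a back edge, i.e., a cycle.
def has_deadlock(wait_for):
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in wait_for}

    def dfs(p):
        colour[p] = GREY                   # p is on the current DFS path
        for q in wait_for.get(p, ()):
            if colour.get(q) == GREY:      # back edge: cycle found
                return True
            if colour.get(q) == WHITE and dfs(q):
                return True
        colour[p] = BLACK                  # fully explored, no cycle via p
        return False

    return any(colour[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2, P2 for P3, P3 for P1 -> circular wait -> deadlock.
cyclic = has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]})
# Same chain without the back edge -> no deadlock.
acyclic = has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []})
```

Running such a check periodically is exactly what the detection algorithm described above does; the cost is linear in the number of processes and wait-for edges.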
Types of Deadlock Recovery- Once a deadlock is detected, the system must recover
from it. There are two main recovery techniques:
Process Termination- Involves aborting one or more processes to break the deadlock
cycle.
Two strategies:
o Abort all deadlocked processes: Simple but may cause loss of significant
work.
o Abort one process at a time: Abort processes one-by-one until the
deadlock is resolved (based on priority, resource usage, etc.).
Resource Preemption- Temporarily take resources away from processes and give
them to others to break the deadlock.
Challenges:
o Choosing the victim process.
o Rollback: Return the process to a safe state.
o Starvation risk: Same process might get pre-empted repeatedly.
Safety Algorithm:-
The Safety Algorithm checks whether the system is in a safe state.
A safe state means there exists a sequence of all processes such that each process can complete
execution with the currently available resources + resources held by the completed processes.
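The Safety Algorithm can be sketched directly from that definition. This is a minimal Python version of the core check (the matrices below are the classic textbook instance with 5 processes, 3 resource types, and Available = [3, 3, 2]):

```python
# Safety Algorithm sketch: repeatedly find a process whose Need fits within
# Work; let it finish and release its Allocation back into Work.
def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work = list(available)            # Work = Available
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # i finishes and releases
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []          # no process can proceed -> unsafe
    return True, sequence             # a safe sequence exists

# Classic instance: Allocation and Need for processes P0..P4.
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need  = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, seq = is_safe([3, 3, 2], alloc, need)
```

For this instance the system is safe: scanning in index order yields the safe sequence P1, P3, P4, P0, P2. In the full Banker's Algorithm, a resource request is granted only if pretending to grant it still leaves `is_safe` returning True.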