Detailed Operating System Answers
Answer:
🔹 Explanation:
Answer:
🔹 Explanation:
• Batch OS was suited for environments where users submitted jobs and waited for
results.
• Multitasking OS allows running tasks like browsing the web while listening to
music.
Answer:
🔹 Example:
In Linux, the ps command illustrates process management (it lists running processes), while /etc/fstab relates to file-system management.
Answer:
🔹 Examples:
Answer:
🔹 Explanation:
Answer:
🔹 Explanation:
• Multitasking simulates parallel execution by switching the CPU among tasks very
quickly.
• Multiprocessing genuinely executes multiple processes at the same time using
multiple CPUs.
Answer:
Device Management is the part of the OS responsible for controlling all hardware devices.
🔹 Functions include:
🔹 Explanation:
For example, when printing a document, the OS manages the printer's queue and ensures
orderly access.
Answer:
• Process Management involves handling multiple active processes and ensuring fair
CPU access.
• Context Switching is the key mechanism:
➔ The OS saves the current process's state (like CPU registers, memory pointers) into
a Process Control Block (PCB).
➔ It then loads the state of the next process to resume its execution.
• Schedulers (short-term, medium-term, long-term) decide which process to run next.
🔹 Example:
When you switch from writing an email to browsing the internet, the OS quickly saves the
email app's state and loads the browser's state.
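For illustration, a minimal and heavily simplified C sketch of what a PCB might contain is shown below; the field names are assumptions made for teaching purposes, and a real kernel structure (for example Linux's task_struct) holds far more information.

/* Illustrative sketch of a Process Control Block (PCB).
 * Field names and types are simplified assumptions, not a real kernel layout. */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;              /* process identifier                     */
    proc_state_t  state;            /* current scheduling state               */
    uint64_t      program_counter;  /* saved instruction pointer              */
    uint64_t      registers[16];    /* saved general-purpose registers        */
    uint64_t      stack_pointer;    /* saved stack pointer                    */
    void         *page_table;       /* memory-management information          */
    int           priority;         /* scheduling priority                    */
    struct pcb   *next;             /* link in a ready or waiting queue       */
} pcb_t;

During a context switch, the OS fills a structure like this for the outgoing process and restores the CPU from the structure of the incoming one.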
Answer:
🔹 Scenario:
In a heart-rate monitoring system, memory must be allocated instantly to process incoming
heart signals. Any delay might cause life-threatening outcomes.
Answer:
Context Switching is the process where the CPU switches from one process or thread to
another.
🔹 Steps:
🔹 Importance:
• Enables multitasking.
• Ensures that all processes get a fair share of CPU time.
• Essential for responsiveness in time-sharing systems.
🔹 Example:
When you minimize a video call app and open a web browser, context switching enables the
system to pause one app and resume another seamlessly.
11. List and explain the different states of a process in an operating system.
Answer:
A process in an operating system can exist in several states during its lifecycle:
State Transitions:
• New → Ready: After creation, the process moves to the ready state.
• Ready → Running: The scheduler selects the process for execution.
• Running → Waiting: The process requires an I/O operation.
• Waiting → Ready: The I/O operation is complete.
• Running → Ready: The process is preempted to allow another process to run.
• Running → Terminated: The process completes execution.
Understanding these states helps in managing processes efficiently and ensuring optimal CPU
utilization.
12. Explain the concept of process scheduling and its importance in operating
systems.
Answer:
Process Scheduling is the mechanism by which an operating system decides the order in
which processes access the CPU. It ensures that all processes get a fair share of CPU time and
system resources.
Importance:
Types of Schedulers:
• Long-term Scheduler: Decides which processes are admitted to the system for
processing.
• Short-term Scheduler: Decides which of the ready, in-memory processes is to be
executed next.
• Medium-term Scheduler: Swaps processes in and out of memory to manage the
degree of multiprogramming.
Answer:
FCFS is the simplest scheduling algorithm where the process that arrives first gets executed
first.
Characteristics:
Example:
Consider three processes with the following arrival and burst times:
Execution Order: P1 → P2 → P3
Waiting Times:
• P1: 0
• P2: 5 - 1 = 4
• P3: (5 + 3) - 2 = 6
While FCFS is simple, it can lead to the "convoy effect," where short processes wait for long
ones to finish.
14. What is the Shortest Job First (SJF) scheduling algorithm? Discuss its
advantages and disadvantages.
Answer:
SJF selects the process with the smallest execution time for the next execution.
Advantages:
• Optimal Average Waiting Time: Minimizes the average waiting time compared to
other algorithms.
Disadvantages:
Example:
Execution Order: P4 → P1 → P3 → P2
SJF is efficient but impractical in environments where process lengths are unpredictable.
15. Explain the Round Robin (RR) scheduling algorithm and its suitability for
time-sharing systems.
Answer:
Round Robin assigns a fixed time quantum to each process in the ready queue and cycles
through them.
Characteristics:
Example:
With a time quantum of 3 units, the execution order would cycle through the processes,
allocating 3 units each time until completion.
Suitability:
Ideal for time-sharing systems where responsiveness is crucial, as it ensures all processes are
served in a timely manner.
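As a rough sketch, the snippet below simulates Round Robin with a time quantum of 3 units for three hypothetical processes that are all assumed to arrive at time 0; cycling over the array stands in for the ready queue, and the burst times are made up for the example.

/* Minimal Round Robin simulation sketch: all processes assumed to arrive at
 * time 0; the burst times below are hypothetical example values. */
#include <stdio.h>

int main(void) {
    int burst[]      = {5, 3, 6};           /* remaining burst times          */
    int completion[] = {0, 0, 0};
    const int n = 3, quantum = 3;
    int time = 0, remaining = n;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;     /* process already finished       */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            time += slice;                   /* run for one quantum (or less)  */
            burst[i] -= slice;
            if (burst[i] == 0) {             /* process just completed         */
                completion[i] = time;
                remaining--;
            }
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d completes at t=%d\n", i + 1, completion[i]);
    return 0;
}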
16. Define deadlock in operating systems and explain the necessary conditions
for its occurrence.
Answer:
A deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Answer:
Deadlock Avoidance allows the system to enter a new state only if doing so cannot lead to a
deadlock.
18. What is the Banker's Algorithm? How does it help in deadlock avoidance?
Answer:
The Banker's Algorithm is a deadlock avoidance algorithm that tests for the safety of
resource allocation by simulating the allocation for predetermined maximum possible
amounts of all resources.
Working:
• Each process must declare the maximum number of resources it may need.
• The system checks if allocating resources to a process leads to a safe state.
• If yes, resources are allocated; otherwise, the process waits.
Example:
If a process requests resources, the system checks if the remaining resources are sufficient for
the other processes. If not, the request is denied to avoid potential deadlock.
This algorithm ensures that the system always remains in a safe state, thus avoiding
deadlocks.
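The safety check at the heart of the Banker's Algorithm can be sketched as follows; the process and resource counts and the matrices are hypothetical example data, and the function only answers whether the current allocation state is safe.

/* Banker's Algorithm safety check (sketch).
 * The allocation/need matrices and the available vector are hypothetical. */
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* number of processes      */
#define R 3   /* number of resource types */

bool is_safe(int alloc[P][R], int need[P][R], int avail[R]) {
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int count = 0; count < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                       /* p could finish; reclaim its resources */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progressed = true;
                count++;
            }
        }
        if (!progressed) return false;           /* no process can proceed: unsafe */
    }
    return true;                                 /* a safe sequence exists          */
}

int main(void) {
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2}};
    int need[P][R]  = {{7,3,3},{1,2,2},{0,1,1}};
    int avail[R]    = {3,3,2};
    printf("State is %s\n", is_safe(alloc, need, avail) ? "safe" : "unsafe");
    return 0;
}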
19. Explain the concept of memory management in operating systems and its
objectives.
Answer:
Objectives:
The layered architecture of an operating system is a design approach that divides the OS into
different layers, each of which handles specific tasks. This structure improves modularity,
maintainability, and understandability. The layers are typically stacked, where each layer
communicates only with the adjacent layers. The layers are usually defined as follows:
1. Layer 0: Hardware
The base of the OS is the hardware layer, which consists of the physical components
of the computer system, such as the CPU, memory, and I/O devices. The OS has
direct control over hardware resources through device drivers.
2. Layer 1: Kernel
The kernel layer is the core part of the OS responsible for low-level tasks such as
memory management, process management, device management, and system calls. It
provides an interface between the hardware and the software, ensuring safe and
efficient execution of processes.
3. Layer 2: System Call Interface
This layer acts as an intermediary between user applications and the kernel. It
provides an interface for user programs to make requests to the OS kernel, typically
via system calls (e.g., file operations, process creation).
4. Layer 3: User Interface
The user interface layer includes components like the shell, graphical user interface
(GUI), or command-line interface (CLI). It allows users to interact with the operating
system and run applications.
5. Layer 4: Application Software
The top layer consists of application programs that run on the operating system, such
as word processors, browsers, and games. These programs interact with the OS
through the system call interface.
A thread is the smallest unit of execution within a process. A thread consists of a set of
instructions that can be executed independently, but it shares the same memory space as other
threads within the same process.
Types of Threads:
1. User-Level Threads: Managed by user-level libraries rather than the OS. The OS
kernel does not know about these threads, making management lightweight.
2. Kernel-Level Threads: Managed directly by the OS. The kernel is responsible for
scheduling and managing these threads, and each thread is treated as a separate entity
by the operating system.
• Thread Creation: When a new thread is created, the OS allocates resources such as
stack memory and thread control blocks (TCB) to manage the thread's execution.
• Scheduling: The operating system schedules threads for execution based on their
priority and state. Scheduling algorithms such as Round-Robin, Shortest Job First,
and Priority Scheduling are often used.
• Synchronization: To ensure correct access to shared resources, threads often need to
synchronize. Common synchronization techniques include locks, semaphores, and
monitors.
• Termination: A thread is terminated when it finishes its execution or is forcibly
terminated by the OS due to an error or other condition.
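To ground the lifecycle above, here is a minimal POSIX threads sketch covering creation, execution, and termination (joining); the worker function and its argument are illustrative assumptions.

/* Minimal POSIX threads lifecycle sketch: create, run, join.
 * Compile with: gcc demo.c -pthread */
#include <stdio.h>
#include <pthread.h>

static void *worker(void *arg) {
    int id = *(int *)arg;                  /* illustrative thread argument          */
    printf("thread %d running\n", id);
    return NULL;                           /* the thread terminates here            */
}

int main(void) {
    pthread_t tid[2];
    int ids[2] = {1, 2};

    for (int i = 0; i < 2; i++)            /* creation: the library/OS allocates a  */
        pthread_create(&tid[i], NULL, worker, &ids[i]);   /* stack and control block */

    for (int i = 0; i < 2; i++)            /* termination: wait for each thread     */
        pthread_join(tid[i], NULL);
    return 0;
}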
Advantages of Threads:
Disadvantages of Threads:
Advantages:
Disadvantages:
A scheduler is a core component of an operating system that determines the order in which
processes or threads are executed. It is responsible for allocating CPU time to different
processes, ensuring that system resources are utilized efficiently.
Types of Schedulers:
1. Long-Term Scheduler: Determines which processes are admitted into the system
(i.e., moved from the job pool to the ready queue).
2. Short-Term Scheduler: Decides which of the ready processes will be executed next
by the CPU.
3. Medium-Term Scheduler: Involved in swapping processes in and out of memory to
balance the load.
Key Functions of a Scheduler:
• Fair Allocation of CPU Time: The scheduler ensures that no process monopolizes
the CPU and that all processes get a fair share of CPU time.
• Prioritization: The scheduler may prioritize certain processes based on factors like
priority, real-time constraints, or user preferences.
• Context Switching: The scheduler manages the state of the CPU when switching
between processes, ensuring that the system continues execution without loss of data.
The medium-term scheduler helps improve system performance by managing the swapping
of processes in and out of the main memory. This technique, known as swapping, allows the
operating system to handle more processes than the physical memory can accommodate at
once, thus preventing the system from being overwhelmed.
1. Process Swapping: The medium-term scheduler selects processes that are in the
ready queue but not currently being executed. These processes are swapped out of the
main memory to disk to free up space for other processes.
2. Balancing Memory Usage: By swapping processes in and out, the medium-term
scheduler ensures that the most active processes are kept in memory while less active
processes are temporarily moved to disk storage.
3. Improved Throughput: Swapping enables the system to run more processes
simultaneously, leading to better CPU utilization and higher system throughput.
4. Avoiding Deadlock and Resource Contention: Swapping can alleviate resource
contention by ensuring that the memory is not overwhelmed by too many processes
competing for space.
Benefits of Swapping:
• Improved Resource Utilization: By keeping the CPU busy with processes that fit in
memory, overall system throughput can be improved.
• Efficient Memory Management: Swapping allows the operating system to maximize
memory usage and maintain system responsiveness.
Challenges:
The long-term scheduler is also known as the job scheduler because its primary
responsibility is to control the admission of jobs (or processes) into the ready queue. It
determines which processes should be brought into memory for execution. The job scheduler
selects processes from the job pool based on various factors such as priority, system load, and
resource availability.
• The long-term scheduler has a direct impact on the system's load because it controls
the number of processes in memory at any given time.
• By admitting or rejecting jobs, the long-term scheduler ensures that the system doesn't
become overloaded with too many processes. This helps maintain optimal system
performance by controlling the degree of multiprogramming (how many processes
are active in memory).
• If too many processes are admitted, the system might become overloaded, leading to
excessive paging or swapping, which will slow down the system. On the other hand,
if too few processes are admitted, the system may not fully utilize its CPU, leading to
underutilization.
• Frequency: Runs much more frequently, typically whenever a process blocks, is
preempted, or completes. It determines which process gets the CPU next.
• Impact on System Performance:
o It has a direct impact on system responsiveness and efficiency by
determining the CPU's execution sequence.
o Effective short-term scheduling algorithms can ensure fairness and
maximized CPU utilization while minimizing response times.
o Poor short-term scheduling can lead to inefficient CPU usage, poor process
turnaround, or starvation for some processes.
28. Define Pre-emptive CPU Scheduling and Give Any Two Examples
Pre-emptive CPU Scheduling is a type of CPU scheduling where the operating system can
interrupt (or preempt) a running process to allocate CPU time to another process. This
ensures that high-priority tasks get immediate CPU access, preventing lower-priority tasks
from monopolizing the CPU.
Examples:
1. Round Robin (RR): The scheduler assigns a fixed time quantum (e.g., 10ms) to each
process in the ready queue. If a process does not finish within its allocated time, it is
preempted and placed at the back of the ready queue.
2. Preemptive Priority Scheduling: In this algorithm, each process has a priority. A
running process can be preempted if a new process with a higher priority arrives,
allowing the higher-priority process to execute immediately.
Pre-emptive Scheduling:
• In pre-emptive scheduling, the operating system can interrupt the currently running
process to switch to another process.
• This allows the OS to allocate CPU resources more efficiently and respond to high-
priority tasks promptly.
• Example: Round Robin Scheduling — Each process is given a fixed time slice. If
the process doesn't complete within the slice, it is preempted, and the next process is
executed.
Non-Pre-emptive Scheduling:
Key Differences:
• Pre-emptive: The system can take control away from the process anytime (e.g., for
higher priority tasks).
• Non-Pre-emptive: Once a process starts, it runs to completion without interruption
(unless it voluntarily gives up control).
30. Explain How Context Switching Occurs in Pre-emptive CPU Scheduling
Context Switching occurs when the operating system's scheduler decides to stop the
execution of the current process and start the execution of another process. This process is
essential in preemptive scheduling to ensure that the CPU is allocated fairly and efficiently.
1. Save the Context of the Current Process: The state of the currently running process,
including its CPU registers, program counter, and other relevant information, is saved
into its process control block (PCB).
2. Select the Next Process: The scheduler selects the next process to execute from the
ready queue.
3. Load the Context of the New Process: The state of the new process, saved in its
PCB, is loaded into the CPU registers.
4. Update Process State: The current process's state is updated to "Ready," and the new
process’s state is updated to "Running."
5. Transfer Control: Control is passed to the new process, and it begins execution.
Context switching is an overhead operation, and excessive context switching can reduce
system performance by consuming CPU cycles without performing useful computation.
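The five steps can be condensed into a purely illustrative sketch; helper names such as save_cpu_state and load_cpu_state are hypothetical stand-ins for architecture-specific kernel code, with printouts in place of real register saves.

/* Illustrative context-switch sketch; not real kernel APIs. */
#include <stdio.h>

typedef enum { READY, RUNNING } state_t;
typedef struct { int pid; state_t state; } pcb_t;

static void save_cpu_state(pcb_t *p) {        /* step 1: save registers, PC into PCB */
    printf("save registers/PC of P%d into its PCB\n", p->pid);
}
static void load_cpu_state(pcb_t *p) {        /* step 3: restore registers, PC       */
    printf("restore registers/PC of P%d from its PCB\n", p->pid);
}

static void context_switch(pcb_t *current, pcb_t *next) {
    save_cpu_state(current);     /* 1. save the context of the running process       */
    current->state = READY;      /* 4. Running -> Ready                               */
    /* 2. the scheduler has already picked `next` from the ready queue                */
    load_cpu_state(next);        /* 3. load the saved context of `next`               */
    next->state = RUNNING;       /* 4. Ready -> Running                               */
    /* 5. control now continues in the context of `next`                              */
}

int main(void) {
    pcb_t p1 = {1, RUNNING}, p2 = {2, READY};
    context_switch(&p1, &p2);    /* switch the CPU from P1 to P2 */
    return 0;
}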
31. Shortest Job First (SJF) Scheduling with Average Turnaround Time
Calculation
Given:
SJF Scheduling (non-preemptive) works by selecting the process with the shortest burst time
next.
Gantt Chart (execution order):
P1 -> P2 -> P4 -> P3
Turnaround Time: The turnaround time for a process is the total time from its arrival to its
completion, including waiting time.
Given:
• P1: Waiting time = (start time of P1 - arrival time of P1) = 14ms - 0ms = 14ms
• P2: Waiting time = 0ms
• P3: Waiting time = 14ms - 2ms = 12ms
• P4: Waiting time = 14ms - 3ms = 11ms
Given:
Given:
Gantt Chart and Average Waiting Time: Follow the SRTF scheduling rules to compute
these metrics.
35. Preemptive Priority Scheduling - Gantt Chart, Average Waiting Time,
and Turnaround Time
Given:
• Preemptive Priority Scheduling assigns the CPU to the process with the highest
priority (the smallest priority number).
• If a new process arrives with a higher priority than the currently running process, the
running process is preempted, and the new process is assigned the CPU.
To create the Gantt chart, we simulate the scheduling process with the arrival times, burst
times, and priorities:
1. P1 starts at time 0ms (since it’s the only process available). But at time 1ms, P2
arrives with a lower priority (higher priority number) than P1 (P2 has priority 8, P1
has priority 6), so P1 continues its execution.
2. P3 arrives at time 3ms with priority 7, and since P1 (priority 6) has a higher priority,
it continues.
3. At time 4ms, P4 arrives with priority 5 (higher than P1 and P3), so it preempts P1.
4. After P4 executes for 4ms, the CPU goes back to P1 and the process continues.
Gantt Chart:
Time:    0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20
Process: P1 P1 P1 P1 P4 P4 P4 P1 P2 P2 P2 P3 P3 P3 P3 P5 P5 P6 P6 P6
Waiting Times:
Turnaround Times:
Summary:
• Gantt Chart:
  Time:    0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20
  Process: P1 P1 P1 P1 P4 P4 P4 P1 P2 P2 P2 P3 P3 P3 P3 P5 P5 P6 P6 P6
• Average Waiting Time: 4.83ms
• Average Turnaround Time: 11.83ms
36. Comparison and Analysis of Round Robin (RR), Shortest Remaining Time
First (SRTF) Scheduling Algorithms
Given:
• Round Robin executes processes in a cyclic order, allocating CPU time to each
process in the ready queue for a fixed time quantum (4ms here), after which it moves
to the next process.
• If a process's burst time is not completed within its quantum, it is preempted and
re-entered at the end of the queue. The process continues executing until its burst
time completes.
Time: 1 5 9 13 15 17 21 25 29
Process: P4 P5 P2 P1 P2 P3 P2 P3 P6
• P4 starts at time 1ms and runs for 4ms (until time 5ms).
• P5 runs next for 2ms (until time 7ms).
• P2 starts at time 7ms and runs for 4ms (until time 11ms).
• P1 runs for 4ms (until time 15ms).
• P2 continues and runs for 2ms (until time 17ms).
• P3 runs for 4ms (until time 21ms).
• P2 runs for the remaining 2ms (until time 23ms).
• P3 runs for the remaining 3ms (until time 26ms).
• P6 runs for 3ms (until time 29ms).
In RR, the response time is the time from when a process arrives until its first execution.
• SRTF is a preemptive scheduling algorithm where the process with the smallest
remaining burst time is selected next.
• A process can be preempted if another process with a shorter remaining burst time
arrives.
Time: 1 2 5 6 9 12 15 16 19 22 23 24
Process: P4 P5 P2 P6 P3 P1 P2 P3 P3 P6 P1
Summary Comparison:
Conclusion:
• Round Robin and SRTF performed similarly in this case with the same AWT,
ATAT, and ART.
• SRTF can be more efficient in minimizing waiting time when processes have highly
varied burst times, but it also has the potential for high overhead due to frequent
preemptions.
• RR provides a fairer distribution of CPU time, but may result in higher turnaround
times for some processes.
Concurrency:
Concurrency is the ability of a system to handle multiple tasks at the same time, but not
necessarily simultaneously. In a concurrent system, multiple tasks are in progress at
overlapping time intervals. However, these tasks may be executing one at a time, with the
system switching between them (multitasking), creating an illusion of simultaneous
execution.
1. Task Overlap: Tasks can be in progress at the same time, but only one task is
executed at any given instant.
2. Single Core/Processor: Concurrency can occur on a single-core system by rapidly
switching between tasks.
3. Context Switching: The operating system may switch between tasks using context
switching, where the state of one task is saved and another task is resumed.
4. Independent or Dependent Tasks: Concurrency is ideal when tasks can be
independent or need to coordinate with each other (e.g., in web servers or
multitasking OS).
5. Conceptual: Concurrency is more about structure or design—how tasks are designed
to be handled concurrently, not necessarily their actual simultaneous execution.
Parallel Processing:
Parallel processing involves performing multiple tasks simultaneously by dividing them into
smaller sub-tasks that can be executed at the same time on multiple processors or cores. In
parallel processing, tasks are physically executed at the same time, which speeds up the
overall computation.
Concurrency is more about structuring and handling tasks efficiently, while parallel
processing is about executing tasks simultaneously for faster performance. They can be used
together in modern systems, where concurrency handles multiple tasks at once, and parallel
processing speeds up specific computational tasks within those tasks.
The Operating System ensures mutual exclusion in concurrent processes to prevent race
conditions and allow only one process to access a critical resource at a time. Mutual
exclusion is primarily ensured using synchronization mechanisms such as:
1. Locks: A lock is used to guarantee that only one process can access the critical
section at any given time. If a process locks a resource, other processes that try to lock
the same resource will be blocked until the first process releases the lock.
o Example: Mutex (Mutual Exclusion) is a lock that allows only one process to
access a resource at a time.
2. Semaphores: A semaphore is a signaling mechanism that can be used to control
access to shared resources. It maintains a count and allows processes to access the
critical section based on the semaphore value.
3. Monitors: Monitors are high-level synchronization primitives that encapsulate shared
resources and provide automatic locking mechanisms. A process must acquire a
monitor to access a resource, ensuring mutual exclusion.
4. Disabling Interrupts: In single-processor systems, the OS may disable interrupts
temporarily to prevent context switching during critical operations.
These mechanisms ensure that only one process at a time can access a critical resource,
preventing conflicts and ensuring data integrity.
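As one concrete instance of these mechanisms, the sketch below uses a POSIX unnamed semaphore initialised to 1 (a binary semaphore) to guard a critical section; the shared counter and the iteration count are made-up example values.

/* Binary semaphore guarding a critical section (POSIX, sketch).
 * Compile with: gcc sem_demo.c -pthread */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;              /* binary semaphore, initial value 1     */
static int shared = 0;           /* shared resource (example)             */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);        /* enter the critical section            */
        shared++;                /* only one thread updates at a time     */
        sem_post(&mutex);        /* leave the critical section            */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);      /* 0 = shared between threads, value 1   */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* expected 200000 */
    sem_destroy(&mutex);
    return 0;
}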
39. Explain the race condition with the help of a suitable example?
A race condition occurs when the outcome of a process is dependent on the sequence or
timing of uncontrollable events (e.g., the execution order of threads or processes). It happens
when multiple processes or threads access shared data and try to change it simultaneously,
leading to unpredictable or incorrect results.
Example:
1. Initially, x = 0.
2. P1 reads x (value 0) and prepares to increment it.
3. P2 reads x (value 0) and prepares to increment it.
4. P1 increments x and updates x = 1.
5. P2 increments x (which it read as 0) and updates x = 1.
Expected value of x after both processes complete should be 2, but due to the race condition,
x ends up being 1 because both processes read the value of x at the same time and did not
observe each other’s changes.
To prevent this, synchronization (e.g., using a lock or semaphore) should be applied to the
shared resource x to ensure mutual exclusion.
In concurrent programming, a critical section refers to a part of the code where shared
resources are accessed or modified by multiple processes or threads. Only one process should
be allowed to execute the critical section at any time to avoid conflicts and maintain data
integrity.
Example:
Consider two processes P1 and P2, both trying to access and update a shared bank account
balance. The critical section would be the code segment where the balance is read and
updated:
If both processes try to update the balance simultaneously, a race condition could occur,
leading to inconsistent or incorrect results. To ensure mutual exclusion, only one process
should be allowed in the critical section at a time.
Diagram:
+-------------------------------+
| Critical Section (Critical) |
| Process P1 or P2 enters. |
+-------------------------------+
|
Mutex / Semaphore / Lock
|
+-------------------------------+
| Shared Resource Access |
+-------------------------------+
This shows that only one process can enter the critical section at a time to access or modify
the shared resource.
41. Explain the concept of Race Condition with the help of a simple example
involving two processes incrementing a shared variable.
A race condition occurs when two or more processes attempt to update a shared resource
simultaneously, causing an unpredictable result.
Example:
Consider the shared variable counter initialized to 0. Two processes, P1 and P2, are
incrementing the counter.
If both processes execute concurrently without synchronization, they might both read the
same value of counter before either process writes back its increment. This results in one
update being lost.
Execution:
1. Initially, counter = 0.
2. P1 reads counter (value 0), increments it to 1.
3. P2 reads counter (value 0), increments it to 1.
4. P1 writes back counter = 1.
5. P2 writes back counter = 1.
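The lost update traced above can be reproduced, and then fixed, with two POSIX threads; this is only a sketch, the iteration count is arbitrary, and whether the unsynchronised run actually loses updates depends on timing.

/* Race condition on a shared counter and its fix with a mutex (sketch).
 * Compile with: gcc race.c -pthread */
#include <stdio.h>
#include <pthread.h>

#define ITERATIONS 100000            /* arbitrary example value            */

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int use_lock = 0;             /* 0 = racy run, 1 = synchronised run */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        if (use_lock) pthread_mutex_lock(&lock);
        counter++;                   /* read-modify-write: not atomic      */
        if (use_lock) pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    for (use_lock = 0; use_lock <= 1; use_lock++) {
        counter = 0;
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%s: counter = %ld (expected %d)\n",
               use_lock ? "with mutex" : "without mutex",
               counter, 2 * ITERATIONS);
    }
    return 0;
}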
42. Describe the Critical Section Problem? List the three necessary conditions
that a solution to the critical section problem must satisfy.
The Critical Section Problem involves ensuring that no two processes can be inside their
critical sections simultaneously when accessing shared resources. The solution must avoid
race conditions and provide mutual exclusion.
1. Mutual Exclusion: Only one process can be in the critical section at any given time.
2. Progress: If no process is in its critical section and multiple processes are waiting,
then one of the waiting processes must be allowed to enter the critical section.
3. Bounded Waiting: There must be a limit on the number of times a process can be
bypassed by other processes before it gets a chance to enter the critical section.
43. Describe how a simple mutex lock mechanism can be used to solve the
critical section problem. Support your explanation with a small pseudocode
snippet.
A mutex lock ensures that only one process can enter the critical section at a time. A mutex
is a binary flag that is locked when a process enters the critical section and unlocked when
the process leaves.
Pseudocode:
// Process 1
while (lock == 1); // Wait until the lock is free
lock = 1; // Lock the mutex
// Critical section code here
lock = 0; // Unlock the mutex
// Process 2
while (lock == 1); // Wait until the lock is free
lock = 1; // Lock the mutex
// Critical section code here
lock = 0; // Unlock the mutex
The mutex ensures that when Process 1 is executing its critical section, Process 2 must wait
until Process 1 releases the lock. This guarantees mutual exclusion, provided the check of
lock and the assignment lock = 1 execute as a single atomic operation (e.g., a hardware
test-and-set instruction); otherwise both processes could observe lock == 0 and enter
together, so the pseudocode above is only a simplification.
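In practice that atomic test-and-set is usually provided by the hardware; a minimal sketch using a C11 atomic_flag as a spinlock is shown below, with illustrative function names.

/* Spinlock built on an atomic test-and-set (C11 sketch). */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void enter_critical_section(void) {
    /* Atomically sets the flag and returns its previous value;
       spin until the previous value was "clear". */
    while (atomic_flag_test_and_set(&lock))
        ;                          /* busy-wait */
}

static void leave_critical_section(void) {
    atomic_flag_clear(&lock);      /* release the lock */
}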
CPU scheduling is the process by which the operating system decides which process or
thread will execute on the CPU at any given time. The goal of CPU scheduling is to
maximize CPU utilization and system throughput while minimizing response time, waiting
time, and turnaround time.
• CPU-bound Process: These processes require more CPU time than I/O time. They
perform heavy computation and typically have longer burst times. Examples include
scientific simulations, rendering tasks, and data processing.
• I/O-bound Process: These processes require more time for I/O operations (disk
reads, network operations) than for computation. They typically spend much of their
time waiting for I/O completion. Examples include database queries and file transfers.
Scheduling criteria (goals):
• Turnaround Time (time to completion of a process): minimize.
• Fairness (a fair share of the CPU for every process): maximize.
Example Computation:
47. Given Arrival Time {0,1,2} and Burst Time {5,3,6} — Compute Throughput,
Response Time, and Turnaround Time:
Process  Arrival  Burst  Start  Completion  Turnaround (CT - AT)  Response (Start - AT)
P1       0        5      0      5           5 - 0 = 5             0 - 0 = 0
P2       1        3      5      8           8 - 1 = 7             5 - 1 = 4
P3       2        6      8      14          14 - 2 = 12           8 - 2 = 6
Throughput = 3 processes / 14 time units ≈ 0.21 processes per unit.
48. Given Arrival {0,1,2} and Burst {5,3,6} — Compute Avg Waiting Time and Avg
Turnaround Time:
Process  Turnaround Time  Waiting Time (TAT - BT)
P1       5                5 - 5 = 0
P2       7                7 - 3 = 4
P3       12               12 - 6 = 6
✅ Average Waiting Time = (0 + 4 + 6) / 3 = 3.33 units
✅ Average Turnaround Time = (5 + 7 + 12) / 3 = 8 units
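These averages can be reproduced with a short FCFS computation; the arrival and burst times below are the ones given in the question.

/* FCFS waiting/turnaround time computation for arrival {0,1,2}, burst {5,3,6}. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2}, burst[] = {5, 3, 6};
    const int n = 3;
    int time = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {               /* processes in arrival order */
        if (time < arrival[i]) time = arrival[i];
        int completion = time + burst[i];
        int tat = completion - arrival[i];      /* turnaround = CT - AT       */
        int wt  = tat - burst[i];               /* waiting = TAT - BT         */
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat, wt);
        total_tat += tat;
        total_wt  += wt;
        time = completion;
    }
    printf("Avg WT = %.2f, Avg TAT = %.2f\n", total_wt / n, total_tat / n);
    return 0;
}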
• Result:
CPU becomes underutilized because short, quick processes are stuck waiting for a
long-running one to finish.
• Example:
If a long process is scheduled first under FCFS, short processes pile up behind it like a
convoy.
51. FCFS Scheduling (Processes P1–P4)
Gantt chart time markers: 0  10  14  22  30  35
Calculations:
Process  Arrival  Burst  Completion Time  Turnaround Time (CT - AT)  Waiting Time (TAT - BT)
P1       0        10     10               10 - 0 = 10                10 - 10 = 0
P2       1        4      14               14 - 1 = 13                13 - 4 = 9
P3       2        8      22               22 - 2 = 20                20 - 8 = 12
P4       3        5      35               35 - 3 = 32                32 - 5 = 27
Step-by-Step:
• At time 0: P1 arrives → Execute P1 (8 units)
Gantt chart time markers: 0  8  11  15  19  24
Calculations:
Process  Arrival  Burst  Completion Time  Turnaround Time (CT - AT)  Waiting Time (TAT - BT)
P1       0        8      8                8 - 0 = 8                  8 - 8 = 0
P2       1        5      24               24 - 1 = 23                23 - 5 = 18
P3       3        3      11               11 - 3 = 8                 8 - 3 = 5
P4       5        4      15               15 - 5 = 10                10 - 4 = 6
Step-by-Step:
• At time 8: P2(5), P3(3), P4(4), P5(10) are all available. Shortest job = P3 (3 units) → P3 runs next.
Gantt chart time markers: 0  8  11  15  19  24  34
Calculations:
Process  Arrival  Burst  Completion  Turnaround   Waiting
P1       0        8      8           8 - 0 = 8    0
P2       1        5      24          24 - 1 = 23  23 - 5 = 18
P3       3        3      11          11 - 3 = 8   8 - 3 = 5
P4       5        4      15          15 - 5 = 10  10 - 4 = 6
P5       2        10     34          34 - 2 = 32  32 - 10 = 22
• Why is it Necessary?
o It enables processes to share CPU time fairly and supports efficient CPU
utilization.
• Number of Context Switches: more context switches vs. fewer context switches.
• Stack Pointer
✅ Memory-Related Information:
P1 Running → Save and update PCB of P1 → Load PCB of P2 → P2 Running
Explanation of Steps:
1. Save P1's context (registers, program counter, etc.) into P1's PCB.
2. Update P1's PCB (e.g., change its state from Running to Ready).
3. Select the next process (P2) from the ready queue (based on the scheduler's decision).
4. Load P2's context: restore the CPU registers, program counter, and stack pointer from P2's PCB.
5. Transfer control to P2: P2 starts/resumes execution.
• Each process has its own address space (protected by hardware MMU - Memory
Management Unit).
• Time Quantum: 10 ms
• A 25 ms process completes in three slices: the first 10 ms, the next 10 ms, and the last 5 ms.
• Useful Work (CPU execution only) = CPU time for the processes = 5 × 25 ms = 125 ms
✨ Quick Summary:
Gantt Chart: P1 P2 P3 P4 P5 P1 P2 P3 P4 P5 P1 P2 P3 P4 P5
Context Switch Overhead: Higher vs. Lower
✅ (b) User-Level Threads vs Kernel-Level Threads
• Context Switching Needed: Yes (only one task at a time) vs. Sometimes (each core can run a process).
• Parallel Execution: No (only pseudo-parallelism) vs. Yes (true parallelism).
parallelism)
• During context switch, current PCB is updated, and new PCB is loaded. Example:
While switching from Process P1 to P2, OS saves P1's PC and registers into P1’s PCB and
loads P2’s informa4on from its PCB.
• By rapidly switching between tasks, OS creates the illusion that all tasks are running
simultaneously.
63. Basic States of a Process (State Transition Diagram)
Here are the basic states:
New → Ready → Running → Waiting → Terminated
(Running returns to Ready on preemption, and Waiting returns to Ready after I/O completes.)
Example:
When a process finishes waiting for I/O and now just needs CPU time.
CPU Usage: not using the CPU yet (Ready) vs. actively using the CPU (Running)
Execution Unit
Simple:
✅ Key Point:
The process becomes eligible for the CPU again but must wait in the Ready Queue until the
scheduler assigns it the CPU.
67. How a Process Moves from Creation to Termination (State Transition Concept)
Lifecycle of a process:
1. New:
2. Ready:
3. Running:
4. Waiting:
5. Ready (again):
6. Running (again):
7. Terminated:
✅ In multitasking systems:
Multiple processes constantly cycle between Ready, Running, and Waiting before
reaching Termination.
✅ Explanation of States:
• Running: Executing on the CPU.
✅ Key Transitions:
• During context switching, PCB helps save and restore process states.
• Saving:
When a process is preempted, OS saves its CPU state into its PCB.
• Loading:
Before resuming a process, OS loads the CPU state from the process’s PCB.
• Scheduling:
Scheduler uses PCB information to pick the next process to run.
• Management:
PCB helps OS keep track of resource usage and status of every process.
Qn  Focus                         Key Point
66  Waiting → Ready transition    After I/O/event completion
70  PCB usage in context switch   Save/load CPU state, help scheduling
71. What information does a PCB store for a process in an operating system?
✅ A Process Control Block (PCB) stores:
• Process ID and process state
• Program counter and CPU register values
• CPU scheduling information (priority, queue pointers)
• Memory-management information (base/limit registers, page tables)
• I/O status and accounting information
Example:
A web browser:
Example:
Two threads trying to update the same bank account balance at the same time →
synchronization ensures the updates happen safely.
Speed of Context Switching: very fast, no kernel involvement (user-level) vs. slower, the kernel needs to switch contexts (kernel-level).
• Ready:
The thread is ready to run but waiting for CPU allocation.
• Blocked (Waiting):
The thread is waiting for some event (e.g., I/O completion) before it can continue.
76. Differentiate Between Thread and Process (At least Four Points)
Aspect              Thread              Process
Creation Overhead   Low (lightweight)   High (heavyweight)
✅ Dekker’s Algorithm is the first known software solution for the critical section
problem involving two processes.
• It ensures mutual exclusion (only one process enters the critical section at a time).
Key Idea:
• Each process sets its flag to indicate interest in entering the critical section.
• If both want to enter at the same time, the turn variable decides who goes first.
Simple Pseudocode:
// Critical Section
...
turn = j;
flag[i] = false;
// Remainder Section
...
} while (true);
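For reference, a fuller sketch of Dekker's algorithm for two processes i and j = 1 - i is given below, using the flag and turn variables described above; on modern hardware these shared accesses would additionally need atomic operations or memory fences.

/* Dekker's algorithm for two processes i and j = 1 - i (sketch). */
int flag[2] = {0, 0};   /* flag[k] = 1: process k wants to enter            */
int turn = 0;           /* whose turn it is when both want to enter         */

void process(int i) {
    int j = 1 - i;
    while (1) {
        flag[i] = 1;                    /* announce interest                 */
        while (flag[j]) {               /* is the other process interested?  */
            if (turn == j) {            /* its turn: back off                */
                flag[i] = 0;
                while (turn == j)
                    ;                   /* busy-wait for our turn            */
                flag[i] = 1;            /* re-announce interest              */
            }
        }
        /* Critical Section */
        turn = j;                       /* give the other process priority   */
        flag[i] = 0;
        /* Remainder Section */
    }
}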
• Synchronization:
Ensuring threads do not corrupt shared resources.
• Scheduling:
Efficiently allocating CPU time among many threads.
• Resource Allocation:
Managing limited system resources (memory, files, etc.).
• Multiple threads share the same address space, files, and resources but execute
independently.
• Scheduling:
Scheduler manages thread execution order (can be preemptive or cooperative).
• Synchronization:
Mechanisms like mutexes, semaphores, and locks prevent race conditions.
• Thread Libraries:
e.g., the POSIX threads (pthreads) API
✅ Modern OS Support:
flag[i]: true if process i wants to enter the critical section.
turn: indicates which process's turn it is to enter the critical section.
• Mutual Exclusion
• Progress
• Bounded Waiting
• Busy Waiting:
Processes constantly check conditions, wasting CPU time (no CPU yielding).
• Complexity:
Difficult to understand and implement correctly.
• Hardware Dependency:
Works poorly on modern architectures with complex memory models (caching,
out-of-order execution).
Qn  Focus                           Key Point
78  Thread Management Challenges   Synchronization, scheduling, deadlocks
80  Peterson's Algorithm Variables  flag[i], turn
81  Dekker's Algorithm Drawbacks    Busy waiting, complexity, 2-process limit
// Process i (0 or 1)
do {
    flag[i] = true;        // announce interest
    turn = 1 - i;          // give priority to the other process
    // Wait until the other process has finished or it's our turn
    while (flag[1 - i] && turn == 1 - i) {
        // Busy-wait
    }
    // Critical Section
    ...
    flag[i] = false;       // leave the critical section
    // Remainder Section
    ...
} while (true);
• turn ensures mutual exclusion by giving one process priority if both processes try to
enter at the same time.
83. Can Peterson's and Dekker's Algorithms Be Extended to More Than Two
Processes?
✅ Peterson's Algorithm: the basic version works only for two processes; it can be generalized to N processes (e.g., the filter algorithm), at the cost of added complexity.
✅ Dekker's Algorithm: designed strictly for two processes; extending it to more processes is cumbersome and rarely done.
✅ No, Dekker's and Peterson's algorithms are not commonly used in modern operating
systems for process synchronization.
Reasons:
Steps:
1. Both processes have a flag indicating their desire to enter the critical section.
2. A turn variable decides which process will be allowed to enter if both try at the
same time.
3. Process 1:
o If Process 2 has its flag set and it is not its turn, it waits (busy-wait).
o If it is its turn or Process 2 is not interested, it enters the critical section.
• Only one process can enter the critical section at any time because one process is
always given priority based on the turn variable.
86. Compare and Contrast Dekker's and Peterson's Algorithms
Aspect   Dekker's   Peterson's
1. Teaching synchronization:
As a simple and easy-to-understand algorithm for two processes.
2. Small-scale systems:
In systems with very few processes, Peterson's algorithm could be considered.
Advantages:
• Limited to two processes, making it impractical for modern systems with many
concurrent tasks.
• Bounded Waiting: Every process has a limit on how long it must wait to enter the
critical section.
• Peterson's Algorithm ensures one process will always get the critical section.
• Dekker’s Algorithm avoids deadlock by using a turn-based system that ensures that
one process will proceed.
Qn  Focus                                       Key Point
82  Peterson's Algorithm Pseudocode             Flags, turn variable for mutual exclusion
85  Dekker's Algorithm                          Uses turn and flag variables for mutual exclusion
86  Compare Dekker's vs Peterson's Algorithms   Differences in simplicity, efficiency, scalability