Index
S.No Topic Name
1 Introduction
2 What is CPU Scheduling?
3 Why CPU Scheduling is Required
4 Types of Scheduling Algorithms
5 Characteristics of CPU Scheduling
6 Objectives Of CPU Scheduling
7 Different Scheduling Algorithms
8 First Come First Serve (FCFS)
9 Shortest Job First (SJF)
10 Priority Scheduling
11 Round Robin Scheduling
12 Summary
13 Conclusion
14 References
Introduction
An Operating System (OS) is software that acts as an interface between the hardware
and the user, managing all the resources of a computer efficiently. One of the most
important resources managed by the OS is the CPU (Central Processing Unit), which
executes the instructions of processes.
In a multitasking environment, multiple processes may be ready to use the CPU at the
same time. Since the CPU can execute only one process at a time, the OS must decide
which process gets the CPU and when. This decision-making is called CPU Scheduling.
CPU scheduling is crucial for several reasons:
1. It ensures fairness, giving each process a chance to execute.
2. It improves CPU utilization, avoiding idle CPU time.
3. It reduces waiting time and turnaround time for processes.
4. It maximizes throughput, i.e., the number of processes completed per unit time.
Scheduling is especially important in real-time systems where certain tasks must complete
within strict deadlines. For example, in an air traffic control system, the OS must ensure that
critical processes like monitoring flight paths are executed immediately.
What is CPU Scheduling?
CPU scheduling is the method by which the operating system selects a process from the
ready queue (a queue of processes waiting for CPU) and allocates CPU time to it. The
ready queue can contain processes of different priorities, burst times, and arrival times. The
goal is to make CPU usage efficient, fair, and responsive.
Why CPU Scheduling is Required:
CPU scheduling is required because the CPU is one of the most critical and limited
resources in a computer system. In a multitasking environment, multiple processes may be
ready to use the CPU at the same time, but the CPU can execute only one process at a
time. Without proper scheduling, some processes may remain waiting for long periods,
leading to poor CPU utilization, increased waiting time, and reduced system
performance. CPU scheduling ensures that processes are executed in a fair and efficient
order, maximizing CPU usage, minimizing process waiting and turnaround times, and
improving overall system responsiveness. It is especially important in time-sharing and
real-time systems, where timely execution of processes is crucial for meeting system
requirements.
Types of Scheduling Algorithms:
CPU scheduling algorithms are broadly classified into:
• Preemptive Scheduling: A running process can be interrupted to give CPU to
another process.
• Non-preemptive Scheduling: Once a process starts execution, it cannot be
interrupted until it finishes or waits for I/O.
Characteristics of CPU Scheduling
1. Preemptive vs Non-preemptive:
o In preemptive scheduling, a process currently running on the CPU can be
stopped if a higher-priority process arrives. For example, in Round Robin, a
process is preempted when its time quantum expires.
o In non-preemptive scheduling, the process keeps the CPU until it finishes
its execution or voluntarily waits for I/O. FCFS and SJF are examples.
2. CPU-Bound vs I/O-Bound Processes:
o CPU-bound processes require more CPU time and fewer I/O operations,
e.g., video rendering.
o I/O-bound processes perform many I/O operations but require little CPU
time, e.g., reading from a database.
3. Performance Criteria (see the short sketch after this list):
o Throughput: Number of processes completed per unit time.
o Turnaround Time: Total time from process submission to completion.
o Waiting Time: Total time a process spends waiting in the ready queue.
o Response Time: Time from submission of a request to the first response.
4. Fairness: All processes should get a reasonable share of CPU time to avoid
starvation.
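To make these criteria concrete: turnaround time is completion time minus arrival time, and waiting time is turnaround time minus burst time. The following is a minimal Python sketch that applies both formulas to a single hypothetical process (the values are illustrative, not from the examples below):

# Hypothetical process used only to illustrate the metric formulas.
arrival_time = 0       # when the process enters the ready queue
burst_time = 5         # CPU time the process needs
completion_time = 5    # when the process finishes executing

turnaround_time = completion_time - arrival_time  # submission -> completion
waiting_time = turnaround_time - burst_time       # time spent only waiting in the ready queue

print(turnaround_time, waiting_time)  # prints: 5 0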
Objectives of CPU Scheduling
• Maximize CPU utilization.
• Minimize average waiting time.
• Minimize turnaround time.
• Provide fair and responsive scheduling.
• Support priorities for important tasks.
Different Scheduling Algorithms:
First Come First Serve (FCFS)
FCFS is the simplest CPU scheduling algorithm. The process that arrives first in the
ready queue is executed first. It is non-preemptive, meaning once a process starts
execution, it cannot be interrupted.
Characteristics:
• Uses a FIFO queue.
• Simple and easy to implement.
• Works well if process burst times are similar.
Advantages:
• Fair: Every process is executed in order of arrival.
• Easy to implement with queues.
Disadvantages:
• Convoy Effect: Short processes must wait for long processes to complete,
increasing waiting time.
• Poor CPU utilization if long processes dominate.
Example:
Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time
P1      | 0            | 5          | 5               | 5               | 0
P2      | 1            | 3          | 8               | 7               | 4
P3      | 2            | 8          | 16              | 14              | 6
P4      | 3            | 6          | 22              | 19              | 13
Gantt Chart: P1 → P2 → P3 → P4
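The values in this table can be reproduced with a short simulation. Below is a minimal Python sketch of non-preemptive FCFS (function and variable names are illustrative; the process data are taken from the table above):

# Minimal sketch of non-preemptive FCFS scheduling.
processes = [      # (name, arrival time, burst time) from the example above
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 8),
    ("P4", 3, 6),
]

def fcfs(procs):
    time = 0
    results = []
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):  # serve in arrival order
        time = max(time, arrival)        # CPU may sit idle until the process arrives
        completion = time + burst        # run the whole burst without preemption
        results.append((name, completion, completion - arrival, completion - arrival - burst))
        time = completion
    return results

for name, ct, tat, wt in fcfs(processes):
    print(f"{name}: CT={ct}  TAT={tat}  WT={wt}")

Running the sketch prints the completion, turnaround, and waiting times shown in the table.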
Shortest Job First (SJF)
SJF selects the process with the smallest CPU burst time next. It can be:
• Non-preemptive: Executes the process until completion.
• Preemptive (SRTF): If a new process arrives with a smaller burst time, the current
process is preempted.
Advantages:
• Minimizes average waiting time.
• Efficient for processes with predictable burst times.
Disadvantages:
• Requires knowledge of CPU burst times, which is not always possible.
• Can cause starvation for long processes if short processes keep arriving.
Example:
Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time
P1      | 0            | 6          | 6               | 6               | 0
P2      | 1            | 8          | 24              | 23              | 15
P3      | 2            | 7          | 16              | 14              | 7
P4      | 3            | 3          | 9               | 6               | 3
Gantt Chart: P1 → P4 → P3 → P2
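The schedule can be reproduced with a minimal Python sketch of non-preemptive SJF, which picks the arrived process with the smallest burst time at each scheduling point (names are illustrative; data are from the table above):

# Minimal sketch of non-preemptive SJF scheduling.
processes = [      # (name, arrival time, burst time) from the example above
    ("P1", 0, 6),
    ("P2", 1, 8),
    ("P3", 2, 7),
    ("P4", 3, 3),
]

def sjf(procs):
    remaining = list(procs)
    time = 0
    results = []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]   # processes that have arrived
        if not ready:                                    # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])             # pick the shortest burst
        remaining.remove(job)
        name, arrival, burst = job
        time += burst                                    # run to completion (non-preemptive)
        results.append((name, time, time - arrival, time - arrival - burst))
    return results

for name, ct, tat, wt in sjf(processes):
    print(f"{name}: CT={ct}  TAT={tat}  WT={wt}")

The processes complete in the order P1, P4, P3, P2, matching the Gantt chart above.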
Priority Scheduling
Priority Scheduling assigns each process a priority value. The CPU is allocated to the
process with the highest priority. If two processes have the same priority, they are
scheduled based on arrival time (FCFS).
Priority Scheduling can be preemptive or non-preemptive:
1. Non-preemptive Priority Scheduling:
o Once a process starts execution, it cannot be preempted.
o CPU is assigned to the highest priority process in the ready queue.
2. Preemptive Priority Scheduling:
o If a new process arrives with higher priority than the currently running
process, the CPU is preempted and given to the new process.
Advantages:
• Ensures important or critical tasks are executed first.
• Can improve response time for high-priority processes.
Disadvantages:
• Starvation: Low-priority processes may wait indefinitely.
• Requires aging techniques to gradually increase priority of waiting processes and
prevent starvation.
Example (all processes arrive at time 0; a smaller priority number means higher priority):
Process | Burst Time | Priority | Completion Time | Turnaround Time | Waiting Time
P1      | 10         | 3        | 12              | 12              | 2
P2      | 1          | 1        | 1               | 1               | 0
P3      | 2          | 4        | 14              | 14              | 12
P4      | 1          | 2        | 2               | 2               | 1
Gantt Chart: P2 → P4 → P1 → P3
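The same schedule can be produced with a minimal Python sketch of non-preemptive priority scheduling, under the same assumptions as the example (all arrivals at time 0, smaller number = higher priority; names are illustrative):

# Minimal sketch of non-preemptive priority scheduling.
processes = [      # (name, burst time, priority) from the example above
    ("P1", 10, 3),
    ("P2", 1, 1),
    ("P3", 2, 4),
    ("P4", 1, 2),
]

def priority_schedule(procs):
    time = 0
    results = []
    # Smaller priority number = higher priority; all processes are assumed to arrive at time 0.
    for name, burst, prio in sorted(procs, key=lambda p: p[2]):
        time += burst                    # run each process to completion (non-preemptive)
        turnaround = time                # arrival time is 0, so TAT equals completion time
        results.append((name, time, turnaround, turnaround - burst))
    return results

for name, ct, tat, wt in priority_schedule(processes):
    print(f"{name}: CT={ct}  TAT={tat}  WT={wt}")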
Round Robin Scheduling
Round Robin is a preemptive scheduling algorithm designed for time-sharing systems.
In RR, each process is assigned a fixed time slice or quantum. If the process does not
finish in its time quantum, it is moved to the end of the ready queue, and the CPU is given
to the next process.
Advantages:
• Fair and prevents indefinite waiting.
• Efficient for systems where all processes need equal attention.
• Simple and easy to implement.
Disadvantages:
• Average waiting time depends heavily on time quantum.
• If quantum is too small, context switching overhead increases, reducing CPU
efficiency.
• If quantum is too large, RR behaves like FCFS and loses fairness.
Example:
Time Quantum = 4 units. Processes: P1 (Arrival 0, Burst 5), P2 (Arrival 1, Burst 3), P3
(Arrival 2, Burst 8), P4 (Arrival 3, Burst 6). Processes that arrive during a running
time slice join the ready queue ahead of the preempted process.
Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (CT-AT) | Waiting Time (TAT-BT)
P1      | 0            | 5          | 16              | 16                      | 11
P2      | 1            | 3          | 7               | 6                       | 3
P3      | 2            | 8          | 20              | 18                      | 10
P4      | 3            | 6          | 22              | 19                      | 13
Gantt Chart (time units):
0–4: P1 → 4–7: P2 → 7–11: P3 → 11–15: P4 → 15–16: P1 → 16–20: P3 → 20–22: P4
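The table and Gantt chart can be reproduced by simulating the ready queue directly. Below is a minimal Python sketch of Round Robin with a 4-unit quantum, assuming processes that arrive during a slice join the queue before the preempted process (names are illustrative; data are from the example above):

from collections import deque

# Minimal sketch of Round Robin scheduling with a fixed time quantum.
QUANTUM = 4
processes = [      # (name, arrival time, burst time) from the example above
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 8),
    ("P4", 3, 6),
]

def round_robin(procs, quantum):
    arrivals = sorted(procs, key=lambda p: p[1])            # admit in arrival order
    remaining = {name: burst for name, _, burst in procs}   # CPU time still needed
    queue, time, i, done = deque(), 0, 0, {}
    while len(done) < len(procs):
        while i < len(arrivals) and arrivals[i][1] <= time: # admit new arrivals
            queue.append(arrivals[i][0])
            i += 1
        if not queue:                         # CPU idle until the next arrival
            time = arrivals[i][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run at most one time quantum
        time += run
        remaining[name] -= run
        while i < len(arrivals) and arrivals[i][1] <= time: # arrivals during this slice
            queue.append(arrivals[i][0])                    # go ahead of the preempted process
            i += 1
        if remaining[name] == 0:
            done[name] = time                 # record completion time
        else:
            queue.append(name)                # preempted: back of the ready queue
    return done

completions = round_robin(processes, QUANTUM)
for name, at, bt in processes:
    tat = completions[name] - at
    print(f"{name}: CT={completions[name]}  TAT={tat}  WT={tat - bt}")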
Summary
CPU scheduling is one of the most fundamental functions of an operating system. It ensures
efficient allocation of the CPU, maximizing system performance while maintaining fairness
among processes. In this assignment, we studied several key scheduling algorithms, each
with its own characteristics, advantages, and disadvantages:
1. First Come First Serve (FCFS):
o Non-preemptive algorithm that executes processes in the order they arrive.
o Simple and fair, but may cause long waiting times due to the convoy effect.
2. Shortest Job First (SJF) / Shortest Remaining Time First (SRTF):
o Selects the process with the smallest burst time.
o Minimizes average waiting time but may cause starvation for longer
processes.
3. Priority Scheduling:
o Executes the highest-priority process first.
o Can be preemptive or non-preemptive.
o May require aging to prevent low-priority processes from indefinite waiting.
4. Round Robin (RR):
o Preemptive algorithm where each process is given a fixed time quantum.
o Fair and prevents starvation, ideal for time-sharing systems.
o Efficiency depends on the choice of time quantum.
Key Observations:
• No single scheduling algorithm is perfect for all situations.
• FCFS is simple but inefficient for varying burst times.
• SJF/SRTF is optimal for minimizing waiting time but may starve long processes.
• Priority Scheduling suits critical tasks, while Round Robin ensures responsiveness
in interactive systems.
• The choice of algorithm depends on system type: batch processing, real-time
systems, or time-sharing environments.
Conclusion
CPU scheduling plays a vital role in operating system performance. Proper scheduling
improves CPU utilization, system throughput, and response times while ensuring
fairness among processes.
By understanding the principles, working, and trade-offs of FCFS, SJF, Priority, and Round
Robin, system designers can select the most appropriate algorithm for their specific
environment. In modern operating systems, a combination of algorithms may also be used
to balance efficiency, fairness, and responsiveness.
In summary:
• CPU scheduling ensures system efficiency and fairness.
• Each scheduling algorithm has unique strengths and weaknesses.
• Knowledge of scheduling is essential for designing high-performance operating
systems.
References
1. Silberschatz, A., Galvin, P. B., & Gagne, G. Operating System Concepts, 10th
Edition, Wiley.
2. Tanenbaum, A. S., & Bos, H. Modern Operating Systems, 4th Edition, Pearson.
3. Stallings, W. Operating Systems: Internals and Design Principles, 9th Edition,
Pearson.