CPU Scheduling in Operating Systems
CPU scheduling is the process the operating system uses to decide which task or program
gets to use the CPU at a particular time. Since many programs can be ready to run at once
but the CPU can only execute one task at a time, the OS must manage the CPU's time so
that every program gets a proper chance to run. The purpose of CPU scheduling is to make
the system more efficient and faster. In this article, we discuss CPU scheduling in detail.
What is the Need for a CPU Scheduling Algorithm?
CPU scheduling is the process of deciding which process will own the CPU while another
process is suspended. The main function of CPU scheduling is to ensure that whenever the
CPU would otherwise sit idle, the OS selects one of the processes available in the ready
queue.
In multiprogramming, if the long-term scheduler selects mostly I/O-bound processes, then
most of the time the CPU remains idle. The job of an effective scheduler is to improve
resource utilization.
If most processes switch from the running state to the waiting state, the CPU stays idle and
overall system performance drops. To avoid this, the OS needs to schedule tasks so that the
CPU is used as fully as possible and the chance of deadlock is minimized.
Terminologies Used in CPU Scheduling
Arrival Time: The time at which the process arrives in the ready queue.
Completion Time: The time at which the process completes its execution.
Burst Time: The time required by a process for CPU execution.
Turn Around Time: The difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T.): The difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
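As a quick worked example, here is a minimal sketch in Python applying the two formulas above; the arrival, burst and completion times are purely hypothetical.

# Hypothetical process: arrives at t = 2, needs 4 units of CPU,
# and completes at t = 9 because it had to wait while other processes ran.
arrival_time = 2
burst_time = 4
completion_time = 9

turnaround_time = completion_time - arrival_time   # 9 - 2 = 7
waiting_time = turnaround_time - burst_time        # 7 - 4 = 3

print(turnaround_time, waiting_time)               # prints: 7 3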
Things to Take Care of While Designing a CPU Scheduling Algorithm
Different CPU scheduling algorithms have different structures, and the choice of a particular
algorithm depends on a variety of factors. Several criteria have been proposed for comparing
CPU scheduling algorithms.
The criteria include the following:
CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as
busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a
real system it typically varies from 40 to 90 percent depending on the system load.
Throughput: The number of processes completed per unit of time is called throughput.
Throughput may vary depending on the length or duration of the processes.
Turnaround Time: For a particular process, an important criterion is how long it takes to
execute that process. The time elapsed from submission of the process to its completion is
known as the turnaround time. Turnaround time is the sum of the time spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
Waiting Time: The scheduling algorithm does not affect the amount of time a process
needs to execute once it has started running; it only affects the waiting time of the
process, i.e. the time spent waiting in the ready queue.
Response Time: In an interactive system, turnaround time is not the best measure. A
process may produce some output early and continue computing new results while the
previous results are being shown to the user. So another measure is the time from
submission of a request until the first response is produced. This measure is called
response time.
Different Types of CPU Scheduling Algorithms
There are mainly two types of scheduling methods:
Preemptive Scheduling: Preemptive scheduling is used when a process switches from the
running state to the ready state or from the waiting state to the ready state.
Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process
terminates, or when a process switches from the running state to the waiting state.
Let us now learn about these CPU scheduling algorithms in operating systems one by
one:
1. First Come First Serve
FCFS is considered the simplest of all operating system scheduling algorithms. The first
come first serve scheduling algorithm states that the process that requests the CPU first
is allocated the CPU first, and it is implemented using a FIFO queue.
Characteristics of FCFS
FCFS is a non-preemptive scheduling algorithm: once a process gets the CPU, it keeps it until it completes or blocks.
Tasks are always executed on a First-come, First-serve concept.
FCFS is easy to implement and use.
This algorithm is not very efficient, and the average waiting time is quite high.
Advantages of FCFS
Easy to implement
First come, first serve method
Disadvantages of FCFS
FCFS suffers from the convoy effect: one long process can hold up many short ones behind it.
The average waiting time is much higher than with the other algorithms.
Because FCFS is so simple, it is not very efficient.
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on First come, First serve Scheduling.
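As a rough illustration, the following is a minimal Python sketch of non-preemptive FCFS; the process names, arrival times and burst times are hypothetical, and this is one possible way to simulate the algorithm rather than a definitive implementation.

def fcfs(processes):
    """Simulate First Come First Serve.
    processes: list of (name, arrival_time, burst_time) tuples."""
    # Sort by arrival time; ties keep their input order (FIFO behaviour).
    processes = sorted(processes, key=lambda p: p[1])
    time = 0
    results = []
    for name, arrival, burst in processes:
        start = max(time, arrival)         # the CPU may idle until the process arrives
        completion = start + burst
        turnaround = completion - arrival
        waiting = turnaround - burst
        results.append((name, completion, turnaround, waiting))
        time = completion
    return results

# Hypothetical workload: (name, arrival time, burst time)
for row in fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]):
    print(row)                             # (name, completion, turnaround, waiting)

If the long job P3 were the first to arrive instead, both short jobs would have to wait behind it, which is the convoy effect mentioned above.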
2. Shortest Job First (SJF)
Shortest Job First (SJF) is a scheduling algorithm that selects the waiting process with the
smallest execution time to execute next. This scheduling method may or may not be
preemptive, and it significantly reduces the average waiting time for the processes waiting
to be executed.
Characteristics of SJF
Shortest Job First has the advantage of having the minimum average waiting time
among all operating system scheduling algorithms.
Each job is associated with the amount of time it needs to complete (its burst time), which
the scheduler uses to pick the next process.
It may cause starvation if shorter processes keep coming. This problem can be solved
using the concept of ageing.
Advantages of SJF
Since SJF reduces the average waiting time, it is better than the first come first
serve scheduling algorithm.
SJF is generally used for long-term scheduling.
Disadvantages of SJF
One of the demerits of SJF is starvation.
It is often difficult to predict the length of the upcoming CPU burst.
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on Shortest Job First.
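As a hedged sketch of the non-preemptive variant, the Python below repeatedly picks the ready process with the smallest burst time; the workload is hypothetical and burst times are assumed to be known exactly, which a real scheduler cannot guarantee.

def sjf(processes):
    """Non-preemptive Shortest Job First.
    processes: list of (name, arrival_time, burst_time) tuples."""
    remaining = sorted(processes, key=lambda p: p[1])  # pending processes, by arrival
    time = 0
    results = []
    while remaining:
        # Processes that have already arrived and are waiting in the ready queue.
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                      # CPU idles until the next arrival
            time = remaining[0][1]
            continue
        # Pick the ready process with the smallest burst time.
        name, arrival, burst = min(ready, key=lambda p: p[2])
        remaining.remove((name, arrival, burst))
        time += burst
        results.append((name, time, time - arrival, time - arrival - burst))
    return results

# Hypothetical workload: (name, arrival time, burst time)
for row in sjf([("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)]):
    print(row)                             # (name, completion, turnaround, waiting)

In the sample run, P3 (burst 1) jumps ahead of P2 even though P2 arrived earlier; that reordering is what lowers the average waiting time but can also starve long jobs.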
3. Longest Job First (LJF)
The Longest Job First (LJF) scheduling algorithm is just the opposite of Shortest Job First
(SJF): as the name suggests, the process with the largest burst time is processed first.
Longest Job First is non-preemptive in nature.
Characteristics of LJF
Among all the processes waiting in the ready queue, the CPU is always assigned to the
process having the largest burst time.
If two processes have the same burst time, the tie is broken using FCFS, i.e. the
process that arrived first is processed first.
LJF also has a preemptive counterpart, Longest Remaining Time First (LRTF), discussed later in this article.
Advantages of LJF
No other task can be scheduled until the longest job runs to completion.
All the jobs or processes finish at approximately the same time.
Disadvantages of LJF
Generally, the LJF algorithm gives a very high average waiting time and average turnaround
time for a given set of processes.
This may lead to the convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on the longest job first scheduling.
4. Priority Scheduling
The Preemptive Priority CPU Scheduling Algorithm is a preemptive method of CPU scheduling
that works based on the priority of a process. In this algorithm, each process is assigned a
priority, and the most important (highest-priority) process is executed first. In the case of a
conflict, that is, when there is more than one process with the same priority, the algorithm
falls back on FCFS (First Come First Serve) to break the tie.
Characteristics of Priority Scheduling
Schedules tasks based on priority.
When a higher-priority process arrives while a lower-priority process is executing, the
higher-priority process takes its place and the lower-priority process is suspended until
the higher-priority one completes its execution.
By the usual convention, the lower the number assigned, the higher the priority level of the process.
Advantages of Priority Scheduling
The average waiting time is less than with FCFS
Less complex
Disadvantages of Priority Scheduling
One of the most common demerits of the preemptive priority CPU scheduling algorithm is
the starvation problem: a low-priority process may have to wait a very long time before it
gets scheduled on the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on Priority Preemptive Scheduling algorithm.
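The rough Python simulation below re-evaluates the highest-priority ready process after every time unit, so a newly arrived higher-priority process preempts the running one; the processes and priorities are made up, and the lower-number-means-higher-priority convention described above is assumed.

def preemptive_priority(processes):
    """Preemptive priority scheduling; lower number = higher priority.
    processes: list of (name, arrival_time, burst_time, priority) tuples."""
    remaining = {name: burst for name, _, burst, _ in processes}
    time = 0
    completion = {}
    while remaining:
        # Ready processes that still need the CPU.
        ready = [p for p in processes if p[0] in remaining and p[1] <= time]
        if not ready:                      # nothing has arrived yet
            time += 1
            continue
        # Best (lowest) priority wins; ties broken by arrival time (FCFS).
        name = min(ready, key=lambda p: (p[3], p[1]))[0]
        remaining[name] -= 1               # run the chosen process for one time unit,
        time += 1                          # then re-check (a new arrival may preempt it)
        if remaining[name] == 0:
            del remaining[name]
            completion[name] = time        # record completion time
    return completion

# Hypothetical workload: (name, arrival, burst, priority)
print(preemptive_priority([("P1", 0, 4, 2), ("P2", 1, 2, 1)]))

In this run, P2 arrives at t = 1 with a better priority and immediately preempts P1, so P2 completes at t = 3 while P1 completes at t = 6.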
5. Round Robin
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a
fixed time slot. It is the preemptive version of the First Come First Serve CPU scheduling
algorithm, and it is generally aimed at time-sharing systems.
Characteristics of Round robin
It is simple, easy to use, and starvation-free, as all processes get a balanced share of the
CPU.
It is one of the most widely used methods in CPU scheduling.
It is considered preemptive, as each process is given the CPU only for a limited time
slice.
Advantages of Round robin
Round Robin seems to be fair, as every process gets an equal share of the CPU.
Newly created processes are simply added to the end of the ready queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on the Round robin Scheduling algorithm.
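A minimal Round Robin sketch in Python is shown below; the quantum of 2 and the workload are hypothetical choices, and arrivals are handled in a deliberately simple way.

from collections import deque

def round_robin(processes, quantum=2):
    """Round Robin with a fixed time quantum.
    processes: list of (name, arrival_time, burst_time) tuples."""
    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    remaining = {name: burst for name, _, burst in processes}
    queue = deque()
    time = 0
    completion = {}
    while queue or arrivals:
        # Admit everything that has arrived by the current time.
        while arrivals and arrivals[0][1] <= time:
            queue.append(arrivals.popleft()[0])
        if not queue:                       # idle until the next arrival
            time = arrivals[0][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name]) # run for at most one quantum
        time += run
        remaining[name] -= run
        # Processes that arrived during this slice join the queue first,
        # then the current process rejoins the tail if it is unfinished.
        while arrivals and arrivals[0][1] <= time:
            queue.append(arrivals.popleft()[0])
        if remaining[name] > 0:
            queue.append(name)
        else:
            completion[name] = time
    return completion

# Hypothetical workload: (name, arrival time, burst time)
print(round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)], quantum=2))

With a quantum of 2, the three sample processes take turns, so the short job P3 completes at t = 5 instead of waiting for P1's whole burst to finish first.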
6. Shortest Remaining Time First (SRTF)
Shortest Remaining Time First is the preemptive version of Shortest Job First, discussed
earlier, in which the processor is allocated to the job closest to completion. In SRTF, the
process with the smallest amount of time remaining until completion is selected to
execute.
Characteristics of SRTF
The SRTF algorithm processes jobs faster than SJF, provided its overhead cost is not
counted.
Context switches happen far more often in SRTF than in SJF and consume valuable CPU
time; this adds to the processing time and diminishes the advantage of fast processing.
Advantages of SRTF
In SRTF the short processes are handled very fast.
The system also requires very little overhead since it only makes a decision when a
process completes or a new process is added.
Disadvantages of SRTF
Like the shortest job first, it also has the potential for process starvation.
Long processes may be held off indefinitely if short processes are continually added.
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on the shortest remaining time first.
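Below is a hedged, one-time-unit-at-a-time simulation of SRTF in Python with a hypothetical workload; a real scheduler would only re-evaluate on arrivals and completions rather than on every time unit, but the resulting schedule is the same.

def srtf(processes):
    """Shortest Remaining Time First (preemptive SJF).
    processes: list of (name, arrival_time, burst_time) tuples."""
    remaining = {name: burst for name, _, burst in processes}
    time = 0
    completion = {}
    while remaining:
        ready = [p for p in processes if p[0] in remaining and p[1] <= time]
        if not ready:                      # nothing has arrived yet
            time += 1
            continue
        # Run, for one time unit, the ready process with the least work left.
        name = min(ready, key=lambda p: remaining[p[0]])[0]
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            completion[name] = time
    return completion

# Hypothetical workload: (name, arrival time, burst time)
print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))

Here P1 starts first but is preempted when P2 and then P3 arrive with less remaining work, so the short jobs finish quickly (P3 at t = 4, P2 at t = 7) while P1 is pushed back to t = 14.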
7. Longest Remaining Time First (LRTF)
Longest Remaining Time First is the preemptive version of the Longest Job First scheduling
algorithm. The operating system uses it to schedule incoming processes in a systematic
way: it always schedules first the process that has the longest processing time remaining
until completion.
Characteristics of LRTF
Among all the processes waiting in the ready queue, the CPU is always assigned to
the process having the largest remaining burst time.
If two processes have the same burst time, the tie is broken using FCFS, i.e. the
process that arrived first is processed first.
LRTF is the preemptive counterpart of LJF, which is non-preemptive.
The process with the most remaining work keeps getting the CPU, so shorter processes
are repeatedly passed over until the longer ones catch up, and all the jobs or processes
finish at approximately the same time.
Advantages of LRTF
Maximizes Throughput for Long Processes.
Reduces Context Switching.
Simplicity in Implementation.
Disadvantages of LRTF
This algorithm gives a very high average waiting time and average turn-around time
for a given set of processes.
This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on the longest remaining time first.
8. Highest Response Ratio Next (HRRN)
Highest Response Ratio Next is a non-preemptive CPU scheduling algorithm, and it is
considered one of the most optimal scheduling algorithms. As the name states, we
compute the response ratio of all available processes and select the one with the
highest response ratio. A process, once selected, runs till completion.
Characteristics of HRRN
The criteria for HRRN is Response Ratio, and the mode is Non-Preemptive.
HRRN is considered as the modification of Shortest Job First to reduce the problem
of starvation.
In comparison with SJF, in the HRRN scheduling algorithm the CPU is allotted to the
process with the highest response ratio, not simply to the process with the shortest
burst time.
Response Ratio = (W + S)/S
Here, W is the waiting time of the process so far and S is the burst time of the process.
Advantages of HRRN
HRRN Scheduling algorithm generally gives better performance than the shortest job
first Scheduling.
It reduces the waiting time of longer jobs while still encouraging shorter jobs.
Disadvantages of HRRN
HRRN cannot be implemented exactly in practice because the burst time of every job is not
known in advance.
The scheduler must recompute the response ratios at every scheduling decision, which may
place extra load on the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on Highest Response Ratio Next.
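A minimal non-preemptive HRRN sketch in Python, directly applying the Response Ratio = (W + S)/S formula above to a hypothetical workload; as noted, the burst times S would not be known exactly in practice.

def hrrn(processes):
    """Highest Response Ratio Next (non-preemptive).
    processes: list of (name, arrival_time, burst_time) tuples."""
    pending = list(processes)
    time = 0
    order = []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                      # idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        # Response Ratio = (W + S) / S, with W = time waited so far, S = burst time.
        name, arrival, burst = max(ready, key=lambda p: (time - p[1] + p[2]) / p[2])
        pending.remove((name, arrival, burst))
        time += burst                      # the selected process runs to completion
        order.append((name, time))
    return order

# Hypothetical workload: (name, arrival time, burst time)
print(hrrn([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4)]))

At each decision point the waiting term W keeps growing for the jobs left in the queue, so a long job that has waited a while eventually beats a newly arrived short job; this is how HRRN limits the starvation seen in plain SJF.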
9. Multiple Queue Scheduling
Processes in the ready queue can be divided into different classes, where each class has
its own scheduling needs. For example, a common division is between foreground
(interactive) processes and background (batch) processes. These two classes have different
scheduling needs. For this kind of situation, Multilevel Queue Scheduling is used.
Typical process classes in multilevel queue scheduling are as follows:
System Processes: The operating system itself has processes to run, generally termed
system processes.
Interactive Processes: An interactive process is one that requires frequent interaction
with the user, so it needs short response times.
Batch Processes: Batch processing is a technique in which the operating system
collects programs and data together in a batch before the processing starts.
Advantages of Multilevel Queue Scheduling
The main merit of the multilevel queue is that it has a low scheduling overhead.
Disadvantages of Multilevel Queue Scheduling
Starvation problem
It is inflexible in nature
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on Multilevel Queue Scheduling.
10. Multilevel Feedback Queue Scheduling
Multilevel Feedback Queue (MLFQ) CPU Scheduling is like Multilevel Queue Scheduling,
but here processes can move between the queues. This makes it much more efficient
than multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling
In a plain multilevel queue scheduling algorithm, processes are permanently assigned to a
queue on entry to the system and are not allowed to move between queues.
Because the processes are permanently assigned to a queue, that setup has the
advantage of low scheduling overhead,
but on the other hand it has the disadvantage of being inflexible.
Multilevel feedback queue scheduling removes this restriction: a process can be moved
between queues, for example demoted to a lower-priority queue if it uses too much CPU time.
Advantages of Multilevel feedback Queue Scheduling
It is more flexible
It allows different processes to move between different queues
Disadvantages of Multilevel Feedback Queue Scheduling
It also produces extra CPU overhead.
It is the most complex of the algorithms discussed here.
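To make the movement between queues concrete, here is a minimal MLFQ sketch in Python; the three-level layout, the quanta of 2 and 4, and the workload are all hypothetical choices, and every process is assumed to arrive at time 0.

from collections import deque

def mlfq(processes, quanta=(2, 4)):
    """Tiny Multilevel Feedback Queue: two round-robin levels plus an
    FCFS (run-to-completion) bottom level. A process that uses up its
    quantum is demoted to the next lower-priority queue.
    processes: list of (name, burst_time) tuples, all arriving at t = 0."""
    queues = [deque(), deque(), deque()]
    remaining = {name: burst for name, burst in processes}
    for name, _ in processes:
        queues[0].append(name)              # every process starts in the top queue
    time = 0
    completion = {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name = queues[level].popleft()
        # Top two levels get a fixed quantum; the bottom level runs to completion.
        slice_ = quanta[level] if level < len(quanta) else remaining[name]
        run = min(slice_, remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = time
        else:
            queues[min(level + 1, len(queues) - 1)].append(name)  # demote
    return completion

# Hypothetical workload: (name, burst time), all arriving at t = 0
print(mlfq([("P1", 10), ("P2", 3), ("P3", 1)]))

The short job P3 finishes while still in the top queue, while the CPU-hungry P1 is demoted twice and finishes last; this feedback behaviour is exactly what a plain multilevel queue cannot provide.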