
OPERATING SYSTEM

Module – II
PROCESS MANAGEMENT
Process Management in Operating System (OS)

In this blog, we will learn about process management in the operating system, the various scheduling algorithms, and interesting facts related to them. Before looking into process management and how it works, let's start with the definition and various aspects of a process.

Most simply, a program in execution is called a process. In other words, a process is an instance of a program that actually runs, i.e., an entity that can be assigned to and executed on a processor. Two essential elements of a process are the program code and a set of data associated with that code.

Process memory is divided into four sections:

The text (program) section stores the compiled program code, read in from non-volatile storage when the program is launched.

The data section stores global and static variables, allocated and initialized before the main program executes.

The heap section is used to manage dynamic memory allocation inside our program. In other words, it is the portion of memory where dynamically allocated memory resides, i.e., memory allocated via new or malloc and deallocated via delete or free.

The stack section stores local variables defined inside our program or function. Stack space
is created for local variables when they are declared, and the space is freed up when they
go out of scope.

Note: the stack and the heap start at opposite ends of the process's free space and grow towards each other. If they meet, either a stack overflow error occurs or a request for new memory allocation fails due to insufficient space in the heap section.
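To make these sections concrete, here is a minimal C sketch (variable names are illustrative) showing where different kinds of variables live:

```c
#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                   /* data section: global variable */

int main(void) {
    int local = 42;                       /* stack: freed when main returns */
    int *dynamic = malloc(sizeof(int));   /* heap: lives until freed explicitly */
    if (dynamic == NULL)
        return 1;                         /* malloc fails if the heap is exhausted */
    *dynamic = 7;
    printf("global=%d local=%d heap=%d\n", global_counter, local, *dynamic);
    free(dynamic);                        /* return heap memory to the allocator */
    return 0;                             /* 'local' goes out of scope here */
}
```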

Process Control Block (PCB)


A program executing as a process is uniquely identified by various parameters. These parameters are stored in a Process Control Block (PCB) for each process. The PCB is a data structure that stores the following information:

Identifier or Process Id: A unique identifier or id associated with every process to distinguish it from other processes.

Process State: It can be ready, running, halted, completed, etc.

CPU-Scheduling information: Priority level relative to the other processes and pointers to scheduling queues.

CPU registers and Program Counter: These need to be saved and restored when swapping
processes in and out of the CPU.

I/O status information: Includes outstanding I/O requests, I/O devices (e.g., disk drives)
assigned to this process, a list of files in use by the process.

Memory-Management information: Page tables or segment tables.

Accounting information: user and kernel CPU time consumed, account numbers, limits,
etc.
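
As a rough illustration, a PCB can be sketched as a C structure. The field names below are hypothetical and greatly simplified; a real kernel's equivalent (e.g., Linux's task_struct) is far larger:

```c
/* Hypothetical, simplified PCB sketch; field names are illustrative. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;             /* unique process identifier */
    proc_state_t   state;           /* current process state */
    int            priority;        /* CPU-scheduling information */
    unsigned long  program_counter; /* saved PC, restored on a context switch */
    unsigned long  registers[16];   /* saved CPU registers */
    void          *page_table;      /* memory-management information */
    unsigned long  cpu_time_used;   /* accounting information */
    struct pcb    *next;            /* link into a scheduling queue */
} pcb_t;
```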

States of a Process in Operating System

We have now defined a process and one of its key parameters, the state. A process in the operating system can be in any of five states: start, ready, running, wait, and terminated.

Let's walk through these states and the transitions from one state to another:

Null -> Start state: When a new process is created, it enters the system, moving from the Null state to the Start (new) state.

Start state -> Ready state: The OS moves a process from the Start (new) state to the Ready state when it is prepared to take on an additional process. At this stage, the process has all the resources it needs to run and is waiting to get CPU time for its execution.

Ready state -> Running state: At this transition, the OS chooses one of the processes from
the Ready state using the process scheduler or dispatcher. After this, the CPU starts
executing the selected process in the running state.

Running state -> Terminated state: The currently running process is terminated by the OS when the process indicates that it has completed.

Running state -> Ready state: The scheduler sets a specific time limit for executing any active process. If the current process takes more time than the scheduler specifies, it is pushed back to the Ready state. The most typical reason for this transition is that the process has reached its maximum permissible period of uninterrupted execution. Almost all multiprogramming operating systems enforce this time constraint.

Running state -> Wait state: If a process wants anything for which it must wait, it is placed
in the waiting state. The process cannot run now because it is waiting for some resource to
become available or for some event to occur. For example, the process may be waiting for
keyboard input, disk access request, inter-process messages, a timer to go off, or a child
process to finish.

Wait state -> Ready state: When the event for which it has been waiting occurs, a process
in the Waiting state is transferred to the Ready state.

Note: Some systems may have other states besides the ones listed here. Explore and think!

Execution of a process in Operating System


• The operating system performs various activities in creating a process, and it uses a process control block (PCB) to track the execution status of each process.
• To keep track of all processes, it assigns a process ID (PID) to each process to identify it uniquely (see the sketch below). As discussed above, it also stores several other critical details in the PCB.
• The operating system updates the information in the process's PCB as the process transitions from one state to another.
• Because the PCB is accessed frequently, the operating system keeps pointers to each process's PCB in a process table.
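
Process creation and PID assignment can be observed on a Unix-like system with fork(); a minimal sketch using standard POSIX APIs (not specific to these notes):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                   /* OS creates a new process with its own PCB */
    if (pid == 0) {
        /* child: the OS has assigned it a fresh, unique PID */
        printf("child  PID = %d\n", (int)getpid());
    } else if (pid > 0) {
        printf("parent PID = %d, child PID = %d\n", (int)getpid(), (int)pid);
        wait(NULL);                       /* child moves to Terminated; parent reaps it */
    } else {
        perror("fork");                   /* process creation failed */
    }
    return 0;
}
```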


Process Schedulers in Operating System


Process scheduling is the activity of selecting which process runs on the CPU and removing the running process, based on a particular strategy or algorithm. The main objectives of process scheduling are to keep the CPU busy and to deliver "acceptable" response times for all programs. Multiprogramming operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

The operating system has three types of process schedulers for process scheduling.

Long-term or job scheduler: This scheduler's job is to bring new processes to the Ready state. It determines which processes are admitted to the system for processing, selects them from the job queue, and loads them into memory for execution.

• The primary objective of the job scheduler is to provide a balanced mix of jobs (such
as I/O bound and processor bound) and control the degree of multiprogramming.
• Because it runs infrequently, it can afford to take the time to implement intelligent and advanced scheduling algorithms.

Short-term or CPU scheduler: It is in charge of selecting one process from the ready state and scheduling it to the running state. It is also commonly referred to as the dispatcher.

• The CPU scheduler is responsible for ensuring no starvation due to high burst time
processes.
• It runs very frequently and quickly swaps one process from the running state to the
ready state.

Medium-term scheduler: It is in charge of swapping processes out, for example when a particular process is performing an I/O operation. If a running process makes an I/O request, it may be suspended. A suspended process cannot make any progress toward completion, so it is transferred to secondary storage to remove it from memory and make room for other processes. This is known as swapping, and the process is said to be swapped out or rolled out.

CPU Scheduling in Operating System


Our next aim is to understand the concept of CPU scheduling and why we need it.

A typical process uses both I/O time and CPU time. In an old operating system like MS-DOS, time spent waiting for I/O is wasted, and the CPU is idle during this time. In a multiprogramming operating system, one process can use the CPU while another waits for I/O. This is possible only with process scheduling.

CPU scheduling is the process of determining which process will have use of the CPU while another is paused. The basic goal of CPU scheduling is to ensure that whenever the CPU is idle, the OS chooses one of the processes in the ready queue to run. The
CPU scheduler will be in charge of the selection process. It chooses from among the
processes in memory that are ready to run.

Types of CPU Scheduling


There are two major kinds of CPU scheduling:

Preemptive Scheduling: In preemptive scheduling, tasks are generally assigned priorities. It is sometimes necessary to run a higher-priority task before a lower-priority task, even if the lower-priority task is still running. The lower-priority task is put on hold for a while and resumes when the higher-priority task completes.

Non-Preemptive Scheduling: In this scheduling mechanism, the CPU is assigned to a particular process; the process keeping the CPU occupied relieves it only by switching context (e.g., to wait) or by terminating. It is the only method usable across a variety of hardware platforms, because, unlike preemptive scheduling, it does not require special hardware such as timers.

Differences between Preemptive and Non-Preemptive Scheduling

1. In preemptive scheduling, the CPU is allocated to a process for a limited time; it may then be assigned to another process depending on the state of the current process or the priority of a newly arriving process. In non-preemptive scheduling, the CPU stays with the same process until it completes.
2. In preemptive scheduling, if high-priority processes keep arriving, a low-priority process can remain interrupted for an indefinite time, whereas in non-preemptive scheduling, if a large process is executing, it keeps executing and does not allow even a very small process to run before its complete execution.
3. Preemptive scheduling has to save all the data of a halted process so that it can continue from that exact point, whereas no such requirement exists in non-preemptive scheduling.

Different Scheduling Algorithms in Operating System


Now, we will look at the overview of the various scheduling algorithms involved in Process
Management one by one. We will cover each algorithm separately in different blogs.

First Come First Serve (FCFS) Scheduling Algorithm

FCFS stands for "First Come, First Serve." It is the most basic and straightforward CPU scheduling algorithm: the process that requests the CPU first gets the CPU first. A FIFO queue can be used to manage this scheduling strategy. The PCB (Process Control Block) of a process is linked to the tail of the queue as it enters the ready queue. As a result, whenever the CPU becomes available, it is assigned to the process at the front of the queue.

Some Important Points of this method:

It is a non-preemptive scheduling algorithm.

Jobs are always executed on a first-come, first-serve basis

It is easy to implement and use.

Its performance is poor, and the general wait time is quite high.

A simple example for this Algorithm:

Suppose we have 5 processes p1, p2, p3, p4, p5, received by the ready queue at times t1, t2, t3, t4, t5 such that t1 < t2 < t3 < t4 < t5. Hence p1 arrived first in the ready queue, so it will be executed first, followed by p2, p3, p4, and p5, respectively.

Convoy Effect

In the FCFS (First Come, First Serve) algorithm, if a process with a large CPU burst time arrives before smaller processes, the small processes get blocked behind that large process. This is called the Convoy Effect.
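
A minimal FCFS sketch in C, assuming all processes arrive at time 0 (burst values are illustrative, not from the text). Note how the long first burst makes every process behind it wait — the convoy effect:

```c
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};          /* CPU bursts of p1, p2, p3 */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {      /* serve strictly in arrival order */
        printf("p%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];              /* next process waits for all earlier bursts */
    }
    printf("average wait = %.2f\n", (double)total_wait / n);  /* 17.00 here */
    return 0;
}
```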

Shortest Job First(SJF) Scheduling Algorithm


The SJF algorithm is a non-preemptive one. It is a scheduling policy that prioritizes the waiting process with the shortest execution time. Among all scheduling algorithms, Shortest Job First has the advantage of the shortest average waiting time. It first sorts all of the processes by arrival time, then chooses, among the processes that have arrived, the one with the shortest burst time. While that process runs, newly arriving processes are pooled; when it completes, the process with the shortest burst time in the pool is chosen next.

Some Important Points of this method:

• Short processes are handled very quickly.


• The system also has a low overhead because it only makes decisions when a process
is finished or a new one is added.
• When a new process is added, the algorithm just has to compare the presently
running process to the new process, ignoring any other processes that are waiting
to run.
• Long processes can be postponed indefinitely if short processes are added on a
regular basis.
• It is further categorized into two types: 1. Preemptive SJF (also known as Shortest Remaining Time First), where the arrival of a process with a smaller CPU burst time sends the current process to the halted state and execution of the new process proceeds; and 2. Non-preemptive SJF, where the current process is not disturbed by an arriving process with a smaller CPU burst time.

A simple example for this Algorithm:

Suppose we have 5 processes p1, p2, p3, p4, p5, received by the ready queue at times t1, t2, t3, t4, t5 such that t1 < t2 < t3 < t4 < t5. This time, you can think of the ready queue as a priority queue that rearranges the incoming processes on the basis of CPU burst time. Therefore, the process with the least CPU burst time is served first, and so on.
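
A non-preemptive SJF sketch in C, assuming every process arrives at time 0, so the schedule is simply ascending burst-time order (burst values are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;   /* ascending order */
}

int main(void) {
    int burst[] = {6, 8, 7, 3};
    int n = 4, wait = 0, total = 0;

    qsort(burst, n, sizeof(int), cmp);          /* shortest burst first */
    for (int i = 0; i < n; i++) {
        total += wait;                          /* accumulate each process's wait */
        wait += burst[i];
    }
    printf("average wait = %.2f\n", (double)total / n);  /* 7.00 here */
    return 0;
}
```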

Longest Job First Scheduling Algorithm


The Longest Job First (LJF) algorithm is a type of non-preemptive scheduling. It keeps track of the burst times of all processes available at the moment and assigns the processor to the process with the longest burst time. Once a process begins to run, it cannot be halted in the middle of its execution; only after the allocated process has completed its processing and terminated may any other process execute. It organizes the processes in ascending order of their arrival time. Then, out of all the processes that have arrived up to that point, it chooses the one with the longest burst time and processes it for the full duration of the burst. While this process executes, LJF monitors whether any more processes arrive.

Some Important Points of this method:

• This algorithm reduces processing speed, resulting in a decrease in system efficiency and utilization.
• The average waiting time and average turnaround time for a given set of processes rise as a result of this approach.
• With this technique, it’s possible that a short process will never be executed, while
the system continues to run large processes.

A simple example for this Algorithm:

Similar to the example above, but here the same ready queue prioritizes on the basis of the largest CPU burst time, i.e., out of those five processes, the one with the largest CPU burst time is executed first, and so on.

Round Robin scheduling Algorithm


Round Robin is a CPU scheduling system in which each process is cyclically assigned a set
time slot. The Round Robin algorithm was created with time-sharing systems in mind. This
algorithm is similar to FCFS scheduling, except that it includes preemption, which allows
the system to transition between processes. Each process has a set amount of time
assigned to it, and once that time period has passed, the process is preempted and another
process takes its place. As a result, all processes get an equal amount of CPU time. This
algorithm performs the best in terms of average response time.

Some Important Points of this method:

• Round Robin is a preemptive algorithm, similar to FCFS with some improvements.
• The CPU is shifted to the next process after a fixed interval of time.
• Widely used scheduling method in traditional OS.

A simple example for this Algorithm:

Again, suppose we have 5 processes p1, p2, p3, p4, p5 with total execution times t1, t2, t3, t4, t5. Now we have one extra factor, t` (a time quantum), which ensures the equal sharing of CPU time among the processes. Suppose p1 is scheduled first; after running for t` time it is preempted, with (t1 − t`) of its execution time remaining, and the CPU is given to p2. The CPU saves the context of p1 at the point it was preempted, and p1 is pushed to the back of the ready queue for its next cycle of processing, where it resumes exactly where it left off. The same goes for all the processes.
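
A Round Robin sketch in C with a fixed time quantum (the remaining-time values are illustrative):

```c
#include <stdio.h>

int main(void) {
    int remaining[] = {5, 3, 8};       /* work left for p1, p2, p3 */
    int n = 3, quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {  /* cycle through the ready processes */
            if (remaining[i] <= 0) continue;              /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                                 /* run one quantum */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("p%d finishes at time %d\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}
```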

Priority Based Scheduling


Priority scheduling is one of the most often used batch scheduling methods. A priority is assigned to each process; the highest-priority process is executed first, and so on, with processes of equal priority executed on a first-come, first-served basis. Priority can be determined by memory requirements, time constraints, or any other resource constraint. The priority is not necessarily the inverse of the CPU burst time or memory usage; it can be set internally or externally. Either way, scheduling is done on the basis of process priority, with the most urgent processes handled first, followed by those with lower priority in order.

Some Important Points of this method:

• The priority scheduling algorithm carries a chance of indefinite blocking, or starvation, of low-priority processes.


• We can leverage the concept of aging to prevent the starving of any process by
increasing the priority of low-priority processes based on their waiting time.
• It is also categorized into two types: 1. Pre-emptive Priority Scheduling if due to
incoming of higher priority process the current process is sent to halted state and
execution of the new process proceeds. 2. Non-Preemptive Priority Scheduling if
due to incoming higher priority process the current process is not disturbed.

A simple example for this Algorithm:

Similar to the FCFS example, the processes are inserted in the ready queue, but here on the basis of a priority, which can reflect CPU burst time, memory constraints, etc.; execution then proceeds as in the FCFS algorithm.
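
A non-preemptive priority sketch in C: repeatedly pick the ready process with the highest priority (here the largest number wins; all values are illustrative):

```c
#include <stdio.h>

int main(void) {
    int priority[] = {2, 5, 1, 4};     /* priorities of p1..p4 */
    int burst[]    = {6, 2, 8, 3};     /* CPU bursts of p1..p4 */
    int done[]     = {0, 0, 0, 0};
    int n = 4, time = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)    /* linear scan for the highest priority */
            if (!done[i] && (best < 0 || priority[i] > priority[best]))
                best = i;
        time += burst[best];           /* run the chosen process to completion */
        done[best] = 1;
        printf("p%d (priority %d) finishes at %d\n",
               best + 1, priority[best], time);
    }
    return 0;
}
```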

What is Interprocess Communication?

Interprocess communication is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred or the transferring of data from one process to another.


Synchronization in Interprocess Communication


Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess control mechanism or handled by the communicating processes. Some of the methods for providing synchronization are as follows:

• Semaphore: A semaphore is a variable that controls access to a common resource by multiple processes. The two types of semaphores are binary semaphores and counting semaphores.
• Mutual Exclusion: Mutual exclusion requires that only one process or thread can enter the critical section at a time. This is useful for synchronization and also prevents race conditions (see the sketch after this list).
• Barrier: A barrier does not allow individual processes to proceed until all the
processes reach it. Many parallel languages and collective routines impose barriers.
• Spinlock: This is a type of lock. The processes trying to acquire this lock wait in a
loop while checking if the lock is available or not. This is known as busy waiting
because the process is not doing any useful operation even though it is active.
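
As a concrete illustration of mutual exclusion, here is a minimal sketch using a POSIX mutex (pthreads; compile with -pthread). Two threads increment a shared counter inside a critical section:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter the critical section */
        counter++;                     /* only one thread at a time runs here */
        pthread_mutex_unlock(&lock);   /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 with the lock held */
    return 0;
}
```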

What are Threads?


A thread is an execution unit that consists of its own program counter, a stack, and a set of registers: the program counter keeps track of which instruction to execute next, the registers hold its current working variables, and the stack contains the history of its execution.

Threads are also known as lightweight processes. Threads are a popular way to improve the performance of an application through parallelism. They represent a software approach to improving operating system performance: a thread is roughly equivalent to a classical process, but with much lower overhead.

The CPU switches rapidly back and forth among the threads giving the illusion that the
threads are running in parallel.

Since each thread has its own independent execution state, a process can perform multiple tasks in parallel by increasing the number of threads.

It is important to note that each thread belongs to exactly one process; no thread exists outside a process. Each thread represents a separate flow of control. Threads have been used successfully in the implementation of network servers and web servers, and they provide a suitable foundation for the parallel execution of applications on shared-memory multiprocessors.
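
A minimal pthread creation sketch (POSIX threads; compile with -pthread) showing a second flow of control running inside the same process:

```c
#include <pthread.h>
#include <stdio.h>

/* The new thread shares the process's address space with main. */
static void *say_hello(void *arg) {
    printf("hello from thread %s\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, say_hello, "A"); /* spawn a second thread */
    printf("main thread continues concurrently\n");
    pthread_join(tid, NULL);                    /* wait for it to finish */
    return 0;
}
```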

(Figure omitted: the working of a single-threaded versus a multithreaded process.)

Before moving on further let us first understand the difference between a process and a
thread.

Process | Thread
A process is any program in execution. | A thread is a segment of a process.
A process consumes more resources. | A thread consumes fewer resources.
A process requires more time for creation. | A thread requires comparatively less time for creation.
A process is known as a heavyweight process. | A thread is known as a lightweight process.
A process takes more time to terminate. | A thread takes less time to terminate.
Processes have independent data and code segments. | A thread shares the data segment, code segment, files, etc. with its peer threads.
A process takes more time for context switching. | A thread takes less time for context switching.
Communication between processes needs more time than communication between threads. | Communication between threads needs less time than communication between processes.
If a process gets blocked, the remaining processes can continue their execution. | If a user-level thread gets blocked, all of its peer threads also get blocked.

Advantages of Thread

Some advantages of thread are given below:

1. Responsiveness
2. Resource sharing, hence allowing better utilization of resources.
3. Economy. Creating and managing threads becomes easier.
4. Scalability. One thread runs on one CPU. In Multithreaded processes, threads can
be distributed over a series of processors to scale.
5. Context Switching is smooth. Context switching refers to the procedure followed
by the CPU to change from one task to another.
6. Enhanced throughput of the system. For example, if a process is divided into multiple threads and each thread's function is considered one job, the number of jobs completed per unit of time increases, which increases the throughput of the system.

Types of Thread

There are two types of threads:

1. User Threads
2. Kernel Threads

User threads are implemented above the kernel, without kernel support. These are the threads that application programmers use in their programs.

Kernel threads are supported within the kernel of the OS itself. All modern OSs support
kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to
service multiple kernel system calls simultaneously.

Let us now understand the basic difference between User level Threads and Kernel level
threads:

User Level Threads | Kernel Level Threads
These threads are implemented by users. | These threads are implemented by the operating system.
These threads are not recognized by the operating system. | These threads are recognized by the operating system.
Context switching requires no hardware support. | Hardware support is needed.
These threads are mainly designed as dependent threads. | These threads are mainly designed as independent threads.
If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution.
Examples: Java threads, POSIX threads. | Examples: Windows, Solaris.
Implementation is done by a thread library and is easy. | Implementation is done by the operating system and is complex.
These threads are generic and can run on any operating system. | These threads are specific to the operating system.

Multithreading Models

The user threads must be mapped to kernel threads by one of the following strategies:

• Many to One Model
• One to One Model
• Many to Many Model

Many to One Model

• In the many to one model, many user-level threads are all mapped onto a single
kernel thread.
• Thread management is handled by the thread library in user space, which is
efficient in nature.
• If the operating system does not support kernel threads, user-level thread libraries use this many-to-one model, with the whole process running on top of a single kernel thread.


One to One Model

• The one to one model creates a separate kernel thread to handle each and every
user thread.
• Most implementations of this model place a limit on how many threads can be
created.
• Linux and Windows from 95 to XP implement the one-to-one model for threads.
• This model provides more concurrency than that of many to one Model.

Many to Many Model

• The many to many model multiplexes any number of user threads onto an equal
or smaller number of kernel threads, combining the best features of the one-to-one
and many-to-one models.
• Users can create any number of threads.
• Blocking the kernel system calls does not block the entire process.
• Processes can be split across multiple processors.

Reference: Operating System Concepts by Abraham Silberschatz, Greg Gagne, and Peter
Baer Galvin
