
User mode and kernel mode

A processor in a computer running Windows has two different modes: user mode and kernel
mode.

The processor switches between the two modes depending on what type of code is running on
the processor. Applications run in user mode, and core operating system components run in
kernel mode. While many drivers run in kernel mode, some drivers may run in user mode.

User mode

When you start a user-mode application, Windows creates a process for the application. The
process provides the application with a private virtual address space and a private handle
table. Because an application's virtual address space is private, one application cannot alter
data that belongs to another application. Each application runs in isolation, and if an
application crashes, the crash is limited to that one application. Other applications and the
operating system are not affected by the crash.

In addition to being private, the virtual address space of a user-mode application is limited. A
processor running in user mode cannot access virtual addresses that are reserved for the
operating system. Limiting the virtual address space of a user-mode application prevents the
application from altering, and possibly damaging, critical operating system data.

Kernel mode

All code that runs in kernel mode shares a single virtual address space. This means that a
kernel-mode driver is not isolated from other drivers and the operating system itself. If a
kernel-mode driver accidentally writes to the wrong virtual address, data that belongs to the
operating system or another driver could be compromised. If a kernel-mode driver crashes,
the entire operating system crashes.

(Figure: communication between user-mode and kernel-mode components; diagram not reproduced here.)

Operating System

An operating system, or OS, is system software that works as an interface between the hardware components and the end user. It enables other programs to run. Every computer system, whether desktop, laptop, tablet, or smartphone, must have an OS to provide the device's basic functionality. Some widely used operating systems are Windows, Linux, macOS, Android, iOS, etc.

What is Kernel in Operating System?

o As discussed above, the kernel is the core part of an OS (operating system); hence it has full control over everything in the system. Every operation of hardware and software is managed and administered by the kernel.
o It acts as a bridge between applications and the data processing done at the hardware level. It is the central component of an OS.
o It is the part of the OS that always resides in computer memory and enables communication between software and hardware components.
o It is the first program loaded on start-up (after the bootloader). Once loaded, it manages the rest of start-up. It also manages memory, peripherals, and I/O requests from software, translating all I/O requests into data-processing instructions for the CPU. It manages other tasks as well, such as memory management, task management, and disk management.
o The kernel is kept in, and usually loaded into, a separate memory area known as protected kernel space, which is protected from access by application programs or less critical parts of the OS.
o Other application programs, such as browsers, word processors, and audio & video players, use a separate memory area known as user space.
o Because of these two separate spaces, user data and kernel data do not interfere with each other and do not cause instability or slowness.
Functions of a Kernel

A kernel of an OS is responsible for performing various functions and has control over the
system. Some main responsibilities of Kernel are given below:

o Device Management
To perform various actions, processes require access to peripheral devices such as a
mouse, keyboard, etc., that are connected to the computer. A kernel is responsible for
controlling these devices using device drivers. Here, a device driver is a computer
program that helps or enables the OS to communicate with any hardware device.
A kernel maintains a list of all available devices; this list may be known in advance, configured by the user, or detected by the OS at runtime.
o Memory Management
The kernel has full control over access to the computer's memory. Each process requires some memory to work, and the kernel enables processes to access that memory safely. The first step in allocating memory is virtual addressing, achieved by paging or segmentation: providing each process with its own virtual address space. This prevents applications from trampling on each other.
o Resource Management
One of the important functions of the kernel is to share resources between various processes in a way that gives each process fair access to them.
The kernel also provides mechanisms for synchronization and inter-process communication (IPC), and it is responsible for context switching between processes.
o Accessing Computer Resources
The kernel is responsible for access to computer resources such as RAM and I/O devices. RAM (Random-Access Memory) holds both data and instructions. Each program needs memory to execute and often wants more memory than is available; the kernel decides which memory each process may use and what to do when the required memory is not available.
The kernel also arbitrates requests from applications to use I/O devices such as keyboards, microphones, and printers.

Types of Kernel

There are mainly five types of Kernel, which are given below:

1. Monolithic Kernels

In a monolithic kernel, user services and kernel services are implemented in the same memory space; no separate memory is used for user services and kernel services.

Because it uses a single memory space, the size of the kernel increases, which increases the overall size of the OS.

Process execution is also faster than with other kernel types, as there is no separate user space and kernel space to cross.

Examples of monolithic kernels are Unix, Linux, OpenVMS, XTS-400, etc.

Advantages:

o Process execution is faster, as there is no separate user space and kernel space and less software is involved.
o As it is a single piece of software, both its source and compiled forms are smaller.

Disadvantages:

o If any service generates an error, it may crash the whole system.
o These kernels are not portable; they must be rewritten for each new architecture.
o They are large in size and hence difficult to manage.
o To add a new service, the complete operating system must be modified.
2. Microkernel

A microkernel, also written as μ-kernel, differs from a traditional monolithic kernel: user services and kernel services are implemented in two different address spaces, user space and kernel space. Since the kernel itself provides less, the size of the microkernel decreases, which also reduces the size of the OS.

Microkernels are easier to manage and maintain than monolithic kernels. However, with a greater number of system calls and context switches, system performance can suffer.

These kernels use a message-passing system to carry requests from one server to another.

Only essential services are provided by the microkernel itself, such as defining memory address spaces, IPC (inter-process communication), and process management. Other services, such as networking, are not provided by the kernel and are handled by user-space programs known as servers.

One of the main disadvantages of monolithic kernels, that an error in the kernel can crash the whole system, is mitigated in a microkernel: if a kernel service crashes, a whole-system crash can often be prevented by restarting the failed service.

Examples of microkernels are L4, AmigaOS, MINIX, K42, etc.

Advantages

o Microkernels can be managed easily.


o A new service can be easily added without modifying the whole OS.
o In a microkernel, if a kernel process crashes, it is still possible to prevent the whole
system from crashing.

Disadvantages

o More software is required for interfacing (message passing), which reduces system performance.
o Process management is very complicated.
o Messaging bugs are difficult to fix.
3. Hybrid Kernel

Hybrid kernels, also known as modular kernels, combine monolithic kernels and microkernels, taking the speed of monolithic kernels and the modularity of microkernels.

A hybrid kernel can be understood as an extended microkernel with additional properties of a monolithic kernel. These kernels are widely used in commercial OSes, such as the various versions of MS Windows.

A hybrid kernel is much like a microkernel, but it includes additional code in kernel space to improve system performance.

Hybrid kernels run some services, such as the network stack, in kernel space to reduce the performance overhead of a traditional microkernel, while still running other kernel code, such as some device drivers, as servers in user space.

Examples of hybrid kernels are Windows NT, NetWare, BeOS, etc.

Advantages:

o No reboot is required for testing, since modules can be loaded and unloaded at runtime.
o Third-party technology can be integrated rapidly.

Disadvantages:

o With more interfaces to pass through, there is a greater possibility of bugs.
o Maintaining the modules can be a confusing task for some administrators, especially when dealing with issues such as symbol differences.

4. Nanokernel

As the name suggests, in a nanokernel the complete kernel code is very small, meaning the code executing in the privileged mode of the hardware is minimal. The term nano is also used for kernels that support a nanosecond clock resolution.

An example of a nanokernel is EROS.

Advantages

o It provides hardware abstractions even with a very small size.


Disadvantages

o A nanokernel lacks most system services.

5. Exokernel

The exokernel is still an experimental approach to OS design.

This type of kernel differs from others in that resource protection is kept separate from resource management, which allows application-specific customization.

Advantages:

o An exokernel-based system can incorporate multiple library operating systems. Each library can export a different API; for example, one may be used for high-level UI development and another for real-time control.

Disadvantages:

o The design of the exokernel is very complex.

The differences between the kernel and the OS are:

Kernel | Operating System
The kernel is the core part of the operating system. | The operating system (OS) is a collection of software that manages computer hardware resources.
It acts as an interface between the software and the hardware of the computer system. | It acts as an interface between the user and the hardware of the computer.
It plays an important role in memory management, task management, process management, and disk management. | It is responsible for the protection and security of the computer system.
Monolithic kernels and microkernels are two types of kernel. | Single and multiprogramming batch systems, distributed operating systems, and real-time operating systems are types of operating system.

What are system calls in Operating System?


The interface between a process and the operating system is provided by system calls. In general, system calls are available as assembly language instructions, and they are documented in the manuals used by assembly-level programmers. A system call is usually made when a process in user mode requires access to a resource; the process then requests the kernel to provide the resource via the system call.
(Figure: execution of a system call, not reproduced here.)
A process executes normally in user mode until a system call interrupts it. The system call is then executed, on a priority basis, in kernel mode. After the system call completes, control returns to user mode and execution of the user process resumes.
In general, system calls are required in the following situations −
 Creating or deleting files; reading from and writing to files also require system calls.
 Creating and managing new processes.
 Network connections, including sending and receiving packets, require system calls.
 Access to hardware devices such as printers, scanners, etc. requires a system call.

Types of System Calls


There are mainly five types of system calls. These are explained in detail as follows −
Process Control
These system calls deal with processes such as process creation, process termination etc.
File Management
These system calls are responsible for file manipulation such as creating a file, reading a file,
writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from device
buffers, writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating system and the
user program.
Communication
These system calls are useful for interprocess communication. They also deal with creating
and deleting a communication connection.
Some of the examples of all the above types of system calls in Windows and Unix are given
as follows −
Types of System Calls | Windows | Linux
Process Control | CreateProcess(), ExitProcess(), WaitForSingleObject() | fork(), exit(), wait()
File Management | CreateFile(), ReadFile(), WriteFile(), CloseHandle() | open(), read(), write(), close()
Device Management | SetConsoleMode(), ReadConsole(), WriteConsole() | ioctl(), read(), write()
Information Maintenance | GetCurrentProcessId(), SetTimer(), Sleep() | getpid(), alarm(), sleep()
Communication | CreatePipe(), CreateFileMapping(), MapViewOfFile() | pipe(), shmget(), mmap()

There are many different system calls as shown above. Details of some of those system calls
are as follows −
open()
The open() system call is used to provide access to a file in a file system. This system call
allocates resources to the file and provides a handle that the process uses to refer to the file.
A file can be opened by multiple processes at the same time or be restricted to one process. It
all depends on the file organisation and file system.
read()
The read() system call is used to access data from a file stored in the file system. The file to read is identified by its file descriptor, and it must be opened using open() before it can be read. In general, the read() system call takes three arguments: the file descriptor, a buffer to store the data read, and the number of bytes to read from the file.
write()
The write() system call writes data from a user buffer to a device such as a file. This system call is one of the ways a program outputs data. In general, the write() system call takes three arguments: the file descriptor, a pointer to the buffer where the data is stored, and the number of bytes to write from the buffer.
close()
The close() system call is used to terminate access to a file system. Using this system call
means that the file is no longer required by the program and so the buffers are flushed, the
file metadata is updated and the file resources are de-allocated.
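
As a brief illustration, the following C sketch copies one file to another using these four system calls (POSIX; the file names are hypothetical and error handling is minimal):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        char buf[4096];
        ssize_t n;

        /* open() returns a file descriptor, or -1 on failure */
        int in = open("input.txt", O_RDONLY);   /* hypothetical file name */
        int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            perror("open");
            exit(EXIT_FAILURE);
        }

        /* read() fills buf; write() drains it; both return byte counts */
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, n);

        /* close() releases the descriptors */
        close(in);
        close(out);
        return 0;
    }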
What is the purpose of System Programs?

System programs provide an environment where programs can be developed and executed.
In the simplest sense, system programs also provide a bridge between the user interface and
system calls. In reality, they are much more complex. For example, a compiler is a complex
system program.
System Programs Purpose
The system program serves as a part of the operating system. It traditionally lies between the
user interface and the system calls. The user view of the system is actually defined by system
programs and not system calls because that is what they interact with and system programs
are closer to the user interface.
(Figure: system programs in the operating-system hierarchy, not reproduced here.)
System programs, as well as application programs, form a bridge between the user interface and the system calls. So, from the user's view, the operating system observed is actually the system programs, and not the system calls.
Types of System Programs
System programs can be divided into seven parts. These are given as follows:
Status Information
The status information system programs provide required data on the current or past status of
the system. This may include the system date, system time, available memory in system, disk
space, logged in users etc.
Communications
These system programs are needed for system communications such as web browsers. Web
browsers allow systems to communicate and access information from the network as
required.
File Manipulation
These system programs are used to manipulate system files. This can be done using various
commands like create, delete, copy, rename, print etc. These commands can create files,
delete files, copy the contents of one file into another, rename files, print them etc.
Program Loading and Execution
The system programs that deal with program loading and execution make sure that programs can be loaded into memory and executed correctly. Loaders and linkers are prime examples of this type of system program.
File Modification
System programs that are used for file modification basically change the data in the file or
modify it in some other way. Text editors are a big example of file modification system
programs.
Application Programs
Application programs can perform a wide range of services as per the needs of the users.
These include programs for database systems, word processors, plotting tools, spreadsheets,
games, scientific applications etc.
Programming Language Support
These system programs provide additional support features for different programming languages. Examples are compilers and debuggers: a compiler translates a program into machine code, and a debugger helps ensure it is error-free.

Views of Operating System

An operating system is a framework that enables user application programs to interact with system hardware. The operating system does not perform useful work on its own; rather, it provides an environment in which various apps and programs can do useful work. The operating system may be observed from the point of view of the user or of the system, known as the user view and the system view. In this article, you will learn the views of the operating system.

Viewpoints of Operating System

The operating system may be observed from the viewpoint of the user or the system. It is
known as the user view and the system view. There are mainly two types of views of the
operating system. These are as follows:

1. User View
2. System View

User View

The user view depends on the system interface that is used by the users. Some systems are designed for a single user to monopolize the resources, to maximize that user's work. In these cases, the OS is designed primarily for ease of use, with some attention paid to performance and none to resource utilization.

The user viewpoint focuses on how the user interacts with the operating system through the usage of various application programs. In contrast, the system viewpoint focuses on how the hardware interacts with the operating system to complete various tasks.

1. Single User Viewpoint

Most computer users use a monitor, keyboard, mouse, printer, and other accessories to operate their computer system. In some cases, the system is designed to maximize the output of a single user. As a result, more attention is paid to ease of use, and resource allocation matters less. These systems are designed around a single user's experience and needs, and performance is not the focus that it is in multi-user systems.

2. Multiple User Viewpoint

Another example of the user view, in which both user experience and performance matter, is one mainframe computer with many users on their own terminals interacting through the mainframe. In such circumstances, CPU time and memory must be allocated effectively to give every user a good experience. The client-server architecture is another good example, where many clients interact through a remote server and the same constraint of effective use of server resources arises.

3. Handheld User Viewpoint

The touchscreen era has produced highly capable handheld devices. Smartphones interact via wireless networks to perform numerous operations, though their interfaces are more limited than a full computer interface. Their operating systems are nonetheless a great example of designing a device around the user's point of view.

4. Embedded System User Viewpoint

Some systems, such as embedded systems, have little or no user point of view. The remote control used to turn a TV on or off is part of an embedded system in which the electronic device communicates with another program; the user viewpoint is limited to the narrow interface through which the user engages with the application.

System View

The OS may also be viewed as simply a resource allocator. A computer system comprises various resources, both hardware and software, which must be managed effectively. The operating system manages the resources, decides between competing demands, controls program execution, and so on. From this point of view, the operating system's purpose is to maximize performance: it is responsible for managing hardware resources and allocating them to programs and users to ensure maximum performance.

From the user point of view, we've discussed the numerous applications that require varying
degrees of user participation. However, we are more concerned with how the hardware
interacts with the operating system than with the user from a system viewpoint. The hardware
and the operating system interact for a variety of reasons, including:

1. Resource Allocation

The hardware contains several resources, such as registers, caches, RAM, ROM, CPUs, I/O devices, etc. These are all resources that the operating system allocates when an application program demands them. Only the operating system can allocate resources, and it uses several tactics and strategies to maximize processing capacity and memory space, including paging, virtual memory, and caching. These matter greatly from the user viewpoint as well, because inefficient resource allocation may cause the user's system to lag or hang, degrading the user experience.

2. Control Program

The control program controls how input and output devices (hardware) interact with the operating system. The user may request an action that can only be done with I/O devices; in this case, the operating system must be able to communicate with, control, detect, and handle such devices properly.

According to the computer system, the operating system is the bridge between applications
and hardware. It is most intimate with the hardware and is used to control it as required.
The different types of system view for operating system can be explained as follows:
 The system views the operating system as a resource allocator. There are many resources
such as CPU time, memory space, file storage space, I/O devices etc. that are required by
processes for execution. It is the duty of the operating system to allocate these resources
judiciously to the processes so that the computer system can run as smoothly as possible.
 The operating system can also work as a control program. It manages all the processes and
I/O devices so that the computer system works smoothly and there are no errors. It makes
sure that the I/O devices work in a proper manner without creating problems.
 Operating systems can also be viewed as a way to make using hardware easier.
 Computers were built to solve user problems easily. However, it is not easy to work directly with computer hardware, so operating systems were developed to communicate with the hardware easily.
 An operating system can also be considered as a program running at all times in the
background of a computer system (known as the kernel) and handling all the application
programs. This is the definition of the operating system that is generally followed.

The Process Abstraction


Operating System Abstractions
Abstractions simplify application design by:
 hiding undesirable properties,
 adding new capabilities, and
 organizing information.
Abstractions provide an interface to application programmers that separates policy—what
the interface commits to accomplishing—from mechanism—how the interface is
implemented.
Example Abstraction: File
What undesirable properties do file systems hide?
 Disks are slow!
 Chunks of storage are actually distributed all over the disk.
 Disk storage may fail!
What new capabilities do files add?
 Growth and shrinking.
 Organization into directories.
What information do files help organize?
 Ownership and permissions.
 Access time, modification time, type, etc.
Preview of Coming Abstractions
 Threads abstract the CPU.
 Address spaces abstract memory.
 Files abstract the disk.
 We will return to these abstractions. We are starting with an organizing principle.

The Process
Processes are the most fundamental operating system abstraction.
 Processes organize information about other abstractions and represent a single thing
that the computer is "doing."
 You know processes as app(lication)s.
Organizing Information
Unlike threads, address spaces and files, processes are not tied to a hardware component.
Instead, they contain other abstractions.
Processes contain:
 one or more threads,
 an address space, and
 zero or more open file handles representing files.
Figure 2. The Process
Process as Protection Boundary
The operating system is responsible for isolating processes from each other.
 What you do in your own process is your own business but it shouldn’t be able to
crash the machine or affect other processes—or at least processes started by other
users.
 Thus: safe intra-process communication is your problem; safe inter-process
communication is an operating system problem.
Intra-Process Communication: Easy

 Communication between multiple threads in a process is usually accomplished


using shared memory.
 Threads within a process also share open file handles and both static and dynamically-
allocated global variables.
 Thread stacks and thus thread local variables are typically private.
Intra-Process Communication: Easy… Maybe

 Sharing data requires synchronization mechanisms to ensure consistency.


 We will return to this later.
Inter-Process Communication: Harder

 A variety of mechanisms exist to enable inter-process communication (IPC), including shared files or sockets, exit codes, signals, pipes, and shared memory (one of these, the pipe, is sketched below).
 All require coordination between the communicating processes.
 Most have semantics limiting the degree to which processes can interfere with each
other.
o A process can’t just send a SIGKILL to any other process running on the
machine!
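
As a small illustration of one of these mechanisms, the following C sketch sends a message from a parent process to its child over a pipe (POSIX; a minimal sketch, the message text is arbitrary):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];                 /* fds[0] = read end, fds[1] = write end */
        if (pipe(fds) == -1) {
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {             /* child: read from the pipe */
            char buf[64];
            close(fds[1]);          /* close unused write end */
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            close(fds[0]);
            return 0;
        }

        /* parent: write into the pipe, then wait for the child */
        close(fds[0]);              /* close unused read end */
        const char *msg = "hello from parent";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
        return 0;
    }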

Processes in Operating Systems

What is a process?
In an operating system, a process can be defined as an entity that represents the basic unit of
work to be implemented in the system. When a user boots up a PC, many processes are
started unknown to the user.
Some of the most common processes are as follows.

1. A process that waits for incoming email.


2. A process that runs on behalf of an anti-virus to check for viruses periodically.
3. Browsing the web is also a process.
4. Printing files is also a process.

You probably get an idea of what a process is.


Activity: Write down ten other processes in the comment section. For a challenge, name 5
background processes and 5 foreground processes.
However, at any instant of time, the CPU runs just one process in a multiprogramming system. But the CPU jumps from one process to another in such quick succession that it gives the impression of executing the processes simultaneously. This illusion of parallelism is called pseudoparallelism.
What is the Process Model?
When a program is loaded into memory, it becomes a process, which can be divided into four
sections, namely, stack, heap, text, and data.
The ‘stack’ contains the temporary data of the program, such as function parameters and return addresses. The ‘heap’ is memory dynamically allocated to a process during its execution. The ‘text’ section includes the current activity of the process (the program code), and the ‘data’ section contains the global variables.
Now, in our computers, many processes need to be executed simultaneously, so to make the
execution of every process smooth and efficient, the operating system organizes all the
runnable software on the computer into several sequential processes, allotting each process its
own virtual Central Processing Unit (CPU). This whole organization of processes is termed
as a Process Model.
Process Creation
There are different operations in the system for which a process can be created. Some of the
events that lead to process creation are:

 User requests to create a new process


 System initialization
 Batch job initialization
 A running process requires the creation of another process using a system call.

A process may be created by another process using the fork() system call. The process that calls fork(), i.e., the process creating another process, is known as the parent process, and the created process is called the child process. A child process can have only one parent, but a parent process can have multiple child processes. Just after the fork(), the parent and child have the same memory contents, open files, and environment strings, but they have distinct address spaces.
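
A minimal sketch of fork() in C (POSIX): the parent creates a child, both continue from the call, and the return value tells them apart:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();         /* clone the calling process */

        if (pid < 0) {
            perror("fork");         /* creation failed */
            return 1;
        } else if (pid == 0) {
            /* fork() returns 0 in the child */
            printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        } else {
            /* fork() returns the child's PID in the parent */
            printf("parent: pid=%d, child=%d\n", getpid(), pid);
            wait(NULL);             /* reap the child to avoid a zombie */
        }
        return 0;
    }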
Process Termination
Process termination is the ending of a process's execution. The exit() system call is used to terminate a process. A process can be terminated for various reasons, which fall into two categories: voluntary and involuntary. Some of the causes of process termination are:

 A process may terminate naturally after completing its execution, releasing all of its resources.
 A child process may be terminated if its parent process requests it.
 A process can be terminated if it tries to use a resource that it is not allowed to.
 An I/O failure can also cause a process to be terminated.
 If a parent process is terminated, that can in turn cause its child processes to terminate.
 A process can also be terminated if it requires more memory than is currently available in the system.

Why do processes terminate?

1. Most processes terminate because they have done their work. After completing their work, processes perform a system call to tell the operating system that they have finished. In UNIX, this system call is exit(); in Windows, it is ExitProcess(). For foreground processes, the user has a clickable icon to exit the process, usually the x symbol on Windows or the red button on macOS.
2. There may be instances where a process exits involuntarily due to some fatal error. For example, if you ask the compiler to compile a file called red.c and no such file exists, the compiler will terminate.
3. Sometimes the process causes an error due to which it has to terminate. These errors include, but are not limited to:
– Referencing nonexistent memory
– Dividing by zero
– Executing an illegal instruction
In some cases, the process signals the operating system that it wishes to handle the error itself.
4. Some processes instruct the OS to kill other processes. In UNIX, this system call is straightforward: kill. In any case, the killer must have the necessary authorization to kill the other process.

Process Hierarchy
In a computer system, many processes run at once, and some processes need to create other processes during their execution. When a process creates another process, the parent and child processes become associated with each other in certain ways, and the child can in turn create further processes if required. This parent-child structure of processes forms a hierarchy, called the process hierarchy.
A child process has only one parent, though a parent may have many children; a process can be both a child and a parent at the same time, and that is usually the case.
In UNIX, a process and all of its children and grandchildren form a process group. When a user sends a signal from the keyboard, the signal is delivered to all members of the process group currently associated with the keyboard. Individually, each process can catch the signal and do as it wishes.
Windows has no process hierarchy; every process is treated equally. The only hint of a hierarchy is that when a process is created, the parent is given a handle to the child. But the parent can pass that handle to some other process, thereby invalidating the hierarchy.
Two-State Process Model
A Two-State Process Model categorizes a process in two categories:

1. A process running on CPU


2. A process not running on CPU

When a process is created, it is in the not-running state; when the CPU runs it, it is in the running state. If one process is running and another process with higher priority is created, the newly created process will be run by the CPU, and the former process moves to the not-running state.
Five-State Process Model
The Five-State Process Model categorizes a process in five states:

1. New: When a process is newly created, it is said to be in the new state.


2. Ready: All those processes which are loaded into the primary memory and are
waiting for the CPU are said to be in ready state.
3. Running: All the processes which are running are said to be in the running state.
4. Waiting: All the processes that leave the CPU for some reason (I/O, or preemption by a higher-priority process) and wait for their turn are in the waiting state.
5. Terminated: A process that exits or is removed from the CPU and primary memory is said to be in the terminated state.

Implementation of Processes
To implement the process model, the operating system maintains a table, called the process table, with one entry per process.
The process table is a data structure used by the operating system to keep track of all the processes present in the system. Each entry records information about a process's state, its program counter, stack pointer, allocated memory, the status of its open files, and its accounting and scheduling information: everything about the process that must be saved when it is switched out and restored when it is switched back in.
Some of the fields of a typical process table entry are given below.

Process Management fields: registers, program counter, program status word, stack pointer, process state, priority, scheduling parameters, process ID, parent process, process group, signals, time when process started, CPU time used, children's CPU time, time of next alarm.

Memory Management fields: pointer to text segment info, pointer to data segment info, pointer to stack segment info.

File Management fields: root directory, working directory, file descriptors, user ID, group ID.
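
A process table entry (PCB) might be sketched in C roughly as follows; the field names, types, and sizes here are illustrative, not those of any particular kernel:

    /* A simplified, hypothetical process table entry (PCB). */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        /* process management */
        int pid;                     /* process identifier */
        int ppid;                    /* parent process identifier */
        enum proc_state state;       /* current scheduling state */
        int priority;                /* scheduling priority */
        unsigned long pc;            /* saved program counter */
        unsigned long sp;            /* saved stack pointer */
        unsigned long registers[16]; /* saved general-purpose registers */
        long cpu_time_used;          /* accounting information */

        /* memory management */
        void *text_segment;          /* pointer to text segment info */
        void *data_segment;          /* pointer to data segment info */
        void *stack_segment;         /* pointer to stack segment info */

        /* file management */
        int open_files[16];          /* open file descriptors */
        int uid, gid;                /* user and group ID */
    };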

Thread in Operating System

What is a Thread?
A thread is a path of execution within a process. A process can contain multiple threads.
Why Multithreading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple threads: one thread to format the text, another thread to process inputs, etc. More advantages of multithreading are discussed below.
Process vs Thread?
The primary difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces.
Threads are not independent of one another the way processes are; as a result, threads share their code section, data section, and OS resources (like open files and signals) with the other threads of the process. But, like a process, a thread has its own program counter (PC), register set, and stack space.
Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.
2. Faster context switch: Context-switch time between threads is lower than between processes; a process context switch requires more overhead from the CPU.
3. Effective utilization of a multiprocessor system: If we have multiple threads in a single process, we can schedule them on multiple processors, making process execution faster.
4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process.
Note: the stack and registers can't be shared among threads; each thread has its own stack and registers.
5. Communication: Communication between multiple threads is easier, as the threads share a common address space, while between processes we must use an explicit inter-process communication technique.
6. Enhanced throughput of the system: If a process is divided into multiple threads and each thread's function is considered one job, the number of jobs completed per unit time increases, increasing the throughput of the system.
Types of Threads
There are two types of threads.
User Level Thread
Kernel Level Thread

Threads and its types in Operating System

A thread is a single sequential stream of execution within a process. Threads have many of the same properties as processes, so they are called lightweight processes. Threads are executed one after another but give the illusion of executing in parallel. Each thread has its own state, and each thread has:
1. A program counter
2. A register set
3. A stack space
Threads are not independent of each other, as they share the code, data, OS resources, etc.
Similarities between threads and processes –
 Only one thread or process is active at a time (on a single CPU).
 Within a process, both execute sequentially.
 Both can create children.
Differences between threads and processes –
 Threads are not independent; processes are.
 Threads are designed to assist each other; processes may or may not do so.

Types of Threads:
1. User Level Threads (ULT) – Implemented in a user-level library, they are not created using system calls. Thread switching does not need to call the OS or interrupt the kernel. The kernel does not know about user-level threads and manages them as if they were single-threaded processes.
 Advantages of ULT –
 Can be implemented on an OS that doesn't support multithreading.
 Simple representation, since a thread has only a program counter, register set, and stack space.
 Simple to create, since no kernel intervention is needed.
 Thread switching is fast, since no OS calls need to be made.
 Limitations of ULT –
 Little or no coordination between the threads and the kernel.
 If one thread causes a page fault, the entire process blocks.
2. Kernel Level Threads (KLT) – The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel itself has a master thread table that keeps track of all the threads in the system, in addition to the traditional process table that keeps track of the processes. The OS kernel provides system calls to create and manage threads.
 Advantages of KLT –
 Since the kernel has full knowledge of all threads in the system, the scheduler may decide to give more time to processes with a large number of threads.
 Good for applications that frequently block.
 Limitations of KLT –
 Slow and inefficient, since thread operations require system calls.
 Each thread requires a thread control block, which is an overhead.
Summary:
1. With ULTs, each process keeps track of its own threads using a thread table.
2. With KLTs, the kernel maintains a thread table (of TCBs) as well as the process table (of PCBs).

Difference between Process and Thread


Process: Processes are basically programs that are dispatched from the ready state and scheduled on the CPU for execution. The PCB (Process Control Block) holds the state of a process. A process can create other processes, which are known as child processes. A process takes more time to terminate, and it is isolated: it does not share memory with any other process.
A process can be in the following states: new, ready, running, waiting, terminated, and suspended.
Thread: A thread is a segment of a process, meaning a process can contain multiple threads within it. A thread has three states: running, ready, and blocked.
A thread takes less time to terminate than a process, but unlike processes, threads are not isolated from one another.
Process vs Thread

Difference between Process and Thread:


S.NO | Process | Thread
1 | Process means any program in execution. | Thread means a segment of a process.
2 | The process takes more time to terminate. | The thread takes less time to terminate.
3 | It takes more time for creation. | It takes less time for creation.
4 | It takes more time for context switching. | It takes less time for context switching.
5 | The process is less efficient in terms of communication. | The thread is more efficient in terms of communication.
6 | Multiprogramming holds the concept of multiple processes. | Multiple programs are not needed for multiple threads, because a single process consists of multiple threads.
7 | The process is isolated. | Threads share memory.
8 | The process is called a heavyweight process. | A thread is lightweight, as each thread in a process shares code, data, and resources.
9 | Process switching uses an interface to the operating system. | Thread switching does not require calling the operating system and causing an interrupt to the kernel.
10 | If one process is blocked, it does not affect the execution of other processes. | If a user-level thread is blocked, then all other user-level threads of that process are blocked.
11 | The process has its own Process Control Block, stack, and address space. | A thread has its parent's PCB, its own Thread Control Block and stack, and a common address space.
12 | Changes to the parent process do not affect child processes. | Since all threads of the same process share the address space and other resources, changes to the main thread may affect the behavior of the other threads of the process.
13 | A system call is involved in creation. | No system call is involved; threads are created using APIs.
14 | Processes do not share data with each other. | Threads share data with each other.

Threading Issues in OS
There are several threading issues in a multithreading environment. In this section, we will discuss the threading issues around system calls, thread cancellation, signal handling, thread pools, and thread-specific data.

Threading Issues in OS

1. System Calls
2. Thread Cancellation
3. Signal Handling
4. Thread Pool
5. Thread Specific Data

1. fork() and exec() System Calls


fork() and exec() are system calls. The fork() call creates a duplicate of the process that invokes it. The new duplicate process is called the child process, and the process invoking fork() is called the parent process. Both the parent process and the child process continue execution from the instruction just after the fork().
Now consider the issue with the fork() system call in a multithreaded program. Suppose one thread of the program invokes fork(), which creates a new duplicate process. The issue is whether the new process should duplicate all the threads of the parent process or be single-threaded.
Some UNIX systems therefore have two versions of fork(): one that duplicates all the threads of the parent process in the child, and one that duplicates only the thread that invoked fork(). Which version to use depends entirely on the application.
The exec() system call, when invoked, replaces the program, along with all its threads, with the program specified in its parameter. Typically, exec() is lined up right after fork(). In that case, duplicating all the threads of the parent process in the child is wasted effort, since exec() will immediately replace the entire process with the program passed to it. Here, the version of fork() that duplicates only the thread that invoked it is appropriate.
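
A common fork()-then-exec() pattern looks roughly like this in C (POSIX; the command run here is just an example):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();

        if (pid == 0) {
            /* child: replace this process image with a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");       /* reached only if exec fails */
            return 1;
        }

        /* parent: wait for the child to finish */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }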

2. Thread Cancellation
Termination of a thread in the middle of its execution is termed thread cancellation. For example, consider a multithreaded program that lets several of its threads search a database for some piece of information; once one thread returns with the desired result, the remaining threads can be cancelled.
The thread we want to cancel is termed the target thread. Thread cancellation can be performed in two ways:
Asynchronous Cancellation: a thread terminates the target thread instantly.
Deferred Cancellation: the target thread periodically checks whether it should terminate itself.
The issues related to the target thread are listed below:

 What if resources had been allocated to the target thread being cancelled?
 What if the target thread is terminated while it is updating data it shares with some other thread?

Asynchronous cancellation is troublesome here, because a thread immediately cancels the target thread without checking whether it is holding any resources. With deferred cancellation, once a thread indicates that the target should be cancelled, the target thread checks its cancellation flag to decide whether to cancel itself. Points at which a thread can be cancelled safely are termed cancellation points by Pthreads.
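
A minimal Pthreads sketch of deferred cancellation: the worker only reaches a cancellation point at pthread_testcancel(), so it is cancelled only at safe points:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        for (;;) {
            /* ... a unit of work that must not be interrupted ... */
            pthread_testcancel();   /* explicit cancellation point: safe to stop here */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);   /* deferred cancellation is the default */
        sleep(1);                                   /* let the worker run briefly */
        pthread_cancel(tid);                        /* request cancellation */
        pthread_join(tid, NULL);                    /* returns once the target hits a cancellation point */
        puts("worker cancelled at a cancellation point");
        return 0;
    }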

3. Signal Handling
Signal handling is simpler in a single-threaded program, as the signal is delivered directly to the process. In a multithreaded program, the issue is which thread of the program the signal should be delivered to.
A signal could be delivered to:

 All the threads of the process.
 Certain specific threads of the process.
 The thread to which it applies.
 A thread assigned to receive all the signals.

How the signal is delivered depends on the type of signal generated. Signals can be classified into two types: synchronous and asynchronous.
Synchronous signals are delivered to the same process whose execution caused the signal to be generated. Asynchronous signals are generated by an event external to the running process, so the running process receives them asynchronously.
So a synchronous signal is delivered to the specific thread that caused it to be generated. For an asynchronous signal, it cannot in general be determined which thread of the multithreaded program should receive it; if the asynchronous signal is a notification to terminate the process, it is delivered to all the threads of the process.
The asynchronous-signal issue is resolved to some extent in most multithreaded UNIX systems, where a thread is allowed to specify which signals it accepts and which it blocks. The Windows operating system does not support the concept of signals; instead, it uses asynchronous procedure calls (APCs), which are similar to the asynchronous signals of UNIX, except that an APC is delivered to a specific thread rather than to a process.
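
One common UNIX pattern, sketched below: block a signal in every thread and dedicate one thread to receive it with sigwait() (a minimal sketch; error handling omitted):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    static sigset_t set;

    static void *signal_thread(void *arg) {
        (void)arg;
        int sig;
        sigwait(&set, &sig);        /* block until a signal in the set is pending */
        printf("dedicated thread received signal %d\n", sig);
        return NULL;
    }

    int main(void) {
        sigemptyset(&set);
        sigaddset(&set, SIGINT);

        /* block SIGINT in this thread; threads created later inherit the mask */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_t tid;
        pthread_create(&tid, NULL, signal_thread, NULL);

        /* ... worker threads created here would also have SIGINT blocked ... */
        pthread_join(tid, NULL);    /* press Ctrl-C to deliver SIGINT */
        return 0;
    }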

4. Thread Pool
When a user requests a web page from a server, the server creates a separate thread to service the request. This has some potential issues: if there is no bound on the number of active threads in the system and a new thread is created for every request, the system's resources will eventually be exhausted.
We are also concerned about the time it takes to create a new thread. It must not be the case that the time required to create a thread exceeds the time the thread spends servicing the request before being discarded, as that would waste CPU time.
The solution to this issue is the thread pool. The idea is to create a finite number of threads when the process starts; this collection of threads is referred to as the thread pool. The threads sit in the pool and wait until they are assigned a request to service. Whenever a request arrives at the server, a thread from the pool is assigned the request; after completing its service, the thread returns to the pool and waits for the next request.
If the server receives a request and finds no free thread in the pool, it waits for some thread to become free and return to the pool. This is much better than creating a new thread each time a request arrives, and it is convenient for systems that cannot handle a large number of concurrent threads. A minimal sketch follows.
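
The following Pthreads thread-pool sketch assumes a fixed-size queue of integer "request" tokens (illustrative only; a real server would queue richer work items and check for a full queue):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define POOL_SIZE 4
    #define QUEUE_CAP 16

    /* a fixed-size circular queue of integer "request" tokens */
    static int queue[QUEUE_CAP];
    static int head, tail, count;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

    static void submit(int req) {               /* enqueue one request */
        pthread_mutex_lock(&lock);
        queue[tail] = req;                      /* sketch: no full-queue check */
        tail = (tail + 1) % QUEUE_CAP;
        count++;
        pthread_cond_signal(&nonempty);         /* wake one waiting pool thread */
        pthread_mutex_unlock(&lock);
    }

    static void *pool_worker(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)                  /* sleep until work is queued */
                pthread_cond_wait(&nonempty, &lock);
            int req = queue[head];
            head = (head + 1) % QUEUE_CAP;
            count--;
            pthread_mutex_unlock(&lock);
            printf("worker servicing request %d\n", req);   /* "service" it */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tids[POOL_SIZE];
        for (int i = 0; i < POOL_SIZE; i++)     /* create the pool up front */
            pthread_create(&tids[i], NULL, pool_worker, NULL);
        for (int r = 1; r <= 8; r++)
            submit(r);
        sleep(1);       /* let the pool drain; exiting main ends the demo */
        return 0;
    }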

5. Thread-Specific Data
We know that threads belonging to the same process share the data of that process. The issue here is: what if each thread of the process needs its own copy of some data? Data associated with a specific thread is referred to as thread-specific data.
Consider a transaction-processing system in which each transaction is processed in a separate thread. To identify each transaction uniquely, we associate a unique identifier with it. Since each transaction is serviced in its own thread, we can use thread-specific data to associate each thread with its transaction and the transaction's unique ID. Thread libraries such as Win32, Pthreads, and Java support thread-specific data.
These, then, are the threading issues that occur in a multithreaded programming environment, along with the ways they can be resolved.
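
In Pthreads, thread-specific data is managed with keys; a minimal sketch (the transaction IDs here are made up):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_key_t txn_key;           /* one key; a per-thread value behind it */

    static void *handle_transaction(void *arg) {
        /* each thread stores its own transaction id under the same key */
        pthread_setspecific(txn_key, arg);

        /* ... later, any function in this thread can recover it ... */
        int txn_id = *(int *)pthread_getspecific(txn_key);
        printf("servicing transaction %d\n", txn_id);
        return NULL;
    }

    int main(void) {
        pthread_key_create(&txn_key, NULL); /* NULL = no destructor */

        int ids[2] = {101, 102};            /* hypothetical transaction ids */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, handle_transaction, &ids[0]);
        pthread_create(&t2, NULL, handle_transaction, &ids[1]);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        pthread_key_delete(txn_key);
        return 0;
    }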

Thread Libraries in OS
A thread library is a collection of functions useful for creating and controlling threads. Programmers access these libraries through an application programming interface (API). A thread library can be a user-level library or a kernel-level library.
If the thread library is implemented in user space, the code and data of the library reside in user space. In this case, invoking a function from the library is a simple function call, not a system call.
If the thread library is implemented in kernel space, the code and data of the library reside in kernel space and are supported by the operating system. In this case, invoking a function from the library involves a system call to the kernel. In the sections that follow, we discuss three thread libraries.
Thread Libraries in Operating System

1. Pthreads Library
2. Win32 Library
3. Java Library

Thread Library
A thread library provides the programmer with an Application program interface for creating
and managing thread.
Ways of implementing thread library
There are two primary ways of implementing a thread library, which are as follows −
 The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for the library exist in user space; invoking a function in the library results in a local function call in user space, not a system call.
 The second approach is to implement a kernel-level library supported directly by the operating system. In this case the code and data structures for the library exist in kernel space, and invoking a function in the API for the library typically results in a system call to the kernel.
The main thread libraries which are used are given below −
 POSIX threads − Pthreads, the threads extension of the POSIX standard, may be provided as either a user-level or a kernel-level library.
 Win32 threads − The Windows thread library is a kernel-level library available on Windows systems.
 Java threads − The Java thread API allows threads to be created and managed directly in Java programs.
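
For example, with the Pthreads library, creating and joining a thread looks like this (a minimal sketch; compile with the -pthread flag on most UNIX systems):

    #include <pthread.h>
    #include <stdio.h>

    static void *say_hello(void *arg) {
        printf("hello from thread %s\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t tid;

        /* pthread_create(): in a user-level library this is a plain function
           call; in a kernel-level library it ends in a system call */
        if (pthread_create(&tid, NULL, say_hello, "A") != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(tid, NULL);    /* wait for the thread to finish */
        return 0;
    }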

Operating System - Process Scheduling

What is Process Scheduling?

The act of determining which process is in the ready state, and should be moved to
the running state is known as Process Scheduling.

The prime aim of the process scheduling system is to keep the CPU busy all the time and to
deliver minimum response time for all programs. For achieving this, the scheduler must apply
appropriate rules for swapping processes IN and OUT of CPU.

Scheduling falls into one of two general categories:

 Non-Preemptive Scheduling: the currently executing process gives up the CPU voluntarily.
 Preemptive Scheduling: the operating system decides to favour another process, preempting the currently executing process.

What are Scheduling Queues?

 All processes, upon entering into the system, are stored in the Job Queue.
 Processes in the Ready state are placed in the Ready Queue.
 Processes waiting for a device to become available are placed in Device Queues.
There are unique device queues available for each I/O device.
A new process is initially put in the Ready queue. It waits in the ready queue until it is
selected for execution(or dispatched). Once the process is assigned to the CPU and is
executing, one of the following several events can occur:
 The process could issue an I/O request, and then be placed in the I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt, and
be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready
state, and is then put back in the ready queue. A process continues this cycle until it
terminates, at which time it is removed from all queues and has its PCB and resources
deallocated.

Types of Schedulers
There are three types of schedulers available:

1. Long Term Scheduler


2. Short Term Scheduler
3. Medium Term Scheduler
Let's discuss all the different types of schedulers in detail:

Long Term Scheduler


The long-term scheduler runs less frequently. It decides which programs are admitted to the job queue. From the job queue, the job scheduler selects processes and loads them into memory for execution. The primary aim of the job scheduler is to maintain a good degree of multiprogramming. An optimal degree of multiprogramming means the average rate of process creation equals the average departure rate of processes from execution memory.

Short Term Scheduler


This is also known as CPU Scheduler and runs very frequently. The primary aim of this
scheduler is to enhance CPU performance and increase process execution rate.

Medium Term Scheduler


This scheduler removes processes from memory (and from active contention for the CPU), and thus reduces the degree of multiprogramming. At some later time, the process can be reintroduced into memory and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler.

Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up. This complete process is depicted in the diagram below:

Addition of Medium-term scheduling to the queueing diagram.

What is Context Switch?

1. Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a Context Switch.
2. The context of a process is represented in the Process Control Block (PCB) of a process; it includes the value of the CPU registers, the process state and memory-management information. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run (a user-space analogue is sketched after this list).
3. Context switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typical speeds range from 1 to 1000 microseconds.
4. Context switching has become such a performance bottleneck that programmers use lighter-weight structures (threads) to avoid it whenever and wherever possible.
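The save-and-restore at the heart of a context switch can be imitated in user space with the POSIX ucontext API (marked obsolescent in recent POSIX editions, but still available on Linux). This is only a hedged user-space analogue of what the kernel's dispatcher does with PCBs, not the kernel mechanism itself: it saves one execution context and resumes another, twice:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];     /* stack for the second context */

static void task(void) {
    printf("task: running, switching back to main\n");
    swapcontext(&task_ctx, &main_ctx); /* save our context, resume main */
    printf("task: resumed where it left off\n");
}                                      /* returning here follows uc_link */

int main(void) {
    getcontext(&task_ctx);             /* initialise the context struct */
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;      /* where to go when task() returns */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx); /* save main's context, run task */
    printf("main: back, resuming task\n");
    swapcontext(&main_ctx, &task_ctx); /* resume task after its swap */
    printf("main: done\n");
    return 0;
}

Each swapcontext call is exactly the save-registers-then-load-registers step described in point 2 above, performed here on user-level context structures instead of PCBs.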

Operations on Process
Below we discuss the two major operations: Process Creation and Process Termination.

Process Creation
Through appropriate system calls, such as fork or spawn, processes may create other processes. The process which creates another process is termed the parent, while the created sub-process is termed its child.

Each process is given an integer identifier, termed the process identifier, or PID. The parent's PID (PPID) is also stored for each process.

On a typical UNIX system the process scheduler is termed sched and is given PID 0. The first thing it does at system start-up is launch init, which is given PID 1. init then launches all the system daemons and user logins, and becomes the ultimate parent of all other processes.

A child process may share some resources with its parent, depending on the system implementation. To prevent runaway children from consuming all of a certain system resource, child processes may be restricted to a subset of the resources originally allocated to the parent.

There are two options for the parent process after creating the child:
 Wait for the child process to terminate before proceeding. The parent makes a wait() system call, for either a specific child or for any child process, which causes the parent to block until wait() returns. UNIX shells normally wait for their children to complete before issuing a new prompt.
 Run concurrently with the child, continuing to execute without waiting. This is the operation seen when a UNIX shell runs a process as a background task. It is also possible for the parent to run for a while and then wait for the child later, which might occur in a sort of parallel processing operation.
There are also two possibilities in terms of the address space of the new process:

1. The child process is a duplicate of the parent process.
2. The child process has a new program loaded into it.
To illustrate these different implementations, let us consider the UNIX operating system. In UNIX, each process is identified by its process identifier, which is a unique integer. A new process is created by the fork system call. The new process consists of a copy of the address space of the original process. This mechanism allows the parent process to communicate easily with its child process. Both processes (the parent and the child) continue execution at the instruction after the fork system call, with one difference: the return code for the fork system call is zero for the new (child) process, whereas the (non-zero) process identifier of the child is returned to the parent.

Typically, the execlp system call is used after the fork system call by one of the two processes to replace the process memory space with a new program. The execlp system call loads a binary file into memory - destroying the memory image of the program containing the execlp system call - and starts its execution. In this manner the two processes are able to communicate, and then go their separate ways.
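This fork-then-execlp pattern is short enough to show in full; the choice of ls -l as the new program is arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* duplicate the calling process */

    if (pid < 0) {                     /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {             /* child: fork returned 0 */
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace image with ls */
        perror("execlp");              /* reached only if execlp fails */
        exit(1);
    } else {                           /* parent: fork returned child's PID */
        wait(NULL);                    /* block until the child terminates */
        printf("child %d complete\n", (int)pid);
    }
    return 0;
}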

Process Termination
By making the exit() system call, typically returning an int status, processes may request their own termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on successful completion and some non-zero code in the event of a problem.
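A small sketch of how the status value travels from the child's exit() to a waiting parent; the status 42 is an arbitrary choice for the example:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0)
        exit(42);                      /* child: terminate with status 42 */

    int status;
    waitpid(pid, &status, 0);          /* parent: block and collect status */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}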

Processes may also be terminated by the system for a variety of reasons, including:

 The inability of the system to deliver the necessary system resources.
 In response to a KILL command or other unhandled process interrupts.
 A parent may kill its children if the task assigned to them is no longer needed.
 If the parent exits, the system may or may not allow the child to continue without a parent (in UNIX systems, orphaned processes are generally inherited by init, which then waits on them when they terminate).
When a process ends, all of its system resources are freed up, open files are flushed and closed, etc. The process termination status and execution times are returned to the parent if the parent is waiting for the child to terminate, or eventually returned to init if the process has become an orphan.
Processes which have terminated but whose parent has not yet called wait() for them are termed zombies. If the parent itself exits, these are inherited by init, which waits on them and releases their entries.
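A zombie can be produced deliberately with a few lines of C; while the parent sleeps without calling wait(), the terminated child shows up as <defunct> in ps output (the 30-second sleep is arbitrary, just enough time to look):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0)
        exit(0);                       /* child terminates immediately */

    /* Parent deliberately does not call wait(); until it does (or exits),
       the dead child remains a zombie, visible as <defunct> in ps. */
    printf("child %d is now a zombie; try `ps -l` in another shell\n",
           (int)pid);
    sleep(30);
    return 0;                          /* on exit, init inherits and reaps it */
}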

CPU Scheduling in Operating System

CPU scheduling is a process that allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.

Whenever the CPU becomes idle, the operating system must select one of the processes in
the ready queue to be executed. The selection process is carried out by the short-term
scheduler (or CPU scheduler). The scheduler selects from among the processes in memory
that are ready to execute and allocates the CPU to one of them.

CPU Scheduling: Dispatcher

Another component involved in the CPU scheduling function is the Dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:

 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from where it left off last time.

The dispatcher should be as fast as possible, given that it is invoked during every process switch. The time taken by the dispatcher to stop one process and start another is known as the Dispatch Latency.

Types of CPU Scheduling

CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for an I/O request, or an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.

In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.

When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive; otherwise, the scheduling scheme is preemptive.

Non-Preemptive Scheduling
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.

This scheduling method was used by Microsoft Windows 3.1 and by the Apple Macintosh operating systems.

It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.

Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution. Instead, it waits till the process completes its CPU burst time, and only then can the CPU be allocated to another process.

Some algorithms based on non-preemptive scheduling are Shortest Job First (SJF, basically non-preemptive) Scheduling and Priority (non-preemptive version) Scheduling, etc.
Preemptive Scheduling
In this type of scheduling, tasks are usually assigned priorities. At times it is necessary to run a task with a higher priority before another task, even though that task is already running. The running task is therefore interrupted for some time and resumed later, when the higher-priority task has finished its execution.

Thus this type of scheduling is used mainly when a process switches either from the running state to the ready state or from the waiting state to the ready state. The resources (that is, CPU cycles) are allocated to the process for a limited amount of time and then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining, and stays there until it gets its next chance to execute.

Some algorithms based on preemptive scheduling are Round Robin (RR) Scheduling, Shortest Remaining Time First (SRTF), Priority (preemptive version) Scheduling, etc.
CPU Scheduling: Scheduling Criteria
There are many different criteria to consider when choosing the "best" scheduling algorithm; they are:

CPU Utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept busy most of the time (ideally 100% of the time). In a real system, CPU utilization should range from 40% (lightly loaded) to 90% (heavily loaded).

Throughput
It is the total number of processes completed per unit of time, or in other words the total amount of work done in a unit of time. This may range from 10 per second to 1 per hour, depending on the specific processes.

Turnaround Time
It is the amount of time taken to execute a particular process, i.e. the interval from the time of submission of the process to the time of completion of the process (wall clock time).

Waiting Time
It is the sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get
into the CPU.

Response Time
It is the amount of time from when a request was submitted until the first response is produced. Remember, it is the time till the first response, not the completion of process execution (the final response).

In general, CPU utilization and throughput are maximized, while the other factors are minimized, for proper optimization.
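To make these criteria concrete, here is a small worked example: three processes arriving at time 0 with assumed burst times of 24, 3 and 3 ms, served first-come-first-serve in the order P1, P2, P3 (the numbers are illustrative only):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};          /* assumed CPU bursts, in ms */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];     /* turnaround = waiting + burst */
        printf("P%d: waiting=%2d ms, turnaround=%2d ms\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat += tat;
        wait += burst[i];              /* next process waits behind this one */
    }
    printf("average waiting=%.2f ms, average turnaround=%.2f ms\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}

The average waiting time comes out at 17 ms; if the two short jobs ran first it would drop to 3 ms, which is the intuition behind Shortest Job First.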

Scheduling Algorithms
To decide which process to execute first and which to execute last to achieve maximum CPU utilization, computer scientists have defined the following algorithms (a Round Robin sketch follows the list):

1. First Come First Serve (FCFS) Scheduling
2. Shortest Job First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
7. Shortest Remaining Time First (SRTF)
8. Longest Remaining Time First (LRTF)
9. Highest Response Ratio Next (HRRN)
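As a concrete illustration of one preemptive algorithm from this list, here is a minimal Round Robin sketch; the burst times (10, 5 and 8 ms), the 4 ms time quantum, and the assumption that all processes arrive at t = 0 are all invented for the example:

#include <stdio.h>

int main(void) {
    int remaining[] = {10, 5, 8};      /* assumed CPU bursts, in ms */
    int n = 3, quantum = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;              /* this process already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;             /* run the process for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {   /* finished during this slice */
                printf("P%d completes at t=%d ms\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}

With these numbers the completion order is P2 (t=17), P3 (t=21), P1 (t=23): no process waits more than a couple of quanta before getting the CPU again, which is why RR gives good response time at the cost of extra context switches.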

Preemptive and Non-Preemptive Scheduling

1. Preemptive Scheduling:
Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining, and stays there till it gets its next chance to execute.
Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining Time First (SRTF), Priority (preemptive version), etc.

2. Non-Preemptive Scheduling:
Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution; instead, it waits till the process completes its CPU burst time, and then allocates the CPU to another process.
Algorithms based on non-preemptive scheduling are Shortest Job First (SJF, basically non-preemptive) and Priority (non-preemptive version), etc.

Key Differences Between Preemptive and Non-Preemptive Scheduling:

1. In preemptive scheduling, the CPU is allocated to a process for a limited time, whereas in non-preemptive scheduling, the CPU is allocated to the process till it terminates or switches to the waiting state.
2. The executing process in preemptive scheduling is interrupted in the middle of execution when a higher-priority one arrives, whereas the executing process in non-preemptive scheduling is not interrupted in the middle of execution and runs till its execution completes.
3. Preemptive scheduling has the overhead of switching processes between the ready and running states, and vice versa, and of maintaining the ready queue, whereas non-preemptive scheduling has no overhead of switching a process from the running state to the ready state.
4. In preemptive scheduling, if a high-priority process frequently arrives in the ready queue then a low-priority process has to wait for a long time, and it may starve. In non-preemptive scheduling, if the CPU is allocated to a process with a large burst time then processes with small burst times may starve.
5. Preemptive scheduling attains flexibility by allowing critical processes to access the CPU as soon as they arrive in the ready queue, no matter what process is executing currently. Non-preemptive scheduling is called rigid because even if a critical process enters the ready queue, the process running on the CPU is not disturbed.
6. Preemptive scheduling has to maintain the integrity of shared data, which makes it more costly; this is not the case with non-preemptive scheduling.
Comparison Chart:

 Basic − Preemptive: resources (CPU cycles) are allocated to a process for a limited time. Non-preemptive: once resources are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
 Interrupt − Preemptive: a process can be interrupted in between. Non-preemptive: a process cannot be interrupted until it terminates itself or its time is up.
 Starvation − Preemptive: if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. Non-preemptive: if a process with a long burst time is running on the CPU, a later process with a shorter burst time may starve.
 Overhead − Preemptive: has the overhead of scheduling the processes. Non-preemptive: has no scheduling overhead.
 Flexibility − Preemptive: flexible. Non-preemptive: rigid.
 Cost − Preemptive: cost associated. Non-preemptive: no cost associated.
 CPU Utilization − Preemptive: high. Non-preemptive: low.
 Waiting Time − Preemptive: less. Non-preemptive: high.
 Response Time − Preemptive: less. Non-preemptive: high.
 Examples − Preemptive: Round Robin and Shortest Remaining Time First. Non-preemptive: First Come First Serve and Shortest Job First.
