An Operating System is the interface between the computer hardware and the end-user. Processing of data, running applications, file management and handling the memory are all managed by the operating system. Windows, Mac, Android etc. are examples of operating systems which are generally used nowadays.
History of the Operating System
An operating system is a type of software that acts as an interface between the user and the hardware. It is responsible for handling various critical functions of the computer or any other machine. Various tasks that are handled by the OS are file management, task management, garbage management, memory management, process management, disk management, I/O management, peripherals management, etc.
Functions of Operating System

Security: To safeguard user data, the operating system employs password protection and other related measures. It also protects programs and user data from illegal access.
Control over System Performance: The operating system monitors the overall health of the system in order to optimize performance. To get a thorough picture of the system's health, it keeps track of the time between system responses and service requests. This can aid performance by providing critical information for troubleshooting issues.
Job Accounting: The operating system keeps track of how much time and resources are consumed by different tasks and users, and this data can be used to measure resource utilisation for a specific user or group of users.
Error Detecting Aids: The OS constantly monitors the system in order to discover faults and prevent the computer system from failing.
Coordination between Users and Other Software: Operating systems also organise and assign interpreters, compilers, assemblers, and other software to the users of the computer.
Memory Management
The operating system manages the primary memory, often known as the main memory. The main memory consists of a vast array of bytes or words, each of which is allocated an address. Main memory is rapid storage that the CPU can access directly. A program must first be loaded into the main memory before it can be executed. For memory management, the OS performs the following tasks:

The OS keeps track of primary memory, meaning which user program can use which bytes of memory, memory addresses that have already been assigned, as well as memory addresses yet to be used.

The OS determines the order in which processes would be permitted memory access, and for how long, in multiprogramming.

It allocates memory to the process when the process asks for it, and deallocates memory when the process exits or performs an I/O activity.
Process Management

In a multiprogramming environment, the operating system determines which processes have access to the processor and how much processing time every process has. Process scheduling is the name for this feature of the operating system, and it is handled by the processor-management part of the OS.
Device Management

File Management

A file system is divided into directories to make navigation and usage more efficient. Other directories and files may be found in these directories. The file management tasks performed by an operating system are: it keeps track of where data is kept, user access settings, and the state of each file, among other things. All of these features are together called the file system.
Services by OS

Furthermore, the operating system, in one form or another, provides certain services to the computer system. The OS provides users with a number of services, which can be summarised as follows:
1. Program Execution: The OS is in charge of running all types of programs, whether they are user or system programs. The operating system makes use of a variety of resources to ensure that all types of functions perform smoothly.
2. Handling Input/Output Operations: The operating system is in charge of handling various types of inputs, such as those from the keyboard, mouse, and desktop. Regarding all types of inputs and outputs, the operating system handles all interfaces in the most appropriate manner.

For instance, the nature of all types of peripheral devices, such as mice or keyboards, differs, and the operating system is responsible for transferring data between them.
3. Manipulation of File System: The OS is in charge of deciding where data or files should be stored, such as on a floppy disk, hard disk, or pen drive. The operating system determines how data should be stored and handled.
4. Error Detection and Handling: The OS is in charge of detecting any errors or flaws that may occur during any task. A well-secured OS can also operate as a countermeasure, preventing and possibly handling any type of intrusion into the computer system from an external source.
5. Resource Allocation: The operating system guarantees that all available resources are properly utilised by determining which resource should be used by whom and for how long. The operating system makes all of the choices.
6. Accounting: The operating system keeps track of all the functions that are active in the computer system
at any one time. The operating system keeps track of all the facts, including the types of mistakes that
happened.
7. Information and Resource Protection: The operating system is in charge of making the most secure use of all the data and resources available on the machine. Any attempt by an external resource to obstruct data or information must be foiled by the operating system.
An OS refers to a collection of well-organised applications that manage computer hardware. It is a form of system software which is responsible for the computer system's flawless operation.
There are basically eight types of Operating Systems. Here are all the types of OS along with their pros and cons:
1. Batch Operating System

Batch processing was quite popular in the 1970s. Similar types of jobs were batched together and completed on time using this method. People were very much accustomed to utilising a single computer, known as a mainframe.

Now, more than one individual can have access to this Operating System, and they can submit their jobs to the system for execution. The system places all of the jobs in a queue on a first come, first served basis, and then executes them one at a time. When all of the jobs are completed, the users receive their respective outputs.
[Figure: Batch operating system, with batched jobs passing through a monitor to the operating system and hardware]
2. Multiprogramming Operating System

Multiprogramming is a variation of batch processing in which the CPU is kept busy at all times. CPU time and I/O time are two forms of system time required by each process. When a process begins its I/O in a multiprogramming environment, the CPU can begin the execution of other processes. As a result, multiprogramming helps in improving the system's efficiency.
[Figure: Jobs in multiprogramming systems, with the operating system and several programs resident in main memory while one executes, others wait on I/O, and further jobs wait on secondary disk storage]
3. Multiprocessing Operating System

Multiprocessing helps in performing parallel computing. There are several processors in a system, each of which can run multiple processes at the same time. The system's throughput can thereby be significantly increased.
4. Multitasking Operating System

The multitasking OS refers to a logical extension of the multiprogramming Operating System which allows users to run many programs at the same time. It enables a user to complete multiple computer tasks at the same time.
5. Network Operating System

A Network OS is a type of Operating System that incorporates software and protocols for communicating with other computers over a network in a convenient and cost-effective manner.
6. Real-Time Operating System

In this type of system, each job has a deadline by which it must be completed; otherwise, there will be a significant loss, or even if the output is provided, it will be utterly useless. For example, in military applications, if you wish to drop a missile, the missile must be dropped with a specific degree of precision.
7. Time-Sharing Operating System

The Time-Sharing OS provides computer resources to numerous programs at the same time in a time-dependent manner. As a result, it aids in providing direct access to the main computer to a large number of users. It is a natural progression from multiprogramming. The CPU is switched between the numerous programs provided by the different users in time-sharing on a scheduled basis.
8. Distributed Operating System

The Distributed OS is separated into sections and loaded on different machines rather than being placed on a single machine. Each machine has a piece of the distributed OS installed to allow the machines to communicate. Because they must deal with a variety of networking protocols, distributed Operating Systems are far more sophisticated, massive, and complex than network Operating Systems.
OPERATING SYSTEM STRUCTURE

Operating systems are large and complex. A common approach is to partition the task into small components, or modules, rather than have one monolithic system. The structure of an operating system can be defined by the following structures:

Simple structure
Layered approach
Microkernels
Modules
Hybrid systems
Simple structure:

The simple structured operating systems do not have a well-defined structure. These systems are simple, small and limited systems. Example: MS-DOS.

[Figure: MS-DOS layer structure, from application programs down to ROM BIOS device drivers]

- In MS-DOS, the interfaces and levels of functionality are not well separated.
- In MS-DOS, application programs are able to access the basic I/O routines. This causes the entire system to crash when user programs fail.
Example: Traditional UNIX OS

- It consists of two separable parts: the kernel and the system programs.
- The kernel is further separated into a series of interfaces and device drivers.
- The kernel provides the file system, CPU scheduling, memory management, and other operating-system functions through system calls.
[Figure: Traditional UNIX system structure: the users; shells and commands, compilers and interpreters, system libraries; the system-call interface to the kernel; the kernel (signal handling, terminal handling, character I/O, file system, swapping, block I/O, CPU scheduling, page replacement, demand paging, virtual memory); the kernel interface to the hardware; terminal controllers, device controllers, memory controllers; terminals, disks and tapes, physical memory]
Layered approach:

A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest layer is the user interface.
- An operating-system layer is an implementation of an abstract object made up of data and the operations that manipulate those data.
- The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses functions (operations) and services of only lower-level layers.
- A layer is implemented only with operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what these operations do.
Microkernels:
In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that
modularized the kernel using the microkernel approach.
This method structures the operating system by removing all nonessential components from the kernel and implementing them as system and user-level programs.

- Microkernels provide minimal process and memory management, in addition to a communication facility.
- The main function of the microkernel is to provide communication between the client program and the various services that are also running in user space.
- The client program and service never interact directly. Rather, they communicate indirectly by exchanging messages with the microkernel.
- One benefit of the microkernel approach is that it makes extending the operating system easier. All new services are added to user space and consequently do not require modification of the kernel.
- The performance of microkernels can suffer due to increased system-function overhead.
Modules:

The best current methodology for operating-system design involves using loadable kernel modules.

- The kernel has a set of core components and links in additional services via modules, either at boot time or during run time.
- The kernel provides core services while other services are implemented dynamically, as the kernel is running.
- Linking services dynamically is preferable to adding new features directly to the kernel, which would require recompiling the kernel every time a change was made.
Example: Solaris OS
The Solaris operating system structure is organized around a core kernel with seven types of loadable kernel modules:

- Scheduling classes
- File systems
- Loadable system calls
- Executable formats
- STREAMS modules
- Miscellaneous
- Device and bus drivers
Hybrid Systems:
Most operating systems combine different structures, resulting in hybrid systems that address performance, security, and usability issues.

- They are monolithic, because having the operating system in a single address space provides very efficient performance.
- However, they are also modular, so that new functionality can be dynamically added to the kernel.

Example: Linux and Solaris are monolithic (simple) and also modular OS.
System Call
In computing, a system call is a programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on. A system call is a way for programs to interact with the operating system: a computer program makes a system call when it makes a request to the operating system's kernel. System calls provide the services of the operating system to the user programs via the Application Program Interface (API). They provide an interface between a process and the operating system, allowing user-level processes to request services of the operating system. System calls are the only entry points into the kernel; all programs needing resources must use system calls.
A user program can interact with the operating system using a system call. A number of services are requested by the program, and the OS responds by launching a number of system calls to fulfill the request. A system call can be written in high-level languages like C or Pascal, or in assembly language. If a high-level language is used, the program may directly invoke system calls, which are predefined functions.

System calls provide a well-defined interface between user programs and the operating system. Programs make requests by calling specific functions, and the operating system responds by executing the requested service and returning a result.
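As an added illustration (a minimal sketch, not part of the original notes), the following C program invokes two Unix system calls, write() and getpid(), through their standard library wrappers:

    #include <stdio.h>      /* printf() */
    #include <string.h>     /* strlen() */
    #include <unistd.h>     /* write(), getpid(): system call wrappers */

    int main(void) {
        const char *msg = "hello from user space\n";

        /* write() traps into the kernel, which performs the
           privileged I/O on file descriptor 1 (standard output). */
        write(1, msg, strlen(msg));

        /* getpid() is another system call: the kernel returns
           the identifier of the calling process. */
        printf("my process id is %d\n", (int)getpid());
        return 0;
    }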
- Protection: System calls are used to access privileged operations that are not available to normal user programs. The operating system uses this privilege to protect the system from malicious or unauthorized access.
- Kernel Mode: When a system call is made, the program is temporarily switched from user mode to kernel mode. In kernel mode, the program has access to all system resources, including hardware, memory, and other processes.
- Context Switching: A system call requires a context switch, which involves saving the state of the current process and switching to kernel mode to execute the requested service. This can introduce overhead, which can impact system performance.
- Error Handling: System calls can return error codes to indicate problems with the requested service. Programs must check for these errors and handle them appropriately.
- Synchronization: System calls can be used to synchronize access to shared resources, such as files or network connections. The operating system provides synchronization mechanisms, such as locks or semaphores, to ensure that multiple programs can access these resources safely.
System Call Advantages

- Access to hardware resources: System calls allow programs to access hardware resources such as disk drives, printers, and network devices.
- Memory management: System calls provide a way for programs to allocate and deallocate memory, as well as access memory-mapped hardware devices.
- Process management: System calls allow programs to create and terminate processes, as well as manage inter-process communication.
- Security: System calls provide a way for programs to access privileged resources, such as the ability to modify system settings or perform operations that require administrative permissions.
- Standardized interface: System calls provide a standardized interface for programs to interact with the operating system, ensuring consistency and compatibility across different hardware platforms and operating system versions.
Examples of a System Call in Windows and Unix

System calls for Windows and Unix come in many different forms. These are listed in the table below:

Category                   Windows                            Unix
Process Control            CreateProcess()                    fork()
                           ExitProcess()                      exit()
                           WaitForSingleObject()              wait()
File Manipulation          CreateFile()                       open()
                           ReadFile()                         read()
                           WriteFile()                        write()
                           CloseHandle()                      close()
Device Management          SetConsoleMode()                   ioctl()
                           ReadConsole()                      read()
                           WriteConsole()                     write()
Information Maintenance    GetCurrentProcessID()              getpid()
                           SetTimer()                         alarm()
                           Sleep()                            sleep()
Communication              CreatePipe()                       pipe()
                           CreateFileMapping()                shmget()
                           MapViewOfFile()                    mmap()
Protection                 SetFileSecurity()                  chmod()
                           InitializeSecurityDescriptor()     umask()
                           SetSecurityDescriptorGroup()       chown()
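To illustrate the Unix process-control calls from the table, here is a small added C sketch (not from the original notes) using fork(), exit() and wait():

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>   /* wait() */
    #include <unistd.h>     /* fork(), getpid() */

    int main(void) {
        pid_t pid = fork();          /* create a child process */

        if (pid < 0) {
            perror("fork failed");   /* fork() returns -1 on error */
            exit(1);
        } else if (pid == 0) {
            /* child: receives a copy of the parent's address space */
            printf("child, pid=%d\n", (int)getpid());
            exit(0);                 /* exit() terminates the child */
        } else {
            wait(NULL);              /* parent: block until the child ends */
            printf("parent: child %d finished\n", (int)pid);
        }
        return 0;
    }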
A process is basically a program in execution. A process is defined as an entity which represents the basic unit of work to be implemented in the system. To put it in simple terms, we write our computer program in a text file, and when we execute this program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and becomes a process, it can be divided into four sections: stack, heap, text and data. The following image shows a simplified layout of a process inside main memory.

Process Components

1. Stack: The process stack contains temporary data such as method/function parameters, the return address and local variables.
2. Heap: This is memory that is dynamically allocated to a process during its run time.
3. Text: This includes the current activity, represented by the value of the Program Counter and the contents of the processor's registers.
4. Data: This section contains the global and static variables.
Process Life Cycle

When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

Start: This is the initial state when a process is first started/created.

Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. Processes may come into this state after the Start state, or while running, when interrupted by the scheduler to assign the CPU to some other process.
Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.

Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.

Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.

[Figure: Process state diagram with New, Ready, Run, Block/Wait, Suspend Ready, Suspend Wait and Terminate states]

Process Control Block

A Process Control Block is a data structure that contains information about the process related to it. The process control block is also known as a task control block, an entry of the process table, etc. It is very important for process management, as the data structuring for processes is done in terms of the PCB. It also defines the current state of the operating system.
Structure of the Process Control Block

The process control block stores many data items that are needed for efficient process management. Some of these data items are explained with the help of the given diagram.
Some of these datsThe following are the data items ~
Proce
Focess State
i nated
ng, waiting or term
new, ready, rum
‘This specifies the process stat
Th
Number
shows the number of the particular process.
Pro,
iam Counter
‘This contains the address of the next instruction thi executed in the process
needs to be
index registers. stack
Repisters
; , fe accumulators
‘This specifies the registers that are used by the process. They may inelude accumu
Pointers, general purpose registets ee. :
List or Open Filey
‘These are the different files that are associated with the process
CPU Scheduling Information ned in the
‘The process priority, pointers to scheduling queues ete, is the CPU scheduling information that is contained in t
SD PCB. This may also include any other scheduling parameters
Memory Management Information
The memory management information includes the page tables of
system used. It also contains the value of the base registers, limit registers etc.
1 the segment tables depending on the memory
VO Status Information
‘This information includes the list of /O devices used by the process. the list of files ete,
Accounting informatica
‘The time limits, account numbers, amount of CPU used, process numbers
information,
Ac. are all a part of the PCB accounti
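As an added sketch, a PCB can be pictured as a C structure; the field names below are hypothetical, and real kernels (for example Linux's task_struct) store far more:

    /* A simplified, hypothetical Process Control Block. */
    #define MAX_OPEN_FILES 16

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        enum proc_state state;           /* process state              */
        int           pid;               /* process number             */
        unsigned long program_counter;   /* next instruction address   */
        unsigned long registers[16];     /* saved CPU registers        */
        int           open_files[MAX_OPEN_FILES]; /* list of open files */
        int           priority;          /* CPU scheduling information */
        unsigned long page_table_base;   /* memory-management info     */
        unsigned long cpu_time_used;     /* accounting information     */
    };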
Location of the Process Control Block
The process control block is kept in a memory area that is protected from the normal user access. This is done because
it contains important process information. Some of the operating systems place the PCB at the beginning of the kernel
stack for the process, as it is a safe location.
Operations on the Process

1. Creation: Once the process is created, it comes into the ready queue (main memory) and is ready for execution.

2. Scheduling: Out of the many processes present in the ready queue, the operating system chooses one process and starts executing it. Selecting the process which is to be executed next is known as scheduling.

3. Execution: Once the process is scheduled for execution, the processor starts executing it. The process may move to the blocked or wait state during its execution; in that case the processor starts executing the other processes.

4. Deletion/Killing: Once the purpose of the process is served, the OS kills the process. The context of the process (its PCB) is deleted and the process gets terminated by the operating system.
Cooperating Process in Operating System
Pre-requisites: Process Synchronization

In an operating system, everything revolves around the process and how the process goes through several different states. In this article, we are going to discuss one type of process, called a Cooperating Process. In the operating system there are two types of processes:

Independent Processes: Independent processes are those processes whose task is not dependent on any other processes.

Cooperating Processes: Cooperating processes are those processes that depend on other processes and work together to achieve a common task in an operating system. These processes interact with each other by sharing resources such as CPU, memory, and I/O devices to complete the task.
So now let's discuss the concept of cooperating processes and how they are used in operating systems.
- Inter-Process Communication (IPC): Cooperating processes interact with each other via Inter-Process Communication (IPC). As they interact with each other and share resources with one another, synchronization is required and the possibility of deadlock has to be managed. To implement IPC there are many options, such as pipes, message queues, semaphores, and shared memory.
- Concurrent execution: These cooperating processes execute simultaneously, which is arranged by the operating system scheduler, which selects a process from the ready queue to go to the running state. Because of the concurrent execution of several processes, the completion time decreases.
- Deadlocks: As cooperating processes share their resources, there might be a deadlock condition. Deadlock means a situation where each process holds a resource while waiting for a resource held by another process, so that none of them can proceed. To avoid deadlocks, operating systems typically use algorithms such as the Banker's algorithm to manage and allocate resources to processes.
- Process scheduling: Cooperating processes run simultaneously, but after a context switch, which process should be next on the CPU to execute is decided by the scheduler. The scheduler does this by using several scheduling algorithms such as Round-Robin and FCFS.
In conclusion, cooperating processes are an essential unit for increasing concurrent execution, and because of them the performance of the overall system increases.

Critical Section

A critical section refers to a segment of code that is executed by multiple concurrent processes and that accesses shared resources. These resources may include shared memory, files, or other system resources that can only be accessed by one thread or process at a time, to avoid data inconsistency.

The critical section must be executed as an atomic operation, which means that once one thread or process has entered the critical section, all other threads or processes must wait until the executing thread or process exits the critical section. The purpose of synchronization mechanisms is to ensure that only one thread or process can execute the critical section at a time.

The concept of a critical section is central to synchronization in computer systems, as it is necessary to ensure that multiple threads or processes can execute concurrently without interfering with each other. Various synchronization mechanisms such as semaphores, mutexes, monitors, and condition variables are used to implement critical sections and ensure that shared resources are accessed in a mutually exclusive manner.

The use of critical sections in synchronization can be advantageous in improving the performance of concurrent systems, as it allows multiple threads or processes to work together without interfering with each other. However, care must be taken in designing and implementing critical sections, as incorrect synchronization can lead to race conditions and deadlocks.

When more than one process tries to access the same code segment, that segment is known as the critical section. The critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data variables.

In simple terms, a critical section is a group of instructions/statements or a region of code that needs to be executed atomically, such as code accessing a resource (file, input or output port, global data, etc.). In concurrent programming, if one thread tries to change the value of shared data at the same time as another thread tries to read the value (i.e. a data race across threads), the result is unpredictable. Access to such shared variables (shared memory, shared files, shared ports, etc.) has to be synchronized.

Few programming languages have built-in support for synchronization. It is critical to understand the importance of race conditions while writing kernel-mode programs (a device driver, kernel thread, etc.), since the programmer can directly access and modify kernel data structures.
Three properties should be satisfied by any solution to the critical-section problem:

1. Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

3. Bounded Waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Two general approaches are used to handle critical sections in operating systems:

1. Preemptive kernels: A preemptive kernel allows a process to be preempted while it is running in kernel mode.

2. Non-preemptive kernels: A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. A non-preemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.
Critical Section Problem
The use of critical sections in a program can cause a number of issues, including:
Deadlock: When two or more threads or processes wait for each other to release a critical section, it can
result in a deadlock situation in which none of the threads or processes can move. Deadlocks can be difficult
to detect and resolve, and they can have a significant impact on a program's performance and reliability.
Starvation: When a thread or process is repeatedly prevented from entering a critical section, it can result in starvation, in which the thread or process is unable to progress. This can happen if the critical section is held for an unusually long period of time, or if a high-priority thread or process is always given priority when entering the critical section.

Overhead: When using critical sections, threads or processes must acquire and release locks or semaphores, which can take time and resources. This may reduce the program's overall performance.
[Figure: General structure of a process with a critical section: entry section, critical section, exit section, remainder section]
Mutual Exclusion
Mutual Exclusion is a property of process synchronization that states that "no two processes can exist in
the critical section at any given point of time”. The term was first coined by Dijkstra. Any process
synchronization technique being used must satisfy the property of mutual exclusion, without which it would
not be possible to get rid of a race condition.
The need for mutual exclusion comes with concurrency. There are several kinds of concurrent execution:
1. Interrupt handlers
2. Interleaved, preemptively scheduled processes/threads
3. Multiprocessor clusters, with shared memory
4. Distributed systems
Mutual exclusion methods are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections.

The requirement of mutual exclusion is that when process P1 is accessing a shared resource R1, another process should not be able to access resource R1 until process P1 has finished its operation with resource R1.

Examples of such resources include files, I/O devices such as printers, and shared data structures.
Conditions Required for Mutual Exclusion

According to the following four criteria, mutual exclusion is applicable:

1. When using shared resources, it is important to ensure mutual exclusion between the various processes. There cannot be two processes running simultaneously in either of their critical sections.
2. It is not advisable to make assumptions about the relative speeds of the processes.
3. A process that is outside its critical section must not block another process's access to the critical section.
4. The critical section must become accessible to waiting processes in a finite amount of time; multiple processes should never be kept waiting in an infinite loop.
Approaches To Implementing Mutual Exclusion

1. Software method: Leave the responsibility to the processes themselves. These methods are usually highly error-prone and carry high overheads.
2. Hardware method: Special-purpose machine instructions are used for accessing shared resources. This method is faster but cannot provide a complete solution; hardware solutions cannot guarantee the absence of deadlock and starvation.
3. Programming language method: Provide support through the operating system or through the programming language.

Requirements of Mutual Exclusion

- At any time, only one process is allowed to enter its critical section.
- The solution is implemented purely in software on a machine.
- A process remains inside its critical section for a bounded time only.
- No assumption can be made about the relative speeds of asynchronous concurrent processes.
- A process cannot prevent any other process from entering a critical section.
- A process must not be indefinitely postponed from entering its critical section.
In order to understand mutual exclusion, let's take an example.
Lock Variable
This is the simplest synchronization mechanism: a software mechanism implemented in user mode. It is a busy-waiting solution which can be used for more than two processes.

In this mechanism, a lock variable lock is used. Two values of lock are possible, either 0 or 1. Lock value 0 means that the critical section is vacant, while lock value 1 means that it is occupied.

A process which wants to get into the critical section first checks the value of the lock variable. If it is 0, the process sets the value of lock to 1 and enters the critical section; otherwise, it waits.
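A minimal C-style sketch of the lock-variable mechanism (added for illustration; note that this naive scheme is not actually safe, because the test and the set are two separate steps, so two processes can both see lock == 0 before either sets it):

    int lock = 0;        /* 0 = critical section vacant, 1 = occupied */

    void enter_critical_section(void) {
        while (lock == 1)
            ;            /* busy-wait until the section is vacant      */
        lock = 1;        /* race window: another process may run here  */
        /* ... critical section ... */
    }

    void exit_critical_section(void) {
        lock = 0;        /* mark the section vacant again */
    }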
Synchronization Hardware

Process synchronization refers to coordinating the execution of processes so that no two processes can have access to the same shared data and resources at once. A problem occurs when two processes running simultaneously share the same data or variable.

There are three hardware approaches to solving process synchronization problems:

1. Test and Set
2. Swap
3. Unlock and Lock
Test and Set

In Test and Set, the shared variable is a lock that is initialized to false. The operation returns whatever value is currently held and sets the lock to true. Mutual exclusion is ensured here: while the lock is set to true, other processes are not able to enter, and their loops continue. However, after one process is completed, any other process can go in, as no queue is maintained.
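A minimal C-style sketch of the Test and Set approach (added for illustration; on real hardware the test-and-set is a single atomic instruction, shown here as plain C only to convey the idea):

    int lock = 0;    /* shared lock: 0 = free, 1 = held */

    /* Returns the old value of *target and sets it to 1.
       Real hardware executes this atomically. */
    int test_and_set(int *target) {
        int old = *target;
        *target = 1;
        return old;
    }

    void enter_section(void) {
        while (test_and_set(&lock) == 1)
            ;        /* spin while the lock is already held */
        /* ... critical section ... */
    }

    void exit_section(void) {
        lock = 0;    /* release the lock */
    }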
Swap

In this algorithm, instead of directly setting the lock to true, a key is first set to true and then swapped with the lock. Similar to Test and Set, when there are no processes in the critical section, the lock turns to false and allows other processes to enter. Hence, mutual exclusion and progress are ensured, but bounded waiting is not ensured, for the very same reason that no queue is maintained.
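An added C-style sketch of the Swap approach, where each process exchanges a local key with the shared lock (again, the exchange must be atomic on real hardware):

    int lock = 0;   /* shared: 0 = free, 1 = held */

    /* Exchange two values; atomic on real hardware. */
    void swap(int *a, int *b) {
        int tmp = *a;
        *a = *b;
        *b = tmp;
    }

    void enter_section(void) {
        int key = 1;             /* the key is first set to true */
        while (key == 1)
            swap(&lock, &key);   /* spin until we swap out a 0   */
        /* ... critical section ... */
    }

    void exit_section(void) {
        lock = 0;
    }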
Unlock and Lock

In addition to Test and Set, this algorithm uses waiting[i] to check whether there are any processes in the wait state. The processes are placed in a ready queue with respect to the critical section. Unlike the previous algorithms, on exit it does not simply set the lock to false; it first checks the ready queue for any waiting processes. If there are no processes waiting in the ready queue, the lock is then set to false and any process can enter.
Wait and Signal

Two standard operations, wait and signal, are defined on the semaphore. Entry to the critical section is controlled by the wait operation, and exit from a critical region is taken care of by the signal operation. The wait and signal operations are also called P and V operations.
Semaphores
The semaphore is just a normal integer. A semaphore cannot be negative; the least value for a semaphore is zero (0). The maximum value of a semaphore can be anything. Semaphores usually have two operations, and these two operations decide the values of the semaphore.
The two Semaphore Operations are:
1. Wait ()
2. Signal ()
Wait Semaphore Operation

The wait operation decides the condition for the process to enter the critical section or wait. The wait operation has many different names:

1. Sleep Operation
2. Down Operation
3. Decrease Operation
4. P Function (the most common alias for the wait operation)

The wait operation works on the basis of the semaphore or mutex value. If the semaphore value is greater than zero (positive), then the process can enter the critical section area, and the semaphore value is reduced on entry. If the semaphore value is equal to zero, then the process has to wait for a process to exit the critical section area.

The wait operation only acts until the process enters the critical section; once the process has entered, the P function or wait operation has no further job to do.
value is
The Signal Semaphore Operation is used to update the value of Semaphore. The Semaphore
Updated when the new processes are ready to enter the Critical Section.
The Signal Operation is also known as:
Ds wate up Operation
2. Up Operation
3. Increase Operation
4. V Function (most important alias name for signal operation)
We know that the semaphore value is decreased by one in the wait operation when the process left
the critical state. So, to counter balance the decreased number 1 we use signal operation which
increments the semaphore value. This induces the critical section to receive more and more processes
into it.
The most important part is that this Signal Operation or V Function is executed only when the process
comes out of the critical section. The value of semaphore cannot be incremented before the exit of
process from the critical section
hi.
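A compact C-style sketch of the wait (P) and signal (V) operations (an added illustration; a real implementation must make each operation atomic, for example via POSIX sem_wait()/sem_post()):

    typedef struct {
        int value;       /* non-negative semaphore count */
    } semaphore;

    /* P operation: spin until a unit is available, then take it.
       Must execute atomically in a real system. */
    void wait_op(semaphore *s) {
        while (s->value <= 0)
            ;            /* busy-wait while no units are available    */
        s->value--;      /* decrement on entry to the critical section */
    }

    /* V operation: release a unit on exit from the critical section. */
    void signal_op(semaphore *s) {
        s->value++;
    }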
Types of Semaphores
There are two types of Semaphores.
They are:
1. Binary Semaphore
Here, there are only two values of the semaphore in the binary semaphore concept: 1 and 0. If the value of the binary semaphore is 1, then the process has the capability to enter the critical section area. If the value of the binary semaphore is 0, then the process does not have the capability to enter the critical section area.

2. Counting Semaphore
Here, there are two sets of values of the semaphore in the counting semaphore concept: values greater than or equal to one, and the value equal to zero. If the value of the counting semaphore is greater than or equal to 1, then the process has the capability to enter the critical section area. If the value of the counting semaphore is 0, then the process does not have the capability to enter the critical section area.

This is a brief description of the binary and counting semaphores.
Advantages of a Semaphore
- Semaphores are machine independent, since their implementation and code are written in the machine-independent code area of the microkernel.
- They strictly enforce mutual exclusion and let processes enter the critical section one at a time (in the case of binary semaphores).
- With the use of semaphores, no resources are lost to busy waiting, since processor time is not needed to verify that a condition is met before allowing a process access to the critical area.
- Semaphores allow flexible management of resources.
- They forbid several processes from entering the critical area at once. They are significantly more effective than other synchronization approaches, since mutual exclusion is made possible this way.
Disadvantages of a Semaphore
- Due to the employment of semaphores, it is possible for high-priority processes to reach the critical area before low-priority processes.
- Because semaphores are a little complex, it is important to design the wait and signal operations in a way that avoids deadlocks.
- Programming a semaphore is very challenging, and there is a danger that mutual exclusion won't be achieved.
- The wait() and signal() operations must be carried out in the appropriate order to prevent deadlocks.
Classical Problems of Synchronization
In this section, we will discuss various classical problems of synchronization. Semaphores can be used in other synchronization problems besides mutual exclusion.

Below are some of the classical problems depicting flaws of process synchronization in systems where cooperating processes are present. We will discuss the following three problems:

1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers-Writers Problem
Bounded Buffer Problem

- Because the buffer pool has a maximum size, this problem is often called the bounded buffer problem.
- This problem is generalised in terms of the Producer-Consumer problem, where a finite buffer pool is used to exchange messages between producer and consumer processes.
- The solution to this problem is to create two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffers respectively (a sketch of this scheme follows the list below).
- Here producers produce a product and consumers consume the product, but both can use one of the containers each time.
- The main complexity of this problem is that we must maintain the count of both the empty and full containers that are available.
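The following added C sketch shows the "full"/"empty" counting-semaphore scheme with POSIX semaphores and a mutex; the buffer size N and the item values are assumed for illustration:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                 /* buffer capacity (assumed) */

    int buffer[N];
    int in = 0, out = 0;        /* next slot to fill / to empty       */
    sem_t empty_slots;          /* counts empty buffer slots, starts N */
    sem_t full_slots;           /* counts full buffer slots, starts 0  */
    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    void produce(int item) {
        sem_wait(&empty_slots);          /* wait for an empty slot */
        pthread_mutex_lock(&mutex);      /* protect the buffer     */
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);           /* one more full slot     */
    }

    int consume(void) {
        sem_wait(&full_slots);           /* wait for a full slot   */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);          /* one more empty slot    */
        return item;
    }

    int main(void) {
        sem_init(&empty_slots, 0, N);    /* all slots empty at start */
        sem_init(&full_slots, 0, 0);
        produce(42);
        printf("consumed %d\n", consume());
        return 0;
    }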
Dining Philosophers Problem
- The dining philosophers problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.
- There are five philosophers sitting around a table, with chopsticks/forks kept beside them and a bowl of rice in the centre. When a philosopher wants to eat, he uses two chopsticks: one from his left and one from his right. When a philosopher wants to think, he keeps down both chopsticks at their original places.
The Readers Writers Problem
- In this problem there are some processes (called readers) that only read the shared data and never change it, and there are other processes (called writers) which may change the data in addition to, or instead of, reading it.
- There are various types of readers-writers problems, most centred on the relative priorities of readers and writers.
- The main complexity of this problem comes from allowing more than one reader to access the data at the same time.
Monitors
Monitors are a higher-level synchronization construct that simplifies process synchronization by
providing a high-level abstraction for data access and synchronization. Monitors are implemented as
programming language constructs, typically in object-oriented languages, and provide mutual
exclusion, condition variables, and data encapsulation in a single construct.
1. A monitor is essentially a module that encapsulates a shared resource and provides access to that resource through a set of procedures. The procedures provided by a monitor ensure that only one process can access the shared resource at any given time, and that processes waiting for the resource are suspended until it becomes available.

2. Monitors are used to simplify the implementation of concurrent programs by providing a higher-level abstraction that hides the details of synchronization. Monitors provide a structured way of sharing data and synchronization information, and eliminate the need for complex synchronization primitives such as semaphores and locks.

3. The key advantage of using monitors for process synchronization is that they provide a simple, high-level abstraction that can be used to implement complex concurrent systems. Monitors also ensure that synchronization is encapsulated within the module, making it easier to reason about the correctness of the system.

However, monitors have some limitations. For example, they can be less efficient than lower-level synchronization primitives such as semaphores and locks, as they may involve additional overhead due to their higher-level abstraction. Additionally, monitors may not be suitable for all types of synchronization problems, and in some cases lower-level primitives may be required for optimal performance.
The monitor is one of the ways to achieve process synchronization. The monitor is supported by programming languages to achieve mutual exclusion between processes. For example, Java provides synchronized methods, and the wait() and notify() constructs.

1. A monitor is the collection of condition variables and procedures combined together in a special kind of module or package.
2. Processes running outside the monitor can't access the internal variables of the monitor, but they can call the procedures of the monitor.
3. Only one process at a time can execute code inside a monitor.
Advantages of Monitors: Monitors have the advantage of making parallel programming easier and less error-prone than using techniques such as semaphores.

Disadvantages of Monitors: Monitors have to be implemented as part of the programming language. The compiler must generate code for them. This gives the compiler the additional burden of having to know what operating system facilities are available to control access to critical sections in concurrent processes. Some languages that do support monitors are Java, C#, and Visual Basic.
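Monitors are language constructs, but their behaviour can be sketched in C with a mutex plus a condition variable (an added illustration; the names and the guarded counter are hypothetical):

    #include <pthread.h>

    /* A hypothetical "monitor" guarding one shared counter: the mutex
       gives mutual exclusion over the internal data, and the condition
       variable lets a caller wait inside the monitor until count > 0. */
    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;
        int             count;
    } counter_monitor;

    counter_monitor m = { PTHREAD_MUTEX_INITIALIZER,
                          PTHREAD_COND_INITIALIZER, 0 };

    void monitor_increment(void) {
        pthread_mutex_lock(&m.lock);      /* enter the monitor       */
        m.count++;
        pthread_cond_signal(&m.nonzero);  /* wake one waiting caller */
        pthread_mutex_unlock(&m.lock);    /* leave the monitor       */
    }

    void monitor_decrement(void) {
        pthread_mutex_lock(&m.lock);
        while (m.count == 0)              /* wait until count > 0    */
            pthread_cond_wait(&m.nonzero, &m.lock);
        m.count--;
        pthread_mutex_unlock(&m.lock);
    }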
Process Schedulers in Operating System
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into the executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Categories in Scheduling
Scheduling falls into one of two categories:
Non-preemptive: In this case, a process's resource cannot be taken before the
process has finished running. When a running process finishes and transitions to
a waiting state, resources are switched.
Preemptive: In this case, the OS assigns resources to a process for a predetermined period of time. The process switches from running state to ready state, or from waiting state to ready state, during resource allocation. This switching happens because the CPU may give other processes priority and substitute the currently active process with a higher-priority process.
There are three types of process schedulers.
Long Term or Job Scheduler
It brings the new process to the 'Ready State'. It controls the Degree of Multiprogramming, i.e., the number of processes present in a ready state at any point in time. It is important that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks are those which use much of their time for input and output operations, while CPU-bound processes are those which spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two. Long-term schedulers operate at a high level and are typically used in batch-processing systems.
Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it into the running state. Note: the short-term scheduler only selects the process to schedule; it doesn't itself load the process for running. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring no starvation due to processes with high burst times.

The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (Ready to Running state); context switching is done by the dispatcher only. A dispatcher does the following:

1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping (moving processes from main memory to disk and vice versa). Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted the available memory, requiring memory to be freed up. It is helpful in maintaining a perfect balance between the I/O-bound and CPU-bound processes. It reduces the degree of multiprogramming.
Comparison among Schedulers

[Table: comparison of the long-term scheduler (a job scheduler), the short-term scheduler (a CPU scheduler), and the medium-term scheduler (a process-swapping scheduler)]
Context Switching
In order for a process execution to be continued from the same point at a later time, context switching is a mechanism to store and restore the state, or context, of a CPU in the Process Control Block. A context switcher makes it possible for multiple processes to share a single CPU using this method. A multitasking operating system must include context switching among its features.

The state of the currently running process is saved into the process control block when the scheduler switches the CPU from executing one process to another. The state used to set the PC, registers, etc. for the process that will run next is then loaded from its own PCB. After that, the second process can start executing.
The context of a process that is saved and restored includes:
- Program counter
- Scheduling information
- The base and limit register values
- Currently used registers
- Changed state
- I/O state information
- Accounting information
Thread in Operating System
A thread refers to a single sequential flow of activities being executed in a process; it is also known as the thread of execution or the thread of control. Thread execution is possible within any OS's process, and a process can have several threads. A distinct program counter, a stack of activation records, and control blocks are used by each thread of the same process. A thread is frequently described as a lightweight process.

A process can be easily broken down into numerous different threads. Multiple tabs in a browser, for example, can be considered threads. MS Word employs many threads: to format text in one thread, receive input in another thread, and so on.
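An added sketch (not from the original notes) of creating and joining threads with POSIX threads in C:

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function; arg carries its identifier. */
    void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running in the shared address space\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;

        /* Creating a thread is cheaper than fork(): no new address space. */
        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);

        pthread_join(t1, NULL);   /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }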
Difference between Process and Thread
1. A process is heavyweight and resource-intensive, while a thread is lightweight and takes fewer resources than a process.
2. Process switching needs interaction with the operating system, while thread switching does not.
3. In multiple processing environments, each process executes the same code but has its own memory and file resources, while all threads of a process can share the same set of open files and child processes.
4. Multiple processes without using threads use more resources, while multiple threaded processes use fewer resources.
5. Each process operates independently of the others, while one thread can read, write, or change another thread's data.

Why Do We Need Thread?
- Creating a new thread in a current process requires significantly less time than creating a new process.
- Threads can share common data without needing to communicate with each other.
- When working with threads, context switching is faster.
- Terminating a thread requires less time than terminating a process.
Types of Threads
1. User-Level Thread

User-level threads are ignored by the operating system; they are simple to implement and are implemented by the user. If a user-level thread performs a blocking operation, the entire process is blocked. The kernel-level thread is completely unaware of the user-level threads: user-level threads are managed as single-threaded processes by the kernel. Threads in Java, POSIX threads, and others are examples.
Pros
- User threads are easier to implement than kernel threads.
- Threads at the user level can be used in operating systems that do not support threads at the kernel level.
- They are more effective and efficient.
- Context switching takes less time than with kernel threads.
- They do not necessitate any changes to the operating system.
- The representation of user-level threads is relatively straightforward: the register, stack, PC, and mini thread control blocks live in the user-level process's address space.
- Threads may be easily created, switched, and synchronised without kernel intervention.

Cons

- Threads at the user level are not coordinated with the kernel.
- The entire process is halted if a thread causes a page fault.
2. Kernel-Level Thread
The kernel-level thread is recognised by the operating system. Each thread and process in the kernel-level threading model has its own thread control block and process control block in the system. The kernel-level thread is implemented by the operating system: the kernel is aware of all threads and controls them. The kernel provides system calls for user-space thread creation and management. Kernel threads are more complex to build than user threads, and the kernel thread's context switch time is longer. If one kernel thread performs a blocking operation, the execution of another thread can continue.

Solaris is an example.
Pros
- The kernel is completely aware of all kernel-level threads.
- The scheduler may decide to devote more CPU time to processes with a large number of threads.
- Applications that block frequently should use kernel-level threads.

Cons

- All threads are managed and scheduled by the kernel, which adds overhead.
- Kernel threads are more complex to build than user threads.
- Kernel-level threads are slower than user-level threads.
Components of Threads
1. Stack space
2. Register set
3. Program counter
Benefits of Threads
- Enhanced system throughput: The number of jobs completed per unit time increases when the process is divided into numerous threads and each thread is viewed as a job. As a result, the system's throughput likewise increases.
- Effective use of a multiprocessor system: You can schedule multiple threads on multiple processors when you have many threads in a single process.
- Faster context switch: The thread context switching time is shorter than the process context switching time. The process context switch adds to the CPU's workload.
- Responsiveness: When a process is divided into many threads, and each of them completes its execution, the process can be responded to as quickly as possible.
- Communication: Multiple-thread communication is straightforward because the threads use the same address space, while communication between two processes is limited to a few exclusive communication mechanisms.
- Resource sharing: Code, data, and files, for example, can be shared among all threads in a process. Note that threads cannot share the stack or registers; each thread has its own stack and registers.

Difference between User-Level & Kernel-Level Thread
1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
2. User-level threads are implemented by a thread library at the user level; the operating system directly supports the creation of kernel threads.
3. A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
4. Multi-threaded applications using only user-level threads cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.
Schedule Processes
Scheduling is important in many different computer environments. One of the most important areas is scheduling which programs will run on the CPU. This task is handled by the Operating System (OS) of the computer, and there are many different ways in which we can choose to configure programs.

Process scheduling allows the OS to allocate CPU time to each process. Another important reason to use a process scheduling system is that it keeps the CPU busy at all times. This allows you to get a lower response time for programs.

Considering that there may be hundreds of programs that need to run, the OS must launch a program, stop it, switch to another program, and so on. The way the OS switches the CPU from one running program to another is called "context switching". If the OS keeps context switching programs in and out of the provided CPUs, it can give the user the impression that he or she can run any programs he or she wants, all at once.

So now that we know we can run one program at a time on a given CPU, and we know we can swap programs in and out using a context switch, how do we choose which programs to run, and in what order?

That's where scheduling comes in. First, you determine a metric, saying something like "the amount of time until completion". We will define this metric as "the time interval between which a job enters the system and when it is completed". Second, you choose a schedule that minimizes the metric. We want our tasks to end as soon as possible.
CPU Scheduling Algorithm

CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU remains idle, the OS has selected at least one of the processes available in the ready queue.

In multiprogramming, if the long-term scheduler selects multiple I/O-bound processes, then most of the time the CPU remains idle. The function of an effective program is to improve resource utilization.

If most processes change their status from running to waiting, there may always be a chance of failure in the system. So in order to minimize this, the OS needs to schedule tasks in order to make full use of the CPU and avoid the possibility of deadlock.
Objectives of a Process Scheduling Algorithm:
+ Utilization of the CPU at the maximum level.
+ Keep the CPU as busy as possible.
+ Allocation of the CPU should be fair.
+ Throughput should be maximum, i.e., the number of processes that complete their execution per time unit should be maximized.
+ Minimum turnaround time, i.e., the time taken by a process to finish execution should be the least.
+ There should be a minimum waiting time, and a process should not starve in the ready queue.
+ Minimum response time, i.e., the time at which a process produces its first response should be as small as possible.
What are the different terminologies to take care of in any CPU scheduling algorithm?
+ Arrival Time: the time at which the process arrives in the ready queue.
+ Completion Time: the time at which the process completes its execution.
+ Burst Time: the time required by a process for CPU execution.
+ Turn Around Time: the difference between completion time and arrival time.
  Turn Around Time = Completion Time - Arrival Time
+ Waiting Time (W.T.): the difference between turn around time and burst time.
  Waiting Time = Turn Around Time - Burst Time
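For a quick worked example (the numbers are illustrative, not from the notes): if a process arrives at time 2, needs a CPU burst of 4 units, and completes at time 10, then Turn Around Time = 10 - 2 = 8 and Waiting Time = 8 - 4 = 4.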
Things to take care of while designing a CPU scheduling algorithm:
Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors. Many criteria have been proposed for comparing CPU scheduling algorithms. The criteria include the following:
+ CPU utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the system load.
+ Throughput: The number of processes performed and completed per time unit is called throughput. Throughput may vary depending on the length or duration of the processes.
+ Turnaround Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the total of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
+ Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It only affects the waiting time of a process, i.e., the time the process spends waiting in the ready queue.
+ Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are shown to the user. Another measure, therefore, is the time from the submission of a request until the first response is produced. This measure is called response time.
What are the different types of CPU scheduling algorithms?
There are mainly two types of scheduling methods:
+ Preemptive Scheduling: Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
+ Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.
Different types of CPU Scheduling Algorithms
1. First Come First Serve (FCFS):
FCFS is considered the simplest of all operating system scheduling algorithms. The first come first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first, and it is implemented using a FIFO queue.

Characteristics of FCFS:
+ FCFS supports non-preemptive and preemptive CPU scheduling algorithms.
+ Tasks are always executed on a first-come, first-serve basis.
+ FCFS is easy to implement and use.
+ This algorithm is not very efficient in performance, and its wait time is quite high.
Advantages of FCFS:
+ Easy to implement
+ First come, first serve method
Disadvantages of FCFS:
+ FCFS suffers from the convoy effect.
+ The average waiting time is much higher than with the other algorithms.
+ FCFS is very simple and easy to implement, but it is not very efficient.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article
On First come, First serve Scheduling.
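As a minimal sketch of the idea (not part of the source article; the process data is illustrative), the following Python function serves processes in arrival order and applies the turnaround and waiting time formulas given earlier:

    def fcfs(processes):
        """processes: list of (name, arrival_time, burst_time) tuples."""
        processes = sorted(processes, key=lambda p: p[1])   # serve in arrival order
        clock = 0
        results = []
        for name, arrival, burst in processes:
            start = max(clock, arrival)        # CPU may sit idle until the process arrives
            completion = start + burst
            turnaround = completion - arrival  # Turn Around Time = Completion - Arrival
            waiting = turnaround - burst       # Waiting Time = Turnaround - Burst
            results.append((name, completion, turnaround, waiting))
            clock = completion
        return results

    for row in fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]):
        print(row)

Running this makes the convoy effect easy to see: a long first burst inflates the waiting time of every process behind it.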
2. Shortest Job First (SJF):
Shortest Job First (SJF) is a scheduling algorithm that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time for other processes waiting to be executed.
Characteristics of SJF:
+ Shortest Job First has the advantage of having the minimum average waiting time among all operating system scheduling algorithms.
+ Each task is associated with the unit of time it needs to complete.
+ It may cause starvation if shorter processes keep coming. This problem can be solved using the concept of ageing.
Advantages of Shortest Job First:
+ As SJF reduces the average waiting time, it is better than the first come first serve scheduling algorithm.
+ SJF is generally used for long-term scheduling.
Disadvantages of SJF:
+ One of the demerits of SJF is starvation.
+ Many times it becomes complicated to predict the length of the upcoming CPU request.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Shortest Job First.
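A minimal non-preemptive SJF sketch in Python (illustrative process tuples; it assumes burst times are known in advance, which, as noted above, is rarely true in practice):

    def sjf(processes):
        """Non-preemptive SJF: among arrived processes, run the shortest burst next.
        processes: list of (name, arrival_time, burst_time)."""
        pending = sorted(processes, key=lambda p: p[1])
        clock, results = 0, []
        while pending:
            ready = [p for p in pending if p[1] <= clock]
            if not ready:                         # CPU idle: jump to the next arrival
                clock = pending[0][1]
                continue
            job = min(ready, key=lambda p: p[2])  # shortest burst among the ready jobs
            pending.remove(job)
            name, arrival, burst = job
            clock += burst
            results.append((name, clock, clock - arrival, clock - arrival - burst))
        return results

Replacing min with max in the selection step turns this sketch into the Longest Job First policy described next.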
3. Longest Job First (LJF):
The Longest Job First (LJF) scheduling algorithm is just the opposite of Shortest Job First (SJF): as the name suggests, this algorithm is based on the fact that the process with the largest burst time is processed first. Longest Job First is non-preemptive in nature.
Characteristics of LJF:
+ Among all the processes waiting in the waiting queue, the CPU is always assigned to the process having the largest burst time.
+ If two processes have the same burst time, then the tie is broken using FCFS, i.e., the process that arrived first is processed first.
+ LJF CPU scheduling can be of both preemptive and non-preemptive types.
Advantages of LJF:
+ No other process can be scheduled until the longest job or process executes completely.
+ All the jobs or processes finish at approximately the same time.
Disadvantages of LJF:
+ Generally, the LJF algorithm gives a very high average waiting time and average turn-around time for a given set of processes.
+ This may lead to the convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article
on the Longest job first scheduling.
4. Priority Scheduling:
Preemptive priority CPU scheduling is a preemptive scheduling algorithm that works based on the priority of a process. In this algorithm, the scheduler ranks processes by importance, meaning that the most important process must be executed first. In the case of a conflict, that is, when more than one process has equal priority, the algorithm falls back on FCFS (First Come First Serve).
Characteristics of Priority Scheduling:
+ Schedules tasks based on priority.
+ When a higher-priority job arrives while a task with lower priority is executing, the higher-priority job takes the place of the lower-priority one, and the latter is suspended until the execution is complete.
+ The lower the number assigned, the higher the priority level of a process.

Advantages of Priority Scheduling:
+ The average waiting time is less than with FCFS.
+ Less complex.
Disadvantages of Priority Scheduling:
+ One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem: a low-priority process may have to wait a long time to get scheduled onto the CPU. This condition is called the starvation problem.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Priority Preemptive Scheduling algorithm.
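A simple per-time-unit simulation of preemptive priority scheduling (a sketch with made-up process data, assuming lower numbers mean higher priority, as stated above, and FCFS tie-breaking):

    def preemptive_priority(processes):
        """processes: list of (name, arrival, burst, priority) tuples."""
        remaining = {name: burst for name, _, burst, _ in processes}
        info = {name: (arrival, prio) for name, arrival, _, prio in processes}
        clock, completion = 0, {}
        while remaining:
            ready = [n for n in remaining if info[n][0] <= clock]
            if not ready:                 # nothing has arrived yet: CPU idles one unit
                clock += 1
                continue
            # highest priority wins; ties broken by earlier arrival (FCFS)
            current = min(ready, key=lambda n: (info[n][1], info[n][0]))
            remaining[current] -= 1       # run the chosen process for one time unit
            clock += 1
            if remaining[current] == 0:
                completion[current] = clock
                del remaining[current]
        return completion

    print(preemptive_priority([("P1", 0, 4, 2), ("P2", 1, 2, 1), ("P3", 2, 3, 3)]))

Re-evaluating the choice on every time unit is what makes the simulation preemptive: a newly arrived higher-priority process immediately displaces the running one.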
5. Round Robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed time slot. It is the preemptive version of the First Come First Serve CPU scheduling algorithm. The Round Robin algorithm generally focuses on the time-sharing technique.
Characteristics of Round Robin:
+ It is simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
+ It is one of the most widely used methods in CPU scheduling.
+ It is considered preemptive, as processes are given the CPU for only a limited time slice.
Advantages of Round robin:
+ Round robin seems to be fair as every process gets an equal share of CPU.
+ The newly created process is added to the end of the ready queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Round Robin Scheduling algorithm.
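A minimal Round Robin sketch in Python (illustrative data; all processes are assumed to arrive at time 0 to keep the queue logic short):

    from collections import deque

    def round_robin(processes, quantum):
        """processes: list of (name, burst) tuples; quantum: fixed time slot."""
        queue = deque(processes)
        clock, completion = 0, {}
        while queue:
            name, burst = queue.popleft()
            slice_ = min(quantum, burst)              # run one quantum or until done
            clock += slice_
            if burst > slice_:
                queue.append((name, burst - slice_)) # preempted: back of the queue
            else:
                completion[name] = clock
        return completion

    print(round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=2))

The choice of quantum is the key design decision: too large and this degenerates into FCFS, too small and context-switch overhead dominates.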
6. Shortest Remaining Time First (SRTF):
Shortest Remaining Time First is the preemptive version of the Shortest Job First algorithm discussed earlier, where the processor is allocated to the job closest to completion. In SRTF, the process with the smallest amount of time remaining until completion is selected to execute.
Characteristics of Shortest Remaining Time First:
+ The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead is not counted.
+ The context switch is done many more times in SRTF than in SJF and consumes the CPU's valuable processing time. This adds to its processing time and diminishes its advantage of fast processing.
Advantages of SRTF:
+ In SRTF, short processes are handled very fast.
+ The system also requires very little overhead, since it only makes a scheduling decision when a process completes or a new process is added.
Disadvantages of SRTF:
+ Like shortest job first, it also has the potential for process starvation.
+ Long processes may be held off indefinitely if short processes are continually added.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Shortest Remaining Time First.
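A per-time-unit SRTF sketch in Python (illustrative data; again assuming burst times are known in advance):

    def srtf(processes):
        """Shortest Remaining Time First (preemptive SJF), simulated per time unit.
        processes: list of (name, arrival, burst)."""
        remaining = {name: burst for name, _, burst in processes}
        arrival = {name: a for name, a, _ in processes}
        clock, completion = 0, {}
        while remaining:
            ready = [n for n in remaining if arrival[n] <= clock]
            if not ready:
                clock += 1
                continue
            current = min(ready, key=lambda n: remaining[n])  # least time left wins
            remaining[current] -= 1
            clock += 1
            if remaining[current] == 0:
                completion[current] = clock   # a short newcomer may preempt mid-burst
                del remaining[current]
        return completion

    print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))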
7. Longest Remaining Time First (LRTF):
Longest Remaining Time First is the preemptive version of the Longest Job First scheduling algorithm. This scheduling algorithm is used by the operating system to schedule incoming processes systematically. It schedules first those processes which have the longest processing time remaining.
Characteristics of Longest Remaining Time First:
+ Among all the processes waiting in the waiting queue, the CPU is always assigned to the process having the largest remaining burst time.
+ If two processes have the same burst time, then the tie is broken using FCFS, i.e., the process that arrived first is processed first.
+ LRTF CPU scheduling can be of both preemptive and non-preemptive types.
Advantages of LRTF:
+ No other process can execute until the longest task executes completely.
+ All the jobs or processes finish at approximately the same time.
Disadvantages of LRTF:
+ This algorithm gives a very high average waiting time and average turn-around time for a given set of processes.
+ This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Longest Remaining Time First.

Multiple Queue Scheduling:
Processes in the ready queue can be divided into different classes, where each class has its own scheduling needs. For example, a common division is into foreground (interactive) processes and background (batch) processes. These two classes have different scheduling needs. For this kind of situation, Multilevel Queue Scheduling is used.
[Diagram: system processes, interactive processes, and batch processes are placed in separate queues, ordered from high priority to low priority.]
The description of the processes in the above diagram is as follows:
+ System Processes: The CPU itself has its own processes to run, generally termed System Processes.
+ Interactive Processes: An interactive process is a type of process in which there should be the same type of interaction (with the user).
+ Batch Processes: Batch processing is generally a technique in the operating system that collects programs and data together in the form of a batch before processing starts.
Advantages of multilevel queue scheduling:
+ The main merit of the multilevel queue is that it has a low scheduling overhead.
Disadvantages of multilevel queue scheduling:
+ Starvation problem.
+ It is inflexible in nature.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel Queue Scheduling.
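A sketch of a static multilevel queue in Python (queue names and jobs are illustrative): a lower-priority queue runs only when all higher-priority queues are empty, and a job never changes queue.

    from collections import deque

    def multilevel_queue(system_jobs, interactive_jobs, batch_jobs):
        """Each job is (name, burst); all jobs are assumed present at time 0.
        Queues are fixed: system > interactive > batch."""
        queues = [deque(system_jobs), deque(interactive_jobs), deque(batch_jobs)]
        clock, order = 0, []
        while any(queues):
            q = next(q for q in queues if q)   # highest-priority non-empty queue
            name, burst = q.popleft()          # FCFS within a queue (a real system
            clock += burst                     # could use a different policy per queue)
            order.append((name, clock))
        return order

    print(multilevel_queue([("sys1", 2)], [("ui1", 3)], [("batch1", 6)]))

The starvation problem mentioned above is visible here: batch jobs never run while higher queues keep receiving work.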
10. Multilevel Feedback Queue Scheduling:
Multilevel Feedback Queue (MLFQ) CPU scheduling is like multilevel queue scheduling, but in this algorithm processes can move between the queues. It is therefore much more efficient than multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:
+ In a plain multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the system, and processes are not allowed to move between queues.
+ Because the processes are permanently assigned to a queue, that setup has the advantage of low scheduling overhead, but on the other hand it has the disadvantage of being inflexible. Multilevel feedback queue scheduling removes this restriction by letting processes move between queues.
Advantages of multilevel feedback queue scheduling:
+ It is more flexible.
+ It allows different processes to move between different queues.
Disadvantages of multilevel feedback queue scheduling:
+ It also produces CPU overheads.
+ It is the most complex of these algorithms.
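A compact MLFQ sketch in Python (quantum values and job data are illustrative): a job that exhausts its quantum is demoted to the next lower queue, which is what lets the scheduler adapt to process behaviour.

    from collections import deque

    def mlfq(jobs, quanta=(2, 4, 8)):
        """jobs: list of (name, burst); all assumed to arrive at time 0.
        One queue per quantum; the lowest level keeps cycling round-robin style."""
        levels = [deque(jobs)] + [deque() for _ in quanta[1:]]
        clock, completion = 0, {}
        while any(levels):
            i = next(i for i, q in enumerate(levels) if q)  # highest non-empty level
            name, burst = levels[i].popleft()
            run = min(quanta[i], burst)
            clock += run
            if burst > run:                                  # quantum exhausted: demote
                levels[min(i + 1, len(levels) - 1)].append((name, burst - run))
            else:
                completion[name] = clock
        return completion

    print(mlfq([("short", 2), ("medium", 5), ("long", 12)]))

Short, interactive-style jobs finish in the top queue, while CPU-bound jobs sink to lower queues with larger quanta.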
Multiple-Processor Scheduling
In multiple-processor scheduling, multiple CPUs are available and hence load sharing becomes possible. However, multiple-processor scheduling is more complex than single-processor scheduling. In multiple-processor scheduling there are cases when the processors are identical, i.e., homogeneous, in terms of their functionality; in that case we can use any available processor to run any process in the queue.
Why is multiple-processor scheduling important?
Multiple-processor scheduling is important because it enables a computer system to
perform multiple tasks simultaneously, which can greatly improve overall system
performance and efficiency.
How does multiple-processor scheduling work?
Multiple-processor scheduling works by dividing tasks among multiple processors in a
computer system, which allows tasks to be processed simultaneously and reduces the overall
time needed to complete them.
Approaches to Multiple-Processor Scheduling -
One approach is when all the scheduling decisions and I/O processing are handled by a single processor, while the other processors execute only user code. This is simple and reduces the need for data sharing. This entire scenario is called Asymmetric Multiprocessing. A second approach uses Symmetric Multiprocessing, where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.

Processor Affinity -
Processor affinity means a process has an affinity for the processor on which it is currently running. When a process runs on a specific processor, there are certain effects on the cache memory: the data most recently accessed by the process populates the cache for that processor, and as a result successive memory accesses by the process are often satisfied in the cache. Now, if the process migrates to another processor, the contents of the cache memory must be invalidated for the first processor and the cache for the second processor must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid migrating processes from one processor to another and instead try to keep a process running on the same processor. This is known as PROCESSOR AFFINITY. There are two types of processor affinity:
1. Soft Affinity - When an operating system has a policy of attempting to keep a process running on the same processor but not guaranteeing that it will do so, this situation is called soft affinity.
2. Hard Affinity - Hard affinity allows a process to specify a subset of processors on which it may run. Some systems, such as Linux, implement soft affinity but also provide system calls like sched_setaffinity() that support hard affinity.
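On Linux, Python exposes this hard-affinity system call through os.sched_setaffinity; a minimal sketch (Linux-only; the CPU numbers are illustrative):

    import os

    pid = 0                                   # 0 means "the calling process"
    print(os.sched_getaffinity(pid))          # CPUs the process may currently run on
    os.sched_setaffinity(pid, {0})            # pin the process to CPU 0 (hard affinity)
    print(os.sched_getaffinity(pid))          # now reports {0}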
Load Balancing -
Load balancing is the practice of keeping the workload evenly distributed across all processors in an SMP system. Load balancing is necessary only on systems where each processor has its own private queue of processes eligible to execute; on systems with a common run queue, load balancing is unnecessary, because once a processor becomes idle it immediately extracts a runnable process from the common run queue. On SMP (symmetric multiprocessing) systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor; otherwise one or more processors will sit idle while other processors have high workloads and lists of processes awaiting the CPU. There are two general approaches to load balancing:
1. Push Migration - In push migration, a specific task routinely checks the load on each processor, and if it finds an imbalance, it evenly distributes the load by moving processes from overloaded processors to idle or less busy processors.
2. Pull Migration - Pull migration occurs when an idle processor pulls a waiting task from a busy processor for its execution.

Scheduling in Real-Time Systems
Real-time systems are systems that carry real-time tasks. These tasks need to be performed
immediately with a certain degree of urgency. In particular, these tasks are related to control
of certain events (or) reacting to them. Real-time tasks can be classified as hard real-time
tasks and soft real-time tasks.
A hard real-time task must be performed by a specified time; missing that time could lead to huge losses. In soft real-time tasks, a specified deadline can be missed, because the task can be rescheduled (or) completed after the specified time.
In real-time systems, the scheduler is considered the most important component, and it is typically a short-term task scheduler. The main focus of this scheduler is to reduce the response time of each process rather than to handle deadlines.
If a preemptive scheduler is used, the real-time task may need to wait until the currently running task's time slice completes. In the case of a non-preemptive scheduler, even if the highest priority is allocated to the task, it needs to wait until the current task completes. That task can be slow (or) of lower priority, which can lead to a longer wait.