OPERATING SYSTEM
Recommended Books
2
Operating Systems
William Stallings
Operating System Concepts
Abraham Silberschatz
Contents
3
Introduction to Operating System
Operating System Structures
Processes
Threads
CPU Scheduling
Deadlocks
Main Memory
Virtual Memory
File System
Introduction
4
What is an Operating System?
Have you ever used one?
Why is it important?
Who uses an Operating System?
What is an Operating System
5
An operating system (OS) is software that acts as an
intermediary between computer hardware and the computer
user.
It provides a set of services and functionalities that enable the
hardware components to communicate with the software and
vice versa.
The primary goals of an operating system include:
Process Management: The OS oversees the execution of processes,
which are programs in execution. It manages tasks such as process
scheduling, creation, and termination.
6
Memory Management: The OS is responsible for managing the
computer's memory, ensuring that processes have the necessary space to
run and that memory is allocated and deallocated as needed.
File System Management: Operating systems provide a file system to
organize and store data. This includes managing files, directories, and
storage devices.
Device Management: The OS interacts with and manages various
hardware devices, such as printers, disk drives, and input/output devices,
facilitating communication between software and hardware components.
7
Security and Protection: Operating systems implement security measures
to protect the system and its data. This includes user authentication, access
control, and data encryption.
User Interface: Many operating systems provide a user interface, which
can be command-line based or graphical, allowing users to interact with
the computer system.
Networking: Operating systems support networking capabilities, allowing
computers to communicate with each other over networks. This includes
managing network connections, protocols, and configurations.
Popular Operating Systems
8
Microsoft Windows,
macOS,
Linux,
Unix.
Each type of operating system has its strengths and
is designed for specific use cases, ranging from
personal computers to servers and embedded
systems.
What is an Operating System
9
OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and
fair resource use
OS is a control program
Controls execution of programs to prevent errors and
improper use of the computer
What is an Operating System
10
No universally accepted definition.
“Everything a vendor ships when you order an
operating system” is a good approximation
But varies wildly
“The one program running at all times on the
computer” is the kernel. Everything else is either a
system program (ships with the operating system)
or an application program.
Kernel
11
The term "kernel" in the context of an operating
system refers to the core component that serves as
the central part of the operating system.
It is the essential part that manages the system's
resources and provides a bridge between the
computer hardware and the applications running on
it.
What is an Operating System
12
A program that acts as an intermediary between a
user of a computer and the computer hardware
Operating system goals:
Execute user programs and make solving user
problems easier
Make the computer system convenient to use
Use the computer hardware in an efficient manner
Computer System Structure
13
Components of Computer System
14
Computer Startup
15
bootstrap program is loaded at power-up or
reboot
Typically stored in ROM or EPROM, generally known
as firmware
Initializes all aspects of system
Loads operating system kernel and starts execution
Computer System Organization
16
Computer-system operation:
One or more CPUs, device controllers connect through
common bus providing access to shared memory.
Concurrent execution of CPUs and devices competing for
memory cycles.
Computer-System Operation
17
I/O devices and the CPU can execute concurrently.
Each device controller is in charge of a particular device type.
Each device controller has a local buffer.
CPU moves data from/to main memory to/from local buffers.
I/O is from the device to local buffer of controller.
Device controller informs CPU that it has finished its operation
by causing an interrupt.
Common Functions of Interrupts
18
Interrupt transfers control to the interrupt service routine
generally, through the interrupt vector, which contains the
addresses of all the service routines.
Interrupt architecture must save the address of the interrupted
instruction.
Incoming interrupts are disabled while another interrupt is being
processed to prevent a lost interrupt.
A trap is a software-generated interrupt caused either by an error
or a user request.
An operating system is interrupt driven.
Interrupt Handling
19
The operating system preserves the state of the
CPU by storing registers and the program counter.
Determines which type of interrupt has occurred:
polling
vectored interrupt system
Separate segments of code determine what action
should be taken for each type of interrupt.
How is the string displayed on the screen?
Operating System Services
49
An operating system provides an environment for
the execution of programs.
It makes certain services available to programs and
to the users of those programs.
Operating System Services
51
User interface - Almost all operating systems have
a user interface (UI)
Varies between Command-Line (CLI), Graphical User
Interface (GUI), and Batch
Program execution - The system must be able to
load a program into memory and run that
program, and end its execution either normally or
abnormally (indicating an error)
Operating System Services
52
I/O operations - A running program may require I/O,
which may involve a file or an I/O device
File-system manipulation - The file system is of
particular interest.
Programs need to read and write files and directories, create
and delete them, search them, list file information, and
manage permissions.
Operating System Services
53
Communications – Processes may exchange information, on
the same computer or between computers over a network
Communications may be via shared memory or through message
passing (packets moved by the OS)
Error detection – OS needs to be constantly aware of possible
errors
May occur in the CPU and memory hardware, in I/O devices, in user
program
For each type of error, OS should take the appropriate action to
ensure correct and consistent computing
Debugging facilities can greatly enhance the user’s and
programmer’s abilities to efficiently use the system
Operating System Services (for itself)
54
A set of OS functions exists for ensuring the efficient
operation of the system itself via resource sharing
Resource allocation - When multiple users or multiple
jobs are running concurrently, resources must be allocated
to each of them
Many types of resources - Some (such as CPU cycles, main
memory, and file storage) may have special allocation code,
others (such as I/O devices) may have general request and
release code
Accounting - To keep track of which users use how
much and what kinds of computer resources
55
Protection and security - The owners of information stored
in a multiuser or networked computer system may want to
control use of that information, concurrent processes should
not interfere with each other
Protection involves ensuring that all access to system resources is
controlled
Security of the system from outsiders requires user authentication,
extends to defending external I/O devices from invalid access
attempts
If a system is to be protected and secure, precautions must be
instituted throughout it. A chain is only as strong as its weakest link.
User and OS Interface
56
There are several ways for users to interface with
the operating system
Command Interpreter
Graphical User Interface
Touch Screen Interface
User and OS Interface
57
A command interpreter, often referred to as a shell
in Unix-like systems or the Command
Prompt/PowerShell in Windows, is a program that
acts as an interface between the user and the
operating system.
Its primary role is to interpret and execute
commands entered by the user.
User and OS Interface
58
User-friendly desktop metaphor interface
Usually mouse, keyboard, and monitor
Icons represent files, programs, actions, etc
Various mouse buttons over objects in the interface cause various
actions (provide information, options, execute a function, open a
directory, known as a folder)
Invented at Xerox PARC
Many systems now include both CLI and GUI interfaces
Microsoft Windows is GUI with CLI “command” shell
Apple Mac OS X has the “Aqua” GUI interface with UNIX kernel
underneath and shells available
Solaris is CLI with optional GUI interfaces (Java Desktop, KDE)
User and OS Interface
60
Touch Screen Interface - Because either a
command-line interface or a mouse-and-keyboard
system is impractical for most mobile systems,
smartphones and handheld tablet computers
typically use a touch-screen interface. Here, users
interact by making gestures on the touch screen
System Calls
61
A system call is a programming interface provided by the
operating system that allows user-level processes or programs
to request services from the kernel or the core of the operating
system.
System calls provide a way for applications to interact with
the underlying hardware and access essential operating system
functionalities in a controlled and secure manner.
System Calls
62
Key categories of system calls:
Process Control: Creation, termination, and communication between
processes.
File Management: File manipulation, such as open, close, read, and
write operations.
Device Management: Control and communication with hardware
devices.
Information Maintenance: Gathering system information, such as
time, date, and system configuration.
Communication: Interprocess communication and networking
operations.
System Calls
63
Types of System Calls:
Process Control System Calls:
fork(): Create a new process.
exit(): Terminate the calling process.
wait(): Wait for the child process to exit.
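As a minimal sketch (not part of the slides), the following C fragment shows fork(), wait(), and exit() working together; the printed messages are illustrative only:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* create a new process */
        if (pid == 0) {                  /* child branch */
            printf("child: pid=%d\n", (int)getpid());
            exit(0);                     /* terminate the child */
        } else if (pid > 0) {            /* parent branch */
            wait(NULL);                  /* wait for the child to exit */
            printf("parent: child finished\n");
        }
        return 0;
    }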
System Calls
64
File Management System Calls:
open(): Open a file.
read(): Read data from a file.
write(): Write data to a file.
close(): Close a file.
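A hedged C example of these file-management calls (the file name "demo.txt" and its contents are placeholders, not taken from the slides):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* open (or create) a file for writing */
        int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0)
            return 1;
        write(fd, "hello\n", 6);                /* write 6 bytes to the file */
        close(fd);                              /* close the file descriptor */

        char buf[16];
        fd = open("demo.txt", O_RDONLY);        /* reopen for reading */
        ssize_t n = read(fd, buf, sizeof buf);  /* read back up to 16 bytes */
        close(fd);
        return n > 0 ? 0 : 1;
    }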
System Calls
65
Device Management System Calls:
ioctl(): Input/output control for devices.
read(), write(): Used for communication with devices.
Information Maintenance System Calls:
time(), date(): Get system time and date.
getpid(): Get process ID.
getuid(): Get user ID.
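A small illustrative C program for the information-maintenance calls listed above (on Unix-like systems time() returns both the date and the time; the output formatting here is only an example):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        time_t now = time(NULL);              /* seconds since the epoch */
        printf("time: %s", ctime(&now));      /* ctime() formats the date and time */
        printf("pid : %d\n", (int)getpid());  /* ID of this process */
        printf("uid : %d\n", (int)getuid());  /* ID of the owning user */
        return 0;
    }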
System Calls
66
Communication System Calls:
socket(): Create a new communication endpoint.
send(), recv(): Send and receive messages over a
network.
System Calls
67
Protection System Calls:
get file permissions
set file permissions
User Mode and Kernel Mode
69
System Calls
70
Execution of System Calls:
When a user-level program makes a system call, it
triggers a software interrupt or trap.
The CPU switches from user mode to kernel mode to
give the operating system control.
The kernel executes the requested system call on
behalf of the user-level process.
After completion, control is returned to the user-level
process.
What goes into an OS
71
OS Structure (Monolithic Structure)
72
Types of Operating Systems
73
Single-User vs. Multi-User
Single-Tasking vs. Multi-Tasking
Batch Processing vs. Time-Sharing
Real-Time Operating Systems (RTOS)
Types of Operating Systems
74
Operating systems can be categorized into various
types based on their characteristics, functionality,
and intended use.
common types of operating systems:
75
Single-User, Single-Tasking Operating Systems:
Designed to support only one user and one task at a
time.
Examples: MS-DOS (Microsoft Disk Operating
System), early versions of Apple DOS.
Single-User, Multi-Tasking Operating Systems:
Supports a single user but allows the concurrent
execution of multiple tasks.
Examples: Microsoft Windows (desktop versions),
macOS.
76
Multi-User Operating Systems:
Supports multiple users concurrently.
Allows multiple users to access the system simultaneously,
each with their processes and tasks.
Examples: UNIX, Linux, mainframe operating systems.
Batch Processing Operating Systems:
Designed for executing a sequence of jobs or tasks without
user interaction.
Users submit jobs, and the system processes them in batches.
Common in business and scientific environments.
Examples: IBM OS/360, VMS (Virtual Memory System).
77
Time-Sharing or Multi-User/Multi-Tasking
Operating Systems:
Enables multiple users to interact with the system
concurrently.
Time is shared among users, and each user gets a small
time slice for their task.
Provides the illusion of simultaneous execution.
Examples: UNIX, Linux, modern versions of
Microsoft Windows.
78
Real-Time Operating Systems (RTOS):
Designed for systems that require immediate response to
external events.
Common in embedded systems, control systems, and
applications with strict timing requirements.
Examples: VxWorks, QNX, FreeRTOS.
VxWorks is often used in embedded systems, industrial automation, and
aerospace applications.
It ensures that tasks are executed within predetermined time constraints,
making it suitable for systems where timing precision is critical.
79
Network Operating Systems:
Specifically designed to support networked computing.
Facilitates communication and resource sharing among multiple
computers on a network.
Examples: Novell NetWare, Windows Server, Linux server
editions.
Distributed Operating Systems:
Extends the capabilities of network operating systems to support
distributed computing.
Distributes computing tasks across multiple interconnected
computers.
Examples: Google's Chrome OS, Amoeba.
80
Mobile Operating Systems:
Designed for mobile devices such as smartphones and tablets.
Emphasizes power management, touch input, and support for
mobile apps.
Examples: Android, iOS, HarmonyOS.
Embedded Operating Systems:
Tailored for use in embedded systems with specific functions
or dedicated applications.
Found in devices like routers, medical equipment, and
consumer electronics.
Examples: VxWorks (in some routers), Embedded Linux.
81
OPERATING SYSTEM
RESOURCE MANAGEMENT
Lecture#13-14
82
How does the OS manage
system resources?
Resource management
83
Resource Sharing
CPU
Memory
Printer
Files
etc
84
The most important function of the OS is
resource management
The OS is in charge of managing these resources in an
efficient manner
Among all the resources present in the
system (memory, CPU, keyboard, etc.), the CPU is the
most critical resource
Sharing the CPU
85
Sharing the CPU
86
Who decides which
application should run
on the CPU ?
Other Shared Resources
91
Printer
Keyboard
Ram
Disk
Files on Disk
Networks
Multiprocessors
92
Multiple processor chips in a single system
Multiple cores in a single chip
Multiple threads in a single core
Race Conditions
94
Synchronization
95
Who should execute next?
96
Scheduling
The algorithm that determines which application
should execute next
Needs to be fair
Needs to be able to prioritize some applications over others
Memory Management
97
Executing Programs(Process)
98
Executing Programs(Process)
99
Sharing RAM
100
Single Contiguous Model
101
No Sharing
One process occupies RAM at a time
When one process completes, another process is allocated
RAM
Drawback
Process memory size is restricted by the RAM size
Partition Model
105
Partition Model
(allocating and deallocating processes)
106
Partition Model (fragmentation)
108
Partition Model
(finding the right fit)
109
Free space
New process
Free space
Free space
Partition Model (Deallocation)
113
Free space
New process
Free space
Free space
The OS should detect where there are three contiguous free blocks
and merge them to make one complete free block
Limitations
115
OPERATING SYSTEM
VIRTUAL MEMORY
Lecture#13-14
Virtual Memory
117
Virtual Memory
118
RAM is split into fixed-size
partitions called page frames
Page frames are typically 4 KB
Note: on newer processors the page size
may be different.
Virtual Memory
119
The process is split into blocks of
equal size
block size = page frame size
A per-process page table (stored in RAM) maps
the process's pages to page frames
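To make the page/offset split concrete, here is a small hedged C sketch (the virtual address 0x3A7F is an arbitrary example) showing how a virtual address is divided into a page number, which indexes the per-process page table, and an offset within the 4 KB page:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* 4 KB page frames, as assumed above */

    int main(void) {
        uint32_t vaddr  = 0x3A7F;                /* arbitrary example virtual address */
        uint32_t page   = vaddr / PAGE_SIZE;     /* index into the per-process page table */
        uint32_t offset = vaddr % PAGE_SIZE;     /* offset within the page frame */
        /* the page table would map `page` to a physical frame number;
           physical address = frame * PAGE_SIZE + offset */
        printf("page %u, offset 0x%X\n", page, offset);   /* prints: page 3, offset 0xA7F */
        return 0;
    }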
Virtual Memory
121
Introduction to Process
133
Early computers allowed only one program to be
executed at a time.
This program had complete control of the system
and had access to all the system’s resources.
In contrast, contemporary computer systems allow
multiple programs to be loaded into memory and
executed concurrently.
Introduction to Process
134
A process is a program in execution.
A process is the unit of work in a modern
computing system.
The status of the current activity of a process is
represented by the value of the program counter
and the contents of the processor’s registers.
The memory layout of a process
135
Process state
137
As a process executes, it changes state
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to
occur
ready: The process is waiting to be assigned to a
processor
terminated: The process has finished execution
Introduction to Process
138
Process Control Block (PCB)
139
Each process is represented in the operating system
by a process control block (PCB)
It contains many pieces of information associated
with a specific process, including these:
Process state: The state may be new, ready, running,
waiting, halted, and so on.
Program counter: The counter indicates the address
of the next instruction to be executed for this process.
Process Control Block (PCB)
140
CPU Registers: They include accumulators, index
registers, stack pointers, and general-purpose
registers
CPU-scheduling information: This information
includes a process priority, pointers to scheduling
queues
Memory-management information
Accounting information
I/O status information.
Process Control Block (PCB)
141
Information associated with each process
Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
Accounting information
I/O status information
Threads
142
A thread is a lightweight process.
The process model discussed so far has implied
that a process is a program that performs a single
thread of execution
For example, when a process is running a word-
processor program, a single thread of instructions is
being executed.
This single thread of control allows the process to
perform only one task at a time.
Example
143
Virtual Address map of Process
144
Virtual Address map of Process
145
Where does the kernel
reside in memory?
146
Kernel Data about a Process
150
Kernel Data about a Process
151
152
The objective of multiprogramming is to have
some process running at all times so as to
maximize CPU utilization.
The objective of time sharing is to switch a CPU
core among processes so frequently that users can
interact with each program while it is running.
153
To meet these objectives, the process scheduler
selects an available process (possibly from a set of
several available processes) for program execution
on a core.
Each CPU core can run one process at a time.
154
For a system with a single CPU core, there will
never be more than one process running at a time
Whereas a multicore system can run multiple
processes at one time.
If there are more processes than cores, excess
processes will have to wait until a core is free and
can be rescheduled.
Process Scheduling
155
Process scheduling queues play a crucial role in the
operating system's task of managing and executing
processes efficiently.
These queues are data structures used to organize
and prioritize processes waiting to be executed by
the CPU.
The scheduling algorithm determines how
processes are selected from these queues for
execution.
Process Scheduling Queue
156
The common types of process scheduling queues:
Job Queue:
This queue holds all the processes in the system,
whether they are in main memory or secondary
storage.
Processes in this queue are waiting to be brought into
main memory for execution.
Process Scheduling Queue
157
Ready Queue:
Processes that are residing in main memory and are
ready to execute are placed in the ready queue.
The CPU scheduler selects processes from this queue
for execution based on the scheduling algorithm.
Process Scheduling Queue
158
Device Queue:
Processes waiting for a particular I/O device or
resource are placed in device queues.
These queues are associated with specific I/O devices
such as printers, disk drives, etc.
Once the I/O operation is complete, the process is
moved back to the ready queue.
Process Scheduling Queue
159
Waiting Queue:
Processes that are waiting for a particular event to
occur (other than I/O) are placed in the waiting queue.
For example, a process waiting for a signal or a
message.
Process Scheduling
160
Suspended Queue:
Processes that are temporarily taken out of main
memory and placed in secondary storage due to low
priority or resource limitations are in the suspended
queue.
They may be brought back to the ready queue later
when the system resources become available.
Queueing-diagram representation of process
scheduling
161
Schedulers
Short-term scheduler is invoked very frequently
(milliseconds) (must be fast)
Long-term scheduler is invoked very infrequently (seconds,
minutes) (may be slow)
The long-term scheduler controls the degree of
multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
CPU-bound process – spends more time doing computations;
few very long CPU bursts
Addition of Medium Term Scheduling
Context Switch
When CPU switches to another process, the system
must save the state of the old process and load the
saved state for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is overhead; the system does no
useful work while switching
Time dependent on hardware support
Operations on Processes
165
Process Creation
Process Termination
Process Creation
A parent process creates child processes, which,
in turn, create other processes, forming a tree of
processes
Generally, a process is identified and managed via a
process identifier (pid)
Resource sharing
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Process Creation (Cont.)
Address space
Child duplicate of parent
Child has a program loaded into it
UNIX examples
fork system call creates new process
exec system call used after a fork to replace the
process’ memory space with a new program
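A minimal hedged sketch of this fork()-then-exec() pattern in C (the choice of the "ls -l" program is illustrative only):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                  /* duplicate the calling process */
        if (pid == 0) {
            /* child: replace its memory image with the "ls" program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");                /* reached only if exec fails */
            return 1;
        }
        wait(NULL);                          /* parent waits for the child */
        return 0;
    }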
168
What is address space?
What Does Address Space Mean?
169
An address space is a range of valid addresses in
memory that are available for a program or process.
It is the memory that a program or process can
access.
The memory can be either physical or virtual and is
used for executing instructions and storing data.
A tree of processes on a typical Solaris system
(one daemon is responsible for managing clients that
directly log onto the system; another is responsible for managing
clients that connect to the system by using ssh)
Process Creation
When a process creates a new process, two
possibilities for execution exist:
Parent and children execute concurrently
Parent waits until children terminate
Process Creation
There are also two address-space possibilities for
the new process:
The child process is a duplicate of the parent process (it
has the same program and data as the parent).
The child process has a new program loaded into it
Creating a process by Cloning
173
Cloning
The child process is an exact replica of the parent
fork() system call
Creating a process by Cloning
175
Creating a process by Cloning
176
Fork system call from OS perspective
177
Fork system call from OS perspective
178
Fork system call from OS perspective
179
Fork system call from OS perspective
180
Child Process in Ready Queue
181
Copying Page Table
182
Copying Page Table
183
Duplicating Page Tables
184
Copying Page Table
185
Example
186
Example
187
Process Termination
189
A process terminates when it finishes executing its
final statement and asks the operating system to
delete it by using the exit() system call.
At that point, the process may return a status value
(typically an integer) to its parent process (via the
wait() system call).
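A small hedged C example of this exit()/wait() interaction (the status value 42 is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0)
            exit(42);                        /* child returns a status value */
        int status;
        wait(&status);                       /* parent collects the status */
        if (WIFEXITED(status))
            printf("child exited with %d\n", WEXITSTATUS(status));  /* prints 42 */
        return 0;
    }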
Process Termination
190
All the resources of the process —including physical
and virtual memory, open files, and I/O buffers—are
deallocated and reclaimed by the operating system.
Process Termination
193
Termination can occur in other circumstances as
well:
A process can cause the termination of another process via
an appropriate system call
Usually, such a system call can be invoked only by the
parent of the process that is to be terminated.
Otherwise, a user— or a misbehaving application—could
arbitrarily kill another user’s processes.
Process Termination
194
A parent may terminate the execution of one of its
children for a variety of reasons, such as these:
The child has exceeded its usage of some of the resources
that it has been allocated (To determine whether this has occurred,
the parent must have a mechanism to inspect the state of its children.)
Process Termination
195
A parent may terminate the execution of one of its
children for a variety of reasons, such as these:
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow
a child to continue if its parent terminates
Interprocess Communication
199
Interprocess communication (IPC) is a mechanism
that allows processes in an operating system to
communicate with each other.
This communication is essential for
coordination,
synchronization,
data exchange
Interprocess Communication
201
Processes executing concurrently in the operating
system may be either independent processes or
cooperating processes.
A process is independent if it does not share data with
any other processes executing in the system.
Interprocess Communication
202
A process is cooperating if it can affect or be
affected by the other processes executing in the
system.
Interprocess Communication
203
There are several reasons for providing an
environment that allows process cooperation:
Information Sharing
Computational Speedup
Modularity
Interprocess Communication
204
Information Sharing:
Enable multiple applications to access and share the
same piece of information concurrently.
Example: Consider the functionality of copying and
pasting. Multiple applications may need access to the same
clipboard data.
To facilitate this, an environment supporting process
cooperation is required to allow concurrent access to shared
information.
Interprocess Communication
205
Computation Speedup:
Achieve faster execution of a particular task by
breaking it into subtasks and executing them in
parallel.
Multiple processing cores are necessary to achieve
parallel execution.
Example: If a computationally intensive task can be
divided into smaller subtasks, each subtask can be
executed concurrently on separate processing cores,
resulting in a speedup of the overall computation.
Interprocess Communication
206
Modularity:
Construct the system in a modular fashion by dividing
system functions into separate processes or threads.
This promotes a more organized and maintainable
system architecture.
Example: In a complex software system, different
modules or components may perform distinct functions.
By implementing these functions as separate processes
or threads, the system becomes more modular, making it
easier to understand, develop, and maintain.
Interprocess Communication
207
There are two fundamental models of IPC
Shared Memory
Message Passing
Interprocess Communication
208
Shared Memory Model: In the shared-memory
model, a region of memory that is shared by the
cooperating processes is established.
Processes can then exchange information by
reading and writing data to the shared region.
Interprocess Communication
209
Message-passing model: In the message-passing
model communication takes place by means of
messages exchanged between the cooperating
processes.
Interprocess Communication
210
211
The shared memory model is a method of Interprocess
Communication (IPC) where multiple processes share a
region of memory.
This shared memory region allows these processes to
communicate and exchange data more efficiently.
Processes can read and write to the shared memory space,
providing a fast and direct means of communication.
How the shared memory model works:
212
Memory Allocation:
A section of memory is allocated by an operating system or a
controlling process. This section is designated for sharing among
multiple processes.
Attaching Processes:
Processes that want to communicate with each other attach themselves
to the shared memory region. This attachment is typically done using
system calls or library functions provided by the operating system.
Data Exchange:
Processes can now read from and write to the shared memory region.
Any changes made by one process in the shared memory space are
immediately visible to other processes attached to the same region.
213
Synchronization:
Since multiple processes can access the shared
memory simultaneously, synchronization mechanisms are
used to prevent conflicts and ensure data consistency.
Detachment and Cleanup:
When processes are done with the shared memory,
they can detach from it.
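As an illustration only, a writer process using the POSIX shared-memory API might look like the following C sketch; the region name "/demo_shm", its 4096-byte size, and the message are assumptions, and some systems require linking with -lrt:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";
        int fd = shm_open(name, O_CREAT | O_RDWR, 0644); /* create/open the shared region */
        ftruncate(fd, 4096);                             /* set its size to one page */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);               /* attach the region */
        strcpy(p, "hello from the writer");              /* data exchange: write into the region */
        munmap(p, 4096);                                 /* detach */
        close(fd);
        /* a reader process would shm_open(name, O_RDONLY, 0), mmap it, and read the
           string; shm_unlink(name) removes the region when it is no longer needed */
        return 0;
    }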
Advantages of Shared Memory Model:
214
Speed: Shared memory provides a fast means of
communication since processes can directly read
and write to the shared space without the need for
additional message passing.
Simplicity: The shared memory model is relatively
straightforward to implement and understand.
215
Efficiency: It is efficient for large data transfers or
frequent communication between processes.
Challenges of Shared Memory Model:
216
Synchronization: Proper synchronization is
crucial to avoid race conditions and ensure data
consistency.
217
Communication Limited to Same Machine:
Shared memory is typically used for communication
between processes on the same machine.
If communication is needed across networked machines,
other IPC mechanisms like sockets may be more
appropriate.
IPC(Message Passing)
218
Message passing provides a mechanism to allow
processes to communicate and to synchronize their
actions without sharing the same address space.
It is particularly useful in a distributed
environment, where the communicating processes
may reside on different computers connected by a
network.
219
For example:
An Internet chat program could be designed so that
chat participants communicate with one another by
exchanging messages.
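On a single machine, an ordinary pipe is one simple message-passing mechanism; a hedged C sketch is shown below (the message text is illustrative). A networked chat program such as the one above would use sockets instead:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        pipe(fds);                               /* fds[0] = read end, fds[1] = write end */
        if (fork() == 0) {                       /* child: the sender */
            close(fds[0]);
            const char *msg = "hi from child";
            write(fds[1], msg, strlen(msg) + 1); /* "send" the message */
            close(fds[1]);
            return 0;
        }
        close(fds[1]);                           /* parent: the receiver */
        char buf[64];
        read(fds[0], buf, sizeof buf);           /* "receive" the message */
        printf("received: %s\n", buf);
        close(fds[0]);
        wait(NULL);
        return 0;
    }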
220
THREADS AND CONCURRENCY
Self Task:
221
Suppose one person takes 10 days to complete a
task. How many days would be required by
10 persons to complete the same task?
Consider this Example
222
Consider this Example
223
Consider this Example
224
225
226
Can we execute this program in all these four
processors to achieve parallelism?
227
Speeding up with multiple processes
Introduction to threads
228
Self test:
230
Significant Overheads: can we do better?
Thread
231
A Thread is a lightweight process
Thread Model
232
Thread Model
233
Introduction to threads
235
A thread is a basic unit of CPU utilization.
It comprises:
A Thread ID,
A program counter (PC),
A register set,
A stack.
Introduction to threads
236
It shares resources with other threads belonging to
the same process like
code section
data section
other operating-system resources (open files)
Introduction to threads
237
A traditional/heavyweight process has a single
thread of control.
If a process has multiple threads of control, it can
perform more than one task at a time
The term "thread of control" refers to the sequence of
instructions executed by the processor.
In a single-threaded process, only one set of instructions is
executed at any given time.
Process and Threads
238
Example of threads
239
A web browser might have one thread display
images or text while another thread retrieves data
from the network.
Example of threads
240
A word processor may have
a thread for displaying graphics,
another thread for responding to keystrokes from the user,
a third thread for performing spelling and grammar checking in the
background.
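A minimal hedged POSIX-threads sketch of this idea in C (compile with -pthread; the task strings are placeholders, not an actual word processor):

    #include <pthread.h>
    #include <stdio.h>

    /* each thread runs this function; the argument names the task it performs */
    static void *worker(void *arg) {
        printf("thread doing: %s\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "spell checking");    /* start two threads */
        pthread_create(&t2, NULL, worker, "handling keystrokes");
        pthread_join(t1, NULL);                                  /* wait for both to finish */
        pthread_join(t2, NULL);
        return 0;
    }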
242
Advantages of multithreaded programming can be
broken down into four major categories
Responsiveness
Resource sharing
Economy
Utilization of multiprocessor architectures
Types of threads
243
There are two primary types of threads:
user-level threads
kernel-level threads
User-level threads: thread management is done by a user-level
thread library; the kernel knows nothing about these threads
Kernel-level threads: threads directly supported by the kernel,
also known as lightweight processes
Types of threads
244
There are two primary types of threads:
user-level threads
These threads are managed by a user-level thread library
and are not directly supported by the operating system
kernel.
The kernel sees the entire process as a single entity with one
thread.
User-level threads are faster to create and manage but may
not take full advantage of multiprocessor systems since they
rely on a single kernel-level thread for execution.
Types of threads
245
There are two primary types of threads:
Kernel-level threads
These threads are managed by the operating system kernel.
The kernel is aware of each individual thread within a
process and can schedule them independently.
Kernel-level threads tend to be slower to create and
manage, but they can take better advantage of
multiprocessor systems.
Types of threads
246
User-level threads offer speed and simplicity but may not
fully exploit the potential of multiprocessor systems due to the
lack of kernel-level awareness of individual threads.
Kernel-level threads provide better control, allowing the
operating system to schedule threads independently. This
makes them more suitable for exploiting the capabilities of
multiprocessor systems but may involve slightly higher
overhead.
User Level Threads
247
Kernel Level Threads
248
249
Ultimately, there must exist a relationship between
user threads and kernel threads
Relationship b/w User & kernel thread
250
Three common ways to establish relationship
between the threads
1. Many-to-One Model
2. One-to-One Model
3. Many-to-Many Model
Threading Models
251
Many-to-One threading
model
Maps many user-level threads
to one kernel-level thread
Thread management is done by
the thread library
Threading Models
252
Disadvantage of Many-to-
One Model
The entire process will block if
a thread makes a blocking
system call
Because only one thread can
access the kernel at a time,
multiple threads are unable to
run in parallel on
multiprocessors.
Threading Models
253
One-to-One Model
The one-to-one model maps
each user thread to a kernel
thread.
It provides more concurrency
than the many-to-one model
by allowing another thread to
run when a thread makes a
blocking system call.
Threading Models
254
Many-to-Many Model
Multiplexes many user level
threads to a smaller or equal
number of kernel level threads
Number of kernel level threads
may be specific to either a
particular application or a
particular machine.
255
THREADING ISSUES
Lecture#29-31
Threading Issues
256
Issues to consider while designing multithreaded
programs
The fork() and exec() system calls
Signal Handling
Thread cancellation
Thread local storage
Scheduler activations
Threading Issues
257
Issues to consider while designing multithreaded
programs
fork(): used to create a child process
for a parent process, with a different process ID
exec(): the exec system call replaces the
contents of a process with a new program while
retaining the same process ID
258
How will the fork() and exec() system calls behave
when threading is involved?
Threading issues
259
The semantics of the fork() and exec() system calls
change in a multithreaded program.
Threading issues
260
Issue
If one thread in a process calls
fork(), does the new process
duplicate all threads, or is the new
process single- threaded?
Threading issues
261
Solution:
Some UNIX systems have chosen
to have two versions of fork(),
one that duplicates all threads
another that duplicates only the
thread that invoked the fork()
system call.
262
Which version should be
chosen, and when?
263
Which version should be chosen, and when?
Note: if a thread invokes the exec() system call, the
program specified in the parameter to exec() will
replace the entire process including all threads.
If exec() is called immediately after forking
Then duplicating all threads is unnecessary, as the
program specified in the parameters to exec()
will replace the process.
In this instance, duplicating only the calling
thread is appropriate.
264
Which version should be chosen, and when?
Note: if a thread invokes the exec() system call, the
program specified in the parameter to exec() will
replace the entire process including all threads.
If the new process does not call the exec() system call after
forking
Then the new process should duplicate all threads
Thread Cancellation
265
Thread cancellation refers to the process of
terminating the execution of a thread before it
completes its task.
Reasons for Thread Cancellation
Task completion
Timeouts
Error conditions
Resource management
User interruption
Threading Issues(Cancellation)
266
Thread cancellation is the task of terminating a
thread before it has completed
Example: if multiple threads are concurrently
searching through a database and one thread
returns the result, remaining threads might be
cancelled.
Threading Issues
267
Thread cancellation is the task of terminating a
thread before it has completed
Example: When a user presses a button on a
web browser that stops a web page from
loading any further, all threads loading the
page are cancelled.
Threading Issues
268
A thread that is to be canceled is often referred to as the target
thread.
Cancellation of a target thread may occur in two different
scenarios:
Asynchronous cancellation:
1. The thread is terminated immediately upon cancellation
request, regardless of its current state or what it might be
doing.
Note: This can lead to resource leaks or inconsistent program
states.
Threading Issues
269
A thread that is to be canceled is often referred to as the target
thread. Cancellation of a target thread may occur in two
different scenarios:
Deferred Cancellation:
1. The cancellation request is held pending until the thread
reaches a predefined cancellation point.
2. At these points, the thread can be safely canceled, allowing
for cleanup and resource release before termination.
Thread Cancellation
270
Where do the difficulties lie with
thread cancellation?
Thread Cancellation
271
In Situations where:
Asynchronous cancellation:
Resources have been allocated to a canceled thread
Thread is canceled while in the midst of updating data it is sharing
with other threads
Often, the operating system will reclaim system resources from a
canceled thread but will not reclaim all resources.
Therefore, canceling a thread asynchronously may not free a
necessary system-wide resource.
Thread Cancellation
272
With deferred cancellation:
one thread indicates that a target thread is to be
canceled,
But cancellation occurs only after the target thread has
checked a flag to determine whether or not it should be
canceled.
This allows a thread to check whether it should be
canceled at a point when it can be canceled safely.
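A hedged POSIX-threads sketch of deferred cancellation in C (deferred is the default cancel type; the sleep durations are arbitrary):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *target(void *arg) {
        (void)arg;
        for (;;) {
            /* ... do some work ... */
            pthread_testcancel();   /* explicit cancellation point: the thread is
                                       canceled here only if a request is pending */
            usleep(1000);           /* usleep is itself a cancellation point as well */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, target, NULL);
        sleep(1);
        pthread_cancel(t);          /* request (deferred) cancellation of the target thread */
        pthread_join(t, NULL);      /* wait until the cancellation has taken effect */
        puts("target thread canceled");
        return 0;
    }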
CPU SCHEDULING
Lecture#
CPU Scheduling
277
CPU scheduling refers to the switching between
processes that are being executed.
This switching ensures that CPU utilization is
maximized so that the computer is more productive.
CPU Scheduling
278
CPU Scheduling allows one
process to use the CPU while another process is
delayed (in standby) due to the unavailability of some
resource such as I/O, thus making full use of
the CPU.
The purpose of CPU Scheduling is to make the
system more efficient, faster, and fairer.
Types of Scheduling
279
281
CPU Burst: It refers to the amount of time a process
actively uses the CPU without performing any I/O
operations. During a CPU burst, the process executes
instructions and performs calculations.
I/O Burst: It refers to the period during which a
process is waiting for an I/O operation to complete.
During an I/O burst, the process is not actively using
the CPU but is waiting for some input/output
operation to finish, such as reading from or writing
to a disk, network, or other peripheral devices
First Come First Serve (FCFS)
286
The First Come First Serve (FCFS) algorithm
assigns the CPU to the process that arrives first in
the ready queue.
This means that the process that requests the CPU
for its execution first will get the CPU allocated
first.
This is managed through the FIFO queue.
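As a worked illustration (the burst times 24, 3, and 3 ms are hypothetical), this small C program computes FCFS waiting times for processes arriving at time 0 in the order P1, P2, P3; the average comes out to 17 ms:

    #include <stdio.h>

    int main(void) {
        /* hypothetical CPU burst times (ms) of three processes arriving at time 0 */
        int burst[] = {24, 3, 3};
        int n = 3, wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waits %d ms\n", i + 1, wait);  /* waiting time = sum of earlier bursts */
            total_wait += wait;
            wait += burst[i];
        }
        printf("average waiting time = %.2f ms\n",
               (double)total_wait / n);                /* (0 + 24 + 27) / 3 = 17 */
        return 0;
    }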
SJF (Shortest Job First, Without Preemption)
295
Round Robin Scheduling
302
Round Robin Scheduling
303
The Round Robin scheduling algorithm is specially
designed for time-sharing systems
Similar to FCFS scheduling, but preemption is
added to switch between processes
A small unit of time, called a time quantum or time
slice, is defined (generally from 10 to 100
milliseconds)
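As a small worked example with hypothetical values: for P1 = 24 ms, P2 = 3 ms, and P3 = 3 ms with a time quantum of 4 ms, the schedule is P1 (0-4), P2 (4-7), P3 (7-10), and then P1 in further 4 ms slices until it finishes at 30 ms. The waiting times are 6, 4, and 7 ms, giving an average of about 5.66 ms, compared with 17 ms under FCFS for the same bursts.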
Round Robin Scheduling
306
Duration of Timeslice
307
Round Robin Scheduling
308
Priority based Scheduling
309
A priority is associated with each process, and the
CPU is allocated to the process with the highest
priority.
Equal priority processes are scheduled in FCFS
order.
Priority based Scheduling
311
Starvation
312
Note: assuming that after every 15th CPU cycle P1, P2, P3, and P4 arrive
Dealing with Starvation
313
The scheduler adjusts the priority of processes to ensure
that they all eventually execute (this is known as aging)
Several techniques are possible. For example:
When a process starts, it is given a base priority
After every time slot, the priority of every process
except the one currently running is increased.
This ensures that even a low-priority process will eventually
execute
After a process executes, its priority is reset.
Types of Priorities
314
DEADLOCKS
Lecture#42-43
Deadlock
319
Deadlock in an OS refers to a situation where
two or more processes or threads
are unable to proceed because each is
waiting for another to release a resource.
OR
It’s a state where a group of processes
become stuck in a way that they can’t make
any progress.
Deadlock
320
Deadlock usually occurs in systems where multiple
processes compete for limited resources such as
CPU time,
memory
input/output devices.
Deadlock
321
Deadlocks
322
Deadlocks
323
Resource Allocation Graph
324
It is a directed graph
Used to model resource allocations in the system to
determine whether a deadlock occurs in the
system or not.
Necessary conditions for Deadlock
328
There are four necessary conditions for a Deadlock
to occur
1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait
Necessary conditions for Deadlock
329
There are four necessary conditions for a Deadlock to occur
1. Mutual exclusion: only one process can use a resource at a time.
2. Hold and wait: The process must hold at least one
resource while waiting to acquire additional resources.
3. No pre-emption: Resources cannot be pre-empted or
forcibly taken away from a process.
4. Circular wait: There must be a circular chain of two or
more processes, each of which is waiting for a
resource held by another process in the chain.
A minimal code illustration of these conditions follows.
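The illustration below is a hedged C sketch (using POSIX threads in place of processes and two mutexes as the resources; compile with -pthread) in which all four conditions hold and the two threads deadlock. Acquiring the locks in the same fixed order in both threads would break the circular wait:

    #include <pthread.h>
    #include <unistd.h>

    /* two resources, modelled as mutexes */
    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&A);   /* holds A ...              */
        sleep(1);
        pthread_mutex_lock(&B);   /* ... and waits for B      */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&B);   /* holds B ...              */
        sleep(1);
        pthread_mutex_lock(&A);   /* ... and waits for A: circular wait */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);    /* never returns: both threads are deadlocked */
        pthread_join(y, NULL);
        return 0;
    }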
Deadlock Conditions (Mutual Exclusion)
330
Only one process can use a resource at a time.
Each resource is either available or currently
assigned to exactly one process
Deadlock Conditions (Hold and Wait)
331
The process must hold at least one resource while
waiting to acquire additional resources.
Deadlock Conditions (No pre-emption)
332
Resources cannot be pre-
empted or forcibly taken
away from a process.
They must be explicitly
released by the process
holding them
Deadlock Conditions(Circular wait)
333
There must be a circular chain
of two or more processes,
each of which is waiting for a
resource held by another
process in the chain.
Handling Deadlock
334
Handling Deadlock (Prevention)
335
Deadlock prevention strategies aim to eliminate
one or more of the necessary conditions for
Deadlock.
This approach ensures that Deadlock cannot occur
in the system.
One common method involves resource allocation
graphs or resource allocation policies to track and
manage resource allocation.
Handling Deadlock (detection and recovery)
336
Deadlock detection is a process of periodically
checking the system for the existing Deadlocks.
Once the Deadlock is detected, the system can take
action to resolve it.
Handling Deadlock (avoidance)
337
Deadlock avoidance uses resource allocation
algorithms and request protocols to dynamically
allocate resources in a way that prevents the system
from entering a Deadlock state.
The key is to make sure that resource requests do
not lead to situations where all necessary
conditions for Deadlock are met.
339
SYNCHRONIZATION
Lecture#43-45
Motivating Scenario
340
Motivating Scenario
341
Motivating Scenario
342
Motivating Scenario
343
Motivating Scenario
344
Race Condition
345
Race Condition in Multicore
346
Critical Section Solution
347
Solution to Critical Section problem
348
Most solutions to the critical section problem
utilize locks implemented in software.
Solutions include
test_and_set
compare_and_swap
Mutex locks
Semaphores
Condition variables
Solution to Critical Section problem
349
Most solutions to the critical section problem utilize locks implemented in
software.
Solutions include:
test_and_set: Uses a shared boolean variable lock and the test_and_set instruction
that executes atomically and sets the passed parameter value to true.
compare_and_swap: Same as test_and_set, except that the passed parameter
value is set to true if it is equal to expected in the parameter of
the compare_and_swap instruction.
Mutex locks: Software method that provides the acquire() and release() functions
that execute atomically.
Semaphores: More sophisticated methods that utilize
the wait() and signal() operations that execute atomically on Semaphore S, which
is an integer variable.
Condition variables: Utilizes a queue of processes that are waiting to enter the
critical section.
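A hedged mutex-lock example in C (POSIX threads; the shared counter and iteration count are illustrative): without the lock the two threads race on counter, and with it each increment is a protected critical section. Here pthread_mutex_lock() plays the role of acquire() and pthread_mutex_unlock() the role of release():

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;                 /* shared data */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);       /* acquire(): enter the critical section */
            counter++;                       /* critical section: update shared data */
            pthread_mutex_unlock(&lock);     /* release(): leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* always 200000 with the lock; a race without it */
        return 0;
    }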
Locks and Unlocks
350
351
Using interrupts (applicable in single-core systems
only)