OS Class Notes

An Operating System (OS) is system software that serves as an intermediary between users and computer hardware, managing resources and providing services for programs. Various OS structures include simple batch systems, multiprogrammed systems, time-shared systems, and real-time systems, each with unique characteristics and applications. Key components of an OS include process management, memory management, file management, and system calls, which facilitate interaction between user programs and the OS.


UNIT I: Introduction to Operating Systems

Definition

An Operating System (OS) is system software that acts as an intermediary between the user and
the computer hardware. It manages hardware resources and provides services for computer
programs.

Types of OS Structures

Simple Batch Systems

 Jobs are grouped and executed in batches without interaction.


 No direct user interaction.

Diagram:

+--------+     +--------+     +--------+
| Job 1  | --> | Job 2  | --> | Job 3  |
+--------+     +--------+     +--------+
Processor executes one job at a time.

Multiprogrammed Systems

 Multiple programs in memory at the same time.


 CPU is utilized efficiently.

Diagram:

+-----------+
| Program A |
+-----------+
| Program B |
+-----------+
| Program C |
+-----------+
Memory contains multiple programs.

Time-Shared Systems

 Each user gets a small time slot.


 Enables multi-user interaction.

Personal Computer Systems

 Designed for single-user systems (Windows, macOS).

Parallel Systems

 Multiple CPUs work in parallel.

 Increases speed and reliability.

Distributed Systems

 Multiple computers communicate over a network.


 Each system has its own memory and CPU.

Real-Time Systems

 Strict timing constraints.


 Used in embedded systems, medical devices, etc.

System Components

1. Process Management
2. Memory Management
3. Storage Management
4. File System Management
5. I/O System Management
6. Protection and Security

Operating System Services

1. Program execution
2. I/O operations
3. File system manipulation
4. Communication
5. Error detection
6. Resource allocation
7. Security

System Calls

System calls provide the interface between a process and the operating system.

Types of system calls:

1. Process control: fork(), exit(), wait()


2. File management: open(), read(), write()
3. Device management: ioctl()
4. Information maintenance: stat()
5. Communication: pipe(), shmget(), msgsnd()

Process Concepts and Scheduling

Process

A process is a program in execution.

Process States

 New
 Ready
 Running
 Waiting
 Terminated

Diagram: Process State Transition

+-----+      +-------+      +---------+      +------------+
| New | ---> | Ready | ---> | Running | ---> | Terminated |
+-----+      +-------+      +---------+      +------------+
                 ^               |
                 |               v
                 |          +---------+
                 +----------| Waiting |
                            +---------+

Operations on Processes

 Creation and Termination (fork(), exit())


 Scheduling
 Blocking and Waking
 Interprocess Communication

Cooperating Processes

 Can share data and resources.


 Requires synchronization mechanisms.

Threads

 Lightweight processes.
 Share code, data, and OS resources.
 Used for multitasking within a process.

UNIT II: CPU Scheduling and Deadlocks

CPU Scheduling

Scheduling Criteria:

 CPU Utilization
 Throughput
 Turnaround Time
 Waiting Time
 Response Time

Scheduling Algorithms:

1. First Come First Serve (FCFS)


2. Shortest Job Next (SJN)
3. Priority Scheduling
4. Round Robin (RR)
5. Multilevel Queue Scheduling

Multiple-Processor Scheduling:

 Asymmetric vs Symmetric multiprocessing


 Load sharing

System Call Interface for Process Management

 fork() – Creates a new process


 exit() – Terminates a process
 wait() and waitpid() – Wait for process termination
 exec() – Replaces the current process image

Deadlocks

System Model:

 Processes compete for resources.

Deadlock Characterization:

 Mutual Exclusion
 Hold and Wait
 No Preemption
 Circular Wait

Methods for Handling Deadlocks:

 Prevention: Prevent one of the four conditions


 Avoidance: Banker's algorithm
 Detection and Recovery

UNIT III: Process Synchronization and IPC

The Critical Section Problem

Solutions must ensure mutual exclusion, progress, and bounded waiting.

Synchronization Mechanisms

Hardware Support: TestAndSet, CompareAndSwap

Semaphores: Counting and Binary

Classical Problems:

 Producer-Consumer
 Readers-Writers
 Dining Philosophers

Critical Regions and Monitors

High-level abstractions for synchronization

Interprocess Communication (IPC)

IPC on Single System:

 Shared memory
 Message passing

IPC on Distributed Systems:

 Pipes, FIFOs, Message Queues


 Example Diagram:
 Process A <--pipe--> Process B

UNIT IV: Memory Management

Logical vs Physical Address Space

 Logical: generated by CPU


 Physical: actual address in memory

Memory Management Techniques

1. Swapping: Moving processes between main memory and disk


2. Contiguous Allocation
3. Paging: Fixed-size pages and frames
4. Segmentation: Logical division of memory
5. Segmentation with Paging: Combines both methods

Virtual Memory
Demand Paging: Pages loaded only when needed

Page Replacement Algorithms:

 FIFO
 LRU
 Optimal

UNIT V: File System Interface and Operations

File Access Methods

Sequential, Direct, and Indexed

Directory Structures

Single-Level, Two-Level, Tree, Acyclic Graph

Protection and File System Structure

 Access control
 File system layers: Application, Logical, Physical

Allocation Methods

Contiguous, Linked, Indexed

Free-space Management

Bitmaps, Free lists

System Calls for File Operations

 open(), create(), read(), write(), close()


 lseek(), stat(), ioctl()

UNIT-1
Introduction to operating system:
An operating system (OS) acts as an intermediary between computer
hardware and the user, managing the computer's resources and providing a
platform for running applications.

Examples of OS:

Windows, Android, iOS (iPhone Operating System)

Structure of OS:
The operating system can be implemented with the help of various structures. The
structure of the OS depends mainly on how the various standard components of the
operating system are interconnected and merge into the kernel.

Types of Operating Systems Structures


Depending on this, we have the following structures in the operating system:
 Simple Structure
 Monolithic Structure
 Micro-Kernel Structure
 Hybrid-Kernel Structure
 Exo-Kernel Structure
 Layered Structure
 Modular Structure
 Virtual Machines

1.Simple Structure:
Simple structure operating systems do not have well-defined structures and are
small, simple, and limited. The interfaces and levels of functionality are not well
separated. MS-DOS is an example of such an operating system. In MS-DOS,
application programs are able to access the basic I/O routines. These types of
operating systems cause the entire system to crash if one of the user programs
fails.

2.Monolithic Structure:
A monolithic structure is a type of operating system architecture where the
entire operating system is implemented as a single large process in kernel mode.
Essential operating system services, such as process management, memory
management, file systems, and device drivers, are combined into a single code
block.

3.Micro-Kernel Structure:
The micro-kernel structure designs the operating system by removing all non-
essential components from the kernel and implementing them as system and user
programs. This results in a smaller kernel, called the micro-kernel. An advantage
of this structure is that new services are added in user space, so the kernel does
not need to be modified. It is therefore more secure and reliable: if a service
fails, the rest of the operating system remains untouched. macOS is an example
of this type of OS.

4.Hybrid-Kernel Structure:

The hybrid-kernel structure is a combination of the monolithic-kernel and
micro-kernel structures. It combines the speed and simple design of a monolithic
kernel with the modularity and stability of a micro-kernel.

5.Exo-Kernel Structure:
Exokernel is an operating system developed at MIT to provide application-level
management of hardware resources. By separating resource management from
protection, the exokernel architecture aims to enable application-specific
customization. Due to its limited operability, exokernel size typically tends to be
minimal.

6.Layered Structure:
Breaking the OS into layers gives much more control over the system. In this
structure, the OS is broken into a number of layers (levels). The bottom layer
(layer 0) is the hardware, and the topmost layer (layer N) is the user interface.
The layers are designed so that each layer uses only the functions of lower-level
layers. This simplifies debugging: if the lower-level layers have already been
debugged and an error occurs, the error must be in the layer being tested, since
the lower layers are known to work. The main disadvantage of this structure is
that at each layer the data must be modified and passed on, which adds overhead
to the system. Moreover, careful planning of the layers is necessary, since a
layer can use only lower-level layers. UNIX is an example of this structure.

7.Modular Structure:
It is considered the best approach for an OS. It involves designing a modular
kernel: the kernel has only a set of core components, and other services are
added as dynamically loadable modules, either at boot time or at runtime. It
resembles the layered structure in that each module has defined and protected
interfaces, but it is more flexible, since a module can call any other module.
Solaris, for example, is organized as a modular kernel.

Structure- Simple Batch:

There is no direct interaction between the user and the computer in a simple
batched system. The user must submit a job (written on cards or tape) to a
computer operator. The computer operator then places a batch of several
jobs on an input device.

Multiprogramming in operating system:

A multiprogramming operating system allows multiple programs to reside in
main memory and share the CPU, switching between them to maximize resource
utilization and throughput. This means the CPU is not idle when one program is
waiting for I/O operations, but instead switches to another program that is ready to
execute.

Key Concepts:

Multiple Programs in Memory:


Unlike older systems where only one program ran at a time,
multiprogramming allows several programs to be loaded into main memory
concurrently.

CPU Switching:
The operating system rapidly switches the CPU between these different
programs, giving the illusion of simultaneous execution.

Resource Optimization:
Multiprogramming aims to minimize idle time of the CPU and other
resources by ensuring that when one program is waiting for I/O, another program
can utilize the CPU.

Throughput Improvement:
By keeping the CPU busy, multiprogramming leads to higher throughput,
meaning more tasks can be completed in a given time.

Example:

Imagine you are working on a document (program 1) while also downloading a
file (program 2) and listening to music (program 3). In a multiprogramming system,
the CPU would switch between these programs, allowing you to work on the
document, download the file, and listen to music seemingly simultaneously.

Time shared:
A time-sharing operating system allows multiple users to access a single
computer system simultaneously by rapidly switching between their processes. This
creates the illusion that each user has the computer's full resources at their disposal,
even though they are sharing the CPU time and other resources.

Key Concepts:
Multitasking:
Time-sharing is a form of multitasking where the CPU switches between
different processes (belonging to different users) very rapidly.

Time Slicing:
The CPU time is divided into small intervals called time slices or quanta, and
each process gets a turn to execute for a short time slice.

Interactive Response:
The primary goal of time-sharing is to provide a good interactive response
time for each user, making the system feel responsive even with multiple users.

Resource Allocation:
The operating system manages and allocates resources like CPU time,
memory, and I/O devices to different users and processes.

Parallel system in Operating System:

A parallel system in an operating system utilizes multiple processors or cores
to execute tasks concurrently, thereby enhancing performance and
responsiveness. These systems are designed to manage and coordinate the activities
of these multiple processing units to achieve faster processing speeds and handle
complex computations more efficiently.

Distributed systems:

A Distributed Operating System (DOS) manages a network of independent


computers as a single cohesive system. It allows multiple computers to work
together, presenting a unified interface to users and applications, and sharing
resources like CPU, memory, and storage.

Types of Distributed Operating System


There are many types of Distributed Operating System, some of them are as
follows:

1. Client-Server Systems

2. Peer-to-Peer(P2P) Systems

3. Middleware

4. Three-Tier

5. N-Tier

Real time systems in operating systems:


A Real-Time Operating System (RTOS) is a specialized operating system
designed to handle time-sensitive tasks with strict deadlines, prioritizing
predictability and determinism over overall system efficiency.

Examples of RTOS Use Cases:


 Industrial Automation: Controlling robots, machinery, and automated
production lines in factories.
 Automotive Systems: Managing engine control, braking systems, and other
critical functions in vehicles.
 Aerospace: Controlling flight systems, navigation, and other mission-critical
applications in aircraft and spacecraft.
 Medical Devices: Ensuring the precise and timely operation of life-support
systems and other medical equipment.
 Consumer Electronics: Controlling digital watches, gaming consoles, and
digital cameras.
 Traffic Control Systems: Managing traffic flow and ensuring traffic signals
operate correctly.

Applications of Real-Time Systems:


 Aerospace (flight control systems)
 Automotive (anti-lock braking systems, airbag systems)
 Medical Equipment (patient monitoring, imaging systems)
 Industrial Automation (robotics, CNC machines)

 Telecommunication (network switches).

Components of Operating System:


An Operating system is an interface between users and the hardware of a computer
system. It is a system software that is viewed as an organized collection of software
consisting of procedures and functions, providing an environment for the execution
of programs. The operating system manages system software and computer
hardware resources. It allows computing resources to be used in an efficient way.
Programs interact with computer hardware with the help of operating system. A user
can interact with the operating system by making system calls or using OS
commands.

Important Components of the Operating System:


 Process Management
 File Management
 Command Interpreter
 System Calls
 Signals
 Network Management
 Security Management
 I/O Device Management
 Secondary Storage Management
 Main Memory Management

🔹 1. Process Management

Purpose: Manages processes in the system — creation, execution, and


termination.

Functions:

Scheduling (e.g., Round Robin, FCFS)

Context switching

Inter-process communication (IPC)

Deadlock prevention

✅ Example: Running multiple apps like browser, games, or media player
simultaneously.

🔹 2. File Management

Purpose: Organizes and controls access to files and directories on storage.

Functions:

File creation, deletion, reading, and writing

Directory navigation

File permission control

✅ Example: Opening, editing, and saving a document in Microsoft Word.

🔹 3. Command Interpreter

Purpose: Acts as a user interface — interprets user commands and executes


them.

Types:

CLI (e.g., Bash, Command Prompt)

GUI (e.g., Windows Explorer)

✅ Example: Typing mkdir test in terminal to create a directory.

🔹 4. System Calls

Purpose: Interface between user programs and OS kernel to request services.

Types of calls:

Process control (e.g., fork, exec)

File operations (e.g., open, read, write)

Device handling

✅ Example: A program requesting memory from the OS, e.g., malloc() internally
invoking the brk or mmap system call.

🔹 5. Signals

Purpose: A signal is a notification sent to a process to trigger predefined


behavior.

Common signals:

SIGINT – interrupt (Ctrl+C)

SIGKILL – terminate immediately

SIGSEGV – segmentation fault

✅ Example: Pressing Ctrl+C to terminate a running program in terminal.

🔹 6. Network Management

Purpose: Manages network connections and data transfer.

Functions:

Handling communication protocols (TCP/IP)

Managing network interfaces

Monitoring data packets

✅ Example: Accessing a webpage or sending an email.

🔹 7. Security Management

Purpose: Protects system resources and data from unauthorized access.

Functions:

Authentication and authorization

Data encryption

Malware protection

✅ Example: Password protection or firewall blocking malicious activity.

🔹 8. I/O Device Management

Purpose: Controls and coordinates all input and output devices.

Functions:

Communication with devices via drivers

Buffering and spooling

Device allocation

✅ Example: Printing a document or using a USB drive.

🔹 9. Secondary Storage Management

Purpose: Manages hard drives, SSDs, and external storage.

Functions:

Space allocation

Disk scheduling

Formatting and partitioning

✅ Example: Saving a video file to the D: drive.

🔹 10. Main Memory Management

Purpose: Manages RAM allocation and deallocation.

Functions:

Memory partitioning

Paging and segmentation

Virtual memory management

✅ Example: Running large programs by swapping unused pages to disk.

1.Process Management
A process is a program in execution. It consists of the followings:
 Executable program
 Program data
 Stack and stack pointer
 Program counter and other CPU registers
 Details of opened files
A process can be suspended temporarily and the execution of another process can
be taken up. A suspended process can be restarted later. Before suspending a
process, its details are saved in a table called the process table so that it can be
executed later on. An operating system supports two basic system calls to manage
processes, Create and Kill:
 Create: a system call used to create a new process.
 Kill: a system call used to delete an existing process.

Five State Process Model


A process can create a number of child processes. Processes can communicate
among themselves either using shared memory or by message-passing techniques.
Two processes running on two different computers can communicate by sending
messages over a network.

2.Files Management

Files are used for long-term storage. Files are used for both input and output.
Every operating system provides a file management service. This file management
service can also be treated as an abstraction as it hides the information about the
disks from the user. The operating system also provides a system call for file
management. The system call for file management includes:
 File creation
 File deletion
 Read and Write operations
Files are stored in directories. System calls are provided to put a file in a
directory or to remove a file from a directory. Files in the system are protected
to maintain the privacy of the user, and are typically organized in a
hierarchical directory structure.

3.Command Interpreter
There are several ways for users to interface with the operating system. One
of the approaches to user interaction with the operating system is through
commands. Command interpreter provides a command-line interface. It allows the
user to enter a command on the command line prompt (cmd). The command
interpreter accepts and executes the commands entered by a user. For example, a
shell is a command interpreter under UNIX. The commands to be executed are
implemented in two ways:
 The command interpreter itself contains code to be executed.
 The command is implemented through a system file. The necessary system
file is loaded into memory and executed.

4.System Calls
System calls provide an interface to the services made by an operating
system. The user interacts with the operating system programs through System
calls. These calls are normally made available as library functions in high-level
languages such as C, Java, Python etc. It provides a level of abstraction as the user
is not aware of the implementation or execution of the call made. Details of the
operating system are hidden from the user. Different hardware and software services
can be availed through system calls.

System calls are available for the following operations:


 Process Management
 Memory Management
 File Operations
 Input / Output Operations

5.Signals
Signals are used in the operating systems to notify a process that a particular
event has occurred. Signals are the software or hardware interrupts that suspend the
current execution of the task. Signals are also used for inter-process
communication. A signal follows the following pattern :
 A signal is generated by the occurrence of a particular event; this can be a
mouse click, the successful completion of a program, an error notification, etc.
 A generated signal is delivered to a process for further execution.
 Once delivered, the signal must be handled.
 A signal can be synchronous or asynchronous, and is handled either by a
default handler or by a user-defined handler.
A signal temporarily suspends the task that is currently being processed, saves
its registers on the stack, and starts running a special signal-handling
procedure assigned to that signal.

6.Network Management:
The complexity of networks and services has created modern challenges for IT
professionals and users. Network management is a set of processes and procedures
that help organizations to optimize their computer networks. Mainly, it ensures that
users have the best possible experience while using network applications and
services.
Network management is a fundamental concept of computer networks. A Network
Management System is a software application that provides network administrators
with information on the components in their networks. It ensures the quality of
service and availability of network resources, examines the operations of a
network, reconstructs its network configuration, and modifies it to improve
performance.

7.Security Management

The security mechanisms in an operating system ensure that authorized
programs have access to resources and unauthorized programs have no access to
restricted resources. Security management refers to the processes that ensure a
user's access to files, memory, the CPU, and other hardware resources is
authorized by the operating system.

8.I/O Device Management


The I/O device management component is an I/O manager that hides the
details of hardware devices and manages the main memory for devices using cache
and spooling. This component provides a buffer cache and general device driver
code that allows the system to manage the main memory and the hardware devices
connected to it. It also provides and manages custom drivers for particular
hardware devices.
The purpose of the I/O system is to hide the details of hardware devices from
the application programmer. An I/O device management component allows highly
efficient resource utilization while minimizing errors and making programming
easy on the entire range of devices available in their systems.

9.Secondary Storage Management:


Broadly, the secondary storage area is any space, where data is stored
permanently and the user can retrieve it easily. Your computer’s hard drive is the
primary location for your files and programs. Other spaces, such as CD-
ROM/DVD drives, flash memory cards, and networked devices, also provide
secondary storage for data on the computer. The computer’s main memory (RAM)
is a volatile storage device in which all programs reside, it provides only temporary
storage space for performing tasks. Secondary storage refers to the media devices
other than RAM (e.g. CDs, DVDs, or hard disks) that provide additional space for
permanent storing of data and software programs which is also called non-volatile
storage.

10.Main Memory Management


Main memory is a flexible and volatile type of storage device. It is a large
sequence of bytes with addresses used to store volatile data. Main memory is also
called Random Access Memory (RAM), which is the fastest computer storage
available on PCs. It is more expensive and smaller in capacity than secondary
storage devices. Whenever a computer program is executed, it is temporarily
stored in main memory. Later, the user can permanently store the data or program
in a secondary storage device.

Operating System Services

An operating system is software that acts as an intermediary between the user
and computer hardware. It is a program with the help of which we are able to run
various applications. It is the one program that is running all the time. Every
computer must have an operating system to smoothly execute other programs.

Services of Operating System

The Operating System (OS) provides essential services that make it easier for users
and applications to interact with hardware efficiently and securely.

1. Program Execution

Loads a program into memory and runs it

Handles process creation, scheduling, and termination.


✅ Example: Running MS Word or a Python script.

2. Input/Output (I/O) Operations

Manages input and output devices.

Provides a uniform interface to perform I/O operations like read/write.


✅ Example: Reading a file from disk or printing a document.

3. Communication Between Processes

Enables Inter-Process Communication (IPC) using shared memory, message


passing, pipes, etc.
✅ Example: A browser plugin communicating with the main browser.

4. File Management

Manages files and directories: creation, reading, writing, deletion, and


permissions.
✅ Example: Opening and editing a text document.

5. Memory Management

Allocates and deallocates memory to processes.

Supports virtual memory and paging.


✅ Example: Running multiple applications without running out of RAM.

6. Process Management
Handles process creation, suspension, resumption, and termination.

Schedules CPU time to processes fairly.


✅ Example: Running a background music player while typing.

7. Security and Privacy


Protects data and resources from unauthorized access.

Implements authentication, access control, and encryption.


✅ Example: Login system and file access restrictions.

8. Resource Management

Manages CPU, memory, disk, and I/O devices.

Allocates and schedules resources to ensure efficiency.


✅ Example: Allocating CPU time to different apps.

9. User Interface

Provides interfaces for user interaction: CLI (Command Line) or GUI


(Graphical).
✅ Example: Windows Explorer or Terminal.

10. Networking

Facilitates data exchange between devices via network protocols.

Manages network connections and routing.


✅ Example: Browsing the internet or accessing shared drives.

11. Error Handling

Detects and reports system or hardware errors.

Takes recovery actions where possible.


✅ Example: Blue screen or file system recovery after crash.

12. Time Management

Maintains system time and schedules processes based on timers.

Supports real-time applications with precise timing.


✅ Example: Alarms, reminders, and process scheduling.

System Calls in Operating Systems

✅What is a System Call?


 A System Call is a programmatic way for a user-level process to request a
service from the operating system’s kernel.

 It acts as an interface between the process and the OS.

Why are System Calls Needed?

User programs cannot directly access hardware (e.g., CPU, disk, memory) for
security and safety. So, they must invoke system calls to:

 Open a file
 Allocate memory
 Create a new process
 Perform I/O
 Communicate with other processes

Types of System Calls
Type                        Description                      Examples
1. Process Control          Create/terminate processes       fork(), exit(), exec()
2. File Management          File operations                  open(), read(), write(), close()
3. Device Management        Request or release I/O devices   ioctl(), read(), write()
4. Information Maintenance  Get/set system data or time      getpid(), alarm(), sleep()
5. Communication            Process communication            pipe(), shmget(), send(), recv()
6. Protection               Access permission checks         chmod(), umask()

How System Calls Work (Simple Diagram)

+-----------------------------+
|     User Program (App)      |
+-----------------------------+
              |
              |  makes request
              v
+-----------------------------+
|       System Call API       |  <-- Interface
+-----------------------------+
              |
              |  traps to kernel
              v
+-----------------------------+
|   Operating System Kernel   |
+-----------------------------+

Operating System Kernel: The Kernel is the core component of an operating
system. It acts as a bridge between applications and hardware by managing system
resources like CPU, memory, and I/O devices.

Examples of System Calls in Linux


Purpose                  System Call
Create a process         fork()
Execute a new program    execve()
Read from file           read()
Write to file            write()
Open file                open()
Close file               close()
Terminate process        exit()
Get process ID           getpid()

Thread:

A thread is the smallest unit of CPU execution within a process. A process can
have one or more threads that share:

 Code
 Data
 Open files and resources

A thread is often called a lightweight process.

Difference Between Process and Thread

Feature        Process                             Thread
Definition     Independent program in execution    Lightweight sub-unit of a process

Types of Threads:

1.User-Level Threads (ULT)

 Managed by user-level libraries (not the OS).


 Faster to create and manage.
 If one thread blocks, all do.

2.Kernel-Level Threads (KLT)

 Managed by the OS kernel.


 More overhead, but better performance with multicore systems.
 One thread can block independently.

3.Hybrid (Combined Model)

 Uses both ULT and KLT.


 Examples: Windows, Linux use hybrid threading models.

Multithreading Models

1.Many-to-One

 Multiple user threads map to a single kernel thread.


 Blocking problem present.

2.One-to-One

 Each user thread maps to a kernel thread.


 More concurrency, but uses more resources.

3.Many-to-Many

 Many user threads mapped to many kernel threads.


 Efficient and flexible.

Thread Libraries

POSIX Threads (Pthreads) – Portable thread library in UNIX/Linux.

Windows Threads – Thread APIs provided by Windows OS.

Java Threads – Built-in support in Java language.

Thread vs Process Diagram

Process
 ├── Thread 1
 ├── Thread 2
 └── Thread 3   (all share memory & resources)

Multiple Processes
 ├── Process A (own memory)
 └── Process B (own memory)
UNIT-2
CPU Scheduling in Operating Systems:
CPU scheduling is the mechanism an operating system uses to decide which process
gets the CPU at a particular time. This is important because a CPU core can
execute only one task at a time, while there are usually many tasks waiting to
be processed. The main goals of CPU scheduling are to:
 Maximize CPU utilization
 Minimize the response and waiting times of processes.

Terminologies Used in CPU Scheduling:

1. Arrival Time (AT): The time at which the process arrives in the ready queue.
2. Completion Time (CT): The time at which the process completes its execution.
3. Burst Time (BT): The CPU time required by a process for execution.
4. Turn Around Time (TAT): The difference between completion time and arrival time.
   Turn Around Time = Completion Time – Arrival Time
5. Waiting Time (WT): The difference between turnaround time and burst time.
   Waiting Time = Turn Around Time – Burst Time

How to Design a CPU Scheduling Algorithm:

Different CPU scheduling algorithms have different structures, and the choice
of a particular algorithm depends on a variety of factors.

1. CPU Utilization: The main purpose of any scheduling algorithm is to keep the
CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100
percent, but in a real system it typically varies from 40 to 90 percent
depending on the system load.

2. Throughput: The number of processes completed per unit time is called
throughput. Throughput may vary depending on the length or duration of the
processes.

3. Turnaround Time: For a particular process, an important criterion is how long
it takes to execute. The time elapsed from the submission of a process to its
completion is known as the turnaround time. Turnaround time includes the time
spent waiting to get into memory, waiting in the ready queue, executing on the
CPU, and performing I/O.

4. Waiting Time: The scheduling algorithm does not affect the time required to
complete a process once it has started executing. It only affects the waiting
time of the process, i.e., the time spent waiting in the ready queue.

5. Response Time: In an interactive system, turnaround time is not the best
measure. A process may produce some output early and continue computing new
results while previous results are shown to the user. A better measure is the
time from the submission of a request until the first response is produced.
This measure is called response time.

Different Types of CPU Scheduling Algorithms:


There are mainly two types of scheduling methods:
Preemptive Scheduling: Preemptive scheduling is used when a process switches
from running state to ready state or from the waiting state to the ready state.

Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process
terminates, or when a process switches from the running state to the waiting
state.

CPU Scheduling Algorithms


Let us now learn about these CPU scheduling algorithms in operating systems one
by one:
 FCFS - First Come, First Serve
 SJF - Shortest Job First
 SRTF - Shortest Remaining Time First

 Round Robin
 Priority Scheduling
 HRRN - Highest Response Ratio Next
 Multiple Queue Scheduling
 Multilevel Feedback Queue Scheduling

FCFS CPU Scheduling Algorithm:


What is FCFS?

First-Come, First-Served (FCFS) is the simplest CPU scheduling algorithm.


It works like a queue:

The process that arrives first is executed first.


It is non-preemptive: once a process starts executing, it runs to completion.

Characteristics of FCFS:

 Non-preemptive
 Simple to implement using a FIFO (First-In-First-Out) queue
 Poor performance for short processes if long processes come first (known as the
convoy effect)

Important Terms:

1. Burst Time (BT): Time required by a process to execute on CPU.


2. Arrival Time (AT): Time when process arrives in the ready queue.
3. Waiting Time (WT): Time process waits in the ready queue.
WT = Start Time - Arrival Time
4. Turnaround Time (TAT): Total time taken by a process.
TAT = Completion Time - Arrival Time

FCFS Scheduling Example

Process Arrival Time (AT) Burst Time (BT)


P1 0 5
P2 1 3
P3 2 8

Let's calculate Gantt Chart, WT, and TAT.

Gantt Chart:

| P1 | P2 | P3 |
0 5 8 16

Calculations:

Process AT BT Start Time Completion Time WT TAT


P1 0 5 0 5 0 5
P2 1 3 5 8 4 7
P3 2 8 8 16 6 14

Average Times:

Average Waiting Time (AWT) = (0 + 4 + 6) / 3 = 3.33 units

Average Turnaround Time (ATAT) = (5 + 7 + 14) / 3 = 8.67 units

FCFS scheduling Program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    char name[16];
    int at;   // arrival time
    int bt;   // burst time
    int st;   // start time
    int ct;   // completion time
    int wt;   // waiting time
    int tat;  // turnaround time
} Process;

// Sort by arrival time; break ties by process name
int cmp_by_at(const void *a, const void *b) {
    const Process *p = (const Process *)a;
    const Process *q = (const Process *)b;
    if (p->at != q->at) return p->at - q->at;
    return strcmp(p->name, q->name);
}

int main() {
    int n = 3;   // you can change n or read from input
    Process p[10];

    // Example data from the problem above
    strcpy(p[0].name, "P1"); p[0].at = 0; p[0].bt = 5;
    strcpy(p[1].name, "P2"); p[1].at = 1; p[1].bt = 3;
    strcpy(p[2].name, "P3"); p[2].at = 2; p[2].bt = 8;

    // If you want to read from user, replace the above with input code.

    // Sort processes by arrival time (FCFS order)
    qsort(p, n, sizeof(Process), cmp_by_at);

    int current_time = 0;
    double total_wt = 0.0, total_tat = 0.0;

    // Compute start, completion, WT, TAT for each process
    for (int i = 0; i < n; ++i) {
        if (p[i].at > current_time)
            p[i].st = p[i].at;        // CPU is idle until p[i] arrives
        else
            p[i].st = current_time;

        p[i].ct  = p[i].st + p[i].bt;
        p[i].tat = p[i].ct - p[i].at;
        p[i].wt  = p[i].tat - p[i].bt;
        current_time = p[i].ct;

        total_wt  += p[i].wt;
        total_tat += p[i].tat;
    }

    // Print Gantt chart (simple textual)
    printf("Gantt Chart:\n|");
    for (int i = 0; i < n; ++i)
        printf(" %s |", p[i].name);
    printf("\n");

    // Print the start time of the first process and each completion time
    printf("%d", p[0].st);
    for (int i = 0; i < n; ++i) {
        // crude spacing: spaces proportional to burst time (for readability)
        for (int s = 0; s < p[i].bt; ++s) putchar(' ');
        printf("%d", p[i].ct);
    }
    printf("\n\n");

    // Detailed table
    printf("Process\tAT\tBT\tStart\tCompletion\tWT\tTAT\n");
    for (int i = 0; i < n; ++i)
        printf("%s\t%d\t%d\t%d\t%d\t\t%d\t%d\n",
               p[i].name, p[i].at, p[i].bt, p[i].st, p[i].ct, p[i].wt, p[i].tat);

    printf("\nAverage Waiting Time (AWT) = %.2f units\n", total_wt / n);
    printf("Average Turnaround Time (ATAT) = %.2f units\n", total_tat / n);
    return 0;
}

Expected Output:
Gantt Chart:
| P1 | P2 | P3 |
0     5   8        16

Process AT  BT  Start  Completion  WT  TAT
P1      0   5   0      5           0   5
P2      1   3   5      8           4   7
P3      2   8   8      16          6   14

Average Waiting Time (AWT) = 3.33 units
Average Turnaround Time (ATAT) = 8.67 units

Example of FCFS CPU Scheduling:

To understand the First Come, First Served (FCFS) scheduling algorithm


effectively, we'll use two examples -

 one where all processes arrive at the same time,


 another where processes arrive at different times.
We'll create Gantt charts for both scenarios and calculate the turnaround time and
waiting time for each process.

Scenario 1: Processes with Same Arrival Time

Consider the following table of arrival time and burst time for three processes
P1, P2 and P3:

Process  Arrival Time  Burst Time
P1       0             5
P2       0             3
P3       0             8

Step-by-Step Execution:
1. P1 will start first and run for 5 units of time (from 0 to 5).
2. P2 will start next and run for 3 units of time (from 5 to 8).
3. P3 will run last, executing for 8 units (from 8 to 16).

Gantt Chart:

| P1 | P2 | P3 |
0    5    8    16
Now, let's calculate average waiting time and turn around time:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
AT : Arrival Time
BT : Burst Time or CPU Time
TAT : Turn Around Time
WT : Waiting Time

Processes  AT  BT  CT  TAT         WT
P1         0   5   5   5-0 = 5     5-5 = 0
P2         0   3   8   8-0 = 8     8-3 = 5
P3         0   8   16  16-0 = 16   16-8 = 8

 Average Turn around time = 9.67


 Average waiting time = 4.33

Scenario 2: Processes with Different Arrival Times

Consider the following table of arrival time and burst time for three processes P1,
P2 and P3
Process Burst Time (BT) Arrival Time (AT)

P1 5 ms 2 ms

P2 3 ms 0 ms

P3 4 ms 4 ms

Step-by-Step Execution:
 P2 arrives at time 0 and runs for 3 units, so its completion time is:
Completion Time of P2=0+3=3
 P1 arrives at time 2 but has to wait for P2 to finish. P1 starts at time 3 and
runs for 5 units. Its completion time is:
Completion Time of P1=3+5=8

 P3 arrives at time 4 but has to wait for P1 to finish. P3 starts at time 8 and
runs for 4 units. Its completion time is:
Completion Time of P3=8+4=12
Gantt Chart:

| P2 | P1 | P3 |
0    3    8    12

Now, let's calculate average waiting time and turnaround time:
Process  Completion Time (CT)  Turnaround Time (TAT = CT - AT)  Waiting Time (WT = TAT - BT)
P2       3 ms                  3 ms                             0 ms
P1       8 ms                  6 ms                             1 ms
P3       12 ms                 8 ms                             4 ms

 Average Turnaround time = 5.67


 Average waiting time = 1.67

SJF – Shortest Job First CPU Scheduling Algorithm:


What is SJF?

Shortest Job First (SJF) is a non-preemptive scheduling algorithm that selects
the process with the smallest burst time (execution time) from the ready queue.
The process requiring the least CPU time is executed next.

Types of SJF

Non-Preemptive SJF

Once a process starts, it runs till completion.

Preemptive SJF (also known as Shortest Remaining Time First - SRTF)

The currently running process can be interrupted if a new process


arrives with a shorter burst time.

Key Terms

1. Burst Time (BT): CPU time required by a process.


2. Arrival Time (AT): Time when process enters the ready queue.
3. Waiting Time (WT): Time process waits in queue.
WT = Start Time - Arrival Time
4. Turnaround Time (TAT): Total time from arrival to completion.
TAT = Completion Time - Arrival Time

Example: Non-Preemptive SJF

Process Arrival Time (AT) Burst Time (BT)


P1 0 7
P2 2 4
P3 4 1
P4 5 4

Gantt Chart Construction:

1. P1 starts at time 0 → ends at 7

2. P2, P3, P4 are now in queue at time 7
→ P3 has shortest BT = 1 → runs next

3. After P3, remaining: P2 and P4 → both BT = 4


→ P2 arrived earlier → goes next

4. Then P4

| P1 | P3 | P2 | P4 |
0 7 8 12 16

Calculations:

Process AT BT Start Completion WT TAT


P1 0 7 0 7 0 7
P2 2 4 8 12 6 10
P3 4 1 7 8 3 4
P4 5 4 12 16 7 11

Averages:

Average Waiting Time (AWT) = (0 + 6 + 3 + 7) / 4 = 4

Average Turnaround Time (ATAT) = (7 + 10 + 4 + 11) / 4 = 8

SJF – Shortest Job First scheduling Program:

#include <stdio.h>
#include <limits.h>

typedef struct {
    char name[5];
    int at, bt;   // arrival time, burst time
    int st, ct;   // start time, completion time
    int wt, tat;  // waiting time, turnaround time
    int done;     // flag to check completion
} Process;

int main() {
    int n = 4;    // number of processes
    Process p[] = {
        {"P1", 0, 7, 0, 0, 0, 0, 0},
        {"P2", 2, 4, 0, 0, 0, 0, 0},
        {"P3", 4, 1, 0, 0, 0, 0, 0},
        {"P4", 5, 4, 0, 0, 0, 0, 0}
    };

    int completed = 0, current_time = 0;
    double total_wt = 0, total_tat = 0;

    printf("Gantt Chart:\n");
    while (completed < n) {
        int idx = -1;
        int min_bt = INT_MAX;

        // Select the process with shortest BT among those that have arrived
        for (int i = 0; i < n; i++) {
            if (!p[i].done && p[i].at <= current_time) {
                if (p[i].bt < min_bt) {
                    min_bt = p[i].bt;
                    idx = i;
                } else if (p[i].bt == min_bt && p[i].at < p[idx].at) {
                    idx = i;   // tie-break: earlier arrival
                }
            }
        }

        if (idx == -1) {
            current_time++;   // no process ready, CPU idle
            continue;
        }

        p[idx].st  = current_time;
        p[idx].ct  = p[idx].st + p[idx].bt;
        p[idx].tat = p[idx].ct - p[idx].at;
        p[idx].wt  = p[idx].tat - p[idx].bt;
        p[idx].done = 1;
        completed++;
        current_time = p[idx].ct;
        total_wt  += p[idx].wt;
        total_tat += p[idx].tat;

        printf("| %s ", p[idx].name);   // Gantt chart entry
    }
    printf("|\n\n");

    // Print process table
    printf("Process\tAT\tBT\tStart\tCT\tWT\tTAT\n");
    for (int i = 0; i < n; i++)
        printf("%s\t%d\t%d\t%d\t%d\t%d\t%d\n",
               p[i].name, p[i].at, p[i].bt, p[i].st, p[i].ct, p[i].wt, p[i].tat);

    printf("\nAverage Waiting Time = %.2f\n", total_wt / n);
    printf("Average Turnaround Time = %.2f\n", total_tat / n);
    return 0;
}

Expected Output:
Gantt Chart:
| P1 | P3 | P2 | P4 |

Process AT  BT  Start  CT  WT  TAT
P1      0   7   0      7   0   7
P2      2   4   8      12  6   10
P3      4   1   7      8   3   4
P4      5   4   12     16  7   11

Average Waiting Time = 4.00
Average Turnaround Time = 8.00

SRTF – Shortest Remaining Time First (Preemptive SJF)

Shortest Remaining Time First (SRTF) is the preemptive version of the Shortest
Job First (SJF) algorithm.

The process with the shortest remaining burst time is selected for execution
next.
If a new process arrives with a shorter remaining time, it preempts the
currently running process.

Characteristics:

 Preemptive
 Based on remaining burst time
 Allows interruption of current process
 More complex than non-preemptive SJF

Important Terms:

Remaining Time (RT): Time left for a process to complete.

Waiting Time (WT): Time spent in the ready queue.

Turnaround Time (TAT): Completion time − Arrival time

Example:

Process Arrival Time (AT) Burst Time (BT)


P1 0 8
P2 1 4
P3 2 2
P4 3 1

Step-by-Step Execution:

Let’s construct the Gantt chart by time unit:

Time 0–1: P1 starts (remaining = 7)

Time 1: P2 arrives (BT=4 < P1’s remaining 7) → P1 preempted

Time 1–2: P2 runs (remaining = 3)

Time 2: P3 arrives (BT=2 < P2’s remaining 3) → P2 preempted

Time 2–3: P3 runs (remaining = 1)

Time 3: P4 arrives (BT=1 = P3’s remaining 1) → no preemption

Time 3–4: P3 completes

Time 4–5: P4 runs → completes

Time 5–8: P2 resumes and completes

Time 8–16: P1 resumes and completes

Gantt Chart:

| P1 | P2 | P3 | P4 | P2 | P1 |
0 1 2 3 4 8 16

Calculations:

Process AT BT Completion Time TAT = CT − AT WT = TAT − BT
P1 0 8 16 16 8
P2 1 4 8 7 3
P3 2 2 4 2 0
P4 3 1 5 2 1

Averages:

Average Waiting Time (AWT) = (8 + 3 + 0 + 1) / 4 = 3

Average Turnaround Time (ATAT) = (16 + 7 + 2 + 2) / 4 = 6.75

SRTF – Shortest Remaining Time First (Preemptive SJF) Program:

#include <stdio.h>
#include <limits.h>

typedef struct {
    char name[5];
    int at, bt;   // arrival time, burst time
    int rt;       // remaining time
    int ct;       // completion time
    int tat, wt;  // turnaround, waiting
} Process;

int main() {
    int n = 4;
    Process p[] = {
        {"P1", 0, 8, 0, 0, 0, 0},
        {"P2", 1, 4, 0, 0, 0, 0},
        {"P3", 2, 2, 0, 0, 0, 0},
        {"P4", 3, 1, 0, 0, 0, 0}
    };

    for (int i = 0; i < n; i++)
        p[i].rt = p[i].bt;   // initially, remaining time = burst time

    int completed = 0, current_time = 0;
    int last = -1;           // last process printed on the Gantt chart
    double total_wt = 0, total_tat = 0;

    printf("Gantt Chart:\n");
    while (completed < n) {
        int idx = -1;
        int min_rt = INT_MAX;

        // Select the process with the smallest remaining time
        for (int i = 0; i < n; i++) {
            if (p[i].at <= current_time && p[i].rt > 0) {
                if (p[i].rt < min_rt) {
                    min_rt = p[i].rt;
                    idx = i;
                } else if (p[i].rt == min_rt && p[i].at < p[idx].at) {
                    idx = i;   // tie-break by arrival
                }
            }
        }

        if (idx == -1) {
            current_time++;   // no process ready
            continue;
        }

        if (idx != last) {                // print each process once per run
            printf("| %s ", p[idx].name);
            last = idx;
        }

        // Run for 1 time unit
        p[idx].rt--;
        current_time++;

        if (p[idx].rt == 0) {
            p[idx].ct  = current_time;
            p[idx].tat = p[idx].ct - p[idx].at;
            p[idx].wt  = p[idx].tat - p[idx].bt;
            total_wt  += p[idx].wt;
            total_tat += p[idx].tat;
            completed++;
        }
    }
    printf("|\n\n");

    // Print results
    printf("Process\tAT\tBT\tCT\tTAT\tWT\n");
    for (int i = 0; i < n; i++)
        printf("%s\t%d\t%d\t%d\t%d\t%d\n",
               p[i].name, p[i].at, p[i].bt, p[i].ct, p[i].tat, p[i].wt);

    printf("\nAverage Waiting Time = %.2f\n", total_wt / n);
    printf("Average Turnaround Time = %.2f\n", total_tat / n);
    return 0;
}

Expected Output:

Gantt Chart:
| P1 | P2 | P3 | P4 | P2 | P1 |
0    1    2    4    5    8    16

Process AT  BT  CT  TAT  WT
P1      0   8   16  16   8
P2      1   4   8   7    3
P3      2   2   4   2    0
P4      3   1   5   2    1

Average Waiting Time = 3.00
Average Turnaround Time = 6.75

Round Robin (RR) CPU Scheduling Algorithm:
Round Robin is a preemptive scheduling algorithm where:

 Each process gets a fixed time slice or time quantum (TQ).


 If a process does not finish in its time quantum, it is preempted and added to the
end of the ready queue.

✅ Designed for time-sharing systems to give fairness and responsiveness.

Characteristics

 Preemptive
 Based on time quantum
 Suitable for interactive systems
 Every process gets equal share of CPU time

Key Terms

1. Time Quantum (TQ): Fixed time CPU is allocated to each process (e.g., 2 ms)
2. Turnaround Time (TAT): Completion Time − Arrival Time
3. Waiting Time (WT): TAT − Burst Time

Example

Process Arrival Time (AT) Burst Time (BT)


P1 0 5
P2 1 4
P3 2 6

Time Quantum (TQ) = 2

Gantt Chart Execution:

Time 0–2: P1 runs (BT = 5 → 3 left)

Time 2–4: P2 runs (BT = 4 → 2 left)

Time 4–6: P3 runs (BT = 6 → 4 left)

Time 6–8: P1 runs (3 → 1 left)

Time 8–10: P2 runs (2 → done)

Time 10–12: P3 runs (4 → 2 left)

Time 12–13: P1 runs (1 → done)

Time 13–15: P3 runs (2 → done)

Gantt Chart:

| P1 | P2 | P3 | P1 | P2 | P3 | P1 | P3 |
0 2 4 6 8 10 12 13 15

Completion Time (CT):

Process CT AT BT TAT = CT − AT WT = TAT − BT


P1 13 0 5 13 8
P2 10 1 4 9 5
P3 15 2 6 13 7

Averages:

Average Waiting Time (AWT) = (8 + 5 + 7) / 3 = 6.67

Average Turnaround Time (ATAT) = (13 + 9 + 13) / 3 = 11.67

Round Robin (RR) CPU Scheduling Algorithm program:
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    char name[5];
    int at, bt;   // arrival, burst
    int rt;       // remaining time
    int ct, tat, wt;
} Process;

int main() {
    int n = 3, tq = 2;   // number of processes, time quantum
    Process p[] = {
        {"P1", 0, 5, 5, 0, 0, 0},
        {"P2", 1, 4, 4, 0, 0, 0},
        {"P3", 2, 6, 6, 0, 0, 0}
    };

    int time = 0, completed = 0;
    int queue[100], front = 0, rear = 0;
    bool inQueue[3] = {false, false, false};

    // Add processes that arrive at time 0
    for (int i = 0; i < n; i++) {
        if (p[i].at == 0) {
            queue[rear++] = i;
            inQueue[i] = true;
        }
    }

    printf("Gantt Chart:\n");
    while (completed < n) {
        if (front == rear) {   // no ready process: jump to the next arrival
            int nextAT = 1e9;
            for (int i = 0; i < n; i++)
                if (p[i].rt > 0 && p[i].at < nextAT)
                    nextAT = p[i].at;
            time = nextAT;
            for (int i = 0; i < n; i++) {
                if (!inQueue[i] && p[i].at <= time) {
                    queue[rear++] = i;
                    inQueue[i] = true;
                }
            }
            continue;
        }

        int idx = queue[front++];
        printf("| %s ", p[idx].name);

        int exec_time = (p[idx].rt < tq) ? p[idx].rt : tq;
        p[idx].rt -= exec_time;
        time += exec_time;

        // Add any new arrivals during this execution
        for (int i = 0; i < n; i++) {
            if (!inQueue[i] && p[i].rt > 0 && p[i].at <= time) {
                queue[rear++] = i;
                inQueue[i] = true;
            }
        }

        if (p[idx].rt > 0) {
            queue[rear++] = idx;   // put back at the end of the queue
        } else {
            p[idx].ct  = time;
            p[idx].tat = p[idx].ct - p[idx].at;
            p[idx].wt  = p[idx].tat - p[idx].bt;
            completed++;
        }
    }
    printf("|\n");

    double total_wt = 0, total_tat = 0;
    printf("\nProcess\tCT\tAT\tBT\tTAT\tWT\n");
    for (int i = 0; i < n; i++) {
        printf("%s\t%d\t%d\t%d\t%d\t%d\n",
               p[i].name, p[i].ct, p[i].at, p[i].bt, p[i].tat, p[i].wt);
        total_wt  += p[i].wt;
        total_tat += p[i].tat;
    }

    printf("\nAverage Waiting Time = %.2f\n", total_wt / n);
    printf("Average Turnaround Time = %.2f\n", total_tat / n);
    return 0;
}

Expected Output
Gantt Chart:

| P1 | P2 | P3 | P1 | P2 | P3 | P1 | P3 |
0    2    4    6    8    10   12   13   15

Process CT  AT  BT  TAT  WT
P1      13  0   5   13   8
P2      10  1   4   9    5
P3      15  2   6   13   7

Average Waiting Time = 6.67
Average Turnaround Time = 11.67

Priority Scheduling – CPU Scheduling Algorithm:
In Priority Scheduling, each process is assigned a priority, and the CPU is
allocated to the process with the highest priority.

 Processes with lower numeric values usually have higher priority (e.g.,
priority 1 > priority 3).
 If two processes have the same priority, it may use FCFS as a tie-breaker.

Types of Priority Scheduling:

Non-Preemptive Priority Scheduling


Once a process starts, it runs until completion — even if a higher-priority
process arrives.

Preemptive Priority Scheduling


A higher-priority process can preempt the currently running one.

Key Terms:

 Priority (P): A number indicating the importance of a process.


 Burst Time (BT): CPU time needed by the process.
 Arrival Time (AT): When the process enters the ready queue.
 Waiting Time (WT): Total time a process spends waiting.
WT = Start Time - Arrival Time
 Turnaround Time (TAT): Total time from arrival to completion.
TAT = Completion Time - Arrival Time

Example (Non-Preemptive)

Process  Arrival Time (AT)  Burst Time (BT)  Priority (Lower = Higher)
P1       0                  10               3
P2       1                  1                1
P3       2                  2                4
P4       3                  1                5
P5       4                  5                2

Step-by-Step Execution:

Time = 0 → Only P1 is available → run it.

Time = 10 → P2, P3, P4, P5 have arrived → pick P2 (priority 1).

Then P5 (priority 2)

Then P3 (priority 4)

Then P4 (priority 5)

Gantt Chart:

| P1 | P2 | P5 | P3 | P4 |
0 10 11 16 18 19

Completion Table:

Process AT BT Priority CT TAT = CT − AT WT = TAT − BT


P1 0 10 3 10 10 0
P2 1 1 1 11 10 9
P3 2 2 4 18 16 14
P4 3 1 5 19 16 15
P5 4 5 2 16 12 7

Averages:

Average Waiting Time (AWT) = (0 + 9 + 14 + 15 + 7) / 5 = 9

Average Turnaround Time (ATAT) = (10 + 10 + 16 + 16 + 12) / 5 = 12.8

Priority Scheduling C program:
#include <stdio.h>

struct Process {
    int id, at, bt, priority;
    int ct, tat, wt;
    int completed;
};

int main() {
    int n = 5;
    struct Process p[] = {
        {1, 0, 10, 3, 0, 0, 0, 0},
        {2, 1, 1,  1, 0, 0, 0, 0},
        {3, 2, 2,  4, 0, 0, 0, 0},
        {4, 3, 1,  5, 0, 0, 0, 0},
        {5, 4, 5,  2, 0, 0, 0, 0}
    };

    int time = 0, completed = 0;
    float totalWT = 0, totalTAT = 0;

    printf("Gantt Chart:\n");
    while (completed < n) {
        int idx = -1;
        int minPriority = 9999;

        // Find the highest-priority (lowest number) process that has arrived
        for (int i = 0; i < n; i++) {
            if (!p[i].completed && p[i].at <= time && p[i].priority < minPriority) {
                minPriority = p[i].priority;
                idx = i;
            }
        }

        if (idx == -1) {
            time++;   // no process has arrived yet
            continue;
        }

        printf("| P%d ", p[idx].id);   // Gantt chart step

        time += p[idx].bt;
        p[idx].ct  = time;
        p[idx].tat = p[idx].ct - p[idx].at;
        p[idx].wt  = p[idx].tat - p[idx].bt;
        totalWT  += p[idx].wt;
        totalTAT += p[idx].tat;
        p[idx].completed = 1;
        completed++;
    }
    printf("|\n");

    printf("\nProcess\tAT\tBT\tPriority\tCT\tTAT\tWT\n");
    for (int i = 0; i < n; i++)
        printf("P%d\t%d\t%d\t%d\t\t%d\t%d\t%d\n",
               p[i].id, p[i].at, p[i].bt, p[i].priority,
               p[i].ct, p[i].tat, p[i].wt);

    printf("\nAverage Waiting Time = %.2f", totalWT / n);
    printf("\nAverage Turnaround Time = %.2f\n", totalTAT / n);
    return 0;
}
If you run this, you’ll get the same Gantt chart and averages you calculated:

AWT = 9.00

ATAT = 12.80

HRRN – Highest Response Ratio Next:


Highest Response Ratio Next (HRRN) is a non-preemptive scheduling
algorithm that selects the process with the highest response ratio for execution.

It improves over SJF by reducing the chance of starvation for longer processes.

Response Ratio
Formula:

Response Ratio (RR) = (Waiting Time + Burst Time) / Burst Time

Where:

Waiting Time (WT) = Current Time − Arrival Time − Executed Time

Burst Time (BT) = Execution time needed

Characteristics:

Non-preemptive

 Dynamically adjusts based on waiting time


 Helps balance short and long processes
 Avoids starvation

Example

Process Arrival Time (AT) Burst Time (BT)


P1 0 8
P2 1 4
P3 2 2
P4 3 1

Step-by-Step Execution:

At time 0, only P1 has arrived → P1 runs first

Step 1: Time = 0 → 8

P1 runs (CT = 8)

Now available processes: P2, P3, P4

Step 2: Time = 8

Calculate Response Ratio for remaining processes:

Process WT = 8 − AT BT RR = (WT + BT)/BT


P2 7 4 (7+4)/4 = 2.75
P3 6 2 (6+2)/2 = 4.0
P4 5 1 (5+1)/1 = 6.0 → Selected

Step 3: Time = 8 → 9

P4 runs (CT = 9)

Step 4: Time = 9

Remaining: P2, P3

Process WT BT RR
P2 8 4 3.0
P3 7 2 4.5 → Selected

Step 5: Time = 9 → 11

P3 runs (CT = 11)

Step 6: Time = 11 → 15

P2 runs (CT = 15)

Gantt Chart:

| P1 | P4 | P3 | P2 |
0 8 9 11 15

Completion Table:

Process  AT  BT  CT  TAT = CT − AT  WT = TAT − BT
P1       0   8   8   8              0
P2       1   4   15  14             10
P3       2   2   11  9              7
P4       3   1   9   6              5

Averages:

Average Waiting Time (AWT) = (0 + 10 + 7 + 5) / 4 = 5.5

Average Turnaround Time (ATAT) = (8 + 14 + 9 + 6) / 4 = 9.25

HRRN – Highest Response Ratio Next: C Program

#include <stdio.h>

struct Process {
    int id, at, bt, ct, tat, wt;
    int completed;
};

int main() {
    int n = 4;   // number of processes
    struct Process p[4] = {
        {1, 0, 8, 0, 0, 0, 0},
        {2, 1, 4, 0, 0, 0, 0},
        {3, 2, 2, 0, 0, 0, 0},
        {4, 3, 1, 0, 0, 0, 0}
    };

    int time = 0, completed = 0;
    float totalWT = 0, totalTAT = 0;

    printf("Gantt Chart:\n");
    while (completed < n) {
        int idx = -1;
        float maxRR = -1.0;

        // Find the arrived process with the highest response ratio
        for (int i = 0; i < n; i++) {
            if (!p[i].completed && p[i].at <= time) {
                int waiting = time - p[i].at;
                float rr = (float)(waiting + p[i].bt) / p[i].bt;
                if (rr > maxRR) {
                    maxRR = rr;
                    idx = i;
                }
            }
        }

        if (idx == -1) {
            time++;   // no process has arrived yet
            continue;
        }

        printf("| P%d ", p[idx].id);   // Gantt chart step

        // Execute the selected process to completion (non-preemptive)
        time += p[idx].bt;
        p[idx].ct  = time;
        p[idx].tat = p[idx].ct - p[idx].at;
        p[idx].wt  = p[idx].tat - p[idx].bt;
        totalWT  += p[idx].wt;
        totalTAT += p[idx].tat;
        p[idx].completed = 1;
        completed++;
    }
    printf("|\n");

    // Output results
    printf("\nProcess\tAT\tBT\tCT\tTAT\tWT\n");
    for (int i = 0; i < n; i++)
        printf("P%d\t%d\t%d\t%d\t%d\t%d\n", p[i].id, p[i].at, p[i].bt,
               p[i].ct, p[i].tat, p[i].wt);

    printf("\nAverage Waiting Time: %.2f", totalWT / n);
    printf("\nAverage Turnaround Time: %.2f\n", totalTAT / n);
    return 0;
}
Output:
Gantt Chart:

| P1 | P4 | P3 | P2 |

Process AT BT CT TAT WT

P1 0 8 8 8 0

P2 1 4 15 14 10

P3 2 2 11 9 7

P4 3 1 9 6 5

Average Waiting Time: 5.50

Average Turnaround Time: 9.25

Multiple Queue Scheduling:


Multiple Queue Scheduling is a CPU scheduling technique where processes are
permanently assigned to different queues, each with its own scheduling algorithm
and priority level.

Each queue can represent a type of process (e.g., system processes, interactive,
batch).

Structure:

There are multiple ready queues

Each queue has its own scheduling algorithm (e.g., FCFS, RR, SJF)

No movement of processes between queues

CPU is assigned to the queues based on fixed priority or time sharing

Example Setup:

Queue Process Type Scheduling Algorithm


Q1 System Processes Round Robin (RR)
Q2 Interactive Priority Scheduling
Q3 Batch Jobs FCFS

How CPU is Assigned:

Fixed Priority: Highest-priority queue always gets CPU first

Time Slicing: CPU time is divided among queues

Multilevel Feedback Queue Scheduling:


Multilevel Feedback Queue (MLFQ) is an improved version of multiple queue
scheduling.
Processes can move between queues based on their behavior and execution history.

Designed to give short processes quick response and long processes


gradual attention.

Structure:

 Multiple queues with different priorities


 Each queue can have different scheduling algorithm (usually Round Robin or
FCFS)
 Processes can move up or down queues

Example Setup:

Queue Priority Level Scheduling Time Quantum


Q1 High RR 5 ms
Q2 Medium RR 10 ms
Q3 Low FCFS -

Promotion & Demotion Rules:

71
If a process finishes within its quantum → stays or goes up

If not → moved down to a lower queue

If a process waits too long, it's promoted → avoids starvation

Difference Between MQ & MLFQ

Feature Multiple Queue (MQ) Multilevel Feedback Queue (MLFQ)


Process Movement No (static) Yes (dynamic)
Starvation Handling No Yes (through promotion)
Flexibility Low High
Complexity Low High

Multiprocessor Scheduling in Operating Systems:


When a system has more than one processor (CPU), it is known as a
multiprocessor system. Multiprocessor scheduling refers to how an operating system
schedules processes across multiple CPUs so that the overall system performance is
optimized.

A multi-processor is a system that has more than one processor but shares the
same memory, bus, and input/output devices. The bus connects the processor to the
RAM, to the I/O devices, and to all the other components of the computer.

Types of Multiprocessor Systems

Tightly Coupled Systems: Shared memory, common OS.

Loosely Coupled Systems: Distributed memory, separate OS.

Approaches to Multiprocessor Scheduling

1. Asymmetric Multiprocessing (AMP)

 One processor handles all OS-related tasks (e.g., I/O, system calls).
 The other processors execute only user processes.
 Simple to implement but not scalable.

Example:

CPU 0 handles OS tasks.

CPU 1, 2, 3 execute only user processes.

2. Symmetric Multiprocessing (SMP)

 All processors are equal.


 Each CPU runs the OS scheduler and can execute user or system processes.
 Requires careful synchronization of shared resources.

Example:

Any CPU can perform any task, including system calls.

3. Processor Affinity

To avoid the overhead of cache invalidation when processes move between CPUs,
the OS may prefer to run a process on the same CPU where it previously ran.

Types of Affinity:

Soft Affinity: Tries to keep the process on the same CPU but may migrate.

Hard Affinity: Restricts the process to run only on a specific CPU.

Load Balancing:

Ensures that all processors have roughly equal work.
It can be:

Push Migration: A task pushes processes from overloaded CPUs to underloaded ones.

Pull Migration: An idle CPU pulls tasks from overloaded ones.

Processor Scheduling Algorithms:

Multiprocessor systems may still use traditional algorithms like:

 FCFS
 Round Robin
 Priority Scheduling
 SJF
But applied across multiple CPUs with added complexities.
