OS Class Notes
Definition
An Operating System (OS) is system software that acts as an intermediary between the user and
the computer hardware. It manages hardware resources and provides services for computer
programs.
Types of OS Structures
Multiprogrammed Systems
Diagram:
+-----------+
| Program A |
+-----------+
| Program B |
+-----------+
| Program C |
+-----------+
Memory contains multiple programs.
Time-Shared Systems
Parallel Systems
Increases speed and reliability.
Distributed Systems
Real-Time Systems
System Components
1. Process Management
2. Memory Management
3. Storage Management
4. File System Management
5. I/O System Management
6. Protection and Security
OS Services:
1. Program execution
2. I/O operations
3. File system manipulation
4. Communication
5. Error detection
6. Resource allocation
7. Security
System Calls
System calls provide the interface between a process and the operating system.
Process
Process States
New
Ready
Running
Waiting
Terminated
+-----+     +-------+
| New | --> | Ready | <------+
+-----+     +---+---+        |
                |            |
                v            |
          +---------+    +---------+
          | Running |--> | Waiting |
          +----+----+    +---------+
               |
               v
         +------------+
         | Terminated |
         +------------+
Operations on Processes
Cooperating Processes
Threads
Lightweight processes.
Share code, data, and OS resources.
Used for multitasking within a process.
UNIT II: CPU Scheduling and Deadlocks
CPU Scheduling
Scheduling Criteria:
CPU Utilization
Throughput
Turnaround Time
Waiting Time
Response Time
Scheduling Algorithms:
Multiple-Processor Scheduling:
Deadlocks
System Model:
Deadlock Characterization:
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
UNIT III: Process Synchronization and IPC
Synchronization Mechanisms
Classical Problems:
Producer-Consumer
Readers-Writers
Dining Philosophers
Shared memory
Message passing
UNIT IV: Memory Management
Virtual Memory
Demand Paging: Pages loaded only when needed
FIFO
LRU
Optimal
UNIT V: File System Interface and Operations
Directory Structures
Access control
File system layers: Application, Logical, Physical
Allocation Methods
Free-space Management
UNIT-1
Introduction to operating system:
An operating system (OS) acts as an intermediary between computer
hardware and the user, managing the computer's resources and providing a
platform for running applications.
Examples of operating systems:
Structure of OS:
The operating system can be implemented with the help of various structures. The
structure of the OS depends mainly on how the various standard components of the
operating system are interconnected and merge into the kernel.
1.Simple Structure:
Simple structure operating systems do not have well-defined structures and are
small, simple, and limited. The interfaces and levels of functionality are not well
separated. MS-DOS is an example of such an operating system. In MS-DOS,
application programs are able to access the basic I/O routines. These types of
operating systems cause the entire system to crash if one of the user programs
fails.
2.Monolithic Structure:
A monolithic structure is a type of operating system architecture where the
entire operating system is implemented as a single large process in kernel mode.
Essential operating system services, such as process management, memory
management, file systems, and device drivers, are combined into a single code
block.
3.Micro-Kernel Structure:
Micro-Kernel structure designs the operating system by removing all non-
essential components from the kernel and implementing them as system and user
programs. This results in a smaller kernel called the micro-kernel. The advantage of
this structure is that new services are added in user space and do not require the
kernel to be modified. It is therefore more secure and reliable: if a service fails,
the rest of the operating system remains untouched. Mac OS is an example of this
type of OS.
4.Hybrid-Kernel Structure:
A hybrid-kernel structure is a combination of the monolithic-kernel and
micro-kernel structures. It combines properties of both to form a more practical
approach: the speed and simple design of a monolithic kernel together with the
modularity and stability of a micro-kernel.
5.Exo-Kernel Structure:
Exokernel is an operating system developed at MIT to provide application-level
management of hardware resources. By separating resource management from
protection, the exokernel architecture aims to enable application-specific
customization. Due to its limited operability, exokernel size typically tends to be
minimal.
6.Layered Structure:
In this structure, the OS is broken into a number of layers (levels), which gives
much more control over the system. The bottom layer (layer 0) is the hardware, and
the topmost layer (layer N) is the user interface. Each layer is designed to use
only the functions of the lower-level layers. This simplifies debugging: if the
lower-level layers have already been debugged and an error occurs, the error must
be in the current layer. The main disadvantage of this structure is that at each
layer, data needs to be modified and passed on, which adds overhead to the system.
Moreover, careful planning of the layers is necessary, as a layer can use only
lower-level layers. UNIX is an example of this structure.
7.Modular Structure:
It is considered one of the best approaches for an OS. It involves designing a
modular kernel: the kernel has only a set of core components, and other services
are added as dynamically loadable modules either at boot time or at runtime. It
resembles the layered structure in that each module has defined and protected
interfaces, but it is more flexible than a layered structure, as a module can call
any other module. For example, Solaris is organized as shown in the figure.
There is no direct interaction between the user and the computer in a simple
batched system. The user must submit a job (written on cards or tape) to a
computer operator. The computer operator then places a batch of several
jobs on an input device.
A multiprogramming operating system allows multiple programs to reside in
main memory and share the CPU, switching between them to maximize resource
utilization and throughput. This means the CPU is not idle when one program is
waiting for I/O operations, but instead switches to another program that is ready to
execute.
Key Concepts:
CPU Switching:
The operating system rapidly switches the CPU between these different
programs, giving the illusion of simultaneous execution.
Resource Optimization:
Multiprogramming aims to minimize idle time of the CPU and other
resources by ensuring that when one program is waiting for I/O, another program
can utilize the CPU.
Throughput Improvement:
By keeping the CPU busy, multiprogramming leads to higher throughput,
meaning more tasks can be completed in a given time.
Example:
Imagine you are working on a document (program 1) while also downloading a
file (program 2) and listening to music (program 3). In a multiprogramming system,
the CPU would switch between these programs, allowing you to work on the
document, download the file, and listen to music seemingly simultaneously.
Time shared:
A time-sharing operating system allows multiple users to access a single
computer system simultaneously by rapidly switching between their processes. This
creates the illusion that each user has the computer's full resources at their disposal,
even though they are sharing the CPU time and other resources.
Key Concepts:
Multitasking:
Time-sharing is a form of multitasking where the CPU switches between
different processes (belonging to different users) very rapidly.
Time Slicing:
The CPU time is divided into small intervals called time slices or quanta, and
each process gets a turn to execute for a short time slice.
Interactive Response:
The primary goal of time-sharing is to provide a good interactive response
time for each user, making the system feel responsive even with multiple users.
Resource Allocation:
The operating system manages and allocates resources like CPU time,
memory, and I/O devices to different users and processes.
A parallel system in an operating system utilizes multiple processors or cores
to execute tasks concurrently, thereby enhancing performance and
responsiveness. These systems are designed to manage and coordinate the activities
of these multiple processing units to achieve faster processing speeds and handle
complex computations more efficiently.
Distributed systems:
1. Client-Server Systems
2. Peer-to-Peer(P2P) Systems
3. Middleware
4. Three-Tier
5. N-Tier
Telecommunication (network switches).
🔹 1. Process Management
Functions:
Context switching
Deadlock prevention
✅ Example: Running multiple apps like browser, games, or media player
simultaneously.
🔹 2. File Management
Functions:
Directory navigation
🔹 3. Command Interpreter
Types:
🔹 4. System Calls
Types of calls:
Device handling
🔹 5. Signals
Common signals:
🔹 6. Network Management
Functions:
🔹 7. Security Management
Functions:
Data encryption
Malware protection
🔹 8. Device Management
Functions:
Device allocation
🔹 9. Secondary-Storage Management
Functions:
Space allocation
Disk scheduling
🔹 10. Memory Management
Functions:
Memory partitioning
1.Process Management
A process is a program in execution. It consists of the following:
Executable program
Program data
Stack and stack pointer
Program counter and other CPU registers
Details of opened files
A process can be suspended temporarily so that the execution of another process can
be taken up; a suspended process can be restarted later. Before suspending a
process, its details are saved in a table called the process table so that it can
be executed later on. An operating system supports two system calls to manage
processes, Create and Kill:
Create: a system call used to create a new process.
Kill: a system call used to delete an existing process.
2.Files Management
Files are used for long-term storage. Files are used for both input and output.
Every operating system provides a file management service. This file management
service can also be treated as an abstraction as it hides the information about the
disks from the user. The operating system also provides a system call for file
management. The system call for file management includes:
File creation
File deletion
Read and Write operations
Files are stored in a directory. System calls are provided to place a file in a
directory or to remove a file from a directory. Files in the system are protected
to maintain the privacy of the user. The figure below shows the hierarchical
directory structure.
3.Command Interpreter
There are several ways for users to interface with the operating system. One
of the approaches to user interaction with the operating system is through
commands. Command interpreter provides a command-line interface. It allows the
user to enter a command on the command line prompt (cmd). The command
interpreter accepts and executes the commands entered by a user. For example, a
shell is a command interpreter under UNIX. The commands to be executed are
implemented in two ways:
The command interpreter itself contains code to be executed.
The command is implemented through a system file. The necessary system
file is loaded into memory and executed.
4.System Calls
System calls provide an interface to the services offered by an operating
system. The user interacts with operating system programs through system calls.
These calls are normally made available as library functions in high-level
languages such as C, Java, and Python. They provide a level of abstraction, as the
user is not aware of the implementation or execution of the call made; details of
the operating system are hidden from the user. Different hardware and software
services can be availed through system calls.
5.Signals
Signals are used in operating systems to notify a process that a particular
event has occurred. Signals are software or hardware interrupts that suspend the
current execution of a task. Signals are also used for inter-process
communication. A signal follows this pattern:
A signal is generated by the occurrence of a particular event; it can be the
clicking of the mouse, the successful execution of a program, an error
notification, etc.
A generated signal is delivered to a process for further action.
Once delivered, the signal must be handled.
A signal can be synchronous or asynchronous, and it is handled either by a default
handler or by a user-defined handler.
On receiving a signal, the process temporarily suspends the task it was
processing, saves its registers on the stack, and starts running the special
signal-handling procedure assigned to that signal.
6.Network Management:
The complexity of networks and services has created modern challenges for IT
professionals and users. Network management is a set of processes and procedures
that help organizations to optimize their computer networks. Mainly, it ensures that
users have the best possible experience while using network applications and
services.
Network management is a fundamental concept of computer networks.
Network Management Systems is a software application that provides network
administrators with information on components in their networks. It ensures the
quality of service and availability of network resources. It also examines the
operations of a network, reconstructs its network configuration, and modifies it
to improve the performance of tasks.
7.Security Management
The security mechanisms in an operating system ensure that authorized
programs have access to resources and unauthorized programs have no access to
restricted resources. Security management refers to the various processes by which
access to files, memory, the CPU, and other hardware resources must be authorized
by the operating system.
An operating system is software that acts as an intermediary between the user
and computer hardware. It is a program with the help of which we are able to run
various applications. It is the one program that is running all the time. Every
computer must have an operating system to smoothly execute other programs.
The Operating System (OS) provides essential services that make it easier for users
and applications to interact with hardware efficiently and securely.
1. Program Execution
3. Communication Between Processes
4. File Management
5. Memory Management
6. Process Management
Handles process creation, suspension, resumption, and termination.
8. Resource Management
9. User Interface
10. Networking
It acts as an interface between the process and the OS.
User programs cannot directly access hardware (e.g., CPU, disk, memory) for
security and safety. So, they must invoke system calls to:
Open a file
Allocate memory
Create a new process
Perform I/O
Communicate with other processes
Types of System Calls
Type                          Description                       Examples
1. Process Control            Create/terminate processes        fork(), exit(), exec()
2. File Management            File operations                   open(), read(), write(), close()
3. Device Management          Request or release I/O devices    ioctl(), read(), write()
4. Information Maintenance    Get/set system data or time       getpid(), alarm(), sleep()
5. Communication              Process communication             pipe(), shmget(), send(), recv()
6. Protection                 Access permission checks          chmod(), umask()
+-----------------------------+
|        User Program         |
+-----------------------------+
        | makes request
        v
+-----------------------------+
|    System Call Interface    |
+-----------------------------+
        | traps to kernel
        v
+-----------------------------+
|   Kernel performs service   |
+-----------------------------+
Thread:
A thread is the smallest unit of CPU execution within a process. A process can
have one or more threads that share:
Code
Data
Open files and resources
Types of Threads:
Multithreading Models
1.Many-to-One
2.One-to-One
3.Many-to-Many
Thread Libraries
Process
 ├── Thread 1
 ├── Thread 2
 └── Thread 3   (all share memory & resources)
Multiple Processes
├── Process A (own memory)
└── Process B (own memory)
UNIT-2
CPU Scheduling in Operating Systems:
CPU scheduling is a process used by the operating system to decide which task
or process gets to use the CPU at a particular time. This is important because a
CPU can only handle one task at a time, but there are usually many tasks that need
to be processed. The following are the main purposes of CPU scheduling:
Maximize the CPU utilization
Minimize the response and waiting time of the process.
Different CPU Scheduling algorithms have different structures and the choice
of a particular algorithm depends on a variety of factors.
1. CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the
CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100
percent, but in a real system it varies from about 40 to 90 percent depending on
the system load.
3. Turnaround Time: For a particular process, an important criterion is how long
it takes to execute that process. The time elapsed from the submission of a process
to its completion is known as the turnaround time. Turnaround time is the sum of
the time spent waiting to get into memory, waiting in the ready queue, executing
on the CPU, and waiting for I/O.
4. Waiting Time: The scheduling algorithm does not affect the time required to
complete a process once it has started executing. It affects only the waiting time
of the process, i.e., the time the process spends waiting in the ready queue.
5. Response Time: In an interactive system, turnaround time is not the best
criterion. A process may produce some output early and continue computing new
results while previous results are being output to the user. Therefore another
measure is the time from the submission of a request until the first response is
produced. This measure is called response time.
Round Robin
Priority Scheduling
HRRN - Highest Response Ratio Next
Multiple Queue Scheduling
Multilevel Feedback Queue Scheduling
Characteristics of FCFS:
Non-preemptive
Simple to implement using a FIFO (First-In-First-Out) queue
Poor performance for short processes if long processes come first (known as the
convoy effect)
Important Terms:
FCFS Scheduling Example
Gantt Chart:
| P1 | P2 | P3 |
0 5 8 16
Calculations:
Average Times:
Average WT = (0 + 5 + 8) / 3 = 4.33
Average TAT = (5 + 8 + 16) / 3 = 9.67
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char name[16];
    int at, bt;   // arrival time, burst time
    int st, ct;   // start time, completion time
    int wt, tat;  // waiting time, turnaround time
} Process;

// qsort comparator: earlier arrival time first (FCFS order)
int cmp_at(const void *a, const void *b) {
    Process *p = (Process*)a;
    Process *q = (Process*)b;
    return p->at - q->at;
}

int main() {
    Process p[10] = {
        {"P1", 0, 7}, {"P2", 2, 4}, {"P3", 4, 1}, {"P4", 5, 4}
    };
    int n = 4;
    // If you want to read from user, replace the above with input code.

    // Sort processes by arrival time (FCFS)
    qsort(p, n, sizeof(Process), cmp_at);

    int current_time = 0, total_wt = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        if (p[i].at > current_time) {
            p[i].st = p[i].at;      // CPU was idle until this arrival
        } else {
            p[i].st = current_time;
        }
        p[i].ct  = p[i].st + p[i].bt;
        p[i].tat = p[i].ct - p[i].at;
        p[i].wt  = p[i].tat - p[i].bt;
        current_time = p[i].ct;
        total_wt += p[i].wt;
        total_tat += p[i].tat;
    }

    printf("Gantt Chart:\n|");
    for (int i = 0; i < n; i++)
        printf(" %s |", p[i].name);
    printf("\n");
    // we'll print the start time of the first and completion of each process
    printf("%d", p[0].st);
    for (int i = 0; i < n; i++)
        printf("    %d", p[i].ct);
    printf("\n\n");
    // Detailed table
    printf("Process\tAT\tBT\tStart\tCompletion\tWT\tTAT\n");
    for (int i = 0; i < n; i++)
        printf("%s\t%d\t%d\t%d\t%d\t\t%d\t%d\n",
               p[i].name, p[i].at, p[i].bt, p[i].st, p[i].ct, p[i].wt, p[i].tat);
    printf("\nAverage WT = %.2f, Average TAT = %.2f\n",
           (float)total_wt / n, (float)total_tat / n);
    return 0;
}
Expected Output:
Gantt Chart:
| P1 | P2 | P3 | P4 |
0    7    11    12    16

Process AT BT Start Completion WT TAT
P1      0  7  0     7          0  7
P2      2  4  7     11         5  9
P3      4  1  11    12         7  8
P4      5  4  12    16         7  11

Average WT = 4.75, Average TAT = 8.75
Consider the following table of arrival time and burst time for three processes P1,
P2 and P3
Process Arrival Time Burst Time
P1 0 5
P2 0 3
P3 0 8
Step-by-Step Execution:
1. P1 will start first and run for 5 units of time (from 0 to 5).
2. P2 will start next and run for 3 units of time (from 5 to 8).
3. P3 will run last, executing for 8 units (from 8 to 16).
Gantt Chart:
| P1 | P2 | P3 |
0    5    8    16
Now, let's calculate the average waiting time and turnaround time:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
AT : Arrival Time
BT : Burst Time or CPU Time
TAT : Turn Around Time
WT : Waiting Time
Processes AT BT CT TAT        WT
P1        0  5  5  5-0 = 5    5-5 = 0
P2        0  3  8  8-0 = 8    8-3 = 5
P3        0  8  16 16-0 = 16  16-8 = 8
Consider the following table of arrival time and burst time for three processes P1,
P2 and P3
Process Burst Time (BT) Arrival Time (AT)
P1 5 ms 2 ms
P2 3 ms 0 ms
P3 4 ms 4 ms
Step-by-Step Execution:
P2 arrives at time 0 and runs for 3 units, so its completion time is:
Completion Time of P2=0+3=3
P1 arrives at time 2 but has to wait for P2 to finish. P1 starts at time 3 and
runs for 5 units. Its completion time is:
Completion Time of P1=3+5=8
P3 arrives at time 4 but has to wait for P1 to finish. P3 starts at time 8 and
runs for 4 units. Its completion time is:
Completion Time of P3=8+4=12
Gantt Chart:
Now, let's calculate the average waiting time and turnaround time:
Process  Completion Time (CT)  Turnaround Time (TAT = CT - AT)  Waiting Time (WT = TAT - BT)
P2       3 ms                  3 ms                             0 ms
P1       8 ms                  6 ms                             1 ms
P3       12 ms                 8 ms                             4 ms
Shortest Job First (SJF) – CPU Scheduling Algorithm:
In SJF, the process with the least CPU burst time required is executed next.
Types of SJF
Non-Preemptive SJF
Key Terms
1. P1 arrives at time 0 and runs alone → completes at 7
2. P2, P3, P4 are now in queue at time 7
→ P3 has shortest BT = 1 → runs next
3. Then P2 (BT = 4; it arrived before P4)
4. Then P4
| P1 | P3 | P2 | P4 |
0    7    8    12   16
Calculations:
Averages:
Average WT = (0 + 6 + 3 + 7) / 4 = 4.00
Average TAT = (7 + 10 + 4 + 11) / 4 = 8.00
#include <stdio.h>
#include <limits.h>

typedef struct {
    char name[5];
    int at, bt;      // arrival time, burst time
    int st, ct;      // start time, completion time
    int wt, tat;     // waiting time, turnaround time
    int done;
} Process;

int main() {
    int n = 4;
    Process p[] = {
        {"P1", 0, 7, 0, 0, 0, 0, 0},
        {"P2", 2, 4, 0, 0, 0, 0, 0},
        {"P3", 4, 1, 0, 0, 0, 0, 0},
        {"P4", 5, 4, 0, 0, 0, 0, 0}
    };
    int completed = 0, current_time = 0;
    int total_wt = 0, total_tat = 0;

    printf("Gantt Chart:\n|");
    while (completed < n) {
        // Pick the arrived, unfinished process with the smallest burst time
        int idx = -1, min_bt = INT_MAX;
        for (int i = 0; i < n; i++) {
            if (!p[i].done && p[i].at <= current_time && p[i].bt < min_bt) {
                min_bt = p[i].bt;
                idx = i;
            }
        }
        if (idx == -1) {
            current_time++;   // no process has arrived yet: CPU idle
        } else {
            printf(" %s |", p[idx].name);
            p[idx].st  = current_time;
            p[idx].ct  = p[idx].st + p[idx].bt;
            p[idx].tat = p[idx].ct - p[idx].at;
            p[idx].wt  = p[idx].tat - p[idx].bt;
            p[idx].done = 1;
            completed++;
            current_time = p[idx].ct;
            total_wt += p[idx].wt;
            total_tat += p[idx].tat;
        }
    }
    printf("\n\n");

    printf("Process\tAT\tBT\tStart\tCT\tWT\tTAT\n");
    for (int i = 0; i < n; i++)
        printf("%s\t%d\t%d\t%d\t%d\t%d\t%d\n",
               p[i].name, p[i].at, p[i].bt, p[i].st, p[i].ct, p[i].wt, p[i].tat);
    printf("\nAverage WT = %.2f, Average TAT = %.2f\n",
           (float)total_wt / n, (float)total_tat / n);
    return 0;
}
Expected Output:
Gantt Chart:
| P1 | P3 | P2 | P4 |

Process AT BT Start CT TAT WT
P1      0  7  0     7  7   0
P2      2  4  8     12 10  6
P3      4  1  7     8  4   3
P4      5  4  12    16 11  7

Average WT = 4.00, Average TAT = 8.00
Shortest Remaining Time First (SRTF) is the preemptive version of the Shortest
Job First (SJF) algorithm.
The process with the shortest remaining burst time is selected for execution
next.
If a new process arrives with a shorter remaining time, it preempts the
currently running process.
Characteristics:
Preemptive
Based on remaining burst time
Allows interruption of current process
More complex than non-preemptive SJF
Important Terms:
Example:
Step-by-Step Execution:
Gantt Chart:
| P1 | P2 | P3 | P4 | P2 | P1 |
0 1 2 4 5 8 16
Calculations:
Process AT BT Completion Time TAT = CT − AT WT = TAT − BT
P1 0 8 16 16 8
P2 1 4 8 7 3
P3 2 2 4 2 0
P4 3 1 5 2 1
Averages:
Average WT = (8 + 3 + 0 + 1) / 4 = 3.00
Average TAT = (16 + 7 + 2 + 2) / 4 = 6.75
#include <stdio.h>
#include <limits.h>

typedef struct {
    char name[5];
    int at, bt, rt;    // arrival, burst, remaining time
    int ct, wt, tat;   // completion, waiting, turnaround time
    int done;
} Process;

int main() {
    int n = 4;
    Process p[] = {
        {"P1", 0, 8, 8, 0, 0, 0, 0},
        {"P2", 1, 4, 4, 0, 0, 0, 0},
        {"P3", 2, 2, 2, 0, 0, 0, 0},
        {"P4", 3, 1, 1, 0, 0, 0, 0}
    };
    int completed = 0, current_time = 0, prev = -1;
    int total_wt = 0, total_tat = 0;

    printf("Gantt Chart:\n|");
    while (completed < n) {
        // Pick the arrived, unfinished process with the least remaining time
        int idx = -1, min_rt = INT_MAX;
        for (int i = 0; i < n; i++) {
            if (!p[i].done && p[i].at <= current_time && p[i].rt < min_rt) {
                min_rt = p[i].rt;
                idx = i;
            }
        }
        if (idx == -1) {
            current_time++;   // CPU idle until the next arrival
            continue;
        }
        if (idx != prev) {    // print a Gantt segment on every switch
            printf(" %s |", p[idx].name);
            prev = idx;
        }
        p[idx].rt--;          // run the process for one time unit
        current_time++;
        if (p[idx].rt == 0) {
            p[idx].done = 1;
            p[idx].ct  = current_time;
            p[idx].tat = p[idx].ct - p[idx].at;
            p[idx].wt  = p[idx].tat - p[idx].bt;
            total_wt += p[idx].wt;
            total_tat += p[idx].tat;
            completed++;
        }
    }
    printf("\n\n");

    // Print results
    printf("Process\tAT\tBT\tCT\tTAT\tWT\n");
    for (int i = 0; i < n; i++)
        printf("%s\t%d\t%d\t%d\t%d\t%d\n",
               p[i].name, p[i].at, p[i].bt, p[i].ct, p[i].tat, p[i].wt);
    printf("\nAverage WT = %.2f, Average TAT = %.2f\n",
           (float)total_wt / n, (float)total_tat / n);
    return 0;
}
Expected Output:
Gantt Chart:
| P1 | P2 | P3 | P4 | P2 | P1 |
0    1    2    4    5    8    16

Process AT BT CT TAT WT
P1      0  8  16 16  8
P2      1  4  8  7   3
P3      2  2  4  2   0
P4      3  1  5  2   1

Average WT = 3.00, Average TAT = 6.75
Round Robin (RR) CPU Scheduling Algorithm:
Round Robin is a preemptive scheduling algorithm where:
Characteristics
Preemptive
Based on time quantum
Suitable for interactive systems
Every process gets equal share of CPU time
Key Terms
1. Time Quantum (TQ): Fixed time CPU is allocated to each process (e.g., 2 ms)
2. Turnaround Time (TAT): Completion Time − Arrival Time
3. Waiting Time (WT): TAT − Burst Time
Example
Gantt Chart Execution:
Gantt Chart:
| P1 | P2 | P3 | P1 | P2 | P3 | P1 | P3 |
0 2 4 6 8 10 12 13 15
Averages:
Average WT = (8 + 5 + 7) / 3 = 6.67
Average TAT = (13 + 9 + 13) / 3 = 11.67
Round Robin (RR) CPU Scheduling Algorithm program:
#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

typedef struct {
    char name[5];
    int at, bt, rt;    // arrival, burst, remaining time
    int ct, wt, tat;   // completion, waiting, turnaround time
} Process;

int main() {
    int n = 3, tq = 2;   // tq = time quantum
    Process p[] = {
        {"P1", 0, 5, 5, 0, 0, 0},
        {"P2", 1, 4, 4, 0, 0, 0},
        {"P3", 2, 6, 6, 0, 0, 0}
    };
    int queue[100], front = 0, rear = 0;
    bool inQueue[3] = {false, false, false};
    int time = 0, completed = 0;
    int total_wt = 0, total_tat = 0;

    // Enqueue every process that arrives at time 0
    for (int i = 0; i < n; i++) {
        if (p[i].at == 0) {
            queue[rear++] = i;
            inQueue[i] = true;
        }
    }

    printf("Gantt Chart:\n|");
    while (completed < n) {
        if (front == rear) {
            // Ready queue empty: jump to the next arrival
            int nextAT = INT_MAX, next = -1;
            for (int i = 0; i < n; i++) {
                if (!inQueue[i] && p[i].at < nextAT) {
                    nextAT = p[i].at;
                    next = i;
                }
            }
            time = nextAT;
            queue[rear++] = next;
            inQueue[next] = true;
            continue;
        }
        int idx = queue[front++];
        int exec_time = (p[idx].rt < tq) ? p[idx].rt : tq;
        printf(" %s |", p[idx].name);
        p[idx].rt -= exec_time;
        time += exec_time;
        // Enqueue processes that arrived during this time slice
        for (int i = 0; i < n; i++) {
            if (!inQueue[i] && p[i].at <= time) {
                queue[rear++] = i;
                inQueue[i] = true;
            }
        }
        if (p[idx].rt > 0) {
            queue[rear++] = idx;   // re-queue the preempted process
        } else {
            p[idx].ct  = time;
            p[idx].tat = p[idx].ct - p[idx].at;
            p[idx].wt  = p[idx].tat - p[idx].bt;
            completed++;
        }
    }
    printf("\n");

    printf("\nProcess\tCT\tAT\tBT\tTAT\tWT\n");
    for (int i = 0; i < n; i++) {
        printf("%s\t%d\t%d\t%d\t%d\t%d\n",
               p[i].name, p[i].ct, p[i].at, p[i].bt, p[i].tat, p[i].wt);
        total_wt += p[i].wt;
        total_tat += p[i].tat;
    }
    printf("\nAverage WT = %.2f, Average TAT = %.2f\n",
           (float)total_wt / n, (float)total_tat / n);
    return 0;
}
Expected Output
Gantt Chart:
| P1 | P2 | P3 | P1 | P2 | P3 | P1 | P3 |
0    2    4    6    8    10   12   13   15

Process CT AT BT TAT WT
P1      13 0  5  13  8
P2      10 1  4  9   5
P3      15 2  6  13  7

Average WT = 6.67, Average TAT = 11.67
Priority Scheduling – CPU Scheduling Algorithm:
In Priority Scheduling, each process is assigned a priority, and the CPU is
allocated to the process with the highest priority.
Processes with lower numeric values usually have higher priority (e.g.,
priority 1 > priority 3).
If two processes have the same priority, it may use FCFS as a tie-breaker.
Key Terms:
Example (Non-Preemptive)
Process Arrival Time (AT) Burst Time (BT) Priority (Lower = Higher)
P1 0 10 3
P2 1 1 1
P3 2 2 4
Process Arrival Time (AT) Burst Time (BT) Priority (Lower = Higher)
P4 3 1 5
P5 4 5 2
Step-by-Step Execution:
At time 0, only P1 has arrived → P1 runs first (0–10)
Then P2 (priority 1)
Then P5 (priority 2)
Then P3 (priority 4)
Then P4 (priority 5)
Gantt Chart:
| P1 | P2 | P5 | P3 | P4 |
0 10 11 16 18 19
Completion Table:
Process AT BT Priority CT TAT WT
P1      0  10 3        10 10  0
P2      1  1  1        11 10  9
P5      4  5  2        16 12  7
P3      2  2  4        18 16  14
P4      3  1  5        19 16  15

Averages:
AWT = (0 + 9 + 7 + 14 + 15) / 5 = 9.00
ATAT = (10 + 10 + 12 + 16 + 16) / 5 = 12.80
Priority Scheduling C program:
#include <stdio.h>
#include <limits.h>

struct Process {
    int pid, at, bt, priority;
    int ct, wt, tat;
    int completed;
};

int main() {
    int n = 5;
    struct Process p[] = {
        {1, 0, 10, 3, 0, 0, 0, 0},
        {2, 1, 1, 1, 0, 0, 0, 0},
        {3, 2, 2, 4, 0, 0, 0, 0},
        {4, 3, 1, 5, 0, 0, 0, 0},
        {5, 4, 5, 2, 0, 0, 0, 0}
    };
    int time = 0, completed = 0;
    float totalWT = 0, totalTAT = 0;

    printf("Gantt Chart:\n|");
    while (completed < n) {
        // Find process with highest priority (lowest number) that has arrived
        int idx = -1, minPriority = INT_MAX;
        for (int i = 0; i < n; i++) {
            if (!p[i].completed && p[i].at <= time && p[i].priority < minPriority) {
                minPriority = p[i].priority;
                idx = i;
            }
        }
        if (idx == -1) {
            time++;        // no process ready: CPU idle
            continue;
        }
        printf(" P%d |", p[idx].pid);
        time += p[idx].bt;          // non-preemptive: run to completion
        p[idx].ct  = time;
        p[idx].tat = p[idx].ct - p[idx].at;
        p[idx].wt  = p[idx].tat - p[idx].bt;
        totalWT += p[idx].wt;
        totalTAT += p[idx].tat;
        p[idx].completed = 1;
        completed++;
    }
    printf("\n");

    printf("\nProcess\tAT\tBT\tPriority\tCT\tTAT\tWT\n");
    for (int i = 0; i < n; i++)
        printf("P%d\t%d\t%d\t%d\t\t%d\t%d\t%d\n",
               p[i].pid, p[i].at, p[i].bt, p[i].priority,
               p[i].ct, p[i].tat, p[i].wt);
    printf("\nAWT = %.2f\nATAT = %.2f\n", totalWT / n, totalTAT / n);
    return 0;
}
If you run this, you’ll get the same Gantt chart and averages you calculated:
AWT = 9.00
ATAT = 12.80
Highest Response Ratio Next (HRRN) – CPU Scheduling Algorithm:
In HRRN, the CPU is given to the arrived process with the highest response ratio.
Formula:
Response Ratio = (W + S) / S
Where:
W = waiting time (time spent in the ready queue so far)
S = burst (service) time
Characteristics:
Non-preemptive
Example
Step-by-Step Execution:
Step 1: Time = 0 → 8
P1 runs (CT = 8)
Now available processes: P2, P3, P4
Step 2: Time = 8
Step 3: Time = 8 → 9
P4 runs (CT = 9)
Step 4: Time = 9
Remaining: P2, P3
Process WT BT RR
P2 8 4 3.0
P3 7 2 4.5 → Selected
Step 5: Time = 9 → 11
P3 runs (CT = 11)
Step 6: Time = 11 → 15
P2 runs (CT = 15)
Gantt Chart:
| P1 | P4 | P3 | P2 |
0 8 9 11 15
🔹 Completion Table:
Process AT BT CT TAT = CT − AT WT = TAT − BT
P1      0  8  8  8            0
P2      1  4  15 14           10
P3      2  2  11 9            7
P4      3  1  9  6            5
Averages:
AWT = (0 + 10 + 7 + 5) / 4 = 5.50
ATAT = (8 + 14 + 9 + 6) / 4 = 9.25
HRRN Scheduling C program:
#include <stdio.h>

struct Process {
    int pid, at, bt;
    int ct, wt, tat;
    int completed;
};

int main() {
    int n = 4;
    struct Process p[] = {
        {1, 0, 8, 0, 0, 0, 0},
        {2, 1, 4, 0, 0, 0, 0},
        {3, 2, 2, 0, 0, 0, 0},
        {4, 3, 1, 0, 0, 0, 0}
    };
    int time = 0, completed = 0;
    float totalWT = 0, totalTAT = 0;

    printf("Gantt Chart:\n|");
    while (completed < n) {
        // Among arrived processes, pick the one with the highest
        // response ratio: RR = (waiting time + burst time) / burst time
        int idx = -1;
        float maxRR = -1.0f;
        for (int i = 0; i < n; i++) {
            if (!p[i].completed && p[i].at <= time) {
                float rr = (float)((time - p[i].at) + p[i].bt) / p[i].bt;
                if (rr > maxRR) {
                    maxRR = rr;
                    idx = i;
                }
            }
        }
        if (idx == -1) {
            time++;        // CPU idle until the first arrival
            continue;
        }
        printf(" P%d |", p[idx].pid);
        // Execute process (non-preemptive: runs to completion)
        time += p[idx].bt;
        p[idx].ct  = time;
        p[idx].tat = p[idx].ct - p[idx].at;
        p[idx].wt  = p[idx].tat - p[idx].bt;
        totalWT += p[idx].wt;
        totalTAT += p[idx].tat;
        p[idx].completed = 1;
        completed++;
    }
    printf("\n");

    // Output results
    printf("\nProcess\tAT\tBT\tCT\tTAT\tWT\n");
    for (int i = 0; i < n; i++)
        printf("P%d\t%d\t%d\t%d\t%d\t%d\n",
               p[i].pid, p[i].at, p[i].bt, p[i].ct, p[i].tat, p[i].wt);
    printf("\nAWT = %.2f\nATAT = %.2f\n", totalWT / n, totalTAT / n);
    return 0;
}
Output:
Gantt Chart:
| P1 | P4 | P3 | P2 |
Process AT BT CT TAT WT
P1 0 8 8 8 0
P2 1 4 15 14 10
P3 2 2 11 9 7
P4 3 1 9 6 5
Each queue can represent a type of process (e.g., system processes, interactive,
batch).
Structure:
Each queue has its own scheduling algorithm (e.g., FCFS, RR, SJF)
Example Setup:
Structure:
Example Setup:
If a process finishes within its quantum → stays or goes up
A multi-processor is a system that has more than one processor but shares the
same memory, bus, and input/output devices. The bus connects the processor to the
RAM, to the I/O devices, and to all the other components of the computer.
Tightly Coupled Systems: Shared memory, common OS.
Asymmetric Multiprocessing:
One processor handles all OS-related tasks (e.g., I/O, system calls).
The other processors execute only user processes.
Simple to implement but not scalable.
Example:
Example:
3. Processor Affinity
Types of Affinity:
Soft Affinity: The OS tries to keep the process on the same CPU, but it may migrate.
Hard Affinity: The process is bound to a specific set of CPUs and cannot migrate.
Load Balancing:
Ensures that all processors have roughly equal work.
It can be:
FCFS
Round Robin
Priority Scheduling
SJF
But applied across multiple CPUs with added complexities.