
Operating Systems Notes

Unit 1: Introduction & Basics

1. What is an Operating System (OS)?

Definition:
An Operating System (OS) is system software that acts as an intermediary between the
user/application and the computer hardware. It manages hardware resources and
provides a set of services for efficient execution of programs.

Analogy:
The OS is like a manager in a company. It allocates resources (hardware), schedules
work (processes), and ensures everyone (apps) follows the rules.

2. Functions and Goals of an OS

Primary Functions

1. Process Management
Handles creation, scheduling, and termination of processes (programs in
execution).

2. Memory Management
Allocates and deallocates main memory for processes; keeps track of each byte.

3. File System Management


Organizes, stores, retrieves, names, shares, and protects les.

4. Device Management
Manages device communication via drivers (e.g., printers, disks).

5. I/O System Management


Coordinates and assigns I/O devices and operations.
6. Security and Protection
Controls access to resources, provides authentication and authorization.

7. User Interface
Provides interaction tools (CLI, GUI).

Goals of an OS

• Convenience: Make computing easier for users.


• Efficiency: Manage resources to maximize throughput and minimize response
time.
• Ability to Evolve: Adapt to new hardware, user needs, and technology.
• Fairness: Prevent resource monopolization.

3. Types of Operating Systems

1. Batch Operating System

• Users do not interact directly.


• Jobs are collected in batches and executed together.
• Example: Early IBM mainframes.
Advantage: Efficient for large jobs without user interaction.
Disadvantage: No real-time interaction; slow for urgent tasks.

2. Time-Sharing (Multitasking) OS

• Allows multiple users/programs to share CPU simultaneously via time slices.


• Users interact via terminals.
• Example: Unix, Linux.
Advantage: Responsive, allows many users.
Disadvantage: Overhead due to context switching.

3. Real-Time OS (RTOS)
• Processes tasks within a guaranteed time frame.
• Hard real-time: Strict deadlines (e.g., pacemakers).
• Soft real-time: Missed deadlines tolerated (e.g., video streaming).
• Example: VxWorks, RTLinux.
Advantage: Predictable response times.
Disadvantage: Limited multitasking; expensive.

4. Distributed OS

• Manages a group of independent computers, making them appear as a single


system.
• Resources are shared, computation can be distributed.
• Example: Amoeba, Google’s internal systems.
Advantage: Resource sharing, reliability, scalability.
Disadvantage: Complex implementation, security issues.

5. Embedded OS

• Runs on dedicated hardware (not general-purpose).


• Used in devices like microwaves, washing machines, routers.
• Example: Embedded Linux, FreeRTOS.
Advantage: Small, efficient, custom-built for hardware.
Disadvantage: Limited features, less flexible.

6. Mobile OS

• Designed for mobile devices with constraints on power, UI, and sensors.
• Example: Android, iOS.
Advantage: Optimized for touch, mobility, connectivity.
Disadvantage: Limited customization (user-level).
System Calls

Definition:
A system call is a programmatic way in which a process requests a service from the
kernel of the OS.

Why?
User programs cannot directly access hardware; they use system calls to interact safely.

Analogy:
Like submitting a form to a government office; you can't directly access the office
resources.

Types of System Calls

• Process Control: fork(), exit(), wait(), exec()


• File Management: open(), read(), write(), close()
• Device Management: ioctl(), read(), write()
• Information Maintenance: getpid(), alarm(), sleep()
• Communication: pipe(), shmget(), mmap(), sockets
Example in C (Linux):

#include <unistd.h>
int main() {
    write(1, "Hello, World!\n", 14); // write system call: fd 1 is stdout
    return 0;
}
OS Structure

The structure of an OS defines how its components interact and are organized for
efficiency, reliability, and scalability.

1. Monolithic Kernel

• All OS services run in kernel mode in a single large process.


• Services interact through procedure calls.
• Example: Early Unix, MS-DOS.
Advantages: Fast, simple design.
Disadvantages: Hard to maintain/debug, not modular.
2. Layered Approach

• OS is divided into layers; each layer uses services of the lower one.
• Example layers: Hardware, Kernel, Device Drivers, User Interface.
• Example: THE OS, early UNIX versions.
Advantages: Simpler debugging, modularity.
Disadvantages: Can reduce efficiency, strict layer order sometimes limiting.

3. Microkernel

• Only minimal core functions (communication, basic I/O) run in kernel mode.
• Other services (file systems, drivers) run in user mode as servers.
• Example: QNX, MINIX, modern Windows NT kernel.
Advantages: Modular, easy to extend, stable (fault isolation).
Disadvantages: More context switches, sometimes slower IPC.
4. Modules

• OS is divided into modules that can be loaded/unloaded at runtime.


• Each module provides a set of related functions.
• Example: Linux kernel modules.
Advantages: Flexibility, better maintainability, easier upgrades.
Disadvantages: Slightly increased complexity.

5. Hybrid Kernel

• Combines monolithic and microkernel features.


• Core functions in the kernel; others can be modules or services.
• Example: Windows NT, macOS.
Advantages: Flexible, can optimize performance and reliability.
Disadvantages: Can become complex.

OS Type       Example            Use-case
Batch         IBM Mainframe      Payroll, scientific jobs
Time-Sharing  Unix, Linux        Multi-user terminals
Real-Time     VxWorks, RTLinux   Robotics, medical devices
Distributed   Amoeba, Plan 9     Cloud, grid computing
Embedded      FreeRTOS, TinyOS   IoT, appliances, vehicles
Mobile        Android, iOS       Smartphones, tablets

Structure Example Key Feature
Monolithic Unix, MS-DOS All-in-one kernel
Layered THE, early UNIX Service separation
Microkernel MINIX, QNX Minimal kernel, user servers
Modules Linux Dynamically loadable parts
Hybrid Windows NT, macOS Mix of modular and monolith
Unit 2: Processes & Threads

1. Process Concept

What is a Process?
A process is an active (running) program. It consists of the program code and its current
activity (such as the values of the program counter, registers, and variables).

Analogy:
A process is like a chef cooking a meal:

• The recipe = code


• The chef's state = current execution
Key Elements of a Process:

• Program code (text section)


• Program counter (next instruction to execute)
• Stack (function calls, local variables)
• Data section (global variables)
• Heap (dynamically allocated memory)

2. Process States

Typical Process States:

1. New: Process is being created.


2. Ready: Waiting to be assigned to a processor.
3. Running: Instructions are being executed.
4. Waiting (Blocked): Waiting for an event (e.g., I/O).
5. Terminated: Process has finished execution.
State Transition Diagram:

Process Control Block (PCB)

A PCB is a data structure maintained by the OS for every process.

Contains:

• Process ID (PID)
• Process state
• Program counter
• CPU registers
• CPU scheduling info (priority, pointers to scheduling queues)
• Memory management info (base, limit registers, page tables)
• Accounting info (CPU used, time limits)
• I/O status info (list of open files/devices)

3. Process Scheduling

Scheduling Queues:

• Job Queue: All processes in the system.


• Ready Queue: Processes in memory, ready to execute.
• Device (I/O) Queue: Processes waiting for an I/O device.

Queue Diagram:

Context Switching

Definition:
Saving the state of the currently running process and loading the state of the next
process.

Why:
Allows the OS to switch the CPU from one process/thread to another.

Steps:

1. Save current process state (to PCB)


2. Load next process state (from PCB)
Overhead:
Context switch time is "pure overhead"—no useful work is done during the switch.
4. Threads

What is a Thread?
A thread (lightweight process) is the smallest sequence of programmed instructions that
can be managed independently by a scheduler. Multiple threads can exist within one
process, sharing code, data, and resources, but each has its own stack and program
counter.

Analogy:
Process = apartment, Threads = people living in it (share the same resources).

User vs Kernel Threads

• User-Level Threads:

◦ Managed in user space (by a user-level library)


◦ Kernel is unaware of them
◦ Fast to create/manage
◦ Disadvantage: If one thread blocks, all threads in the process block
• Kernel-Level Threads:

◦ Managed directly by the OS kernel


◦ Kernel schedules/manages threads
◦ More overhead, but one thread can block without affecting others

Multithreading Models

1. Many-to-One: Many user threads mapped to one kernel thread

◦ Fast, but poor concurrency


2. One-to-One: Each user thread maps to a kernel thread

◦ Good concurrency, but resource-intensive


3. Many-to-Many: Many user threads mapped to a smaller or equal number of
kernel threads

◦ Combines benefits of both above

5. Inter-process Communication (IPC)

Processes often need to communicate and synchronize with each other.

IPC Methods:

1. Shared Memory:

◦ Two or more processes share a memory region


◦ Fast communication, but requires synchronization
◦ Example: Producer writes to shared memory, consumer reads from it
2. Message Passing:

◦ Processes communicate by sending/receiving messages via OS functions


◦ Can be direct or indirect (mailboxes/ports)
◦ Slower than shared memory, but easier for distributed systems
3. Pipes:

◦ Unidirectional/bidirectional data channel between processes


◦ Anonymous Pipes: Parent-child communication
◦ Named Pipes (FIFOs): Unrelated processes
4. Sockets:

◦ Endpoints for communication between processes over a network


◦ Used in client-server models (e.g., web servers)
6. Process Synchronization

Multiple processes may need to access shared data/resources. Synchronization


prevents data inconsistency.

Critical Section Problem

• Critical Section: Segment of code where shared data is accessed


• Problem: Ensure that when one process is executing in its critical section, no
other process can be executing in its critical section at the same time.
Solution Requirements:

1. Mutual Exclusion: Only one process in the critical section at a time


2. Progress: If no process is in the critical section, those outside cannot prevent
entry
3. Bounded Waiting: Limit on number of times others can enter before a process is
granted access

Race Condition

Occurs when two or more processes access shared data and try to change it
simultaneously.
Outcome depends on the order of execution (not predictable).

Example:
Two processes incrementing a shared variable simultaneously may overwrite each
other's results.

Mutual Exclusion

Techniques to prevent race conditions by ensuring only one process at a time can
access the critical section.
• Software Solutions: Peterson's Algorithm, Bakery Algorithm
• Hardware Solutions: Disabling interrupts, Test-and-Set, Compare-and-Swap
instructions
Synchronization Tools:

• Semaphores: Integer variables used for signaling


(wait() and signal() operations)
• Mutex (Mutual Exclusion Lock): Lock/unlock to enter/leave critical section
• Monitors: High-level synchronization construct (e.g., in Java)

Sample Code: Critical Section with Semaphore (Pseudo-C)

semaphore mutex = 1;

Process {
while (true) {
wait(mutex); // Enter critical section
// Critical section code
signal(mutex); // Exit critical section
// Remainder section
}
}

Concept            Description
Process            Program in execution
Thread             Lightweight process, unit of CPU utilization
PCB                Data structure holding process info
Context Switching  Switching CPU from one process/thread to another
IPC                Communication between processes (Shared Memory, Message Passing)
Critical Section   Code accessing shared resources
Mutual Exclusion   Only one process in critical section at a time
Race Condition     Unpredictable result due to concurrent access
Unit 3: CPU Scheduling

1. CPU Scheduling Concepts

CPU scheduling is the process of deciding which process in the ready queue should
be assigned to the CPU next. The main goals are efficient CPU utilization and fair
allocation of CPU time.

• Ready Queue: Holds all processes ready to execute.


• Scheduler: OS component that selects the next process to run.

2. Preemptive vs Non-preemptive Scheduling

Non-Preemptive Scheduling

• Once a process gets the CPU, it runs until it finishes or yields (e.g., for I/O).
• No interruption by OS.
• Examples: FCFS, Non-Preemptive SJF.
Example:
P1 (burst 5), P2 (burst 3)
Order: P1 → P2 (P2 waits until P1 finishes)

Preemptive Scheduling

• OS can interrupt and move a running process back to the ready queue.
• Improves responsiveness and fairness.
• Examples: Round Robin, SRTF, Preemptive Priority.
Example:
P1 (burst 5), P2 arrives at time 2 (burst 2)
With preemptive SJF, P2 can preempt P1.
3. Dispatcher & Scheduling Criteria

Dispatcher

• Gives CPU control to the selected process.


• Performs context switching, switches to user mode, and jumps to the user
program.
Dispatcher Latency: Time taken to stop one process and start another.

Scheduling Criteria

• CPU Utilization: Keep CPU busy (40%-90%).


• Throughput: Number of processes completed per unit time.
• Turnaround Time: Submission to completion time.
• Waiting Time: Time spent waiting in the ready queue.
• Response Time: Time from submission to first response.
• Fairness: All processes get a chance; no starvation.

4. Scheduling Algorithms
1. First-Come, First-Served (FCFS)

• Non-preemptive; scheduled in order of arrival.


Example:

Process Burst Time


P1 4
P2 3
P3 2

Gantt Chart:
| P1 | P2 | P3 |
0 4 7 9

Waiting Times:
P1: 0, P2: 4, P3: 7
Average: (0+4+7)/3 = 3.67

• Simple, but can cause convoy effect.

2. Shortest Job First (SJF)

• Picks process with shortest burst time.


• Can be preemptive (SRTF) or non-preemptive.
Example (Non-preemptive):

Process Burst Time


P1 6
P2 8
P3 7
P4 3

Order: P4 → P1 → P3 → P2
Gantt Chart:
| P4 | P1 | P3 | P2 |
0 3 9 16 24

• Optimal average waiting time, but risk of starvation.

3. Shortest Remaining Time First (SRTF)

• Preemptive SJF; always runs process with least remaining time.


Example:

Process Arrival Burst


P1 0 8
P2 1 4
P3 2 9
P4 3 5

• Scheduler may preempt running process when a shorter job arrives.

4. Priority Scheduling

• Each process has a priority; lower number = higher priority (may vary by OS).
• Can be preemptive or non-preemptive.
Example:
Process Burst Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Order: P2 → P5 → P1 → P3 → P4

• Starvation possible; aging can prevent it.

5. Round Robin (RR)

• Each process gets a fixed time quantum (e.g., 2ms).


• After quantum expires, process goes to end of ready queue.
Example:

Process Burst Time


P1 5
P2 4
P3 2

Time Quantum: 2

Order:
P1 (2), P2 (2), P3 (2), P1 (2), P2 (2), P1 (1)

Gantt Chart:
| P1 | P2 | P3 | P1 | P2 | P1 |
0    2    4    6    8    10   11

• Good for fairness and response time; context switch overhead if quantum is too
small.

6. Multilevel Queue Scheduling

• Multiple queues (e.g., system, interactive, batch), each with its own algorithm.
• Fixed priority: lower queues run only if higher queues are empty.
Example:

Queue Scheduling Algo


System RR
Interactive FCFS
Batch SJF

7. Multilevel Feedback Queue

• Like multilevel queue, but processes can move between queues based on
behavior.
• Favors short jobs, but all processes eventually progress.
Example Policy:

• New processes enter highest priority queue.


• If they use up their quantum, move to lower queue.
• If they yield early, stay or move up.

5. Performance Metrics
• CPU Utilization: % of time CPU is busy (maximize).
• Throughput: Processes completed per unit time (maximize).
• Turnaround Time: Completion - arrival (minimize).
• Waiting Time: Time in ready queue (minimize).
• Response Time: First scheduled - arrival (minimize).
• Fairness: All processes get CPU time.

6. Gantt Charts

Visual timeline of process execution.

Example:

Process Arrival Burst


P1 0 4
P2 1 3
P3 2 1

FCFS Gantt Chart:


| P1 | P2 | P3 |
0 4 7 8

7. Sample Code: Round Robin (Pseudocode)

int quantum = 2;
while (there are unfinished processes) {
    for each process in ready_queue {
        if (remaining_burst > 0) {
            slice = min(quantum, remaining_burst);
            execute for slice;
            remaining_burst -= slice;  // subtract only the time actually used
            if (remaining_burst == 0)
                mark as finished;
        }
    }
}

Summary Table: Scheduling Algorithms

Algorithm        Preemptive  Criteria Optimized   Starvation Possible  Real-Life Usage
FCFS             No          None                 Yes                  Batch systems
SJF              No/Yes      Avg. Waiting Time    Yes                  Batch systems
SRTF             Yes         Avg. Waiting Time    Yes                  Rare
Priority         No/Yes      Priority             Yes                  Real-time systems
RR               Yes         Response/Fairness    No                   Time-sharing
Multilevel Q     No/Yes      Varies               Yes                  OS kernels
Multilevel FB Q  Yes         Turnaround/Fairness  No                   OS kernels
Unit 4: Synchronization & Deadlocks

1. Synchronization Tools

Modern operating systems use several synchronization primitives to control access to


shared resources and prevent race conditions.

1.1 Locks

A lock is a mechanism to ensure only one thread or process can access a resource at a
time.

Typical use: Lock before entering the critical section, unlock after exiting.

Example (pseudocode):

lock.acquire();
// critical section
lock.release();

1.2 Semaphores

A semaphore is an integer variable with two atomic operations: wait() (P)


and signal() (V).

• Counting Semaphore: Integer can range over unrestricted values (for resources
with multiple instances).
• Binary Semaphore (Mutex): Value is 0 or 1 (like a lock).
Operations:

wait(S): // also called P(S)


while S <= 0; // busy wait
S = S - 1;

signal(S): // also called V(S)


S = S + 1;
Semaphores are used for signaling between processes and resource management.

1.3 Mutex (Mutual Exclusion Lock)

A mutex is a special kind of binary semaphore used for mutual exclusion. Only the
process that locked it can unlock it.

Typical usage: Thread libraries and modern programming languages


(e.g., pthread_mutex in C, std::mutex in C++).

1.4 Monitors

A monitor is a high-level abstraction that combines data, procedures, and


synchronization. Only one process can execute monitor procedures at a time (implicit
mutual exclusion).

Common in languages like Java (the synchronized keyword).

1.5 Condition Variables

Condition variables are used inside monitors to block a process until a particular
condition is true.

Operations:

• wait(): Releases monitor lock and waits.


• signal(): Wakes up one waiting process.
• broadcast(): Wakes up all waiting processes.
Example (Java-style):

synchronized(lock) {
while (!condition) lock.wait();
// do work
lock.notify(); // or lock.notifyAll();
}

2. Classical Synchronization Problems

Famous OS problems to test understanding of process synchronization and solutions.

2.1 Producer-Consumer Problem (Bounded Buffer)

Problem:
Producer generates data and puts it into a buffer. Consumer takes data from the buffer.
Need to synchronize access to the shared buffer (avoid buffer overflow/underflow).

Solution using Semaphores:

semaphore full = 0;
semaphore empty = N; // buffer size
semaphore mutex = 1;

Producer:
while (true) {
// produce item
wait(empty);
wait(mutex);
// add item to buffer
signal(mutex);
signal(full);
}

Consumer:
while (true) {
wait(full);
wait(mutex);
// remove item from buffer
signal(mutex);
signal(empty);
// consume item
}

2.2 Readers-Writers Problem

Problem:
Multiple readers can read at the same time. Writers need exclusive access (no reading
or writing by others).

Goals:

• No reader should be kept waiting unless a writer has permission.


• No writer should wait longer than necessary.
Solution (First Readers-Writers):

semaphore mutex = 1;
semaphore wrt = 1;
int readcount = 0;
Reader:
wait(mutex);
readcount++;
if (readcount == 1) wait(wrt);
signal(mutex);

// reading

wait(mutex);
readcount--;
if (readcount == 0) signal(wrt);
signal(mutex);

Writer:
wait(wrt);
// writing
signal(wrt);

2.3 Dining Philosophers Problem

Problem:
N philosophers sit around a table, each needs two forks to eat (one from left and one
from right). Each fork is shared by neighbors.

Challenge:
Prevent deadlock (all pick up left fork and wait for right), and starvation.

Simple Solution (may cause deadlock):


semaphore fork[N] = {1, 1, ..., 1};

Philosopher i:
wait(fork[i]); // pick up left
wait(fork[(i+1)%N]); // pick up right

// eat

signal(fork[i]);
signal(fork[(i+1)%N]);

To prevent deadlock:

• Allow only (N-1) philosophers to pick up forks at the same time, or


• Make odd/even philosophers pick up forks in opposite order.

3. Deadlocks

A deadlock is a situation where a set of processes are blocked, each waiting for a
resource held by another.

3.1 Necessary Conditions for Deadlock (Coffman Conditions)

All four must hold simultaneously:

1. Mutual Exclusion: At least one resource is held in a non-sharable mode.


2. Hold and Wait: A process holds at least one resource and waits to acquire
others.
3. No Preemption: Resources cannot be forcibly taken away.
4. Circular Wait: A closed chain of processes exists, each waiting for a resource
held by the next.

3.2 Resource Allocation Graph (RAG)

Used to model resources and processes.

• Processes: Circles
• Resources: Squares
• Edges:
◦ Request edge: Process → Resource
◦ Assignment edge: Resource → Process
Deadlock exists if there is a cycle in the graph.

Example:
P1 → R1 → P2 → R2 → P1 (cycle; deadlock!)

4. Handling Deadlocks

4.1 Deadlock Prevention

Goal: Eliminate at least one of the necessary conditions.

• Mutual Exclusion: Not always possible.


• Hold and Wait: Require processes to request all resources at once.
• No Preemption: If a process holds some resources and requests another, take
away all resources (preempt).
• Circular Wait: Impose a resource ordering and force processes to request
resources in order.

4.2 Deadlock Avoidance


Requires knowledge of future resource requests.

Banker's Algorithm: Safely allocate resources so that the system never enters an
unsafe (potential deadlock) state.

Banker's Algorithm Steps:

1. Each process must declare the maximum resources it needs.


2. System simulates granting the request:
◦ If resulting state is safe (can finish all processes in some order), grant.
◦ Otherwise, process waits.

4.3 Deadlock Detection

Allow the system to enter deadlock and then detect it.

• Use algorithms (like cycle detection in RAG) to find deadlocks.


• OS can periodically run deadlock detection.

4.4 Deadlock Recovery

Once a deadlock is detected:

• Terminate processes: Abort one or more processes to break the cycle.


• Resource preemption: Take resources from some processes and give to
others.
Must ensure system stability and avoid starvation

Tool/Concept         Purpose / Use-case
Lock/Mutex           Mutual exclusion for shared resources
Semaphore            Signaling, counting, resource management
Monitor/Cond. Var.   High-level synchronization, condition waiting
Prod-Cons Problem    Buffer, consumer/producer sync
Readers-Writers      Concurrent reading, exclusive writing
Dining Philosophers  Resource allocation, deadlock prevention
Deadlock             Processes stuck, each waiting for another
Banker's Algorithm   Deadlock avoidance (safe/unsafe state)
RAG                  Visualize process-resource relations/cycles
Unit 5: Memory Management

1. Memory Hierarchy & Allocation

Memory Hierarchy

• Registers (fastest, smallest)


• Cache (L1, L2, L3)
• Main Memory (RAM)
• Secondary Storage (SSD/HDD, slower but larger)
• Tertiary Storage (tapes, archival, slowest)
Why?
Faster memory is more expensive and smaller. The OS tries to keep active data in
faster memory.

Memory Allocation

The OS allocates memory to processes for efficient and safe execution.

2. Contiguous & Non-contiguous Allocation

Contiguous Memory Allocation

Each process is allocated a single continuous block of memory.

Methods:

• Fixed Partitioning: Memory divided into fixed-sized regions.


• Variable Partitioning: Regions allocated dynamically as per process size.
Problems:

• Fragmentation:
◦ External: Free memory scattered across; can't be used efficiently.
◦ Internal: Allocated memory block is larger than needed, wasting space.

Non-contiguous Memory Allocation

Process is allocated memory in multiple blocks, not necessarily adjacent.

• Reduces fragmentation
• Improves utilization

3. Paging, Segmentation, Paging + Segmentation

Paging

• Physical memory is divided into fixed-size blocks called frames.


• Logical memory is divided into blocks of the same size called pages.
• A page table maps logical pages to physical frames.
• Eliminates external fragmentation.
• Drawback: Can cause internal fragmentation (last page may not be fully used).
Example:

• Page size: 4 KB
• Logical address: 13,500
• Page number = 13,500 / 4,096 = 3
• Offset = 13,500 % 4,096 = 1,212

Segmentation

• Memory is divided into variable-sized segments (e.g., code, data, stack).


• Each address has: Segment Number + Offset
• Segment Table stores base and limit for each segment.
Advantage: Logical view for programmers (matches program structure).
Drawback: Suffers from external fragmentation.
Paging + Segmentation

• Logical address space is divided into segments; each segment is paged.


• Each segment has its own page table.
Purpose: Combines flexibility of segmentation with efficiency of paging.

4. Address Translation, TLB

Address Translation (Paging Example)

1. Logical Address = (Page Number, Offset)


2. MMU (Memory Management Unit):
◦ Looks up page table for frame number.
◦ Physical Address = (Frame Number, Offset)

TLB (Translation Lookaside Buffer)

• A small, fast cache in the MMU.


• Stores recent page table entries to speed up address translation.
• TLB Hit: Address translation is fast.
• TLB Miss: Must access the page table in memory (slower).

5. Virtual Memory

• Allows execution of processes that may not be completely in main memory.


• Illusion: Each process has large, private memory.
• Implemented using: Paging or segmentation with demand paging.

Demand Paging

• Pages are loaded into memory only when needed (on demand).
• If a page is not in memory, a page fault occurs, and the OS loads it from disk.
Advantages:

• Reduces memory usage.


• Enables large programs to run on small RAM.

6. Page Replacement Algorithms

When memory is full and a new page is needed, OS must replace an old one.

1. FIFO (First-In, First-Out)

◦ Oldest page in memory is replaced.


◦ Simple, but may lead to Belady’s anomaly (more frames → more page
faults).
2. LRU (Least Recently Used)

◦ Replaces page that hasn’t been used for the longest time.
◦ Good approximation of optimal; requires tracking usage order.
3. Optimal

◦ Replaces the page that will not be used for the longest time in the future.
◦ Not implementable in practice (requires future knowledge).
◦ Used for comparison.
4. Second Chance (Clock)

◦ Like FIFO, but gives a second chance to pages with a reference bit set.
◦ Pages are kept in a circular queue (clock hand).
5. LFU (Least Frequently Used)

◦ Replaces the page with the lowest access frequency.


6. MFU (Most Frequently Used)

◦ Replaces the page with the highest access frequency.

Example: LRU Page Replacement


Suppose page reference string: 1, 2, 3, 2, 4, 1, 5, 2
Frames: 3

Time  Page  Pages in Memory (most → least recent)  Fault?
1     1     1                                      Yes
2     2     2, 1                                   Yes
3     3     3, 2, 1                                Yes
4     2     2, 3, 1                                No
5     4     4, 2, 3                                Yes (evict 1)
6     1     1, 4, 2                                Yes (evict 3)
7     5     5, 1, 4                                Yes (evict 2)
8     2     2, 5, 1                                Yes (evict 4)

Total: 7 page faults.

7. Thrashing & Working Set Model

Thrashing

• Thrashing: Excessive paging (page faults) where CPU spends more time
swapping pages than executing processes.
• Happens when processes do not have enough frames.
• Symptoms: High page fault rate, low CPU utilization.

Working Set Model

• Working Set: The set of pages a process is actively using during a time window.
• Goal: Allocate enough frames to each process to cover its working set and
prevent thrashing.
• The OS tracks recent page references and tries to keep those pages in memory.
Concept                  Description
Contiguous Allocation    Single block per process, can cause fragmentation
Paging                   Fixed-size blocks, eliminates external fragmentation
Segmentation             Variable-size segments, logical separation
TLB                      Fast cache for address translation
Virtual Memory           Allows execution of larger processes via paging
Demand Paging            Pages loaded on demand, page faults handled
FIFO, LRU, Optimal, ...  Page replacement algorithms
Thrashing                Excessive page faults, low performance
Working Set Model        Tracks "in use" pages to avoid thrashing
Unit 6: File Systems

1. File Concepts

What is a File?

A file is a collection of related data stored on secondary storage (disk, SSD, etc.),
identified by a name and extension.

Examples:

• report.docx (document file)
• main.cpp (source code file)
• image.jpg (image file)

File Attributes

Files have metadata (attributes) managed by the OS:

Attribute   Description
Name        Human-readable identifier
Type        Format/extension (e.g., .txt, .exe)
Location    Where the file is stored on disk
Size        Number of bytes in the file
Protection  Permissions (read, write, execute)
Time        Creation, modification, access times
Owner       User or process that owns the file
File Operations

The OS provides system calls to operate on files:

• Create: Make a new file
• Delete: Remove an existing file
• Open: Prepare a file for use
• Close: Finish using a file
• Read: Get data from a file
• Write: Save data to a file
• Seek: Move file pointer to a location
• Append: Add data at the end

File Types

• Text files: ASCII/Unicode characters
• Binary files: Data not in human-readable format (images, executables)
• Directories: Special files containing file lists

Access Methods

1. Sequential Access:
Data read/written in order, from start to end (like playing a cassette tape).
Example: Reading a log file.

2. Direct (Random) Access:


Access any part of the file directly by specifying offset (like jumping to a page in a
book).
Example: Database records.

3. Indexed Access:
Uses an index (like a table of contents) to find blocks/records.
Fast lookup, common in databases.
2. Directory Structure

A directory (folder) organizes files and may contain subdirectories.

1. Single-level Directory

All files in a single directory.
Simple but may cause naming conflicts and is hard to manage with many files.

Root
file1
file2
file3
2. Two-level Directory

Separate directory for each user.


Solves naming conflicts; still limited hierarchy.

Root
User1
f1
f2
User2
f1
f3
3. Tree-structured Directory

Allows arbitrary hierarchy of directories and subdirectories.


Most common in modern OS.

Root
bin
usr
local
share
home
user1
user2
4. Acyclic Graph Directory

Allows files/directories to be shared via links.


No cycles (no directory is its own ancestor).

Example:
User1 and User2 can both access a shared "project" directory via a link.

5. General Graph Directory

Allows links to form cycles.


Complex; requires algorithms to detect and break cycles to avoid infinite loops.

3. File System Implementation

File Control Block (FCB)

A File Control Block is a data structure that stores metadata about a file (attributes,
permissions, pointers to data blocks).

Allocation Methods

1. Contiguous Allocation

Each file occupies a set of contiguous blocks.


Fast access, simple.
Drawback: External fragmentation, hard to extend files.

Disk Blocks: [file1][file1][file1][file2][file2][free][free]

2. Linked Allocation

Each file is a linked list of disk blocks; each block contains a pointer to the next.
No external fragmentation.
Drawback: Slow direct access, pointer overhead.
file1: Block1 → Block5 → Block9 → null

3. Indexed Allocation

Each file has an index block containing addresses of its data blocks.


Direct access, no external fragmentation.
Drawback: Index block overhead.

Index Block: [4][7][9][10][12]


Data: [ ][ ][ ][ ][ ]



Free-space Management

The OS must track free blocks to allocate new files efficiently.

• Bitmap (Bit Vector):


1 bit per block: 0 = free, 1 = allocated. Efficient search for free space.

• Linked List:
Free blocks linked together; the OS traverses the list to find free space.

• Grouping:
First free block points to a set of free blocks, and so on. Faster than linked list.

4. Directory & File System Mounting, Sharing, Protection

Directory & File System Mounting

Mounting: Making a file system accessible at a directory mount point.


Allows multiple storage devices (e.g., USB, external HDD) to be accessed as part of the
file system.

Example:
Mounting a USB drive at /media/usb.

File Sharing

Users and processes can share files via links, shared directories, or network file
systems (NFS).
Shared files require protection and consistency management.

File Protection

Controls who can read/write/execute files.


• Access Control Lists (ACLs): List of users/roles and their permissions.
• Unix-style: Read, write, execute permissions for owner, group, others
(e.g., rwxr-x--x).

Summary Table

Concept Description
File Attributes Name, type, size, permissions, timestamps
Access Methods Sequential, direct (random), indexed
Directory Structure Single, two-level, tree, acyclic/general graph
FCB Stores metadata for each file
Allocation Methods Contiguous, linked, indexed
Free-space Mgmt Bitmap, linked list, grouping
Mounting Attach FS at a directory mount point
Sharing & Protection File sharing, access control (ACLs, Unix perms)
Unit 7: I/O Systems & Disk Management

1. I/O Hardware

I/O Devices

An I/O device is any hardware used to input data to or output data from a computer.

Types:

• Input Devices: Keyboard, mouse, scanner


• Output Devices: Monitor, printer, speaker
• Storage Devices: Hard disk, SSD, CD/DVD
Characteristics:

• Vary in speed, data transfer method, and complexity


• May be block-oriented (disk) or character-oriented (keyboard)

Device Drivers

A device driver is a software module that controls and communicates with a specific
I/O device.

Role:

• Converts generic OS requests into device-specific operations


• Handles interrupts, data transfer, and error management
Example:
A printer driver converts a print request into the exact commands required by a specific
printer model.
2. I/O Techniques

1. Programmed I/O

• CPU actively waits and controls all aspects of the I/O operation
Steps:

1. CPU sends data to the device


2. CPU waits (polls) until operation completes
Pros: Simple
Cons: CPU is busy-waiting and cannot do other tasks

2. Interrupt-driven I/O

• Device interrupts the CPU when it is ready for data or has finished its operation
Steps:

1. CPU issues an I/O command and continues other work


2. Device interrupts CPU when ready
Pros: CPU can work on other tasks while waiting
Cons: Context switching overhead

3. Direct Memory Access (DMA)

A DMA controller is special hardware that manages data transfer between device and
memory without CPU involvement.

Steps:

1. CPU initiates DMA transfer, then continues other work


2. DMA controller transfers data directly, signals CPU on completion
Pros: High efficiency, CPU free for other work
Cons: More hardware complexity

3. Disk Structure

Disk Components:

• Platters: Circular disks coated with magnetic material


• Tracks: Concentric circles on platters
• Sectors: Subdivisions of tracks, smallest addressable unit
• Cylinders: Set of tracks aligned vertically on all platters
• Read/Write Head: Moves to access data
4. Disk Scheduling Algorithms

Goal: Minimize disk head movement (seek time) and response time

1. FCFS (First-Come, First-Served)

• Services requests in the order they arrive


• Simple but may be inefficient (large head movement)

2. SSTF (Shortest Seek Time First)

• Selects the request closest to the current head position


• Advantage: Reduces total seek time
• Disadvantage: May cause starvation for far-away requests

3. SCAN (Elevator Algorithm)

• Disk arm moves in one direction servicing requests, then reverses


• Like an elevator serving all requests in one direction before switching

4. LOOK

• Like SCAN, but the arm goes only as far as the last request in each direction,
then reverses (doesn't go to disk edge if not needed)

5. C-SCAN (Circular SCAN)

• Like SCAN, but when the head reaches one end, it returns to the beginning and
starts again (servicing only in one direction)

6. C-LOOK
• Like LOOK, but only goes as far as the last request, then jumps to the first
request (no movement to disk edge)

Example (Scheduling Comparison):

Suppose head starts at 53, pending requests: 98, 183, 37, 122, 14, 124, 65, 67 (cylinder
numbers).

• FCFS: 53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67


• SSTF: 53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 → 183
• SCAN (if moving up): 53 → 65 → 67 → 98 → 122 → 124 → 183 → end, then
reverse if needed

5. RAID Levels

RAID = Redundant Array of Independent Disks


Combines multiple disks for reliability and/or performance.

Level Description Use-case
RAID 0 Striping (no redundancy) Fast, not fault-tolerant
RAID 1 Mirroring (exact copies) High reliability
RAID 5 Block-level striping + parity Fast reads, redundant, space-efficient
RAID 6 Like RAID 5, double parity Can survive 2 disk failures
RAID 10 Combination of RAID 1 & RAID 0 High performance and reliability
• Parity: Extra data to recover from disk failure


• Mirroring: Duplicate data on multiple disks

6. Error Handling

• Disk errors: Can occur due to hardware failure, bad sectors, power loss, etc.
• Detection: Using checksums, parity bits, ECC (Error Correction Codes)
• Correction: ECC can fix certain errors; the OS may mark bad sectors and avoid
them
• Recovery: RAID can recover data from failed disks depending on the level

Summary Table

Concept Description
I/O Device Hardware for input/output
Device Driver Software to control device
Programmed I/O CPU polls, waits for device
Interrupt-driven Device signals CPU when ready
DMA Data transfer directly between device & memory
FCFS, SSTF, SCAN… Disk scheduling algorithms (seek time optimization)
RAID Levels Disk reliability and performance schemes
Error Handling Detection and correction of hardware/software errors
Unit 8: Protection & Security

1. Protection

Goals of Protection

• Prevent unauthorized use of resources.


• Ensure that programs, processes, and users only access resources for which
they have explicit permission.
• Provide controlled sharing of resources.
Example:
One user’s process cannot read another user’s private files.

Access Matrix

An abstract model for specifying the rights of each subject (user/process) on each object
(file/resource).

      File A  File B  Printer
Alice R/W     R       -
Bob   -       R/W     Print

• Rows: Subjects (users/processes)


• Columns: Objects (resources)
• Cells: Set of rights (read, write, execute, print, etc.)

Implementation of Access Matrix

1. Access Control List (ACL)


• For each object, store a list of (subject, rights) pairs.
Example:

• File A:

◦ Alice: Read, Write


◦ Bob: -
• Advantage: Easy to find who can access a resource.

• Disadvantage: Hard to check all resources a user can access.

2. Capability List

• For each subject, store a list of (object, rights) pairs.


Example:

• Alice:

◦ File A: Read, Write


◦ Printer: -
• Advantage: Easy to find all resources a user can access.

• Disadvantage: Hard to check all users who can access a resource.

Domain of Protection

• Domain: A set of access rights (what a process/user can access) that defines
the scope of permissions.
• A process runs within a domain.
• Domains can change dynamically (e.g., sudo in Linux changes user domain
temporarily).

2. Security
Security Threats

• Security threat: Potential violation of security.


Types:

• Trojan horse: Malicious program disguised as legitimate.


• Trap door: Hidden entry point into a program/system.
• Virus/Worm: Self-replicating code that spreads/maliciously acts.
• Spoofing: Masquerading as another entity.
• Phishing: Fraudulent attempt to obtain sensitive info.

Attack Types

• Masquerading: Pretending to be another user/process.


• Replay attack: Reusing valid data transmission.
• Denial of Service (DoS): Overloading resources to deny access to others.
• Privilege escalation: Gaining higher privileges by exploiting vulnerabilities.
• Eavesdropping: Unauthorized listening to private communication.

Authentication

Verifying the identity of a user or process.

Common methods:

1. Password-based: User provides a secret password.


2. Biometric: Fingerprint, facial recognition.
3. Token-based: Smart cards, OTPs.
4. Multi-factor: Combination of two or more above.

Encryption
• Encryption: Transforming data into unreadable form except by those with a
decryption key.
• Purpose: Protect confidentiality and integrity of data.
Types:

• Symmetric-key: Same key for encryption and decryption (e.g., AES).


• Asymmetric-key: Public/private key pairs (e.g., RSA).

User Authentication

• Ensures only legitimate users can access system resources.


UNIX example: Login requires username & password, stored in /etc/passwd or
/etc/shadow.

Best Practices:

• Strong, unique passwords


• Account lockouts after failed attempts
• Use of multifactor authentication

Security Policies

Set of rules that determine how system resources may be accessed and by whom.

Examples:

• Discretionary Access Control (DAC): Owner decides access rights.


• Mandatory Access Control (MAC): Access rights are enforced by system policy
(e.g., classified info).
• Role-Based Access Control (RBAC): Permissions assigned to roles, users
assigned roles.
Summary Table

Concept Description
Protection Prevents unauthorized use of resources
Access Matrix Table of subjects vs. objects (rights)
ACL Per-object access list
Capability List Per-subject access list
Domain Set of access rights for a process/user
Threat Potential security violation
Attack Types Spoofing, DoS, privilege escalation, etc.
Authentication Identity verification (password, biometrics)
Encryption Securing data by encoding
Security Policy Set of system-wide access and usage rules
Unit 9: Advanced Topics & Case Studies

1. Distributed Operating Systems Concepts

A Distributed Operating System manages a collection of independent computers and


makes them appear as a single coherent system.

Key Features:

• Resource Sharing: Share files, printers, computation power.


• Transparency: Hides location, migration, and replication of resources from
users.
• Scalability: System can expand easily by adding more nodes.
• Fault Tolerance: System continues to work if some nodes fail.

Distributed File Systems

A Distributed File System allows files to be stored and accessed across multiple
networked machines.

Examples: Google File System (GFS), Hadoop Distributed File System (HDFS), NFS
(Network File System).

Features:

• Location Transparency: Users don’t need to know the physical location of files.
• Reliability: Data is often replicated across nodes.
• Concurrency: Supports simultaneous access by multiple users.

Distributed Coordination
Multiple nodes/processes need to cooperate and synchronize.

Mechanisms:

• Distributed Locks: Ensure mutual exclusion (e.g., ZooKeeper).


• Leader Election: One node acts as a coordinator.
• Clock Synchronization: Ensures events are ordered across systems (e.g., NTP
protocol, Lamport clocks).

Remote Procedure Call (RPC)

Definition: Allows a program on one machine to execute a procedure on another


machine as if it were local.

How it works:

1. Procedure call is converted to a network message.


2. Remote machine receives, processes, and returns the result.
Example:
A web client calls a function on a cloud server to process data.

2. Real-Time Operating Systems (RTOS)

An RTOS is an OS designed to process data and respond within a guaranteed time


constraint.

Used in: Embedded systems, medical devices, robotics, industrial control.

Types:

• Hard RTOS: Missing a deadline is catastrophic (e.g., pacemaker).


• Soft RTOS: Occasional missed deadlines are tolerable (e.g., video playback).
Features:

• Deterministic scheduling (predictable timing)


• Priority-based task scheduling
• Minimal interrupt latency
Examples: VxWorks, FreeRTOS, QNX, RTLinux

3. Mobile OS (Android/iOS Basics)

Android Architecture

1. Linux Kernel: Hardware abstraction, drivers, memory/process management


2. Libraries: Media, SQLite, WebKit, OpenGL
3. Android Runtime: ART/Dalvik VM, core libraries
4. Application Framework: Activities, Services, Content Providers
5. Applications: User apps and system apps
iOS Architecture
1. Core OS: Kernel, device drivers, low-level networking
2. Core Services: Data management, networking, location
3. Media: Graphics, audio, video
4. Cocoa Touch: UI framework, gesture recognition, notifications
5. Apps: Native and third-party apps

4. Case Studies

Unix/Linux

• Monolithic kernel (everything runs in kernel space)


• Command-line interface (CLI) and GUI available
• Filesystem hierarchy: /, /home, /bin, /usr, etc.
• Powerful multitasking, networking, and scripting (shell)
Windows OS

• Hybrid kernel (mix of microkernel and monolithic features)


• GUI-based by default, also has command prompt and PowerShell
• Registry for system configuration
• Wide hardware and application support

Android (recap)

• Built on Linux kernel


• Java/Kotlin-based application framework
• Application sandboxing for security
5. Recent Trends

Virtualization

Definition: Running multiple virtual machines (VMs) on a single physical machine using
a hypervisor.

Benefits: Resource efficiency, isolation, easier testing and deployment.

Examples: VMware, VirtualBox, KVM

Containers (Docker)

Definition: Lightweight, portable, and isolated application environments sharing the


host OS kernel.

Benefits: Fast startup, small footprint, easy scaling/deployment.

Docker: Popular container platform.


Diagram:

[Host OS]
|-- Container 1 (App + Libraries)
|-- Container 2 (App + Libraries)
|-- Container 3 (App + Libraries)

Cloud OS Basics

A Cloud OS is a software platform that manages cloud infrastructure and services.

Examples: OpenStack, AWS EC2, Microsoft Azure

Functions: Orchestrates VMs, storage, networks, and security across large data
centers.

Key Concepts: Elasticity (scale up/down), multi-tenancy (multiple users), self-service


provisioning

Summary Table

Concept Description
Distributed OS OS for networked computers, resource sharing
Distributed File System File access across network (NFS, HDFS)
Coordination Synchronizing multiple distributed nodes
RPC Procedure calls across machines
RTOS OS for real-time, time-constrained tasks
Mobile OS OS for mobile devices (Android, iOS)
Unix/Linux Monolithic kernel, scripting, multitasking
Windows OS Hybrid kernel, registry, GUI
Virtualization Multiple VMs on one machine (hypervisor)
Containers (Docker) Lightweight app isolation, shares OS kernel
Cloud OS Manages cloud resources, scaling, provisioning
Unit 10: Miscellaneous & Frequently Asked Interview
Questions

1. Memory Leaks

Memory Leak: Occurs when a program allocates memory but fails to release it after
use.

• Result: Reduced available memory, degraded performance, possible system


crash over time.
• Common Causes:
◦ Forgetting to use free() in C/C++ after malloc()/calloc().

◦ Losing all references to a dynamically allocated object.


• Detection: Use tools like valgrind (Linux) or built-in debuggers.
• Prevention:
◦ Always deallocate memory when done.
◦ Use smart pointers in C++ (std::unique_ptr, std::shared_ptr).
Example (C):

int* arr = malloc(10 * sizeof(int));
// ... forgot to call free(arr)
// memory leak!

2. Zombie & Orphan Processes

Zombie Process

• Definition: A process that has completed execution but still has an entry in the
process table (because its parent hasn’t read its exit status).
• Reason: Parent needs to call wait() to clean up (“reap”) the child.
• How to remove: Parent process should use wait() or waitpid().
Example:

• Child (exits) → becomes zombie


• Parent (running) → hasn't called wait()
Orphan Process

• Definition: A process whose parent has terminated before it.


• What happens: OS (init/systemd) adopts the orphan and cleans it up after
exit.

3. Fork-Exec Model

• fork(): System call to create a new process (child is copy of parent).


• exec(): System call to replace the process’s memory with a new program.
Typical sequence:

1. Parent process calls fork()


2. Child process calls exec() to run a new program
3. Parent can wait for the child
Example (C code):

pid_t pid = fork();
if (pid == 0) {
    // Child process
    execlp("ls", "ls", "-l", NULL);
    // If exec fails
    exit(1);
} else {
    // Parent process
    wait(NULL); // waits for child
}

4. Shell & System Calls

• Shell: A command-line interpreter that allows users to interact with the OS (e.g.,
bash, sh, PowerShell, cmd).
• System Call: An interface between user programs and the OS kernel
(e.g., open, read, write, fork, exec).
Role of Shell:

• Reads user commands


• Parses and interprets them
• Makes corresponding system calls
Example:
User types cat file.txt
Shell parses, uses fork() and exec() to run cat, and makes open/read system
calls.

5. Common OS Commands (Linux/Windows)

Linux

Command Description
ls List directory contents
cd Change directory
pwd Print current working directory
cp Copy files
mv Move/rename files
rm Remove files
ps List running processes
top Monitor processes live
kill Terminate a process
chmod Change file permissions
chown Change file owner
man Display manual pages
grep Search text in files
find Find files and directories
df Show disk usage
du Show file/folder sizes

Windows

Command Description
dir List directory contents
cd Change directory
copy Copy files
move Move/rename files
del Delete files
tasklist List running processes
taskkill Terminate a process
cls Clear screen
type Display file contents
attrib Display/change file attributes
icacls Change file permissions

100+ Most Frequently Asked OS Interview Questions (with


Answers)

Process Management

Q1. What is a process?


A process is a program in execution, consisting of code, data, stack, and program
counter.

Q2. What is the difference between a process and a thread?


A process is an independent program with its own memory space; a thread is a
lightweight process sharing memory with other threads in the same process.

Q3. What is a process control block (PCB)?


PCB is a data structure maintained by the OS containing information about a process
(PID, state, registers, memory info, etc.).

Q4. What are the different process states?


New, Ready, Running, Waiting (Blocked), Terminated.

Q5. What is context switching?


Saving the state of a running process and loading the state of another process.
Q6. What is a zombie process?
A process that has finished execution but still has an entry in the process table because
its parent hasn't read its exit status.

Q7. What is an orphan process?


A process whose parent has terminated before it; adopted by init/systemd.

Q8. What is the fork() system call?


Creates a new child process as a copy of the parent.

Q9. What is exec()?


Replaces the current process image with a new program.

Q10. What is the difference between fork() and exec()?


fork() creates a new process; exec() replaces the process's memory with a new
program.

Threads & Concurrency

Q11. What are user-level and kernel-level threads?


User-level threads are managed by user libraries; kernel-level threads are managed by
the OS.

Q12. What is multithreading?


Running multiple threads within a process for concurrent execution.

Q13. What is a race condition?


When multiple threads/processes access shared data concurrently and the result
depends on the order of execution.
Q14. What is a critical section?
A section of code accessing shared resources that must not be executed by more than
one thread at a time.

Q15. What is mutual exclusion?


Ensuring only one process/thread accesses the critical section at a time.

Q16. What is a semaphore?


A synchronization primitive used to control access to shared resources.

Q17. What is a mutex?


A mutual exclusion lock allowing only one thread to access a resource at a time.

Q18. What is a monitor?


A high-level synchronization construct that allows safe access to shared data.

CPU Scheduling

Q19. What is CPU scheduling?


The process of selecting which process in the ready queue should be assigned to the
CPU.

Q20. What is preemptive scheduling?


The OS can interrupt and switch out a running process.

Q21. What is non-preemptive scheduling?


A process runs until it finishes or yields control.

Q22. Name some CPU scheduling algorithms.


FCFS, SJF, SRTF, Priority, Round Robin, Multilevel Queue, Multilevel Feedback Queue.
Q23. What is starvation?
A process waits indefinitely because other processes are continuously chosen.

Q24. What is aging?


Gradually increasing the priority of waiting processes to prevent starvation.

Q25. What is turnaround time?


Total time taken from submission to completion of a process.

Q26. What is waiting time?


Total time a process spends in the ready queue.

Q27. What is response time?


Time from submission to the first response.

Deadlocks

Q28. What is a deadlock?


A situation where processes are blocked, each waiting for a resource held by another.

Q29. What are the necessary conditions for deadlock?


Mutual exclusion, hold and wait, no preemption, circular wait.

Q30. What is a resource allocation graph?


A graphical representation of processes and resources to detect deadlocks.

Q31. How can deadlocks be prevented?


By eliminating one of the necessary conditions.

Q32. What is the Banker's algorithm?


A deadlock avoidance algorithm that checks for safe resource allocation.
Q33. How can deadlocks be detected?
By checking for cycles in the resource allocation graph.

Q34. How can deadlocks be recovered?


Terminate processes or preempt resources to break the cycle.

Memory Management

Q35. What is contiguous memory allocation?


Allocating a single continuous block of memory to a process.

Q36. What is fragmentation?


Wasted memory due to allocation methods: internal (within allocated block), external
(between blocks).

Q37. What is paging?


Dividing memory into fixed-size pages and frames to eliminate external fragmentation.

Q38. What is segmentation?


Dividing memory into variable-sized segments based on logical divisions.

Q39. What is virtual memory?


A technique allowing execution of processes not completely in main memory.

Q40. What is demand paging?


Loading pages into memory only when needed.

Q41. What is a page fault?


Occurs when a referenced page is not in memory.

Q42. Name some page replacement algorithms.


FIFO, LRU, Optimal, Second Chance, LFU, MFU.
Q43. What is thrashing?
Excessive paging causing low CPU utilization.

Q44. What is a TLB?


Translation Lookaside Buffer, a cache for page table entries.

File Systems

Q45. What is a file?


A collection of related data stored on disk.

Q46. What is a directory?


A structure to organize and manage files.

Q47. What are file attributes?


Metadata like name, type, size, permissions, timestamps.

Q48. What are file access methods?


Sequential, direct (random), indexed.

Q49. What is contiguous file allocation?


Storing a file in consecutive disk blocks.

Q50. What is linked allocation?


Each file block points to the next block.

Q51. What is indexed allocation?


An index block contains pointers to all file blocks.

Q52. What is a file control block (FCB)?


A data structure storing file metadata.
Q53. What is mounting?
Making a file system accessible at a directory.

Q54. What is free-space management?


Tracking free disk blocks using bitmaps, linked lists, etc.

I/O and Disk Management

Q55. What is programmed I/O?


CPU actively waits for I/O completion.

Q56. What is interrupt-driven I/O?


Device interrupts CPU when ready.

Q57. What is DMA?


Direct Memory Access allows devices to transfer data without CPU intervention.

Q58. What is a device driver?


Software that controls a hardware device.

Q59. What is disk scheduling?


Algorithms to optimize disk head movement (FCFS, SSTF, SCAN, LOOK, C-SCAN,
C-LOOK).

Q60. What is RAID?


Redundant Array of Independent Disks for reliability and performance.

Synchronization

Q61. What is the producer-consumer problem?


A classic synchronization problem involving a shared buffer.
Q62. What is the readers-writers problem?
Multiple readers can read simultaneously, writers need exclusive access.

Q63. What is the dining philosophers problem?


Models resource allocation and deadlock with philosophers and forks.

Q64. What is a condition variable?


Used for signaling between threads in monitors.

Protection & Security

Q65. What is protection in OS?


Mechanisms to control access to resources.

Q66. What is an access matrix?


A table specifying rights of subjects over objects.

Q67. What is an ACL?


Access Control List, per-object list of users and permissions.

Q68. What is a capability list?


Per-subject list of objects and permissions.

Q69. What is authentication?


Verifying user identity (passwords, biometrics, tokens).

Q70. What is encryption?


Encoding data to prevent unauthorized access.

Q71. What is a Trojan horse?


Malicious code disguised as legitimate software.
Q72. What is privilege escalation?
Gaining higher access rights by exploiting vulnerabilities.

System Calls & Shell

Q73. What is a system call?


An interface for user programs to request OS services.

Q74. What is a shell?


A command-line interpreter for user-OS interaction.

Q75. What is the difference between kernel mode and user mode?
Kernel mode has full access to hardware; user mode is restricted.

Advanced Topics

Q76. What is a distributed OS?


An OS managing a group of networked computers as a single system.

Q77. What is a real-time OS?


An OS with guaranteed response times for critical tasks.

Q78. What is virtualization?


Running multiple virtual machines on one physical machine.

Q79. What is a container?


A lightweight, isolated environment sharing the host OS kernel.

Q80. What is cloud OS?


Software managing cloud infrastructure and services.
Miscellaneous

Q81. What is a memory leak?


Allocated memory not freed, causing resource drain.

Q82. What is a swap space?


Disk space used as an extension of RAM.

Q83. What is the difference between logical and physical address?


Logical address is generated by CPU; physical address is in RAM.

Q84. What is the bootloader?


Program that loads the OS kernel at startup.

Q85. What is the difference between hard and soft real-time systems?
Hard: missing a deadline is catastrophic; Soft: occasional misses tolerated.

Q86. What is a hypervisor?


Software that creates and manages virtual machines.

Q87. What is the difference between paging and segmentation?


Paging uses fixed-size blocks; segmentation uses variable-sized logical units.

Q88. What is the working set model?


Tracks the set of pages a process is actively using.

Q89. What is Belady’s anomaly?


Increasing the number of frames can increase page faults (with FIFO replacement).

Q90. What is the difference between internal and external fragmentation?


Internal: wasted space within allocated memory; External: wasted space between
allocations.
Q91. What is the difference between monolithic and microkernel?
Monolithic: all OS services in kernel space; Microkernel: minimal kernel, services in user
space.

Q92. What is the difference between symmetric and asymmetric multiprocessing?


Symmetric: all processors are equal; Asymmetric: one master, others are slaves.

Q93. What is a system daemon?


A background process providing system services.

Q94. What is the difference between hard link and soft link?
Hard link: direct pointer to file data; Soft link: pointer to file name.

Q95. What is the inode?


A data structure storing file metadata in Unix/Linux.

Q96. What is the difference between swapping and paging?


Swapping moves entire processes; paging moves fixed-size pages.

Q97. What is the difference between compile-time and run-time binding?


Compile-time: addresses determined at compile; Run-time: addresses determined
during execution.

Q98. What is the difference between static and dynamic loading?


Static: all code loaded at start; Dynamic: code loaded as needed.

Q99. What is the difference between synchronous and asynchronous I/O?


Synchronous: process waits for I/O; Asynchronous: process continues, notified on
completion.

Q100. What is the difference between user space and kernel space?
User space: where user applications run; Kernel space: where OS code runs.
