Complete Operating Systems Interview Preparation Guide
This comprehensive guide combines insights from multiple expert sources to provide you with
everything needed for OS interview preparation. Each topic includes definitions, detailed
explanations, real-life examples, and potential interview questions.
1. Introduction to Operating Systems
Definition for Interview: An Operating System is software that acts as an intermediary
between computer hardware and users, managing hardware resources such as the CPU,
memory, and storage, and providing a user-friendly interface [1] [2].
1.1 What is an Operating System?
An Operating System (OS) is essentially the restaurant manager of your computer. Just like a
restaurant manager coordinates between customers (users) and kitchen staff (hardware), the
OS ensures that:
Orders (tasks) are taken correctly
Food (processes) is prepared efficiently
Service is delivered smoothly
Customer complaints (errors) are handled [1]
Real-life Example: Think of Windows, macOS, Linux, Android, or iOS - these are all operating
systems that make your device usable.
1.2 Primary Goals of Operating System
1. Convenience: Make the system easy to use
2. Efficiency: Utilize hardware resources optimally
3. Resource Management: Fair allocation of CPU, memory, disk space
4. Security: Protect data and system integrity [2] [3]
1.3 Main Functions of Operating System
Mnemonic: "Please Make Files Download Securely Users"
Process Management
Memory Management
File System Management
Device Management
Security and Access Control
User Interface [1]
Interview Questions:
1. Q: What is an Operating System?
A: An OS is system software that manages computer hardware and software resources,
providing common services for computer programs and acting as an interface between
users and hardware.
2. Q: What are the main functions of an OS?
A: Process management, memory management, file system management, device
management, security, and providing user interface.
2. Types of Operating Systems
2.1 Batch Operating System (Less Important - Basic Understanding Sufficient)
Definition: Processes similar jobs together as a batch to save time.
Real-life Example: Like physics tuition where all physics students are taught together in one
time slot (4-5 PM), then chemistry students in the next slot, rather than mixing subjects [1].
2.2 Time Sharing Operating System
Definition: Allows multiple users to use computer simultaneously by quickly switching between
them.
Example: University computer lab where multiple students access the same server, or library
system where multiple people can access the same book database [1] [2] .
2.3 Multiprogramming vs Multitasking vs Multiprocessing vs Multithreading
| Aspect | Multiprogramming | Multitasking | Multiprocessing | Multithreading |
|--------|------------------|--------------|-----------------|----------------|
| Definition | Multiple programs loaded in memory | Multiple tasks executed concurrently | Multiple processors working together | Multiple threads within a process |
| Real Example | Like a chef cooking multiple dishes simultaneously | Like a person watching TV while eating | Like multiple chefs in a kitchen | Like multiple threads in a game (graphics, sound, controls) |
| Purpose | Maximize CPU utilization | Give illusion of parallel execution | Increase computational power | Improve performance within a process |
Real-life Scenario: Your smartphone runs multiple apps (multiprogramming), switches between
them quickly (multitasking), may have multiple CPU cores (multiprocessing), and each app may
use multiple threads (multithreading) [2] .
2.4 Real-Time Operating System
Definition: Guarantees response within a specific time constraint.
Examples:
Anti-lock Braking System (ABS): Must respond instantly to prevent accidents
Air Traffic Control: Zero latency communication required
Medical equipment: Pacemakers, life support systems [1]
Interview Questions:
1. Q: Difference between multitasking and multiprogramming?
A: Multitasking is about time-sharing where CPU switches between tasks quickly.
Multiprogramming is about keeping multiple programs in memory simultaneously to
maximize CPU utilization.
3. Operating System Structure
3.1 Simple Structure (Less Important)
Definition: Basic structure where application programs directly interact with hardware.
Problem: If any application fails, entire OS can crash.
Example: MS-DOS [3]
3.2 Layered Structure
Definition: OS organized in layers where each layer uses services of lower layers only.
Layers (L0 to Ln):
L0: Hardware layer
L1: Process scheduling
L2: Memory management
Ln: User interface layer [3]
3.3 Microkernel Architecture (Important for Interviews)
Definition: Minimal kernel with only essential functions, other services run in user space.
Microkernel contains:
Inter-Process Communication (IPC)
Basic memory management
CPU scheduling [3]
Advantages:
Better security
Easier to extend
More reliable
Real-life Example: Like having a small core team handling essential functions, while specialized
teams handle specific tasks independently.
4. Process Management (Very Important)
4.1 Process vs Program vs Thread
Definitions for Interview:
Program: Set of instructions written to perform a specific task (passive entity).
Process: Running instance of a program (active entity).
Thread: Smallest unit of execution within a process (lightweight process).
| Aspect | Program | Process | Thread |
|--------|---------|---------|--------|
| Nature | Static/Passive | Dynamic/Active | Lightweight |
| Memory | Stored on disk | Loaded in RAM | Shares process memory |
| Example | Game installer file | Game running | Graphics thread in game |
| Communication | N/A | Through IPC | Direct memory sharing |
Real-life Example:
Program: Recipe book (instructions)
Process: Actually cooking the dish (execution)
Thread: Different cooking tasks happening simultaneously (chopping, boiling, frying) [2]
4.2 Process States (Very Important)
Definition: Different stages a process goes through during its lifetime.
Five Main States:
1. New: Process is being created
2. Ready: Process is waiting for CPU allocation
3. Running: Process instructions are being executed
4. Waiting/Blocked: Process waiting for I/O or event
5. Terminated: Process has finished execution [1] [2]
State Transition Mnemonic: "New Recruits Run While Terminated"
Real-life Example - Cab Booking:
New: You just booked a cab
Ready: You're waiting for driver to arrive
Running: You're traveling in the cab
Waiting: Cab stopped for friend to board
Terminated: You reached destination [1]
4.3 Process Control Block (PCB) (Important)
Definition: Data structure containing all information about a process.
PCB Contents:
Process ID (PID)
Process State
CPU registers
Memory limits
Open files list
Priority level [2] [3]
Real-life Example: Like a student profile in school database containing roll number, name,
current grade, subjects enrolled, etc. [1]
4.4 Context Switching (Very Important)
Definition: Process of saving state of current process and loading state of next process.
Steps:
1. Save current process state in PCB
2. Select next process to run
3. Load new process state from its PCB
4. Resume execution [1] [2]
Real-life Example: Like cooking multiple dishes - you pause one dish (save its state), work on
another dish (switch context), then return to first dish (restore context) [1] .
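To make the four steps concrete, here is a toy Python sketch of the save/restore idea. Everything in it (process names, register values, the dict layout standing in for a PCB) is invented for illustration; real context switches happen in kernel code at the hardware-register level.
```python
# Toy context-switch sketch (illustrative only): "CPU registers" are
# saved into and restored from PCB-like dicts.
pcb = {"P1": {"pc": 100, "sp": 7000}, "P2": {"pc": 200, "sp": 8000}}
cpu = dict(pcb["P1"])            # P1 currently running: its state is on the CPU

def context_switch(old, new):
    pcb[old].update(cpu)         # step 1: save current state into old PCB
    cpu.clear()                  # step 2 (selecting next) is done by the caller here
    cpu.update(pcb[new])         # step 3: load new state from its PCB
                                 # step 4: execution would resume from cpu["pc"]

context_switch("P1", "P2")
print(cpu)                       # {'pc': 200, 'sp': 8000}
```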
Interview Questions:
1. Q: What is the difference between process and thread?
A: Process is an independent program in execution with its own memory space, while thread
is a lightweight unit within a process sharing the process's memory space.
2. Q: What is context switching?
A: It's the process of storing the state of a currently running process and loading the state
of the next process to be executed by the CPU.
5. CPU Scheduling (Extremely Important)
5.1 CPU Scheduling Algorithms
CPU scheduling determines which process gets CPU time and for how long. Let's solve
numerical examples with the same dataset for comparison.
Common Dataset for Examples:
| Process | Arrival Time | Burst Time | Priority |
|---------|--------------|------------|----------|
| P1 | 0 | 8 | 3 |
| P2 | 1 | 4 | 1 |
| P3 | 2 | 2 | 2 |
| P4 | 3 | 1 | 4 |
5.2 First Come First Serve (FCFS) (Important)
Definition: Processes are executed in order of arrival (non-preemptive).
Algorithm: Execute processes in arrival order.
Numerical Example:
Execution Order: P1 → P2 → P3 → P4
Timeline: |P1(0-8)|P2(8-12)|P3(12-14)|P4(14-15)|
Waiting Time:
P1: 0-0 = 0
P2: 8-1 = 7
P3: 12-2 = 10
P4: 14-3 = 11
Average Waiting Time = (0+7+10+11)/4 = 7 ms
Advantages: Simple, fair
Disadvantages: Convoy effect (small processes wait for large ones)
Real-life Example: Bank queue - first person in line gets served first [1] [2] .
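To make the calculation reproducible, here is a minimal Python sketch of FCFS over the common dataset above (the tuple layout `(name, arrival, burst)` is an assumption made for this example):
```python
# Minimal FCFS sketch: serve processes strictly in arrival order.
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2), ("P4", 3, 1)]

time, waits = 0, {}
for name, arrival, burst in sorted(procs, key=lambda p: p[1]):  # arrival order
    time = max(time, arrival)      # CPU idles until the process arrives, if needed
    waits[name] = time - arrival   # waiting time = start time - arrival time
    time += burst                  # non-preemptive: run to completion

print(waits)                                 # {'P1': 0, 'P2': 7, 'P3': 10, 'P4': 11}
print(sum(waits.values()) / len(waits))      # 7.0 ms
```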
5.3 Shortest Job First (SJF) (Important)
Definition: Execute the process with the shortest burst time first (can be preemptive or
non-preemptive).
Non-preemptive SJF:
At t=0 only P1 has arrived, so it runs to completion; the remaining processes are then
scheduled in order of burst time.
Execution Order: P1 → P4 → P3 → P2
Timeline: |P1(0-8)|P4(8-9)|P3(9-11)|P2(11-15)|
Waiting Time:
P1: 0-0 = 0
P4: 8-3 = 5
P3: 9-2 = 7
P2: 11-1 = 10
Average Waiting Time = (0+5+7+10)/4 = 5.5 ms
Problem: Starvation of long processes [1] [2].
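A small Python sketch of non-preemptive SJF (same dataset and tuple layout as in the FCFS sketch): at each completion, the CPU picks the shortest burst among the processes that have already arrived.
```python
# Non-preemptive SJF sketch reproducing the 5.5 ms average above.
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2), ("P4", 3, 1)]

time, waits, pending = 0, {}, list(procs)
while pending:
    ready = [p for p in pending if p[1] <= time]
    if not ready:                                # CPU idle until next arrival
        time = min(p[1] for p in pending)
        continue
    job = min(ready, key=lambda p: p[2])         # shortest burst wins
    name, arrival, burst = job
    waits[name] = time - arrival
    time += burst
    pending.remove(job)

print(waits)                                 # {'P1': 0, 'P4': 5, 'P3': 7, 'P2': 10}
print(sum(waits.values()) / len(waits))      # 5.5 ms
```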
5.4 Round Robin (RR) (Very Important)
Definition: Each process gets fixed time quantum in circular order (preemptive).
Example with Time Quantum = 2 (convention: a newly arriving process joins the ready queue
ahead of a process whose quantum expires at the same instant):
Timeline: |P1(0-2)|P2(2-4)|P3(4-6)|P1(6-8)|P4(8-9)|P2(9-11)|P1(11-15)|
Waiting Time (completion - arrival - burst):
P1: 15-0-8 = 7
P2: 11-1-4 = 6
P3: 6-2-2 = 2
P4: 9-3-1 = 5
Average Waiting Time = (7+6+2+5)/4 = 5 ms
Advantages: Fair for all processes, no starvation
Disadvantages: Higher context switching overhead [1] [2]
Real-life Example: Classroom where each student gets 2 minutes to ask questions in rotation.
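The following Python sketch simulates the Round Robin schedule above (quantum = 2, same dataset). The tie-breaking convention is the assumption stated above: an arrival at time t is enqueued before a process preempted at t; with this dataset the CPU is never idle.
```python
from collections import deque

# Round Robin sketch with a ready queue and a fixed quantum.
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2), ("P4", 3, 1)]
quantum = 2

remaining = {name: burst for name, _, burst in procs}
arrivals = sorted(procs, key=lambda p: p[1])
queue, time, finish = deque(), 0, {}

def admit(now):
    while arrivals and arrivals[0][1] <= now:    # enqueue all arrived processes
        queue.append(arrivals.pop(0)[0])

admit(0)
while queue:
    name = queue.popleft()
    run = min(quantum, remaining[name])
    time += run
    remaining[name] -= run
    admit(time)                                  # arrivals enter before the preempted process
    if remaining[name]:
        queue.append(name)                       # back of the queue
    else:
        finish[name] = time

info = {n: (a, b) for n, a, b in procs}
waits = {n: finish[n] - info[n][0] - info[n][1] for n in finish}
print(waits)                                 # {'P3': 2, 'P4': 5, 'P2': 6, 'P1': 7}
print(sum(waits.values()) / len(waits))      # 5.0 ms
```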
5.5 Priority Scheduling (Important)
Definition: Execute processes based on priority (lower number = higher priority).
Preemptive Priority (P1 starts at t=0 but is preempted at t=1 when higher-priority P2 arrives):
Priority Order: P2(1) → P3(2) → P1(3) → P4(4)
Timeline: |P1(0-1)|P2(1-5)|P3(5-7)|P1(7-15)|P4(15-16)|
Waiting Time (completion - arrival - burst):
P1: 15-0-8 = 7
P2: 5-1-4 = 0
P3: 7-2-2 = 3
P4: 16-3-1 = 12
Average Waiting Time = (7+0+3+12)/4 = 5.5 ms
Problem: Low-priority processes may starve [1] [2] .
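A per-time-unit Python sketch of preemptive priority scheduling over the same dataset (stored here as name → (arrival, burst, priority), a layout chosen for this example):
```python
# Preemptive priority sketch (lower number = higher priority): each
# time unit, run the highest-priority process that has arrived.
procs = {"P1": (0, 8, 3), "P2": (1, 4, 1), "P3": (2, 2, 2), "P4": (3, 1, 4)}

remaining = {n: v[1] for n, v in procs.items()}
time, finish = 0, {}
while remaining:
    ready = [n for n in remaining if procs[n][0] <= time]
    if not ready:
        time += 1
        continue
    n = min(ready, key=lambda x: procs[x][2])    # smallest number = highest priority
    remaining[n] -= 1
    time += 1
    if remaining[n] == 0:
        del remaining[n]
        finish[n] = time

waits = {n: finish[n] - procs[n][0] - procs[n][1] for n in finish}
print(waits)                                 # {'P2': 0, 'P3': 3, 'P1': 7, 'P4': 12}
print(sum(waits.values()) / len(waits))      # 5.5 ms
```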
5.6 Multilevel Queue Scheduling (Less Important - Basic Understanding)
Definition: Multiple queues with different priorities, each using different scheduling algorithms.
Example Structure:
High Priority Queue (System processes): FCFS
Medium Priority Queue (Interactive): Round Robin
Low Priority Queue (Batch): SJF [1] [2]
Real-life Example: Hospital emergency system - critical patients get immediate attention, while
regular checkups wait in normal queue.
Interview Questions:
1. Q: Which scheduling algorithm has minimum average waiting time?
A: SJF (Shortest Job First) theoretically provides minimum average waiting time.
2. Q: What is the convoy effect?
A: In FCFS, when a long process arrives first, all shorter processes have to wait, creating a
convoy effect.
3. Q: When do we use Round Robin scheduling?
A: In time-sharing systems where fairness is important and we want to avoid starvation.
6. Process Synchronization (Extremely Important)
6.1 Race Condition (Very Important)
Definition: Situation where multiple processes access shared memory concurrently and final
outcome depends on timing of execution.
Real-life Example: Two people trying to withdraw money from shared bank account
simultaneously. Both check balance (₹10,000), both try to withdraw ₹10,000. Without
synchronization, bank might lose money by allowing both withdrawals [1] [2] .
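The same effect can be reproduced in a few lines of Python: two threads do an unsynchronized read-modify-write on a shared counter. Since `counter += 1` is not atomic, increments can be lost; how often depends on the interpreter's thread switching, so treat this as a demonstration sketch rather than a deterministic test.
```python
import threading

# Race-condition sketch: unsynchronized read-modify-write on shared data.
counter = 0

def update():
    global counter
    for _ in range(100_000):
        counter += 1          # read, add, write back: interleavable

threads = [threading.Thread(target=update) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                # expected 200000, often less
```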
6.2 Critical Section Problem (Very Important)
Definition: Part of code where shared resources are accessed. Only one process should
execute in critical section at a time.
Critical Section Structure:
Entry Section
Critical Section (shared resource access)
Exit Section
Remainder Section
Three Requirements:
1. Mutual Exclusion: Only one process in critical section
2. Progress: If no process in critical section, one can enter
3. Bounded Waiting: Limit on how long a process waits [2] [3]
Real-life Example: Shared bathroom with lock - only one person can use at a time [1] .
6.3 Synchronization Mechanisms
6.3.1 Mutex (Important)
Definition: Mutual exclusion lock ensuring only one thread accesses critical section.
Real-life Example: Key to shared bathroom - only person with key can enter [1] .
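A minimal sketch of the "bathroom key" in Python, fixing the race condition above with `threading.Lock`:
```python
import threading

# Mutex sketch: the shared-counter update, now guarded by a lock so
# only one thread is in the critical section at a time.
counter = 0
lock = threading.Lock()

def update():
    global counter
    for _ in range(100_000):
        with lock:            # entry section: acquire the mutex
            counter += 1      # critical section
        # exit section: the with-block releases the lock

threads = [threading.Thread(target=update) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                # always 200000
```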
6.3.2 Semaphore (Very Important)
Definition: Signaling mechanism controlling access to shared resource with a counter.
Types:
Binary Semaphore: Acts like mutex (0 or 1)
Counting Semaphore: Controls access to multiple instances of resource
Operations:
Wait()/P(): Decrements semaphore value
Signal()/V(): Increments semaphore value [2] [3]
Real-life Example: Traffic lights controlling vehicle flow at intersection - has multiple states (red,
yellow, green) controlling when vehicles can proceed [1] .
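A counting-semaphore sketch in Python: at most 3 threads hold one of 3 resource instances at a time (the count of 3, the sleep, and the function name are arbitrary choices for the example).
```python
import threading
import time

# Counting semaphore limiting concurrent access to 3 resource instances.
slots = threading.Semaphore(3)

def use_resource(i):
    slots.acquire()              # wait()/P(): decrement, block at zero
    try:
        time.sleep(0.1)          # hold one resource instance briefly
        print(f"thread {i} done")
    finally:
        slots.release()          # signal()/V(): increment, wake a waiter

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(6)]
for t in threads: t.start()
for t in threads: t.join()
```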
6.3.3 Monitor (Less Important)
Definition: High-level synchronization mechanism combining mutual exclusion with condition
variables.
Example: Java synchronized keyword or Python condition objects [1] .
6.4 Classical Synchronization Problems
6.4.1 Producer-Consumer Problem (Important)
Definition: Producer generates data for buffer, consumer takes data from buffer. Must
synchronize access to shared buffer.
Solution using 3 Semaphores:
S (Mutex): Binary semaphore for mutual exclusion (initially 1)
E (Empty): Counting semaphore for empty slots (initially N)
F (Full): Counting semaphore for filled slots (initially 0) [3]
Real-life Example: Pizza kitchen (producer) making pizzas for delivery queue (buffer), delivery
person (consumer) taking pizzas from queue.
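Here is a Python sketch of the three-semaphore solution described above (S = mutex, E = empty slots, F = full slots). The buffer size N = 4 and the item count of 8 are arbitrary choices for the example.
```python
import threading

# Producer-consumer with a bounded buffer and three semaphores.
N = 4
buffer = [None] * N
in_ptr = out_ptr = 0
S = threading.Semaphore(1)       # mutual exclusion on the buffer
E = threading.Semaphore(N)       # counts empty slots
F = threading.Semaphore(0)       # counts filled slots

def producer():
    global in_ptr
    for item in range(8):
        E.acquire()                      # wait for an empty slot
        with S:                          # enter critical section
            buffer[in_ptr] = item
            in_ptr = (in_ptr + 1) % N
        F.release()                      # announce a filled slot

def consumer():
    global out_ptr
    for _ in range(8):
        F.acquire()                      # wait for a filled slot
        with S:
            item = buffer[out_ptr]
            out_ptr = (out_ptr + 1) % N
        E.release()                      # announce an empty slot
        print("consumed", item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
```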
6.4.2 Dining Philosophers Problem (Less Important)
Definition: 5 philosophers sit around table with 5 forks. Each needs 2 forks to eat. Demonstrates
deadlock prevention.
Real-life Example: 5 people sharing 5 chopsticks at Chinese restaurant [1] .
Interview Questions:
1. Q: What is a race condition?
A: A race condition occurs when multiple processes access shared data concurrently and
the final result depends on the timing of their execution.
2. Q: Difference between mutex and semaphore?
A: Mutex is binary (0/1) for mutual exclusion, while semaphore can be counting (multiple
values) to control access to multiple resources.
3. Q: What is critical section?
A: Critical section is a segment of code where shared resources are accessed, and only one
process should execute in it at any time.
7. Deadlock (Very Important)
7.1 Deadlock Definition
Definition for Interview: A situation where two or more processes are permanently blocked,
each waiting for the other to release resources.
Real-life Example: Two cars facing each other on a narrow bridge - each waiting for the other to
move back, but neither can proceed [1] [2] .
7.2 Deadlock Conditions (Coffman Conditions)
All 4 must be present simultaneously:
1. Mutual Exclusion: Only one process can use resource at a time
2. Hold and Wait: Process holding resources can request more
3. No Preemption: Resources cannot be forcefully taken
4. Circular Wait: Processes waiting in circular chain [1] [2]
Mnemonic: "My Happy New Car"
7.3 Deadlock Example
Process P1: Has Resource A, Wants Resource B
Process P2: Has Resource B, Wants Resource A
Result: Both processes wait forever!
Real-life Example:
You have my pen, I have your book
You won't give pen until you get book
I won't give book until I get pen
Deadlock! [1]
7.4 Deadlock Handling Strategies
7.4.1 Deadlock Prevention (Important)
Strategy: Eliminate at least one of the four deadlock conditions.
Methods:
Eliminate Mutual Exclusion: Make resources shareable (not always possible)
Eliminate Hold and Wait: Allocate all resources at once
Allow Preemption: Forcefully take resources from processes
Eliminate Circular Wait: Order resources, request in ascending order [1] [2]
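The last method above can be sketched in Python: if every thread acquires locks in one fixed global order, a circular wait chain cannot form. The ordering-by-id scheme and helper name here are illustrative choices, not a standard API.
```python
import threading

# "Eliminate circular wait" sketch: acquire locks in a fixed global order.
lock_a, lock_b = threading.Lock(), threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}           # global resource ordering

def acquire_in_order(*locks):
    ordered = sorted(locks, key=lambda lk: ORDER[id(lk)])
    for lk in ordered:
        lk.acquire()
    return ordered

def worker(first, second):
    held = acquire_in_order(first, second)       # always locks A before B
    try:
        pass                                     # work needing both resources
    finally:
        for lk in reversed(held):
            lk.release()

t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a))  # reversed request, same acquire order
t1.start(); t2.start(); t1.join(); t2.join()
print("finished without deadlock")
```
Without the ordering, t1 holding A while t2 holds B could produce exactly the circular wait described in section 7.3.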
7.4.2 Deadlock Avoidance (Important)
Strategy: Use algorithms like Banker's Algorithm to ensure system remains in safe state.
Banker's Algorithm:
Check if granting request keeps system in safe state
If yes, grant request; if no, make process wait [2]
7.4.3 Deadlock Detection and Recovery (Less Important)
Detection: Periodically check for cycles in resource allocation graph.
Recovery Methods:
Kill one or more processes
Preempt resources from some processes
Restart entire system [1] [2]
7.4.4 Deadlock Ignorance (Ostrich Algorithm) (Less Important)
Strategy: Ignore deadlock problem assuming it's rare. Used by most operating systems like
Windows and Linux [2] .
7.5 Starvation vs Livelock (Important)
Starvation: Process perpetually denied access to resources (like low-priority process never
getting CPU).
Livelock: Processes continuously change states but make no progress (like two people trying to
pass each other in corridor, both stepping aside repeatedly) [1] .
Interview Questions:
1. Q: What are the four conditions for deadlock?
A: Mutual exclusion, hold and wait, no preemption, and circular wait - all four must be
present simultaneously.
2. Q: How can we prevent deadlock?
A: By eliminating any one of the four deadlock conditions - mutual exclusion, hold and wait,
no preemption, or circular wait.
3. Q: What is Banker's Algorithm?
A: A deadlock avoidance algorithm that checks if granting a resource request will keep the
system in a safe state.
8. Memory Management (Very Important)
8.1 Memory Hierarchy (Important)
Definition: Organization of computer memory in hierarchical structure based on speed, cost,
and capacity.
Hierarchy (Fastest to Slowest):
1. Registers: Fastest, smallest, inside CPU
2. Cache Memory: Faster than RAM, stores frequently accessed data
3. Main Memory (RAM): Primary storage for running applications
4. Secondary Storage: Hard drives, SSDs (non-volatile)
5. Tertiary Storage: CDs, magnetic tapes (slowest, backup) [1]
Memory Allocation Strategies:
8.1.1 First Fit (Important)
Definition: Allocate first available block that's large enough.
Example:
Memory blocks: [100K, 500K, 200K, 300K, 600K]
Process requests: P1(212K), P2(417K), P3(112K)
P1(212K) → 500K block (288K remaining)
P2(417K) → 600K block (183K remaining)
P3(112K) → the 288K left in the 500K block (176K remaining)
8.1.2 Best Fit (Important)
Definition: Allocate smallest available block that fits the process.
Example: Same memory blocks and requests
P1(212K) → 300K block (88K remaining)
P2(417K) → 500K block (83K remaining)
P3(112K) → 200K block (88K remaining)
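Both strategies can be checked with a short Python sketch over the example blocks and requests above (the helper name and output format are invented for this example):
```python
# First-fit vs best-fit sketch over the example memory blocks.
blocks = [100, 500, 200, 300, 600]               # free block sizes (KB)
requests = [("P1", 212), ("P2", 417), ("P3", 112)]

def allocate(strategy):
    free = blocks[:]                             # remaining size of each block
    placement = {}
    for name, size in requests:
        fits = [i for i, b in enumerate(free) if b >= size]
        if not fits:
            placement[name] = None               # cannot be satisfied
            continue
        i = fits[0] if strategy == "first" else min(fits, key=lambda j: free[j])
        placement[name] = f"block {i} ({free[i]}K free before)"
        free[i] -= size
    return placement

print(allocate("first"))   # P1 -> 500K block, P2 -> 600K, P3 -> remainder of 500K
print(allocate("best"))    # P1 -> 300K block, P2 -> 500K, P3 -> 200K
```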
8.1.3 Worst Fit (Less Important)
Definition: Allocate largest available block.
8.2 Paging (Very Important)
Definition: Memory management scheme that eliminates external fragmentation by dividing
physical memory into fixed-size blocks called frames and logical memory into blocks of same
size called pages.
Key Concepts:
Page: Fixed-size block of logical memory (typically 4KB)
Frame: Fixed-size block of physical memory
Page Table: Maps logical pages to physical frames [2] [3]
Advantages:
No external fragmentation
Efficient memory utilization
Supports virtual memory
Real-life Example: Like a book with numbered pages that can be stored in any available shelf
slot in library - page table keeps track of which shelf holds which page [2] .
8.3 Segmentation (Important)
Definition: Memory management scheme that divides program into variable-size segments
based on logical units.
Examples of Segments:
Code segment
Data segment
Stack segment
Heap segment [3]
Advantage: Reflects program structure
Disadvantage: External fragmentation
8.4 Thrashing (Important)
Definition: Situation when system spends more time swapping pages than executing processes
due to insufficient physical memory.
Cause: Too many processes loaded with insufficient RAM [1] [2] .
Solution:
Reduce number of active processes
Increase physical memory
Improve page replacement algorithm
Real-life Example: Like trying to work with 50 books on a small desk - you spend more time
moving books around than actually reading [2] .
Interview Questions:
1. Q: What is the difference between paging and segmentation?
A: Paging divides memory into fixed-size pages while segmentation divides into variable-
size segments based on logical units. Paging eliminates external fragmentation but
segmentation may have it.
2. Q: What is thrashing?
A: Thrashing occurs when system spends more time swapping pages between memory and
disk than executing processes, usually due to insufficient physical memory.
9. Virtual Memory (Very Important)
9.1 Virtual Memory Concept
Definition for Interview: Virtual memory is a memory management technique that provides an
illusion of having more memory than physically available by using disk storage as extension of
RAM.
Benefits:
Programs larger than physical memory can run
More programs can run concurrently
Less I/O needed for loading programs [2]
Real-life Example: Like having a small desk but large filing cabinet nearby - you keep current
work on desk and swap papers with cabinet as needed [2] .
9.2 Demand Paging (Important)
Definition: Pages are loaded into memory only when needed (on demand).
Page Fault: Occurs when referenced page is not in memory.
Page Fault Handling:
1. Check if the page reference is valid
2. Find a free frame in memory (run a page replacement algorithm if none is free)
3. Load the page from disk into the frame
4. Update the page table
5. Resume process execution [2]
9.3 Page Replacement Algorithms (Very Important)
When memory is full and new page needed, must replace existing page.
Common Dataset for Examples:
Reference String: 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2
Number of frames: 3
9.3.1 FIFO (First In First Out)
Algorithm: Replace page that has been in memory longest.
Example:
Reference: 2  3  2  1  5  2  4  5  3  2  5  2
Frame 1:   2  2  2  2  5  5  5  5  3  3  3  3
Frame 2:   -  3  3  3  3  2  2  2  2  2  5  5
Frame 3:   -  -  -  1  1  1  4  4  4  4  4  2
Fault?:    F  F  H  F  F  F  F  H  F  H  F  F
Page Faults: 9 (references 2, 3, 1, 5, 2, 4, 3, 5, 2 each miss)
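A short Python sketch confirms the count for the reference string above with 3 frames:
```python
from collections import deque

# FIFO page-replacement sketch: evict the page loaded earliest.
refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
frames, fifo, faults = set(), deque(), 0

for page in refs:
    if page not in frames:
        faults += 1
        if len(frames) == 3:             # memory full: evict the oldest page
            frames.discard(fifo.popleft())
        frames.add(page)
        fifo.append(page)                # remember insertion order

print(faults)                            # 9
```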
9.3.2 LRU (Least Recently Used) (Very Important)
Algorithm: Replace page that hasn't been used for longest time.
Example:
Reference: 2  3  2  1  5  2  4  5  3  2  5  2
Frame 1:   2  2  2  2  2  2  2  2  3  3  3  3
Frame 2:   -  3  3  3  5  5  5  5  5  5  5  5
Frame 3:   -  -  -  1  1  1  4  4  4  2  2  2
Fault?:    F  F  H  F  F  H  F  H  F  F  H  H
Page Faults: 7 (LRU keeps the frequently used pages 2 and 5 resident, beating FIFO here)
Real-life Example: Like organizing books on shelf - move least recently read books to
storage [2] .
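The LRU count can be verified with the same style of sketch (a recency list where the front holds the least recently used page):
```python
# LRU page-replacement sketch for the same reference string (3 frames).
refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
recency, faults = [], 0                  # front = least recently used

for page in refs:
    if page in recency:
        recency.remove(page)             # hit: re-append below as most recent
    else:
        faults += 1
        if len(recency) == 3:
            recency.pop(0)               # evict least recently used
    recency.append(page)                 # page is now most recently used

print(faults)                            # 7
```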
9.3.3 Optimal Algorithm (Important for Comparison)
Algorithm: Replace page that won't be used for longest time in future.
Note: Theoretical algorithm (requires future knowledge), used for comparison with practical
algorithms [2] .
Interview Questions:
1. Q: What is virtual memory?
A: Virtual memory is a memory management technique that creates an illusion of having
more memory than physically available by using disk storage as extension of RAM.
2. Q: What happens during a page fault?
A: When a referenced page is not in memory, OS loads the page from disk into memory,
updates page table, and resumes process execution.
3. Q: Which page replacement algorithm is best?
A: Optimal algorithm is theoretically best but impractical. LRU is often used in practice as it
performs well and is implementable.
10. File Systems (Important)
10.1 File Concepts (Less Important - Basic Understanding)
Definition: File is a collection of related data stored on disk.
File Attributes:
Name
Type
Size
Location
Permissions
Time stamps [1] [3]
10.2 File Allocation Methods (Important)
10.2.1 Contiguous Allocation
Definition: File stored in contiguous blocks on disk.
Advantages: Fast access, simple
Disadvantages: External fragmentation; files are difficult to grow [1]
Real-life Example: Parking cars side by side - fast to find but wastes space if gaps [1] .
10.2.2 Linked Allocation
Definition: Each block points to next block of file.
Advantages: No external fragmentation
Disadvantages: Slow random access [1]
Real-life Example: Treasure hunt where each clue leads to next location [1] .
10.2.3 Indexed Allocation
Definition: Uses index block containing pointers to file blocks.
Advantages: Efficient random access
Disadvantages: Index block overhead [1]
Real-life Example: Book index showing which page contains which topic [1] .
10.3 Directory Structure (Less Important)
Types:
Single-level directory
Two-level directory
Hierarchical directory
Acyclic graph directory [1]
11. Disk Management (Important)
11.1 Disk Structure
Physical Components:
Track: Concentric circles on disk
Sector: Smallest unit of data transfer
Cylinder: Same track on different platters
Head: Reads/writes data [1] [2]
11.2 Disk Access Time
Total Access Time = Seek Time + Rotational Latency + Transfer Time
Seek Time: Time to move head to correct track
Rotational Latency: Time for sector to rotate under head
Transfer Time: Time to transfer data [3]
11.3 Disk Scheduling Algorithms (Very Important)
Sample Disk Requests (disk tracks 0-199): 98, 183, 37, 122, 14, 124, 65, 67
Current Head Position: 53
Direction: Moving towards higher track numbers
11.3.1 FCFS (First Come First Serve)
Algorithm: Service requests in arrival order.
Execution: 53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67
Total Head Movement:
|53-98| + |98-183| + |183-37| + |37-122| + |122-14| + |14-124| + |124-65| + |65-67|
= 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 tracks
11.3.2 SSTF (Shortest Seek Time First)
Algorithm: Service request closest to current head position.
Execution: 53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 → 183
Total Head Movement:
|53-65| + |65-67| + |67-37| + |37-14| + |14-98| + |98-122| + |122-124| + |124-183|
= 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236 tracks
Problem: May cause starvation [1] [2] .
11.3.3 SCAN (Elevator Algorithm) (Important)
Algorithm: Move head in one direction until end, then reverse direction.
Execution: 53 → 65 → 67 → 98 → 122 → 124 → 183 → 199 → 37 → 14
Total Head Movement:
(199-53) + (199-14) = 146 + 185 = 331 tracks
Real-life Example: Elevator going up to top floor, then down to service all requests [1] .
11.3.4 C-SCAN (Circular SCAN) (Important)
Algorithm: Like SCAN but returns to beginning when reaching end.
Execution: 53 → 65 → 67 → 98 → 122 → 124 → 183 → 199 → 0 → 14 → 37
Total Head Movement:
(199-53) + (199-0) + (37-0) = 146 + 199 + 37 = 382 tracks
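The FCFS, SSTF, and SCAN totals above can be checked with a short Python sketch (head at 53, tracks 0-199; the `movement` helper is invented for this example):
```python
# Head-movement sketch for FCFS, SSTF, and SCAN on the sample requests.
requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

def movement(path):
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

# FCFS: service in arrival order
print(movement([head] + requests))                       # 640

# SSTF: repeatedly pick the pending request nearest the head
pending, pos, path = requests[:], head, [head]
while pending:
    nxt = min(pending, key=lambda r: abs(r - pos))
    pending.remove(nxt)
    path.append(nxt)
    pos = nxt
print(movement(path))                                    # 236

# SCAN: sweep up to the last track (199), then reverse
up = sorted(r for r in requests if r >= head)
down = sorted((r for r in requests if r < head), reverse=True)
print(movement([head] + up + [199] + down))              # 331
```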
Interview Questions:
1. Q: Which disk scheduling algorithm has uniform waiting time?
A: C-SCAN provides more uniform waiting time as it services requests in one direction only.
2. Q: What is the problem with SSTF?
A: SSTF can cause starvation of requests that are far from current head position.
12. Input/Output Management (Less Important - Basic Understanding)
12.1 I/O Hardware
Components:
Devices: Keyboard, mouse, printer, monitor
Device Controllers: Interface between device and system
Device Drivers: Software to control hardware devices [1]
12.2 I/O Methods
1. Programmed I/O: CPU directly controls I/O operations
2. Interrupt-driven I/O: Device interrupts CPU when operation complete
3. DMA (Direct Memory Access): Device directly transfers data to/from memory [1]
Real-life Example of DMA: Like courier service delivering package directly to your home
without you personally going to pick it up [1] .
Summary and Key Interview Points
Most Important Topics for Interviews:
1. Process Management (Process vs Thread, PCB, Context Switching)
2. CPU Scheduling (All algorithms with numerical examples)
3. Process Synchronization (Race conditions, Critical section, Semaphores)
4. Deadlock (Conditions, Prevention, Detection)
5. Memory Management (Paging, Virtual Memory)
6. Page Replacement Algorithms (FIFO, LRU, Optimal)
7. Disk Scheduling (FCFS, SSTF, SCAN, C-SCAN)
Moderate Important Topics:
1. OS Structure and Types
2. File Systems
3. I/O Management
4. Thrashing
Less Important Topics:
1. Historical OS generations
2. Detailed hardware components
3. Advanced file allocation methods
Essential Definitions to Remember:
Operating System: Software intermediary between user and hardware
Process: Program in execution
Thread: Lightweight process sharing memory space
Critical Section: Code segment accessing shared resources
Deadlock: Processes waiting for each other indefinitely
Virtual Memory: Technique providing illusion of larger memory
Page Fault: Reference to page not in memory
Thrashing: Excessive paging activity
Key Algorithms to Practice:
1. All CPU scheduling algorithms with numerical examples
2. Page replacement algorithms (especially LRU)
3. Disk scheduling algorithms
4. Banker's algorithm for deadlock avoidance
This comprehensive guide covers all essential OS concepts needed for technical interviews.
Focus on understanding concepts with real-life examples rather than just memorizing definitions.
Practice numerical problems for scheduling algorithms as they're frequently asked in interviews.
⁂
Difference Between User-Level Threads and Kernel-Level Threads
Definition for Interview:
User-Level Threads (ULT): Threads managed entirely by user-space libraries without direct
kernel involvement.
Kernel-Level Threads (KLT): Threads managed directly by the operating system kernel.
| Aspect | User-Level Threads | Kernel-Level Threads |
|--------|--------------------|----------------------|
| Management | Managed by user-level thread library (e.g., POSIX threads). | Managed by the OS kernel. |
| Creation Speed | Fast (no kernel interaction required). | Slow (requires system calls). |
| Blocking Behavior | One blocked ULT blocks all threads in the process. | One blocked KLT does not affect other threads. |
| Parallelism | Cannot leverage multiple CPU cores. | Supports true parallelism on multi-core systems. |
| Context Switching | Faster (no kernel mode transition). | Slower (requires kernel intervention). |
| Examples | Java threads, Python’s threading module. | Windows OS threads, Linux kernel threads. |
| Scheduling | Application decides scheduling. | OS scheduler handles thread scheduling. |
| Resource Overhead | Low (lightweight). | High (each thread requires kernel resources). |
| Starvation Risk | Possible if not properly managed. | Managed by kernel scheduler. |
| Use Case | Applications requiring rapid thread creation/switching. | Compute-intensive tasks needing parallelism (e.g., servers). |
Banker’s Algorithm Explained with Numerical Example
Definition for Interview:
Banker’s Algorithm is a deadlock avoidance algorithm that ensures resource allocation leaves the
system in a safe state (no deadlock).
Key Components:
1. Available: Resources currently free.
2. Max: Maximum resources each process may request.
3. Allocation: Resources already allocated to processes.
4. Need: Remaining resources required by each process (Need = Max – Allocation).
Numerical Example
Given Data
Processes: P0, P1, P2, P3, P4
Resource Types: A, B, C, D
| Process | Allocation (A,B,C,D) | Max (A,B,C,D) |
|---------|----------------------|---------------|
| P0 | 0,1,1,0 | 0,2,1,0 |
| P1 | 1,2,3,1 | 1,6,5,2 |
| P2 | 1,3,6,5 | 2,3,6,6 |
| P3 | 0,6,3,2 | 0,6,5,2 |
| P4 | 0,0,1,4 | 0,6,5,6 |

Available (A,B,C,D): 1,5,2,0
Step 1: Calculate Need Matrix
Need = Max – Allocation
| Process | Need (A,B,C,D) |
|---------|----------------|
| P0 | 0,1,0,0 |
| P1 | 0,4,2,1 |
| P2 | 1,0,0,1 |
| P3 | 0,0,2,0 |
| P4 | 0,6,4,2 |
Step 2: Safety Check
Work = Available = (1,5,2,0)
Finish = [False, False, False, False, False]
Iteration 1:
P0’s Need (0,1,0,0) ≤ Work (1,5,2,0).
Allocate resources: Work = Work + Allocation(P0) = (1,5,2,0) + (0,1,1,0) = (1,6,3,0)
Safe Sequence: [P0]
Finish = [True, False, False, False, False]
Iteration 2:
P3’s Need (0,0,2,0) ≤ Work (1,6,3,0).
Work = (1,6,3,0) + (0,6,3,2) = (1,12,6,2)
Safe Sequence: [P0, P3]
Finish = [True, False, False, True, False]
Iteration 3:
P4’s Need (0,6,4,2) ≤ Work (1,12,6,2).
Work = (1,12,6,2) + (0,0,1,4) = (1,12,7,6)
Safe Sequence: [P0, P3, P4]
Finish = [True, False, False, True, True]
Iteration 4:
P1’s Need (0,4,2,1) ≤ Work (1,12,7,6).
Work = (1,12,7,6) + (1,2,3,1) = (2,14,10,7)
Safe Sequence: [P0, P3, P4, P1]
Finish = [True, True, False, True, True]
Iteration 5:
P2’s Need (1,0,0,1) ≤ Work (2,14,10,7).
Work = (2,14,10,7) + (1,3,6,5) = (3,17,16,12)
Safe Sequence: [P0, P3, P4, P1, P2]
Finish = [True, True, True, True, True]
Result: The system is in a safe state with the sequence <P0, P3, P4, P1, P2>.
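The safety check can be reproduced with a small Python sketch of the algorithm using the matrices above (the variable names mirror the textbook terms; iteration over the dict follows P0..P4 insertion order):
```python
# Banker's Algorithm safety-check sketch for the worked example.
alloc = {"P0": (0, 1, 1, 0), "P1": (1, 2, 3, 1), "P2": (1, 3, 6, 5),
         "P3": (0, 6, 3, 2), "P4": (0, 0, 1, 4)}
maxm  = {"P0": (0, 2, 1, 0), "P1": (1, 6, 5, 2), "P2": (2, 3, 6, 6),
         "P3": (0, 6, 5, 2), "P4": (0, 6, 5, 6)}
work = [1, 5, 2, 0]                                  # Available

need = {p: tuple(m - a for m, a in zip(maxm[p], alloc[p])) for p in alloc}
finish = {p: False for p in alloc}
sequence, progress = [], True

while progress:
    progress = False
    for p in alloc:
        if not finish[p] and all(n <= w for n, w in zip(need[p], work)):
            work = [w + a for w, a in zip(work, alloc[p])]   # p runs, then releases all
            finish[p] = True
            progress = True
            sequence.append(p)

print("safe" if all(finish.values()) else "unsafe", sequence)
# safe ['P0', 'P3', 'P4', 'P1', 'P2']
```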
Interview Questions:
1. Q: What is the purpose of the Banker’s Algorithm?
A: To prevent deadlock by ensuring resource allocation always leaves the system in a safe
state.
2. Q: How do you calculate the Need matrix?
A: Need = Max – Allocation for each process.
3. Q: What happens if no safe sequence exists?
A: The system is in an unsafe state, and resource allocation could lead to deadlock.
Real-Life Example:
Imagine a bank with limited cash reserves. The Banker’s Algorithm ensures the bank never
allocates money in a way that prevents it from satisfying all customer withdrawals.
⁂
1. https://www.youtube.com/watch?v=ivmD72fV0LQ (PrimeCoding)
2. https://www.youtube.com/watch?v=8XBtAjKwCm4 (Coding)
3. Operating-system.pdf