Operating System Notes

The document provides an overview of various computer science concepts including virtual memory, direct memory access (DMA), semaphores, process scheduling, network and distributed operating systems, processes vs threads, cloud operating systems, segmentation, paging, and threading. Each concept is explained with definitions, purposes, workings, advantages, and disadvantages, highlighting their roles in memory management, resource allocation, and process execution. The information serves as a foundational understanding of operating system functionalities and architectures.

1. What is Virtual Memory?

• Virtual memory is a memory management technique that provides an "illusion" of a large,
contiguous block of memory to programs, even if the physical memory is smaller or fragmented.
• Virtual memory works by temporarily moving data from the RAM to the hard disk's storage
space, making room for other data in the RAM. When the original data is needed again, it is
swapped back into the RAM. This process is called "swapping".
• Virtual memory typically uses paging, where the memory is divided into fixed-sized blocks called
"pages." These pages are mapped between virtual and physical memory.
• Virtual memory uses a memory management unit (MMU) to translate logical addresses to
physical addresses. The MMU is positioned between the CPU and physical memory, and
performs the address translations as directed by the OS.
• Virtual memory provides isolation and protection, so one process cannot directly access the
memory space of another process, enhancing security and stability.
• It supports multitasking by enabling multiple applications to run simultaneously without
exhausting physical memory.
• Virtual memory is a fundamental technology that enables modern computing, providing
flexibility, efficiency, and robustness in memory management.

2. What is DMA (Direct memory access)


• Direct memory access (DMA) is a computer bus architecture that allows data to be sent directly
from a device to the computer's main memory, bypassing the CPU. This speeds up memory
operations and frees up the CPU to focus on other tasks.
• DMA is managed by a DMA controller (DMAC) chip that's programmed by the operating system.
• DMA is useful when the CPU can't keep up with the rate of data transfer, or when it needs to
perform other work while waiting for a slow I/O data transfer. Many hardware systems use DMA,
including: disk drives, external memory, graphics cards, network cards, and sound cards.


WORKING:
• Initiation: The CPU initializes the DMA transfer by specifying the source, destination, and amount
of data to transfer.
• Execution: Once initiated, the DMA controller takes over the data transfer, moving data directly
between the memory and the peripheral.
• Completion: After the transfer is complete, the DMA controller sends an interrupt to the CPU,
signaling the end of the operation.
Types of DMA:
• Burst Mode: Transfers a block of data in a single, continuous burst, suspending the CPU’s access
to the memory during the transfer.
• Cycle Stealing: The DMA controller transfers one data word at a time, interleaving with CPU
memory accesses, thus "stealing" cycles from the CPU.
• Transparent Mode: DMA transfers occur only when the CPU is not using the bus, ensuring no
interference with CPU operations.
• Thus, DMA is a critical component in modern computer architecture, enhancing performance
and efficiency by enabling direct data transfers between memory and peripherals without heavy
CPU involvement.

3. What is Semaphore and its types


• A semaphore is a variable or data type that controls access to a shared resource in an operating
system. Semaphores are used to solve critical section problems in concurrent systems, such as
multitasking operating systems. They are a type of synchronization primitive that uses two
atomic operations, wait and signal, to synchronize processes.
• Types of Semaphores:-
• Binary Semaphore (Mutex): Like a simple lock, it can be either 0 (locked) or 1 (unlocked).
• Counting Semaphore: Can count above 1, useful for managing access to multiple instances of a
resource.
• How it Works:-
• Initialization: Start with a value representing available resources.
• Wait Operation (P): When a thread wants to use a resource, it performs a wait operation. If the
semaphore value is greater than 0, it decreases the value by 1 and proceeds. If the value is 0, the
thread waits.
• Signal Operation (V): When a thread is done using the resource, it performs a signal operation,
increasing the semaphore value by 1. This may wake up a waiting thread.
• Why Use Semaphores?:-
• Prevent Conflicts: They prevent multiple threads from accessing a resource at the same time,
avoiding conflicts.
• Control Resource Access: They help manage how many threads can access a resource
simultaneously.
• Cons:
• Deadlocks: Incorrect use can cause deadlocks, where threads get stuck waiting for each other.
• Priority Inversion: Lower-priority threads holding a semaphore can block higher-priority threads.
• Therefore, Semaphores are crucial in programming for safely managing access to shared
resources, ensuring that multiple threads or processes do not interfere with each other.

Types of Semaphore:-

1. Binary Semaphore (Mutex):-

• Definition: A binary semaphore, also known as a mutex (mutual exclusion), is a semaphore that
can only take values 0 or 1.
• Purpose: Used to ensure that only one thread or process can access a critical section of code or a
shared resource at a time.
• Operation:

Value 1: The resource is available.

Value 0: The resource is locked by a thread/process.


• Example Use: Preventing multiple threads from simultaneously writing to a file.

2. Counting Semaphore:-

• Definition: A counting semaphore is a semaphore that can take any non-negative integer value.
• Purpose: Used to manage access to a resource that has a limited number of instances.
• Operation:

Initial Value: Set to the number of available resources.

Wait Operation (P): Decreases the value by 1 when a thread/process acquires a resource. If the
value is 0, the thread/process waits.

Signal Operation (V): Increases the value by 1 when a thread/process releases a resource,
potentially waking up a waiting thread/process.

• Example Use: Managing a pool of database connections, where the initial value is the number of
available connections.
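As a rough illustration of the wait (P) and signal (V) operations described above, here is a minimal C sketch using the POSIX semaphore API. It echoes the database-connection example: the pool size of three and the six workers are illustrative assumptions, not part of the original notes.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 3   /* illustrative: three connections available */
#define WORKERS   6   /* illustrative: six threads compete for them */

static sem_t connections;   /* counting semaphore, initialised to POOL_SIZE */

static void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&connections);              /* P: acquire a connection or block */
    printf("worker %ld acquired a connection\n", id);
    sleep(1);                            /* simulate work with the resource */
    printf("worker %ld releasing the connection\n", id);
    sem_post(&connections);              /* V: release, possibly waking a waiter */
    return NULL;
}

int main(void)
{
    pthread_t t[WORKERS];
    sem_init(&connections, 0, POOL_SIZE);   /* value = number of free resources */

    for (long i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);

    sem_destroy(&connections);
    return 0;
}

Compiled with gcc -pthread, at most three workers hold a connection at any instant; the remaining workers block inside sem_wait until a signal operation wakes them.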

4. Process scheduling and process switching

Feature-by-feature comparison:
• Definition: Process scheduling is the method by which the operating system decides which process to run next; process switching is the act of switching the CPU from one process or thread to another.
• Purpose: Scheduling aims to optimize CPU utilization and ensure fair process execution; switching allows multiple processes to share the CPU and progress concurrently.
• When it occurs: Scheduling happens at specific intervals or events, such as the end of a time slice or when a process becomes ready to run; switching happens during events like interrupts, system calls, or when the currently running process blocks or yields the CPU.
• Triggered by: Scheduling is triggered by the scheduler component of the operating system; switching is carried out by the operating system kernel.
• Key components: Scheduling relies on scheduling algorithms (e.g., Round Robin, Priority Scheduling); switching relies on context switching mechanisms.
• Involves: Scheduling involves selecting the next process to run based on a predefined policy; switching involves saving the state of the current process and loading the state of the next process.
• Impact on performance: Scheduling determines overall system efficiency and response time; switching can introduce overhead, affecting system performance due to context switch time.
• Complexity: Scheduling complexity depends on the chosen algorithm (some are simple, some complex); switching is generally straightforward but involves low-level operations to save and restore process states.
• Examples of algorithms: Scheduling uses First-Come, First-Served (FCFS), Shortest Job Next (SJN), Round Robin (RR), and Priority Scheduling; switching has no algorithms of its own, as it is a mechanism rather than a policy.
• Outcome: Scheduling decides which process gets CPU time and in what order; switching changes the currently executing process on the CPU.
• Related concepts: Scheduling relates to process states (ready, running, waiting), priority levels, and the time quantum; switching relates to the context (registers, program counter, stack pointer) and the process control block (PCB).

5. Network os and distributed os

1. Objective: A Network Operating System's main objective is to provide local services to remote clients, while a Distributed Operating System's main objective is to manage the hardware resources.
2. Communication: In a Network Operating System, communication takes place on the basis of files, while in a Distributed Operating System it takes place on the basis of messages and shared memory.
3. Scalability: A Network Operating System is more scalable than a Distributed Operating System.
4. Fault tolerance: In a Network Operating System, fault tolerance is low, while in a Distributed Operating System it is high.
5. Autonomy: The rate of autonomy in a Network Operating System is high, while in a Distributed Operating System it is low.
6. Ease of implementation: Ease of implementation is high in a Network Operating System and lower in a Distributed Operating System.
7. Operating system on nodes: In a Network Operating System, the nodes can run different operating systems, while in a Distributed Operating System all nodes run the same operating system.
6. Difference between Process and threads

1. Definition: A process is any program in execution, while a thread is a segment of a process.
2. Termination: A process takes more time to terminate, while a thread takes less time to terminate.
3. Creation: A process takes more time to create, while a thread takes less time to create.
4. Context switching: A process takes more time for context switching, while a thread takes less time.
5. Communication: Processes are less efficient in terms of communication, while threads are more efficient.
6. Multiprogramming: Multiprogramming holds the concept of multiple processes, whereas multiple threads do not need separate programs in action because a single process consists of multiple threads.
7. Isolation: Processes are isolated from one another, while threads of a process share memory (see the fork() sketch after this list).
8. Weight: A process is called a heavyweight process, while a thread is lightweight because all threads in a process share code, data, and resources.
9. Switching mechanism: Process switching uses an interface in the operating system, while thread switching does not require a call into the operating system or an interrupt to the kernel.
10. Blocking: If one process is blocked, the execution of other processes is not affected; if a user-level thread is blocked, all other user-level threads of that process are blocked.
11. Control structures: A process has its own Process Control Block, stack, and address space; a thread has the parent process's PCB, its own Thread Control Block and stack, and shares the common address space.
12. Effect of changes: Changes to the parent process do not affect child processes; because all threads of a process share the address space and other resources, changes to the main thread may affect the behavior of the other threads.
13. Creation mechanism: Process creation involves a system call, while thread creation is done through a thread library or API rather than a direct system call.
14. Data sharing: Processes do not share data with each other, while threads share data with each other.
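A minimal C sketch of the isolation point above, assuming a POSIX system: a child created with fork() receives its own copy of the address space, so its change to the variable is invisible to the parent, whereas threads created inside one process would all see a single shared copy. The variable name and value are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int counter = 0;   /* each process gets its own copy after fork() */

int main(void)
{
    pid_t pid = fork();          /* duplicate the calling process */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {              /* child: modifies only its own copy */
        counter = 100;
        printf("child  sees counter = %d\n", counter);
        exit(0);
    }
    wait(NULL);                  /* parent waits for the child to finish */
    /* The parent's copy is untouched: processes are isolated. */
    printf("parent sees counter = %d\n", counter);
    return 0;
}

The parent still prints 0, which is exactly the isolation that a multithreaded program does not have.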

7. What is Cloud OS and its features

A Cloud Operating System (Cloud OS) is a type of operating system designed to manage and
coordinate the resources of cloud computing infrastructure. It serves as a layer of software that
abstracts, pools, and manages the resources (computing power, storage, and networking) of a cloud
environment, providing a platform for deploying and running applications and services. It lets users
run web-based applications, access their own virtual desktop from anywhere, and carry out everyday
tasks through a browser.
Features are :-
Resource Management: It combines and manages computing power, storage, and network resources
from multiple servers.
Scalability: It automatically adjusts resources based on the needs of your applications, so they can
handle more or less work as needed.
High Availability: It ensures that your services stay up and running, even if some parts fail.
Multi-Tenancy: It allows multiple users or organizations to share the same infrastructure securely
without interfering with each other.
Security: It protects your data and resources with strong access controls and encryption.
Monitoring: It provides tools to keep track of how your cloud resources are performing and being
used. APIs: It offers interfaces that allow different software and services to interact with the cloud
infrastructure easily.
8. What is Segmentation and its Types
Segmentation is a memory management technique in operating systems (OS) that divides a
program's memory into variable-sized segments or sections. Each segment holds a logically related
unit, such as code, data, or the stack.
Segmentation was originally invented to increase the reliability of systems running multiple processes
simultaneously by isolating software processes and the data they are using.
Segmentation is a non-contiguous memory allocation method, along with paging. Paging divides
memory into fixed-sized blocks, while segmentation divides memory into variable-sized blocks.
Segmentation can improve computer performance by:

• Facilitating quicker access: Helps in locating and managing data more swiftly
• Reduces memory wastage: Segments are sized to fit their contents, avoiding internal fragmentation and optimizing memory space usage
• Enhances performance: Boosts the overall performance of the computer by streamlining
operations
Advantages of Segmentation:-
Modularity: Segmentation supports modular programming by allowing different modules to be
independently managed and protected.
Protection: Each segment can have different access rights, enhancing security.
Sharing: Code segments can be shared among different processes, reducing redundancy.
Dynamic Memory Allocation: Segments can grow or shrink dynamically as needed.
Disadvantages of Segmentation:-
Complexity: Managing variable-sized segments is more complex compared to fixed-size pages.
Fragmentation: Segmentation can lead to external fragmentation, where free memory is scattered in
small chunks, making it hard to allocate large contiguous blocks of memory.
9. What is Paging?
Paging is a memory management technique in an operating system (OS) that stores and retrieves data
from a device's secondary storage to the primary storage. It works by breaking each process into
individual pages, and the primary memory into frames of the same size. The OS then reads data from
secondary storage in blocks called pages, and stores each page in one of the frames of the main
memory.
It breaks the process's logical memory into fixed-sized blocks called pages and the physical memory
into frames of the same size. This allows the operating system to allocate memory efficiently and
flexibly, eliminating external fragmentation.
Paging helps in:

• Allocating memory to a process at different locations in the main memory


• Reducing memory wastage
• Removing external fragmentation
• Storing a process at non-contiguous locations in the main memory
• Paging is fundamental to implementing virtual memory, where processes can use more
memory than is physically available by paging out unused pages to disk.
• The fixed size of pages and frames simplifies memory allocation and management.
Disadvantages of Paging:-
Overhead: Maintaining page tables and performing address translation introduces overhead.
Page Table Size: The size of the page table can become large, especially for processes with a large
address space, requiring additional memory and management.
Page Faults: If a process tries to access a page not currently in memory, a page fault occurs, causing a
delay as the page is loaded from disk.
Therefore, Paging is a memory management technique that divides both logical and physical memory
into fixed-size blocks called pages and frames, respectively. It eliminates external fragmentation and
supports efficient memory allocation and virtual memory. However, it introduces some overhead and
can lead to large page tables and potential delays due to page faults.

10. Threading and Multithreading


Threading
Threading refers to the process of creating and managing threads within a single process. A thread is
the smallest unit of execution within a process and is sometimes called a lightweight process. Each
thread within a process shares the same resources, such as memory and file handles, but operates
independently.
Key Concepts of Threading:
Thread: A single sequence of execution within a process.
Single Threading: A process where only one thread is present, meaning only one sequence of
instructions is executed at a time.
Multithreading
Multithreading extends the concept of threading by allowing multiple threads to exist within the same
process. These threads run concurrently and share the same resources, allowing for parallel execution
of tasks.
Key Concepts of Multithreading:
Multiple Threads: Multiple sequences of execution within a single process.
Concurrency: The ability to run multiple threads seemingly simultaneously, improving the utilization
of CPU resources.
Parallelism: Actual simultaneous execution of threads on multiple CPU cores.

Aspect-by-aspect comparison:
• Definition: Threading refers to the creation and management of individual threads; multithreading is the ability of an OS or a program to manage multiple threads concurrently.
• Context: Threading typically refers to the management of a single thread within a process; multithreading involves multiple threads within a single process, sharing the same resources.
• Concurrency: With a single thread, only one sequence of execution runs at any given time; with multithreading, multiple threads run concurrently within a process.
• Resource sharing: In both cases threads share the resources of the process, such as memory and file handles; in multithreading this sharing facilitates efficient communication and resource usage among the threads.
• Efficiency: Single threading is limited by one sequence of execution, potentially underutilizing multi-core processors; multithreading makes better use of multi-core processors, improving overall efficiency and performance.
• Overhead: Single threading has lower overhead than separate processes since threads share the same address space; multithreading adds slightly higher overhead due to synchronization and context switching between threads.
• Synchronization: Single threading is simpler because there is only one thread of execution; multithreading requires mechanisms like mutexes, semaphores, and condition variables to manage access to shared resources.
• Parallelism: Single threading is limited to sequential operations; multithreading enables parallelism within a single process, enhancing performance on multi-core systems (a parallel-sum sketch follows this list).
• Use cases: Single threading suits simple applications with linear workflows; multithreading suits applications requiring parallel processing, such as web servers, multimedia applications, and scientific computing.
• Complexity: Single threading is easier to implement and manage; multithreading is more complex due to concurrency and synchronization issues.
• Scalability: Single threading is less scalable because it cannot fully utilize multi-core processors; multithreading is highly scalable because it can take advantage of multiple cores for concurrent execution.
• Fault isolation: In both cases an error in one thread can affect the entire process, although structured error handling in a multithreaded program can mitigate this.
• Examples: Single-threaded examples include basic scripts and simple command-line tools; multithreaded examples include modern web browsers, servers, complex scientific computations, and real-time gaming.
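To make the parallelism point concrete, here is a minimal multithreading sketch in C using POSIX threads: several threads of one process each sum a slice of a shared array, and main() combines the partial results. The array size and thread count are illustrative assumptions.

#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static int data[N];                    /* shared by all threads of the process */
static long long partial[NTHREADS];    /* one result slot per thread */

static void *sum_slice(void *arg)
{
    long t  = (long)arg;
    long lo = t * (N / NTHREADS);
    long hi = (t == NTHREADS - 1) ? N : lo + N / NTHREADS;
    long long s = 0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[t] = s;                    /* each thread writes only its own slot */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++)
        data[i] = 1;

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_slice, (void *)t);

    long long total = 0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("total = %lld\n", total);   /* prints 1000000 */
    return 0;
}

Because each thread writes to a different slot of partial, no locking is needed here; on a multi-core machine the slices can genuinely run in parallel.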
11. Demand Paging
Demand paging is a memory management technique used in operating systems to load pages into
memory only when they are needed, rather than loading the entire process into memory at once. This
helps in efficient utilization of memory and allows more processes to be loaded into the system
concurrently
How Demand Paging Works:-
Page Table Setup: Each process has a page table that keeps track of all its pages. The page table
entries indicate whether a page is in physical memory or needs to be fetched from secondary storage
(like a disk).
Page Fault Handling: When a process tries to access a page that is not currently in memory, a page
fault occurs. The operating system then locates the page on the disk, loads it into a free frame in
physical memory, and updates the page table. The process is then allowed to continue its execution
from the point where the page fault occurred.
Lazy Loading: Pages are loaded only on demand. This means that if a process does not use certain
pages, they are never loaded into memory, saving resources.
Advantages of Demand Paging:-
Efficient Memory Use: Only the necessary pages are loaded into memory, which can significantly
reduce the amount of memory used.
More Processes in Memory: Since not all pages of all processes need to be in memory at the same
time, more processes can be loaded, improving CPU utilization.
Faster Process Startup: Processes can start execution quickly because not all of their pages need to
be loaded initially.
Disadvantages of Demand Paging:-
Page Fault Overhead: Handling page faults introduces overhead, which can slow down process
execution.
Disk I/O Overhead: Frequent page faults can lead to high disk I/O activity, which can degrade
system performance.
Latency: There is a delay when a page needs to be loaded from disk, which can affect real-time
applications.
In simple terms, here's how hardware supports demand paging:
Memory Management Unit (MMU): Converts virtual addresses to physical addresses using a page
table.
Page Table: Stores mappings of virtual pages to physical frames.
Page Fault Handling: Alerts the operating system when a needed page isn't in memory.
Secondary Storage (Disk): Stores pages not currently in memory, like a backup.
Translation Lookaside Buffer (TLB): Speeds up address translation by remembering recent
translations.
Hardware Timer: Helps manage page replacement and other tasks related to paging.
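A small user-space illustration of lazy loading, assuming a Linux-like POSIX system (the notes do not name a specific OS): mmap() reserves a large virtual region, but physical frames are allocated only when a page is first touched and the resulting page fault is serviced.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t length = 256UL * 1024 * 1024;   /* reserve 256 MiB of virtual memory */

    /* No physical frames are allocated yet: the mapping is only
     * recorded as "not present" in the page table. */
    char *region = mmap(NULL, length, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touching a page triggers a page fault; the OS then allocates a
     * frame and maps it in (demand paging in action). */
    size_t touched = 0;
    for (size_t off = 0; off < length; off += 64 * 1024 * 1024) {
        region[off] = 42;                  /* only these few pages get frames */
        touched++;
    }

    printf("touched %zu pages of a %zu-byte reservation\n", touched, length);
    munmap(region, length);
    return 0;
}

Only the handful of touched pages consume physical memory; the rest of the reservation stays virtual until (and unless) it is accessed.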
12. Conversion of virtual address to physical address (Address Translation)
The process of converting a virtual address to a physical address is called address translation, and is
managed by the computer's memory management unit (MMU). The MMU translates virtual addresses
into physical addresses, allowing the program to access the actual physical memory locations
The MMU works with the Translation Lookaside Buffer (TLB) to map virtual memory addresses to
the physical memory layer. The TLB acts as a cache for the MMU to reduce the time taken to access
physical memory.
The process for translating virtual memory addresses to physical memory addresses is as follows:

• Split the virtual address into a virtual page number (the high-order bits) and a page offset (the low-order bits).
• Look up the virtual page number in the page table to obtain the physical frame number.
• Copy the frame number into the high-order bits of the physical address.
• Copy the page offset into the physical address unchanged.
• Combining these two values gives the physical address.
For example, with 1 KB (0x400-byte) pages, virtual address 0x142A is 0x2A bytes past the start of
virtual page 5, which begins at 0x1400. If page 5 maps to physical frame 1, which starts at 0x0400,
adding the offset 0x2A gives 0x042A. So logical address 0x142A really is physical address 0x042A.
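The arithmetic above can be written out as a short C sketch. The 0x400-byte page size and the tiny page table mapping virtual page 5 to physical frame 1 are assumptions chosen only to reproduce the worked example.

#include <stdio.h>

#define PAGE_SIZE 0x400u    /* 1 KB pages, as in the worked example */

/* Toy page table: index = virtual page number, value = physical frame. */
static unsigned page_table[8] = { 0, 0, 0, 0, 0, 1, 0, 0 };  /* vpage 5 -> frame 1 */

unsigned translate(unsigned vaddr)
{
    unsigned vpn    = vaddr / PAGE_SIZE;   /* virtual page number           */
    unsigned offset = vaddr % PAGE_SIZE;   /* copied through unchanged      */
    unsigned frame  = page_table[vpn];     /* the MMU's page-table lookup   */
    return frame * PAGE_SIZE + offset;     /* physical address              */
}

int main(void)
{
    unsigned va = 0x142A;
    printf("virtual 0x%X -> physical 0x%04X\n", va, translate(va));  /* 0x042A */
    return 0;
}

In real hardware the lookup is done by the MMU (with the TLB as a cache), but the page-number/offset arithmetic is exactly this.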
13. Deadlock and how to avoid Deadlock

In an operating system (OS), a deadlock occurs when multiple processes or threads are unable to
proceed because each is waiting for another to release a resource. This is an infinite waiting situation
where processes are stuck in a circular dependency and cannot progress.
For a deadlock to occur, all four necessary conditions must hold simultaneously: mutual exclusion,
hold and wait, no preemption, and circular wait. Deadlocks are a common problem in multitasking
environments, such as operating systems, database systems, and concurrent programming.
Key Characteristics of Deadlock:-
Mutual Exclusion: At least one resource must be held in a non-sharable mode, meaning only one
process can use it at a time.
Hold and Wait: Processes must hold resources already allocated to them while waiting for additional
resources that are currently held by other processes.
No Preemption: Resources cannot be forcibly taken from the processes holding them; they can only
be released voluntarily.
Circular Wait: There must be a circular chain of two or more processes, each holding a resource that
the next process in the chain is waiting for.
Example of Deadlock:-
Consider a simple scenario with two processes, A and B, and two resources, R1 and R2:
Process A acquires resource R1.
Process B acquires resource R2.
Process A now needs resource R2, so it waits for Process B to release it.
Meanwhile, Process B needs resource R1, held by Process A, so it waits for Process A to release it.
Deadlock Prevention and Avoidance:-
Resource Ordering: Assign a unique ordering to resources and require that resources be acquired in
increasing order. This prevents circular waits.
Resource Allocation: Use techniques like Banker's algorithm to ensure that resources are allocated in
a way that does not lead to deadlock.
Timeouts: Implement timeouts to abort processes that are waiting too long for resources, breaking
potential deadlocks.
Resource Preemption: Allow operating system intervention to preemptively reclaim resources from
processes to resolve deadlocks.
Avoidance Algorithms: Use algorithms that dynamically analyze resource allocation requests to
determine if they could potentially lead to deadlock, and deny requests that would cause deadlock.
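A minimal C sketch of the resource-ordering rule, assuming two pthread mutexes stand in for R1 and R2 from the example: because every thread acquires R1 before R2, the circular-wait condition can never form. The thread bodies are illustrative, not from the original notes.

#include <pthread.h>
#include <stdio.h>

/* R1 and R2 from the example, represented as mutexes.  The ordering rule:
 * every thread must lock R1 before R2, never the other way round. */
static pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

static void *process(void *name)
{
    pthread_mutex_lock(&R1);    /* always first  */
    pthread_mutex_lock(&R2);    /* always second */
    printf("%s holds both resources\n", (char *)name);
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, process, "Process A");
    pthread_create(&b, NULL, process, "Process B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Had Process B locked R2 first and then waited for R1 while Process A held it, the circular wait of the example could occur; the fixed global ordering rules that out.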

14. Technique of Virtual Memory


Paging: Imagine a big book (your program) divided into equal-sized pages. When you need to read a
page, you go to the table of contents (page table) to find where it is in the book (physical memory). If
it's not there, you fetch it from the library (disk).
Segmentation: Think of your program as different sections like chapters (code, data, stack). Each
section has its own bookmark (segment table) showing where it is in the book (physical memory).
You can add more chapters as needed.
Demand Paging: Picture only bringing out the pages you need from your bookshelf (disk) to your
desk (physical memory) when you're working. If you need a page that's not there, you quickly grab it.
Page Replacement Algorithms: Suppose your desk can only hold a few pages at a time. When you
need space for a new page, you decide which one to put back on the shelf (swap out). You might
choose the one you haven't looked at in a while.
Working Set Model: Imagine you're working on a project and need certain pages open on your desk.
You keep the pages you're actively using close by (in physical memory) to avoid constantly getting up
to fetch them from the bookshelf (disk).
Thrashing Prevention: This is like trying to work with too many books open on your desk. You keep
shuffling them around because there's not enough space. To prevent this, you manage how many
books you have open at once to work efficiently without constantly shuffling.

15. File Allocation methods


File allocation methods in operating systems define how files are stored and organized on
storage devices such as hard drives. Here are some common file allocation methods:
1. Contiguous Allocation:
Description: Allocates each file as a contiguous block of disk space.
Operation: When a file is created, the operating system finds a contiguous block of free space large
enough to accommodate the entire file.
Advantages: Simple to implement, efficient for sequential access.
Disadvantages: Fragmentation can occur over time, leading to inefficient use of disk space and
difficulty in finding contiguous free space for large files.
2. Linked Allocation:
Description: Allocates each file as a linked list of disk blocks.
Operation: Each file is divided into blocks of fixed size. Each block contains a pointer to the next
block in the file. The last block of the file contains a special end-of-file marker.
Advantages: No external fragmentation, easy to extend files, efficient for sequential access.
Disadvantages: Inefficient for direct (random) access, because reaching a block requires following the
chain of pointers from the start of the file, and increased overhead for maintaining block pointers.
3. Indexed Allocation:
Description: Allocates each file using an index block that contains pointers to disk blocks.
Operation: Each file has an index block containing pointers to disk blocks that store the file's data.
The index block is located using a file control block (FCB) or an inode.
Advantages: Efficient for direct access, minimal overhead compared to linked allocation, supports
dynamic file size changes.
Disadvantages: Requires additional disk space for the index block, limited by the size of the index
block for large files.
4. Multi-Level Index Allocation:
Description: Extends indexed allocation by using multiple levels of index blocks.
Operation: Instead of a single index block, multiple levels of index blocks are used to accommodate
larger files. Each level of the index points to the next level until reaching the data blocks.
Advantages: Supports larger files than traditional indexed allocation, efficient for managing large file
systems.
Disadvantages: Increased complexity in managing multiple levels of index blocks, potential overhead
in traversing multiple levels for file access.

16. Scheduling and types of scheduling


Scheduling in an operating system (OS) is the process of assigning tasks to CPUs and switching
between them. The process scheduler decides which process runs at a given time. It can pause a
running process, move it to the back of the queue, or start a new process.
Scheduling is a fundamental operation in systems and networks, and is critical for application
performance and system efficiency. The purpose of CPU scheduling is to make the system faster,
more efficient, and fairer. It ensures that CPU utilization is maximized so that the computer is more
productive.
a) Short-Term Scheduling:-
Purpose: Also known as CPU scheduling, it determines which process in the ready queue will be
executed next and allocates CPU time to processes.
Frequency: Occurs frequently, typically on the order of milliseconds or microseconds.
Objective: Maximizing CPU utilization, minimizing response time, and ensuring fairness among
processes.
Examples: First-Come, First-Served (FCFS), Shortest Job Next (SJN), Round Robin (RR),
Priority Scheduling.
b) Mid-Term Scheduling:
Purpose: Also known as Swapping or Medium-term scheduling, it involves swapping processes
between main memory and disk.
Frequency: Occurs less frequently compared to short-term scheduling, typically on the order of
seconds or minutes.
Objective: Managing the degree of multiprogramming by swapping out less frequently used
processes to disk to free up memory for other processes.
Examples: swapping processes out to disk and back in, suspending and resuming processes.
c) Long-Term Scheduling:
Purpose: Also known as Job Scheduling or Admission Control, it selects which processes from
the pool of new processes will be admitted into the system for execution.
Frequency: Occurs least frequently among the three, typically when a new process enters the
system.
Objective: Ensuring a good mix of processes in the system to maintain overall system
performance and avoid resource exhaustion.
Examples: Process admission, deciding whether to accept a new batch job or interactive session
into the system.
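To show what a short-term scheduling decision means for the metrics mentioned above, here is a minimal C sketch that computes waiting and turnaround times under First-Come, First-Served; the burst times and the assumption that all processes arrive at time 0 are illustrative.

#include <stdio.h>

int main(void)
{
    /* Illustrative CPU burst times (ms) for processes arriving at time 0,
     * in arrival order: P1, P2, P3, P4. */
    int burst[] = { 6, 8, 7, 3 };
    int n = sizeof burst / sizeof burst[0];

    int waiting = 0, total_wait = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i];   /* finish time under FCFS */
        printf("P%d: waiting %2d ms, turnaround %2d ms\n",
               i + 1, waiting, turnaround);
        total_wait += waiting;
        total_tat  += turnaround;
        waiting    += burst[i];                /* the next process waits longer */
    }
    printf("average waiting %.2f ms, average turnaround %.2f ms\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}

With these numbers FCFS gives an average waiting time of 10.25 ms; a different algorithm (e.g., Shortest Job Next) would reorder the processes and change that figure.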

17. Preemptive and non-preemptive algorithm

Comparison of preemptive and non-preemptive scheduling:
• Resource allocation: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time; in non-preemptive scheduling, once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
• Interruption: In preemptive scheduling, a process can be interrupted in between execution; in non-preemptive scheduling, a process cannot be interrupted until it terminates or moves to the waiting state on its own.
• Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve; in non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later process with a shorter burst time may starve.
• Overhead: Preemptive scheduling has the overhead of scheduling processes; non-preemptive scheduling does not have this overhead.
• Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
• Cost: Preemptive scheduling has a cost associated with it; non-preemptive scheduling has no such cost.
• CPU utilization: CPU utilization is high in preemptive scheduling and low in non-preemptive scheduling.
• Waiting time: Waiting time is lower in preemptive scheduling and higher in non-preemptive scheduling.
• Response time: Response time is lower in preemptive scheduling and higher in non-preemptive scheduling.
• Decision making: In preemptive scheduling, decisions are made by the scheduler based on priority and time-slice allocation; in non-preemptive scheduling, decisions are made by the process itself and the OS just follows the process's instructions.
• OS control: The OS has greater control over the scheduling of processes in preemptive scheduling and less control in non-preemptive scheduling.
• Context-switch overhead: Preemptive scheduling has higher overhead due to frequent context switching; non-preemptive scheduling has lower overhead since context switching is less frequent.
• Examples: Examples of preemptive scheduling are Round Robin and Shortest Remaining Time First; examples of non-preemptive scheduling are First Come First Serve and Shortest Job First (a short Round Robin sketch follows this list).
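As a companion to the comparison, this C sketch simulates preemptive Round Robin: each process runs for at most one time quantum before the CPU is taken away and handed to the next ready process. The 4 ms quantum and the burst times are illustrative assumptions.

#include <stdio.h>

int main(void)
{
    int burst[]   = { 6, 8, 7, 3 };   /* illustrative burst times (ms) */
    int n         = sizeof burst / sizeof burst[0];
    int quantum   = 4;                /* time slice */
    int remaining[4], time = 0, done = 0;

    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;             /* this process has already finished */
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: P%d runs for %d ms%s\n", time, i + 1, run,
                   remaining[i] > quantum ? " (preempted)" : " (finishes)");
            time += run;
            remaining[i] -= run;
            if (remaining[i] == 0)
                done++;
        }
    }
    printf("all processes finished at t=%d ms\n", time);
    return 0;
}

The "(preempted)" lines are exactly the interruptions that distinguish preemptive scheduling from the non-preemptive case, where each process would run its whole burst before giving up the CPU.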
18. Thrashing and how to handle it
Thrashing in operating systems occurs when the system spends a significant amount of time swapping
data between physical memory (RAM) and secondary storage (usually a hard disk or SSD) due to
excessive paging activity. This results in a severe degradation of system performance as the CPU
spends more time managing the swapping of pages than executing actual tasks. Thrashing typically
happens when the system's memory is overcommitted, meaning there are more processes or tasks
competing for memory than the system can handle effectively.
Causes of Thrashing:
Overcommitting Memory: When the sum of the memory requirements of all active processes
exceeds the physical memory capacity of the system.
Inefficient Scheduling: Poorly designed scheduling algorithms may prioritize processes in a way that
leads to excessive paging activity.
Insufficient Memory: If the system doesn't have enough physical memory to accommodate the
working set of active processes, thrashing may occur.
Handling Thrashing:-
To handle thrashing effectively, operating systems employ various techniques to mitigate
excessive paging activity:
Increase Physical Memory: Adding more RAM to the system can alleviate thrashing by providing
more space for active processes to reside in memory.
Optimize Page Replacement Algorithms: Efficient page replacement algorithms, such as Least
Recently Used (LRU) or Clock, can help minimize thrashing by intelligently selecting pages for
eviction from memory.
Adjust Swapping Policies: Fine-tuning swapping policies to balance memory allocation between
processes and prevent overcommitment can help mitigate thrashing.
Limit Concurrent Processes: Limiting the number of concurrent processes or adjusting scheduling
parameters to prevent memory overcommitment can reduce the likelihood of thrashing.
Implement Memory Monitoring: Proactively monitor system memory usage and performance
metrics to detect signs of thrashing early and take corrective actions.
Optimize Application Behavior: Encourage developers to optimize their applications to minimize
memory usage and reduce the likelihood of thrashing.

19. Linked Allocation


Linked allocation, also known as linked list allocation, is a file allocation method for storing files on a
disk using disk blocks as a linked list. Each block contains data and a pointer to the next block. The
blocks can be scattered throughout the disk, which prevents external fragmentation.
Here are some advantages of linked allocation:

• Avoids external fragmentation


• Easy to increase file size
• Less load on the directory
• Free blocks can be used to satisfy file block requests
• Files can continue to grow as long as free blocks are available
Here are some limitations of linked allocation: no efficient random (direct) access, and extra space is
required for the block pointers.
In linked allocation, each file is a linked list of disk blocks. The directory entry has a pointer to the
first and optionally the last block of the file. For example, a file of 5 blocks which starts at block 4,
might continue at block 7, then block 16, block 10, and finally block 27.
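A toy C sketch of that traversal: the "disk" is an array of blocks, each holding data plus the index of the next block, and the file is read by following the pointers from the directory entry's start block. The block chain 4 -> 7 -> 16 -> 10 -> 27 comes from the example above; everything else is an illustrative assumption.

#include <stdio.h>

#define NBLOCKS     32
#define END_OF_FILE (-1)      /* special end-of-file marker in the last block */

struct block {
    char data[16];            /* payload stored in this disk block */
    int  next;                /* index of the next block, or END_OF_FILE */
};

static struct block disk[NBLOCKS];

int main(void)
{
    /* Lay the 5-block file out non-contiguously: 4 -> 7 -> 16 -> 10 -> 27 */
    int chain[] = { 4, 7, 16, 10, 27 };
    for (int i = 0; i < 5; i++) {
        snprintf(disk[chain[i]].data, sizeof disk[chain[i]].data, "part %d", i);
        disk[chain[i]].next = (i < 4) ? chain[i + 1] : END_OF_FILE;
    }

    /* The directory entry stores only the first block; reading is sequential. */
    int start = 4;
    for (int b = start; b != END_OF_FILE; b = disk[b].next)
        printf("block %2d: %s\n", b, disk[b].data);
    return 0;
}

Reaching the i-th block requires following i pointers from the start, which is why direct (random) access is slow under linked allocation.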

20. Stateful vs Stateless service

• Persistence of state: A stateful service maintains the state or context of past interactions with clients; a stateless service does not retain information about past interactions.
• Client session: A stateful service establishes a session with each client and retains information about the client's state throughout the session; a stateless service treats each client request independently, without maintaining any context between requests.
• Resource utilization: A stateful service requires memory or storage to maintain client state, potentially leading to higher resource utilization; a stateless service typically requires less memory or storage since there is no client state to maintain.
• Scalability: A stateful service may be more challenging to scale horizontally since the state must be synchronized across multiple instances; a stateless service is easier to scale horizontally since instances are independent and do not need to share state.
• Example: A stateful service might be a chat application where the server keeps track of the conversation history for each user session; a stateless service might be a web server that serves static content or processes API requests without storing any client-specific data.

21. Page replacement algorithm

FIFO (First-In, First-Out):
• The oldest page in memory is the one selected for replacement.
• Easy to implement; uses a simple queue to keep track of page arrival order (a FIFO sketch follows this list).
• Can suffer from Belady's Anomaly, where increasing the number of frames can increase the number of page faults.
Optimal (OPT):
• Replaces the page that will not be used for the longest period of time in the future.
• Requires future knowledge of page accesses, which is not feasible in practice but serves as a benchmark for comparison with other algorithms.
• Demonstrates the lowest possible number of page faults among all algorithms.
LRU (Least Recently Used):
• Replaces the page that has not been used for the longest period of time in the past.
• Requires keeping track of the order of page accesses, typically done using a stack or counter implementation.
• More complex than FIFO but often more effective, as it aims to capture temporal locality in page access patterns.
LFU (Least Frequently Used):
• Replaces the page with the lowest frequency of access.
• Requires maintaining a count of the number of times each page is referenced, which can be resource-intensive.
• May suffer from the "frequency count overflow" problem, where counts become skewed if they reach their maximum value.
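A compact C sketch of the FIFO policy from the first entry: frames are filled in arrival order and the oldest resident page is evicted on a fault. The reference string and the three-frame memory are illustrative assumptions.

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int refs[]  = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 };  /* illustrative reference string */
    int nrefs   = sizeof refs / sizeof refs[0];
    int frame[NFRAMES] = { -1, -1, -1 };             /* -1 means the frame is empty */
    int oldest  = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (frame[f] == refs[i])
                hit = 1;                 /* page already resident: no fault */
        if (!hit) {                      /* page fault: evict the oldest page */
            frame[oldest] = refs[i];
            oldest = (oldest + 1) % NFRAMES;   /* the FIFO queue advances */
            faults++;
        }
    }
    printf("%d page faults for %d references with %d frames\n",
           faults, nrefs, NFRAMES);
    return 0;
}

Swapping the eviction rule (for example, picking the least recently used frame instead of the oldest one) is all it takes to turn this into an LRU simulation.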
22. What is context switching in OS
Context switching in operating systems refers to the process of saving and restoring the state of a
process or thread so that it can be resumed from the same point later. It occurs when the operating
system switches the CPU from executing one process or thread to another, allowing multiple
processes or threads to share the CPU resources efficiently.
Here's how context switching works:
Saving State: When the operating system decides to switch from executing one process or thread to
another (for example, due to a scheduling decision or an interrupt), it saves the current state of the
running process or thread. This includes information such as CPU registers, program counter
(instruction pointer), stack pointer, and other relevant CPU and memory state.
Loading State: The operating system then loads the saved state of the next process or thread that is
scheduled to run. This involves restoring the CPU registers, program counter, stack pointer, and other
relevant state from the saved context.
Execution: Once the state has been loaded, the CPU resumes execution of the newly loaded process
or thread from the point where it was previously interrupted. The process or thread continues its
execution until it is preempted or voluntarily yields the CPU.
Context switching is a fundamental mechanism used by operating systems to implement multitasking,
where multiple processes or threads appear to run concurrently on a single CPU. It allows the CPU to
efficiently switch between different tasks, enabling the system to make the most effective use of
available resources and provide a responsive user experience. However, context switching also incurs
overhead in terms of CPU time and system resources, so minimizing unnecessary context switches is
essential for optimizing system performance.

23. What is race condition

A race condition in operating systems occurs when multiple processes or threads attempt to access
shared resources concurrently without proper synchronization, leading to unpredictable behavior. This
arises due to non-atomic operations and interleaved execution, where the timing and sequence of
events can vary. Without synchronized access, conflicting modifications to shared resources can
occur, potentially causing data corruption or inconsistent program state. For example, if one thread
reads a shared variable while another thread is modifying it, the reading thread may operate on
outdated data. To mitigate race conditions, proper synchronization mechanisms like locks or
semaphores should be employed to ensure mutual exclusion and coordinated access to shared
resources, preventing simultaneous modifications and ensuring predictable program behavior.

For example, imagine two tasks in a program both want to change the value of a variable. If they
don't wait for each other, they could end up overwriting each other's changes, leading to errors or
unexpected behavior.
So, to avoid race conditions, programs need to use special tricks to make sure that only one task can
access a resource at a time, like taking turns or using locks. This way, everyone gets a chance to do
their thing without stepping on each other's toes.
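The counter scenario above can be written as a short C sketch, assuming POSIX threads: two threads increment a shared variable, and without the mutex the non-atomic read-modify-write steps interleave and updates are lost; with the lock held around each increment, the final value is always 200000. The iteration count is illustrative.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *unused)
{
    (void)unused;
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);    /* without this lock/unlock pair the two  */
        counter++;                    /* threads race: counter++ is a read, an  */
        pthread_mutex_unlock(&lock);  /* add and a write that can interleave    */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
    return 0;
}

Removing the mutex calls typically makes the printed total fall short of 200000 on a multi-core machine, which is the race condition in action.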
