OPERATING SYSTEM Note
I) Background
A) Hardware:
* The physical components of the computer, such as the CPU, memory, storage, and input/output devices.
B) Software:
* This consists of the programs and data that run on the hardware. Key software elements include:
* Operating System (OS): Manages hardware and software resources, providing a platform for applications. (e.g.,
Windows, macOS, Linux).
* Application Software: Programs designed to perform specific tasks (e.g., word processors, web browsers,
games).
How it Works:
* Input: Data is entered into the computer through input devices.
* Processing: The CPU processes the data according to instructions from software.
* Storage: Data and programs are stored in memory or storage devices.
* Output: Processed information is displayed or presented through output devices.
Key Functions:
* Inputting data: Gathering information from the outside world.
* Processing data: Manipulating and transforming data.
* Storing data: Retaining data for future use.
* Outputting data: Presenting processed information to the user.
Computer systems are incredibly versatile and are used in a wide range of applications, from personal computing to
complex scientific simulations.
An operating system (OS) is fundamental software that manages computer hardware and software resources and
provides common services for computer programs. Here's a concise overview:
Core Functions:
A) Hardware Management:
* The OS controls and coordinates the use of hardware devices, such as the CPU, memory, storage, and
input/output devices.
B) Resource Allocation:
* It allocates resources to different programs and users, ensuring that they can run efficiently and without
interfering with each other. This includes managing CPU time, memory space, and peripheral devices.
C) User Interface:
* The OS provides a user interface that allows users to interact with the computer. This can be a graphical user
interface (GUI) or a command-line interface (CLI).
D) File Management:
* It organizes and manages files and directories, allowing users to store, retrieve, and manipulate data.
E) Process Management:
* The OS manages the execution of programs (processes), scheduling them to run on the CPU and providing
them with the necessary resources.
F) Memory Management:
* It manages the computer's RAM, allocating memory to programs and ensuring that they do not interfere with each
other.
G) Security:
* Operating systems often include security features to protect the computer from unauthorized access and
malicious software.
Key Concepts:
* Kernel: The core of the OS, responsible for managing the most basic functions.
* Processes: Programs that are currently running.
* Files: Collections of data stored on storage devices.
* Drivers: Software that allows the OS to communicate with hardware devices.
Examples of Operating Systems:
A) Desktop/Laptop:
* Windows
* macOS
* Linux
B) Mobile Devices:
* Android
* iOS
In essence, the operating system serves as a crucial intermediary between the computer hardware and the user,
enabling the smooth and efficient operation of the computer.
The history of operating systems is a fascinating journey that mirrors the evolution of computing itself. Here's a
breakdown of key periods and developments:
1) Early Days (1940s-1950s):
A) No OS:
* Early computers were massive and complex, requiring manual operation. There was no concept of an operating
system as we know it today.
* Programs were entered directly into the hardware, and each task required extensive manual setup.
B) Batch Processing:
* As computers became more powerful, the need for efficiency grew. Batch processing emerged, where similar
jobs were grouped and executed sequentially.
* Early operating systems, like GMOS (General Motors Operating System), were developed to automate this
process.
* Punch cards were commonly used for input.
2) The 1960s:
A) Multiprogramming:
* This era saw the introduction of multiprogramming, which allowed multiple programs to reside in memory and
share the CPU.
* Time-sharing systems were developed, enabling multiple users to interact with the computer simultaneously.
* UNIX began its development, laying the groundwork for many modern operating systems.
3) The 1990s:
A) Linux Emerges:
* The open-source Linux operating system gained popularity, offering a free and customizable alternative to
commercial operating systems.
4) Modern Trends:
* Cloud computing, virtualization, and containerization have significantly impacted operating system design.
Operating systems are becoming increasingly complex, with features like advanced security, networking, and
support for diverse hardware.
In essence, the history of operating systems is a story of continuous innovation, driven by the need for greater
efficiency, usability, and functionality.
The structure of an operating system defines how its various components are organized and interact. Due to the
complexity of modern operating systems, a variety of architectural approaches have been developed. Here's an
overview of the common structures:
Key Operating System Structures:
1) Simple Structure:
* This is the most basic structure, where the OS has minimal separation between its components.
* MS-DOS is a classic example.
* It's simple to implement but lacks modularity, making it difficult to maintain and extend.
2) Monolithic Structure:
* In a monolithic kernel, all OS services run within the kernel space.
* Linux is often considered a monolithic kernel, although it has modular aspects.
* It offers high performance but can be complex and difficult to modify.
3) Layered Structure:
* The OS is organized into layers, with each layer building upon the services of lower layers.
* This approach simplifies debugging and verification.
* However, it can be challenging to define the layers correctly, and it can reduce performance.
4) Microkernel Structure:
* Only essential OS services (like process management and inter-process communication) run in the kernel space.
* Other services (like file systems and device drivers) run in user space.
* This enhances modularity and security but can introduce performance overhead due to increased communication
between user and kernel space.
5) Modular Structure:
* The OS is divided into modules, each responsible for a specific task.
* Modules can be loaded and unloaded dynamically, providing flexibility.
* This approach combines the benefits of layered and microkernel structures.
6) Hybrid Kernels:
* These kernels try to take the best of both the microkernel and monolithic kernel structures. Many modern operating
systems fall into this category.
Key Considerations:
* The choice of OS structure impacts performance, security, and maintainability.
* Modern operating systems often employ hybrid approaches, combining elements of different structures.
Understanding these structures is essential for comprehending how operating systems function and evolve.
When examining operating system structure, it's crucial to understand how key components like processes,
files, system calls, and the shell interact. Here's a breakdown:
1. Processes:
* Definition:
* A process is an instance of a program in execution. The OS is responsible for managing these processes.
* This includes allocating resources (CPU time, memory), scheduling their execution, and ensuring they don't
interfere with each other.
* OS Role:
* The OS kernel maintains process control blocks (PCBs) to track the state of each process.
* It handles process creation, termination, and communication.
2. Files:
* Definition:
* A file is a named collection of related data stored on a storage device.
* The OS provides a file system to organize and manage these files.
* OS Role:
* The OS handles file creation, deletion, reading, and writing.
* It manages file permissions and access control.
* The OS provides the abstraction of files, so that programs do not need to know the low-level details of the
storage devices.
3. System Calls:
* Definition:
* System calls are the interface between user-level programs and the OS kernel.
* They allow programs to request services from the OS, such as accessing hardware or managing files.
* Function:
* When a program needs to perform a privileged operation, it makes a system call.
* The OS kernel then handles the request and returns the result.
* System calls are essential for security and resource management. They prevent user programs from directly
accessing hardware.
* Examples:
* open(), read(), write(), fork(), exec().
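For instance, a minimal C program can combine open(), read(), and write() to copy a file to standard output; this is a sketch, and the filename example.txt is just a placeholder:
```c
/* Minimal sketch: each open(), read(), and write() call traps into the
 * kernel, which performs the privileged I/O on the program's behalf.
 * Error handling is abbreviated for brevity. */
#include <fcntl.h>   /* open() */
#include <unistd.h>  /* read(), write(), close() */
#include <stdio.h>

int main(void) {
    char buf[4096];
    ssize_t n;

    int fd = open("example.txt", O_RDONLY); /* ask the kernel for a descriptor */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    close(fd);
    return 0;
}
```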
4. The Shell:
* Definition:
* The shell is a user interface that allows users to interact with the OS.
* It interprets user commands and translates them into system calls.
* Function:
* The shell acts as a command-line interpreter.
* It can launch programs, manage files, and perform other OS tasks.
* When you type a command in the shell, it parses the command and initiates the necessary system calls.
* Relationship to System Calls:
* The shell uses system calls to execute user commands.
* For example, when you use the cp command to copy a file, the shell makes system calls to open, read, and write
the files.
* Relationship to processes:
* The shell itself is a process. When a user inputs a command that requires another program to run, the shell will
create a new process for that program to run in.
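A minimal sketch of this fork-and-exec cycle in C follows; it is deliberately simplified (commands are taken without arguments, and real shells add parsing, pipes, and job control):
```c
/* Toy shell loop: read a command, fork a child, exec the program in
 * the child, and wait for it to finish. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>    /* fork(), execlp() */
#include <sys/wait.h>  /* waitpid() */

int main(void) {
    char cmd[256];
    for (;;) {
        printf("$ ");
        fflush(stdout);
        if (!fgets(cmd, sizeof cmd, stdin))
            break;                          /* EOF: exit the shell */
        cmd[strcspn(cmd, "\n")] = '\0';     /* strip trailing newline */
        if (cmd[0] == '\0')
            continue;

        pid_t pid = fork();                 /* create a new process */
        if (pid == 0) {
            /* Child: replace this process image with the command. */
            execlp(cmd, cmd, (char *)NULL);
            perror("exec");                 /* reached only on failure */
            _exit(127);
        }
        waitpid(pid, NULL, 0);              /* parent waits for the child */
    }
    return 0;
}
```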
In Summary:
* The OS provides the underlying infrastructure for managing processes and files.
* System calls act as the bridge between user programs and the OS kernel.
* The shell provides a user-friendly interface for interacting with the OS and initiating system calls.
The interconnection of these components is fundamental to the operation of a modern operating system.
I) THE KERNEL
The kernel is the core of an operating system, acting as the fundamental bridge between the computer's hardware
and its software. It's the most basic level of software that interacts directly with the hardware. Here's a breakdown of
its key aspects:
Key Functions:
* Hardware Management:
* The kernel manages the computer's hardware resources, including the CPU, memory, and input/output (I/O)
devices.
* It controls how these resources are allocated and used by different software applications.
* Memory Management:
* The kernel is responsible for allocating and deallocating memory to processes.
* It ensures that processes do not interfere with each other's memory space.
* Process Management:
* The kernel manages the execution of processes, which are instances of running programs.
* It schedules processes to run on the CPU and handles process creation and termination.
* Device Drivers:
* The kernel interacts with hardware devices through device drivers, which are software components that enable
communication between the kernel and specific hardware.
* System Calls:
* The kernel provides an interface for user-level programs to request services from the operating system.
* These requests are made through system calls, which allow programs to access hardware and perform
privileged operations.
* Privileged Mode:
* The kernel operates in a privileged mode, which allows it to access and control all hardware resources.
* Abstraction Layer:
* The kernel provides an abstraction layer that hides the complexity of the hardware from software applications.
In essence:
The kernel is the essential software component that enables the operating system to function. It manages the
computer's resources and provides a foundation for running applications.
ii) Monolithic Kernels
In a monolithic kernel, the core operating system services:
● Process management
● Memory management
● File system management
● Device drivers
are all integrated into a single, large kernel.
Here's a breakdown of key aspects:
Characteristics:
* Tight Integration:
* All OS services are tightly coupled, allowing for direct communication and efficient execution.
* Kernel Space:
* Everything runs within the kernel space, which has direct access to the hardware.
* Performance:
* Generally, monolithic kernels can offer high performance due to the direct communication between components.
Advantages:
A) Speed:
* Because services are integrated, communication is faster, leading to better performance.
Disadvantages:
A) Stability:
* A bug in one component can crash the entire system.
B) Maintainability:
* Due to the large size and tight coupling, modifying or updating the kernel can be complex.
C) Security:
* Because all services run in kernel space, a security breach in one service can compromise the entire system.
Examples:
* Linux
* Older versions of Unix
Important Note:
* Modern monolithic kernels, like Linux, often incorporate modularity. This allows certain parts of the kernel, such as
device drivers, to be loaded and unloaded dynamically. This adds flexibility without fundamentally changing the
monolithic architecture.
In essence, a monolithic kernel provides a single, unified environment for operating system services, prioritizing
performance through tight integration.
iii) Micro-kernels
Microkernels represent a contrasting approach to operating system design compared to monolithic kernels.
Instead of integrating all OS services into a single kernel space, a microkernel aims to keep the kernel as small as
possible, moving most services to user space. Here's a breakdown:
1) Core Principles:
A) Minimal Kernel:
* The microkernel itself only provides essential services, such as:
* Inter-process communication (IPC)
* Basic process management
* Low-level memory management
B) User-Space Services:
* Other OS services, like file systems, device drivers, and network stacks, run as separate processes in user
space.
Advantages:
1) Increased Stability:
* Because most services run in user space, a failure in one service is less likely to crash the entire system.
2) Enhanced Security:
* The smaller kernel footprint reduces the attack surface.
* Running services in user space provides better isolation.
3) Improved Maintainability:
* The modular design makes it easier to modify, update, or replace individual services.
4) Flexibility:
* It becomes easier to implement different operating system personalities on top of the microkernel.
Disadvantages:
1) Performance Overhead:
* Communication between user-space services and the kernel (or between user-space services) involves IPC,
which can introduce performance overhead.
2) Complexity:
* While the kernel is smaller, the overall system can be more complex due to the distributed nature of services.
Examples:
* QNX
* L4 microkernels
* Minix
In essence, microkernels prioritize stability, security, and modularity by minimizing the kernel's responsibilities
and moving services to user space, though this can come at the cost of performance.
2.1 PROCESS MANAGEMENT
Process management is a fundamental function of an operating system. It involves the operating system's ability to
create, schedule, and terminate processes, as well as manage their resources and interactions. Here's a more
detailed look:
What is a Process?
* Essentially, a process is a program in execution. It's an active entity, while a program is a passive entity (a set of
instructions stored on disk).
* A process requires resources, such as CPU time, memory, and I/O devices, to perform its tasks.
1) Process Creation and Termination:
* The OS creates new processes, allocates their initial resources, and reclaims those resources when they
terminate.
2) Process Scheduling:
* The OS determines which process gets to use the CPU at any given time.
* Scheduling algorithms are used to optimize CPU utilization and ensure fairness.
3) Process Synchronization:
* When multiple processes need to access shared resources, the OS provides mechanisms to synchronize their
actions.
* This prevents conflicts and ensures data consistency.
Key Concepts:
A) Process Control Block (PCB):
* The OS maintains a PCB for each process, which stores information about the process, such as its state,
program counter, and memory allocation.
B) Process States:
* Processes can be in various states, such as:
* New: The process is being created.
* Ready: The process is waiting to be assigned to the CPU.
* Running: The process is currently being executed by the CPU.
* Waiting: The process is waiting for an event to occur (e.g., I/O completion).
* Terminated: The process has finished execution.
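One way to picture the PCB and these states is as a simplified C structure; the fields and sizes here are illustrative, not taken from any real kernel:
```c
#include <stdint.h>
#include <stdio.h>

/* The five states listed above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Simplified process control block (PCB). */
struct pcb {
    int             pid;              /* process identifier */
    enum proc_state state;            /* current scheduling state */
    uint64_t        program_counter;  /* address of the next instruction */
    uint64_t        registers[16];    /* saved register contents */
    void           *memory_info;      /* e.g. pointer to page tables */
};

int main(void) {
    struct pcb p = { .pid = 42, .state = NEW };
    p.state = READY;       /* admitted to the ready queue */
    p.state = RUNNING;     /* dispatched onto the CPU */
    p.state = TERMINATED;  /* finished execution */
    printf("process %d ended in state %d\n", p.pid, p.state);
    return 0;
}
```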
In essence, process management is crucial for efficient and reliable multitasking, allowing multiple programs to run
concurrently and share system resources effectively.
Process description and control are fundamental aspects of how an operating system manages running
programs. Here's a breakdown of the key concepts:
A) PROCESS DESCRIPTION:
* What is a Process?
* A process is an instance of a computer program that is being executed. It's more than just the program code;
it includes the current activity, as represented by the program counter and the contents of the processor's
registers.
* Process Image:
* The process image is the collection of program, data, stack, and attributes of a process. It essentially
represents the entire state of the process.
B) PROCESS CONTROL:
* Operating System's Role:
* The operating system is responsible for managing the lifecycle of processes, from creation to termination.
* Process States:
* Processes transition between different states during their execution. Common states include:
* New: The process is being created.
* Ready: The process is waiting to be assigned to the CPU.
* Running: The process is currently being executed by the CPU.
* Waiting (Blocked): The process is waiting for an event to occur.
* Terminated: The process has finished execution.
In essence, process description provides the necessary information for the operating system to understand the
state of a process, and process control enables the operating system to manage and coordinate the execution of
multiple processes.
2.2 PROCESS INTERRUPTS
Process interrupts are a critical mechanism that allows a computer's operating system to handle events
efficiently. Here's a breakdown of how they work:
What is an Interrupt?
* An interrupt is a signal sent to the processor (CPU) by hardware or software, indicating that an event has
occurred that requires immediate attention.
* Essentially, it's a way for devices or programs to "interrupt" the CPU's current activity and request it to
perform a specific task.
Importance of Interrupts:
1) Efficiency:
* Interrupts allow the CPU to handle multiple tasks concurrently without wasting time waiting for events.
2) Responsiveness:
* They enable the system to respond quickly to events that require immediate attention.
3) Real-time Systems:
* Interrupts are essential for real-time systems, where timely responses to events are critical.
In essence, process interrupts are a fundamental mechanism that enables operating systems to manage
hardware and software events efficiently, ensuring smooth and responsive operation.
2.3 CONTEXT SWITCHING
Context switching is a fundamental concept in computer science, particularly in operating systems and
multitasking environments. It refers to the process of saving the state of a running process or thread so that it
can be restored later, allowing another process or thread to run.
Here's a breakdown of what it entails:
What Happens During a Context Switch?
When a context switch occurs, the operating system (or hypervisor in the case of virtual machines) essentially
does the following:
1. Saves the State of the Current Process/Thread: This involves storing all the crucial information about the
currently running process or thread. This typically includes:
* Program Counter (PC): The address of the next instruction to be executed.
* Registers: The contents of all the CPU registers, which hold temporary data and control information.
* Memory Management Information: Details about the process's memory space, such as page tables.
* Process Control Block (PCB) or Thread Control Block (TCB): A data structure that holds all the information
about the process or thread.
* Stack Pointer: The current position in the process's or thread's stack.
* Open Files and Network Connections: Information about the resources the process is currently using.
2. Selects the Next Process/Thread to Run: The operating system's scheduler determines which process or
thread should be executed next based on various scheduling algorithms (e.g., First-Come, First-Served, Round
Robin, Priority Scheduling).
3. Restores the State of the Next Process/Thread: The operating system loads the saved state of the selected
process or thread from its PCB or TCB back into the CPU. This includes:
* Loading the program counter.
* Restoring the register values.
* Setting up the memory management information.
* Restoring the stack pointer.
4. Resumes Execution: The CPU then starts executing the instructions of the newly loaded process or thread
from the point where it was last interrupted (indicated by the restored program counter).
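The save/select/restore cycle can be sketched as a toy C simulation. The cpu_context structure and function names here are hypothetical, and a real context switch is performed in architecture-specific kernel assembly:
```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* The per-process state that must be saved and restored. */
struct cpu_context {
    uint64_t program_counter;
    uint64_t stack_pointer;
    uint64_t registers[8];
};

static struct cpu_context cpu;  /* the (simulated) physical CPU */

/* Step 1: save the running process's state into its PCB. */
static void save_context(struct cpu_context *pcb) {
    memcpy(pcb, &cpu, sizeof cpu);
}

/* Step 3: load the next process's saved state onto the CPU. */
static void restore_context(const struct cpu_context *pcb) {
    memcpy(&cpu, pcb, sizeof cpu);
}

int main(void) {
    struct cpu_context proc_a = { .program_counter = 0x1000 };
    struct cpu_context proc_b = { .program_counter = 0x2000 };

    restore_context(&proc_a);   /* process A starts running */
    cpu.program_counter += 4;   /* A executes an instruction */

    save_context(&proc_a);      /* context switch: A -> B */
    restore_context(&proc_b);   /* B resumes where it left off */
    printf("now running at PC=0x%llx\n",
           (unsigned long long)cpu.program_counter);  /* prints 0x2000 */
    return 0;
}
```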
Why is Context Switching Necessary?
* Multitasking: It allows multiple processes or threads to share a single CPU, creating the illusion of them
running concurrently.
* Responsiveness: In interactive systems, context switching enables the operating system to quickly switch
between different applications, providing a responsive user experience.
* Interrupt Handling: When an interrupt occurs (e.g., from a peripheral device), the CPU needs to switch context
to handle the interrupt service routine. After the interrupt is handled, the context of the interrupted process is
usually restored.
* Time-Slicing: In time-sharing operating systems, each process is given a small time slice to execute. When the
time slice expires, a context switch occurs to allow another process to run.
Overhead of Context Switching:
Context switching is not a free operation. It introduces overhead because the CPU spends time saving and
restoring the states instead of executing actual user or kernel code. This overhead can become significant if
context switches occur too frequently. Factors contributing to this overhead include:
* Saving and loading register values.
* Updating memory management structures (like TLB flushes).
* Cache invalidation (as the new process might access different memory locations).
* Scheduling overhead (the time taken by the scheduler to decide which process to run next).
In summary, context switching is a crucial mechanism that enables modern operating systems to support
multitasking and provide a responsive computing experience. However, it also introduces overhead that needs
to be managed for efficient system performance.
2.4 PROCESS SCHEDULING
Process scheduling is a crucial function of an operating system that manages the execution of multiple processes
(programs in execution) on a single or multi-core CPU. Its main goal is to maximize CPU utilization, minimize
response time, turnaround time, and waiting time, and ensure fairness among processes.
Here's a breakdown of key aspects of process scheduling:
Objectives of Process Scheduling:
* Maximize CPU Utilization: Keep the CPU as busy as possible.
* Maximize Throughput: Complete the maximum number of processes per unit of time.
* Minimize Turnaround Time: Reduce the total time taken by a process from submission to completion.
* Minimize Waiting Time: Reduce the time a process spends waiting in the ready queue.
* Minimize Response Time: In interactive systems, provide quick feedback to the user.
* Ensure Fairness: Give each process a fair share of the CPU time.
Process Scheduling Queues:
The operating system maintains different queues to manage processes:
* Job Queue: Holds all the processes that have entered the system.
* Ready Queue: Contains processes that are in main memory and ready to execute.
* Device Queues: Processes that are waiting for a particular I/O device.
First-Come, First-Served (FCFS), also known as First-In, First-Out (FIFO), is the simplest process scheduling
algorithm. It operates based on the principle that the process that arrives in the ready queue first is the first one
to be allocated the CPU.
Here's a breakdown of how FCFS works:
How it Works:
1. Arrival Order: Processes are placed in the ready queue as they arrive in the system.
2. CPU Allocation: When the CPU becomes free, the process at the front of the ready queue is selected to
run.
3. Non-Preemptive: Once a process is allocated the CPU, it continues to run until it completes its execution
or voluntarily releases the CPU (e.g., for I/O operations). The CPU is not taken away from a running
process.
Example:
Consider three processes, P1, P2, and P3, arriving in the ready queue in the following order with their respective
burst times (the time required for CPU execution):
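For illustration, assume the following burst times (in milliseconds):
| Process | Burst Time |
|---|---|
| P1 | 24 |
| P2 | 3 |
| P3 | 3 |
Gantt Chart:
| P1(24) | P2(3) | P3(3) |
0 24 27 30
Performance Metrics:
* Waiting Time: P1: 0, P2: 24, P3: 27
* Average Waiting Time: (0 + 24 + 27) / 3 = 51 / 3 = 17
* Turnaround Time: P1: 24, P2: 27, P3: 30
* Average Turnaround Time: (24 + 27 + 30) / 3 = 81 / 3 = 27
If the short jobs P2 and P3 had run first instead, the average waiting time would drop to (0 + 3 + 6) / 3 = 3,
which illustrates the convoy effect discussed below.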
Advantages of FCFS:
* Simple to understand and implement: It's a straightforward algorithm.
* Fair in terms of arrival order: Processes are served in the order they request the CPU.
* No starvation: Every process will eventually get its turn to run.
Disadvantages of FCFS:
* Can lead to the "convoy effect": If a long-running process arrives first, it can block all the shorter processes
that arrive later, leading to a significant increase in the average waiting time and turnaround time. This is evident
in the example above where P1, being a long process, delays P2 and P3 considerably.
* Not optimal in terms of average waiting time and turnaround time: Other scheduling algorithms can often
achieve better performance.
* Not suitable for time-sharing systems: Its non-preemptive nature can lead to poor responsiveness in
interactive systems where users expect quick feedback.
In summary, FCFS is a basic and easy-to-implement scheduling algorithm. However, its simplicity comes at the
cost of potential inefficiencies, especially when dealing with a mix of short and long processes.
Round Robin (RR) Scheduling is a preemptive process scheduling algorithm widely used in time-sharing
operating systems. Its primary goal is to provide fairness to all processes by giving each one a small and equal
slice of CPU time, known as a time quantum or time slice, in a cyclic order.
Key Concepts:
1. Time Quantum: A fixed unit of CPU time (typically ranging from 10 to 100 milliseconds) allocated to each
process. The operating system determines this value.
2. Ready Queue: Processes that are ready for execution are maintained in a queue, often implemented as a
circular queue.
3. CPU Allocation: The scheduler picks the process at the front of the ready queue and assigns the CPU to it
for the duration of the time quantum.
4. Preemption: If a process is still running when its time quantum expires, the CPU is forcibly taken away
(preempted).
5. Queueing (Again): The preempted process is then moved to the back of the ready queue.
6. Context Switching: The act of saving the state of the preempted process and loading the state of the next
process to be executed. This introduces some overhead.
7. Cyclic Execution: The scheduler repeats this process, iterating through the ready queue and giving each
process its turn to run for one time quantum.
8. New Arrivals: When a new process enters the system and becomes ready, it is typically added to the end
of the ready queue.
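This quantum-and-requeue cycle can be sketched as a small C simulation. It is a simplified model that ignores context-switch overhead and assumes all processes arrive at time 0; the burst times and 4-unit quantum anticipate the example below:
```c
#include <stdio.h>

#define NPROC 4
#define QUANTUM 4

int main(void) {
    int burst[NPROC] = {20, 4, 12, 8};  /* P1..P4 burst times */
    int remaining[NPROC], finish[NPROC];
    int queue[64], head = 0, tail = 0;  /* simple FIFO ready queue */
    int time = 0, done = 0;

    for (int i = 0; i < NPROC; i++) {
        remaining[i] = burst[i];
        queue[tail++] = i;              /* all arrive at time 0 */
    }
    while (done < NPROC) {
        int p = queue[head++];          /* pick the front of the queue */
        int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        time += slice;                  /* run for one time slice */
        remaining[p] -= slice;
        if (remaining[p] > 0)
            queue[tail++] = p;          /* preempted: back of the queue */
        else {
            finish[p] = time;           /* completed */
            done++;
        }
    }
    /* prints completion and waiting times matching the worked example */
    for (int i = 0; i < NPROC; i++)
        printf("P%d: completion=%d waiting=%d\n",
               i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}
```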
Illustrative Example:
Consider four processes (P1, P2, P3, P4) that arrive at time 0 with the following burst times. Let's assume a time
quantum of 4 units.
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 20 |
| P2 | 0 | 4 |
| P3 | 0 | 12 |
| P4 | 0 | 8 |
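Tracing the schedule (each process runs for at most 4 units, then moves to the back of the queue if unfinished):
Gantt Chart:
| P1(4) | P2(4) | P3(4) | P4(4) | P1(4) | P3(4) | P4(4) | P1(4) | P3(4) | P1(4) | P1(4) |
0 4 8 12 16 20 24 28 32 36 40 44
Performance Metrics:
* Completion Time: P1: 44, P2: 8, P3: 36, P4: 28
* Turnaround Time: P1: 44-0=44, P2: 8-0=8, P3: 36-0=36, P4: 28-0=28
* Waiting Time: P1: 44-20=24, P2: 8-4=4, P3: 36-12=24, P4: 28-8=20
* Average Waiting Time: (24 + 4 + 24 + 20) / 4 = 72 / 4 = 18
* Average Turnaround Time: (44 + 8 + 36 + 28) / 4 = 116 / 4 = 29
Note that no process waits longer than (n - 1) × quantum between turns, which is what makes Round Robin
responsive for interactive workloads.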
Shortest Job First (SJF) scheduling selects processes by CPU burst time and comes in two variants: non-preemptive
(SJF) and preemptive (SRTF).
1. Shortest Job First (SJF) - Non-Preemptive:
* Concept: The process with the smallest CPU burst time in the ready queue is selected to run next. Once the
CPU is allocated, the process runs to completion without preemption.
* Example:
Consider three processes arriving at time 0 with the following burst times:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 6 |
| P2 | 0 | 8 |
| P3 | 0 | 7 |
Gantt Chart:
| P1(6) | P3(7) | P2(8) |
0 6 13 21
Performance Metrics:
* Completion Time: P1: 6, P3: 13, P2: 21
* Turnaround Time: P1: 6-0=6, P3: 13-0=13, P2: 21-0=21
* Waiting Time: P1: 0, P3: 6, P2: 13
* Average Waiting Time: (0 + 6 + 13) / 3 = 19 / 3 = 6.33
* Average Turnaround Time: (6 + 13 + 21) / 3 = 40 / 3 = 13.33
* Advantages of SJF:
* Optimal for minimizing average waiting time: Among all non-preemptive algorithms, SJF gives the minimum
average waiting time for a given set of processes.
* Good for throughput: Shorter jobs are completed quickly.
* Disadvantages of SJF:
* Requires knowing the burst time in advance: This is a significant limitation as the exact burst time of a process
is often not known before it starts execution. Estimating burst times can be complex.
* Can lead to starvation of long processes: If there's a continuous arrival of short processes, long processes
might never get a chance to run.
2. Shortest Remaining Time First (SRTF) - Preemptive:
* Concept: SRTF is the preemptive version of SJF. The process with the smallest remaining burst time is selected
to run next. If a new process arrives with a remaining burst time shorter than the remaining burst time of the
currently running process, the currently running process is preempted, and the new process gets the CPU.
* How it Works:
i. The scheduler keeps track of the remaining burst time of the currently executing process and the burst times
of all processes in the ready queue.
ii. Whenever a new process arrives or the current process finishes its CPU burst, the scheduler selects the
process with the smallest remaining burst time.
iii. If the newly arrived process has a smaller remaining burst time than the current process, a preemption
occurs.
* Example:
Consider three processes arriving at different times with their burst times:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 7 |
| P2 | 2 | 4 |
| P3 | 4 | 1 |
Gantt Chart:
| P1(2) | P2(2) | P3(1) | P2(2) | P1(5) |
0 2 4 5 7 12
Explanation of Gantt Chart:
* At time 0, P1 arrives and starts.
* At time 2, P2 arrives with a burst time of 4. P1 has a remaining burst time of 5. Since 4 < 5, P1 is preempted,
and P2 runs.
* At time 4, P3 arrives with a burst time of 1. P2 has a remaining burst time of 2. Since 1 < 2, P2 is preempted,
and P3 runs.
* At time 5, P3 finishes. The remaining processes are P1 (remaining 5) and P2 (remaining 2). P2 has a shorter
remaining time, so it runs.
* At time 7, P2 finishes. P1 with a remaining burst time of 5 runs until completion.
Performance Metrics:
* Completion Time: P1: 12, P2: 7, P3: 5
* Turnaround Time: P1: 12-0=12, P2: 7-2=5, P3: 5-4=1
* Waiting Time: P1: (0-0) + (7-2) = 5, P2: (2-2) + (5-4) = 1, P3: 4-4=0
* Average Waiting Time: (5 + 1 + 0) / 3 = 6 / 3 = 2
* Average Turnaround Time: (12 + 5 + 1) / 3 = 18 / 3 = 6
* Advantages of SRTF:
* Provides the minimum average waiting time among all preemptive scheduling algorithms.
* Better throughput than SJF in terms of shorter jobs.
* Disadvantages of SRTF:
* Requires knowing the burst time in advance: Similar to SJF, this is often impractical.
* Can lead to starvation of long processes: If short processes keep arriving, long processes might never get the
CPU.
* Higher overhead due to frequent context switches: Preemption introduces additional overhead.
In Practice:
The theoretical concepts of SJF and SRTF are fundamental in operating systems and are taught in computer
science programs across universities. However, due to the practical difficulty of knowing burst
times in advance and the potential for starvation, pure SJF and SRTF are not commonly implemented in their
purest forms in general-purpose operating systems like those used on personal computers or servers.
Real-world operating systems often use more complex scheduling algorithms (like variations of priority
scheduling, multi-level feedback queues, etc.) that try to approximate the benefits of SJF/SRTF while addressing
their limitations. These algorithms might use historical CPU usage to predict the next burst time.
However, the principles of prioritizing shorter tasks to improve average waiting time are often incorporated
into more sophisticated scheduling strategies used in various computing systems, from embedded devices to
large server infrastructures.
Shortest Remaining Time First (SRTF) is a preemptive scheduling algorithm. It is the preemptive counterpart of
the Shortest Job First (SJF) algorithm. The key idea behind SRTF is to always schedule the process that has the
smallest remaining burst time.
Here's a detailed explanation:
How it Works:
1. Tracking Remaining Burst Time: The operating system keeps track of the remaining CPU burst time for
each process in the ready queue and the currently executing process.
2. Scheduling Decision Points: Scheduling decisions are made when a new process arrives in the ready queue
or when the currently running process completes its CPU burst.
3. Selection: At each decision point, the scheduler examines the remaining burst times of all processes in the
ready queue (including the currently running process). The process with the smallest remaining burst time is
selected to run.
4. Preemption: If a new process arrives with a remaining burst time that is shorter than the remaining burst
time of the currently running process, the currently running process is preempted (its execution is interrupted),
and the newly arrived process is given the CPU.
5. Context Switching: When a preemption occurs, a context switch is performed to save the state of the
preempted process and load the state of the newly scheduled process.
6. Execution: The selected process runs until it completes, or a new process arrives with an even shorter
remaining burst time, causing another preemption.
Illustrative Example:
Consider the following processes with their arrival times and burst times:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 7 |
| P2 | 2 | 4 |
| P3 | 4 | 1 |
| P4 | 5 | 4 |
Gantt Chart:
| P1(2) | P2(2) | P3(1) | P2(2) | P4(4) | P1(5) |
0 2 4 5 7 11 16
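From this chart:
Performance Metrics:
* Completion Time: P1: 16, P2: 7, P3: 5, P4: 11
* Turnaround Time: P1: 16-0=16, P2: 7-2=5, P3: 5-4=1, P4: 11-5=6
* Waiting Time: P1: 16-7=9, P2: 5-4=1, P3: 1-1=0, P4: 6-4=2
* Average Waiting Time: (9 + 1 + 0 + 2) / 4 = 12 / 4 = 3
* Average Turnaround Time: (16 + 5 + 1 + 6) / 4 = 28 / 4 = 7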
SRTF is a crucial algorithm to understand for its theoretical optimality in minimizing average waiting time.
Students learning operating system principles study SRTF to grasp the concepts of preemptive scheduling and
the impact of job length on scheduling performance.
However, due to the practical challenges of knowing burst times and the risk of starvation, pure SRTF is not
commonly used as the primary scheduling algorithm in general-purpose operating systems (like Windows,
macOS, or Linux) that are prevalent on personal computers and servers.
Real-world operating systems often employ more complex and adaptive scheduling algorithms, such as
variations of priority scheduling, multi-level feedback queues, and combinations of different techniques, to
achieve a balance between responsiveness, fairness, and efficiency without requiring precise knowledge of
future burst times. These algorithms might incorporate elements of prioritizing shorter or I/O-bound tasks to
improve overall system performance.
Therefore, while SRTF is a theoretically important algorithm whose principles influence scheduling strategies,
it is not typically implemented in its purest form in the operating systems in everyday use today.
2.4.5 THREADS, SYMMETRIC MULTIPROCESSING
A) THREADS.
What are Threads?
A thread, in the context of computer science, is the smallest sequence of programmed instructions that can be
managed independently by the operating system scheduler. Think of it as a lightweight sub-process within a
process.
To understand threads, it's helpful to first understand processes:
* A process is an instance of a running program. It has its own independent memory space, resources (like open
files, network connections), and a single thread of control by default.
Threads, on the other hand:
* Exist within a process. A single process can contain multiple threads.
* Share the process's memory space, code, data, and other resources. This allows for efficient communication
and data sharing between threads within the same process.
* Each thread has its own program counter (PC), stack, and set of registers. This allows each thread to execute
independently.
Analogy:
Imagine a software application as a large company (the process). Within this company, there are different
departments (the threads) working on different tasks. All departments share the company's resources (office
space, equipment, data), but each department has its own team leader (program counter) and specific tasks
(instructions to execute).
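A minimal POSIX threads sketch in C shows two threads sharing one process's global data while each runs independently; the counter and thread names are illustrative:
```c
/* Compile with: cc threads.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;  /* shared by all threads in the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock);    /* synchronize shared access */
        shared_counter++;
        printf("%s: counter = %d\n", name, shared_counter);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);           /* wait for both threads */
    pthread_join(t2, NULL);
    return 0;
}
```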
B) Symmetric Multiprocessing
Symmetric Multiprocessing (SMP) is a computer architecture where two or more identical processors (CPUs)
are connected to a single shared main memory. They have equal access to all input/output (I/O) devices and are
controlled by a single operating system instance. In essence, all processors are treated equally and can perform
any task.
Here's a breakdown of key aspects of SMP:
Key Characteristics:
* Multiple Identical Processors: The system contains two or more processors that are of the same type and have
similar capabilities. In multi-core processors, each core is treated as a separate CPU within an SMP architecture.
* Shared Memory: All processors share a single main memory space. This allows any processor to access any
location in memory. This is also known as Uniform Memory Access (UMA) when the access time to memory is
the same for all processors.
* Shared I/O Devices: All processors have equal access to all input and output devices connected to the system.
* Single Operating System: A single instance of the operating system manages and schedules tasks across all the
processors. The OS is responsible for load balancing and ensuring efficient utilization of the available processing
power.
* Tightly Coupled System: SMP systems are considered tightly coupled because the processors communicate
closely through the shared memory and system bus.
How SMP Works:
The operating system in an SMP system can divide tasks or threads of a single application across the multiple
processors. This allows for parallel processing, where different parts of a program are executed simultaneously
on different CPUs, potentially leading to significant performance improvements, especially for multi-threaded
applications and multitasking environments.
Advantages of SMP:
* Increased Throughput: By allowing parallel execution of tasks, SMP can significantly increase the amount of
work a system can perform in a given time.
* Improved Performance: Applications designed for multithreading can experience substantial speedups.
* Load Balancing: The operating system can dynamically distribute the workload across the available
processors, preventing any single processor from becoming a bottleneck.
* Cost-Effective Scalability: Adding more processors can be a relatively cost-effective way to increase the
processing power of a system compared to other approaches.
* Simplified System Architecture: Compared to other multiprocessing architectures like NUMA (Non-Uniform
Memory Access), the shared memory model of SMP can lead to a simpler hardware and software design.
* Better Reliability: If one processor fails, the system can often continue to function (though with reduced
performance) because the remaining processors can take over its tasks.
Disadvantages of SMP:
* Scalability Limitations: The shared memory and bus can become bottlenecks as the number of processors
increases. All processors contend for access to the same memory and I/O resources, which can limit the
scalability of SMP systems.
* Cache Coherency Issues: When multiple processors have their own caches and access the same memory
locations, ensuring that all caches have a consistent view of the data (cache coherency) becomes a complex
challenge that can impact performance.
* Operating System Complexity: The operating system needs to be specifically designed to manage multiple
processors efficiently, handle synchronization between them, and ensure fair resource allocation.
* Potential Bottlenecks: Contention for shared resources like memory and the system bus can lead to
performance bottlenecks if not managed effectively.
* Higher Expansion Costs for Large Systems: While adding a few processors can be cost-effective, scaling to a
very large number of processors in an SMP architecture can become expensive due to the need for high-
bandwidth memory and interconnects.
Applications of SMP:
SMP is commonly used in:
* Server Systems: For handling multiple user requests and running various server applications concurrently.
* Workstations: For demanding tasks like video editing, 3D rendering, and scientific simulations.
* High-Performance Computing (HPC): For parallel processing of complex computational problems (though
very large-scale HPC often utilizes other architectures like distributed memory).
* Multitasking Operating Systems: To efficiently run multiple applications simultaneously.
In conclusion, Symmetric Multiprocessing is a fundamental architecture for enhancing the performance and
throughput of computer systems by utilizing multiple processors that share resources under the control of a
single operating system. While it offers significant advantages, its scalability is limited by the shared resources
and the complexities of maintaining cache coherency.