
Lecture 1: Introduction to Operating System

An Operating System (OS) is the fundamental software that manages all the hardware and
software resources of a computer. It acts as an intermediary between the user and the computer
hardware, making the system convenient to use.

Organization and Purpose of a Computer System

A computer system is organized into four main components:

1. Hardware: Provides the basic computing resources, including the CPU, memory, and input/output (I/O) devices.
2. Operating System: Manages the hardware for various application programs and users, simplifying the user's interaction with the machine.
3. Application Programs: Use system resources to solve users' computing problems. Examples include compilers, web browsers, and video games.
4. Users: People, machines, or other computers interacting with the system.

The main purpose of a computer is to generate and execute programs efficiently. This requires
services like storing and loading programs, managing memory, and allowing processes to
communicate.

What is an Operating System?

There are two primary views of an OS:

●	The Top-Down View (User's View): The OS is a program that acts as an intermediary between the user and the hardware. It provides a convenient interface, hiding the complex details of the hardware. For example, when you use a command like copy file1 file2, the OS handles the low-level disk operations.
●	The Bottom-Up View (System's View): The OS is a resource manager. It allocates and deallocates resources like CPU time, memory space, and I/O devices among various users and programs in an efficient, fair, and secure manner. It also acts as a control program, managing user programs to prevent errors and improper use of the computer.

Lecture 2: Types of Operating System


Operating systems have evolved over time, leading to different types designed for specific
purposes.
Types of Systems

●​ Single-user Systems: Allow only one user at a time. The main goal is user convenience
and responsiveness. Examples include DOS, early versions of Windows, and MacOS.
●​ Batch Systems: Users prepare jobs (program, data, and control info) and submit them
to an operator who batches similar jobs together to run as a group. The user does not
interact with the job during execution. The main task of the OS was to automatically
transfer control from one job to the next. The CPU was often idle in these systems
because I/O devices were much slower.
●​ Multiprogrammed Systems: To increase CPU utilization, the OS keeps several jobs in
memory simultaneously. When one job has to wait for an I/O operation, the OS switches
the CPU to another job. This keeps the CPU busy and increases system throughput.
●​ Time-Sharing Systems: These are multi-user, multi-process, and interactive systems.
They use multiprogramming and CPU scheduling to give each user a small portion of
CPU time, allowing multiple users to interact with their jobs simultaneously. UNIX, Linux,
and Windows servers are examples.
●​ Real-Time Systems: Used when there are rigid time requirements for operations. These
systems control scientific experiments, medical imaging, and industrial processes.
○​ Hard Real-Time: Guarantees that critical tasks are completed on time. All
system delays must be bounded.
○​ Soft Real-Time: A critical task gets priority over other tasks until it completes.
This is less restrictive and can be mixed with other system types.

Interrupts and Protection

●​ Interrupt: A signal from a hardware device (like a keyboard or disk) to get the CPU's
attention. The OS is interrupt-driven software.
●​ Trap: A software-generated interrupt caused by an error (e.g., division by zero) or a user
request for an OS service (a system call).
●​ Signal: An event generated to get the attention of a process (e.g., pressing <Ctrl-C>).
●​ Hardware Protection: In a multiprogramming environment, the OS must protect
processes from each other.
○​ Dual-Mode Operation: The CPU has two modes: user mode and monitor
mode (or system/privileged mode). A mode bit in the hardware indicates the
current mode. Privileged instructions, like those for I/O, can only be executed in
monitor mode. When a user program needs to perform a privileged operation, it makes a system call, which traps to the OS; the OS switches to monitor mode, performs the operation, and then returns to user mode.

Lecture 3: Components, Services, and Structures


OS Components

An OS is composed of several key components that manage system resources:

●	Process Management: A process is a program in execution. The OS is responsible for creating, terminating, suspending, and resuming processes, as well as providing mechanisms for synchronization and communication.
●​ Main Memory Management: The OS keeps track of which parts of memory are being
used and by whom, allocates and deallocates memory space as needed, and ensures
processes do not interfere with each other.
●​ Secondary Storage Management: The OS manages disk space, including free-space
management, storage allocation, and disk scheduling.
●​ File Management: The OS handles file creation, deletion, manipulation, mapping files
onto secondary storage, and backing up files.
●​ Protection System: In a multi-user system, the OS provides mechanisms to control the
access of programs, processes, or users to system resources.
●​ Command-Line Interpreter (Shell): This is the interface between the user and the OS.
It reads user commands and executes them. In UNIX/Linux, popular shells include
Bourne shell (sh), C shell (csh), and Bourne Again shell (bash).

OS Services

The OS provides services for the convenience of the user and the efficiency of the system:

●	Program Execution: Loading and running a program.
●	I/O Operations: Managing I/O with files or devices.
●​ File System Manipulation: Creating, deleting, reading, and writing files.
●​ Communications: Exchanging information between processes, either on the same
computer or across a network, via shared memory or message passing.
●​ Error Detection: Constantly checking for errors in the CPU, memory, I/O devices, or
user programs.
●​ Resource Allocation: Allocating resources to multiple users or jobs running at the same
time.

OS Structures

●	Simple/Monolithic Structure: The OS code is not well-structured. It is written for functionality and efficiency, but can be difficult to enhance. MS-DOS and the original UNIX are examples.
●​ Layered Approach: The OS is broken into a number of layers, each built on top of lower
layers. This approach is modular and simplifies debugging, but can be less efficient due
to the overhead of passing through multiple layers for a system call.
●​ Microkernels: This structure removes all non-essential components from the kernel,
implementing them as user-level programs. The main function of the microkernel is to
provide communication between the client program and various services. This makes
the OS more secure, reliable, and easier to extend.
●​ Virtual Machines (VM): This approach provides an interface that is identical to the
underlying hardware. Each process is provided with a "virtual" copy of the computer,
allowing multiple operating systems to run on a single physical machine.

Lecture 4: Introduction to Unix/Linux Interface


Directory Structure

UNIX and Linux use a hierarchical file system, often visualized as a tree starting from the root directory (/).

●​ Pathname: A list of directories separated by slashes (/) that leads to a file or directory.
○​ Absolute Pathname: Starts from the root directory (/), e.g., /usr/bin/gcc.
○​ Relative Pathname: Starts from the current location, e.g., courses/cs604.
●​ Special Directories:
○​ . (dot): Refers to the current directory.
○​ .. (dot-dot): Refers to the parent directory.
○​ ~ (tilde): Refers to the user's home directory.

Important Directories in Linux


Directory	Purpose
/	The root directory that contains all other directories.
/bin	Contains essential binary executable files available to all users.
/etc	Contains system configuration files.
/home	Contains personal directories for each user.
/dev	Contains device files. Linux treats devices like files.
/usr	Contains user applications, libraries, and documentation.
/var	Contains variable data files like logs and mail spools.
/tmp	Used for storing temporary files.


Lecture 5: Processes
Browsing Directories and Managing Files

●	ls: Lists the contents of a directory. ls -l shows a long listing with details, and ls -a shows all files, including hidden dot files.
●​ mkdir: Creates a new directory.
●​ rmdir: Removes an empty directory.
●​ cd: Changes the current working directory.
●​ pwd: Displays the absolute pathname of the current working directory.
●​ cp file1 file2: Copies file1 to file2.
●​ mv file1 file2: Moves or renames file1 to file2.
●​ rm file1: Removes (deletes) file1.
●​ gcc program.c -o myprogram: Compiles program.c and creates an executable
file named myprogram.

The Process Concept


A process is a program in execution. It is an active entity, unlike a program, which is a passive entity (a file on a disk). A process includes:

●	Text Section: The program code.
●​ Program Counter: The address of the next instruction to execute.
●​ CPU Registers: Current values of hardware registers.
●​ Process Stack: Contains temporary data like function parameters and local variables.
●​ Data Section: Contains global variables.

Process States

As a process executes, it changes state:

●	New: The process is being created.
●​ Ready: The process is waiting to be assigned to a CPU.
●​ Running: Instructions are being executed.
●​ Waiting: The process is waiting for some event to occur (like I/O completion).
●​ Terminated: The process has finished execution.

Process Control Block (PCB)

Each process is represented in the OS by a Process Control Block (PCB), which stores all information associated with a specific process,
including its state, program counter, CPU registers, scheduling information, and memory
management data.

Lecture 6: Process Management & Scheduling


Process Creation and Termination

●	Processes are created and deleted dynamically. A creating process is called a parent, and the new processes are its children. This forms a tree of processes.
●	When a child is created, it can obtain resources from the OS or from its parent.
●	fork(): In UNIX/Linux, this system call creates a new process. The child receives a duplicate of the parent's address space. fork() returns 0 to the child and the child's process ID (PID) to the parent (see the example after this list).
●​ exec(): This system call is typically used after fork() to replace the process's
memory space with a new program.
●​ wait(): A parent can use this system call to wait for a child process to terminate.
●​ exit(): A process terminates by executing this system call, which deallocates its
resources.
●​ Cascading Termination: If a parent terminates, some systems require that all its
children must also be terminated.
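
To make these calls concrete, here is a minimal sketch of the fork()/exec()/wait() pattern; the program being exec'd (ls -l) is an illustrative choice:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create a child process */
        if (pid == -1) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                     /* child: fork() returned 0 */
            execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's image */
            perror("execlp");               /* reached only if exec fails */
            exit(1);
        }
        wait(NULL);                         /* parent: wait for the child to exit */
        printf("Child %d terminated.\n", (int)pid);
        return 0;
    }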

Lecture 7: Inter-Process Communication (IPC)


Cooperating Processes

Processes can be either independent or cooperating. A cooperating process is one that can affect or be affected by other processes, typically by sharing data. Cooperation is useful for:

●	Information Sharing: Multiple users accessing a shared file.
●	Computation Speedup: Breaking a task into subtasks that run in parallel.
●​ Modularity: Structuring a system as separate processes or threads.
●​ Convenience: A user running multiple tasks (editing, compiling, printing) at once.

The Producer-Consumer Problem

This is a classic synchronization problem where a producer process produces information that is consumed by a consumer process. They share a buffer. The producer must wait if the buffer is full, and the consumer must wait if it is empty. This requires a synchronization mechanism to ensure they access the shared buffer correctly.
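
A minimal sketch of the shared circular buffer (the standard textbook formulation; BUFFER_SIZE and the busy-wait loops are illustrative, and the design deliberately leaves one slot empty to distinguish "full" from "empty"):

    #define BUFFER_SIZE 10

    int buffer[BUFFER_SIZE];
    int in = 0;    /* next free slot, advanced by the producer */
    int out = 0;   /* next full slot, advanced by the consumer */

    /* Producer: busy-waits while the buffer is full. */
    void produce(int item)
    {
        while ((in + 1) % BUFFER_SIZE == out)
            ;                              /* buffer full */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
    }

    /* Consumer: busy-waits while the buffer is empty. */
    int consume(void)
    {
        while (in == out)
            ;                              /* buffer empty */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return item;
    }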

Lecture 8: UNIX/Linux IPC Tools - 1


IPC Mechanism: Message Passing

IPC provides a way for processes to communicate and synchronize without sharing the same address space. In a message-passing system, processes communicate by sending and receiving messages through a communication link. Key aspects include:

●	Direct vs. Indirect Communication: In direct communication, processes must name each other explicitly (Send(P, message)). In indirect communication, messages are sent to and received from mailboxes or ports.
●​ Synchronization:
○​ Blocking (Synchronous): The sender waits until the message is received; the
receiver waits until a message is available.
○​ Non-blocking (Asynchronous): The sender sends and continues; the receiver
gets either a message or a null.
●​ Buffering: A temporary queue for messages can have zero, bounded, or unbounded
capacity.

UNIX Pipes

A pipe is an IPC tool for one-way communication between related processes (e.g., parent and child).

●	The pipe() system call creates a pipe and returns two file descriptors: one for reading (fd[0]) and one for writing (fd[1]).
●	After a fork(), both parent and child have the pipe's file descriptors. Typically, one process closes the read end and writes to the pipe, while the other closes the write end and reads from it.
●	read(), write(), and close() are the standard system calls used to interact with the pipe's file descriptors. A short example follows.
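
A minimal sketch of parent-to-child communication over a pipe (the message text is illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char buf[64];

        if (pipe(fd) == -1) {            /* fd[0]: read end, fd[1]: write end */
            perror("pipe");
            exit(1);
        }
        if (fork() == 0) {               /* child: the reader */
            close(fd[1]);                /* close the unused write end */
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n < 0)
                n = 0;
            buf[n] = '\0';
            printf("Child read: %s\n", buf);
            close(fd[0]);
            exit(0);
        }
        close(fd[0]);                    /* parent: the writer; close read end */
        write(fd[1], "hello", 5);
        close(fd[1]);                    /* closing gives the reader EOF */
        wait(NULL);
        return 0;
    }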

Lecture 9: UNIX/Linux IPC Tools - 2


Standard Files and File Access

●	Every UNIX/Linux process starts with three open files, known as standard files:
1.​ Standard Input (stdin): File descriptor 0, usually the keyboard.
2.​ Standard Output (stdout): File descriptor 1, usually the display screen.
3.​ Standard Error (stderr): File descriptor 2, usually the display screen.
●	The kernel uses a Per-Process File Descriptor Table to map these small integer file descriptors to entries in a system-wide File Table, which in turn points to an Inode Table that contains the actual file's attributes and location.

Pipes at the Command Line

The pipe operator (|) connects the standard output of one command to the standard input of another.

●	Example: cat /etc/passwd | grep zaheer
	○	The cat command's output is "piped" to the grep command's input. grep then filters this input and displays only the lines containing "zaheer".

Named Pipes (FIFO)

A FIFO (First-In, First-Out), or named pipe, allows communication between unrelated processes on the same system.

●​ It has a file-like access point in the file system. Processes can open this file to read from
or write to the FIFO.
●​ Unlike unnamed pipes, FIFOs are persistent; they exist until explicitly removed.

Lecture 10: Input/Output in UNIX/Linux


I/O Redirection

Redirection detaches the standard files from their default devices (keyboard/screen) and
attaches them to other files.

●	Input Redirection (<): command < input_file
	○	The command reads its input from input_file instead of the keyboard.
●	Output Redirection (>): command > output_file
	○	The command sends its standard output to output_file instead of the screen.
●	Error Redirection (2>): command 2> error_file
	○	The command sends its standard error messages to error_file.
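
Under the hood, a shell implements redirection with the open() and dup2() system calls. A minimal sketch (the file name out.txt is illustrative), equivalent to running a command with > out.txt:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) {
            perror("open");
            exit(1);
        }
        dup2(fd, STDOUT_FILENO);   /* descriptor 1 (stdout) now refers to out.txt */
        close(fd);
        printf("This line goes to out.txt, not the screen.\n");
        return 0;
    }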

Using FIFOs

FIFOs are created using the mknod() system call or the mkfifo command.


●​ Client-Server Model: A common use for FIFOs is in client-server applications on the
same machine. A server creates a "well-known" FIFO that clients use to send requests.
The server can then send responses back to each client through client-specific FIFOs.
●​ Command Line: FIFOs can be used to pass data between different shell pipelines
without creating temporary files. For example, one command can write to a FIFO while
another reads from it, allowing for more complex data flows than a simple pipe.

Lecture 11: Use of FIFO & Process Management in UNIX


This lecture expands on Inter-Process Communication (IPC) using FIFOs and introduces
commands to monitor processes.

Using FIFOs for Client-Server Communication

FIFOs (named pipes) are powerful for communication between unrelated processes. A common
application is a client-server model on a single machine.

1.	Server Process: The server starts first. It creates two FIFOs, often in a public directory like /tmp. One FIFO (e.g., FIFO1) is for reading requests from clients, and the other (FIFO2) is for writing responses back to the client.
2.	Client Process: The client process opens FIFO1 to write its request to the server and opens FIFO2 to read the server's response.
3.	Communication Flow (a server-side skeleton follows this list):
	○	The client writes a message to FIFO1 and then waits (blocks) for a response on FIFO2.
	○	The server, which was waiting for a message on FIFO1, reads the client's request, processes it, and writes a response back through FIFO2.
	○	Once the client receives the response, both processes can close the FIFOs. The client often handles the final cleanup by deleting the FIFOs from the file system.
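
A minimal server-side sketch of this scheme (the pathnames /tmp/FIFO1 and /tmp/FIFO2, the buffer sizes, and the single-request flow are illustrative assumptions):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define FIFO1 "/tmp/FIFO1"   /* client -> server requests  */
    #define FIFO2 "/tmp/FIFO2"   /* server -> client responses */

    int main(void)
    {
        char buf[128];

        mkfifo(FIFO1, 0666);     /* create the well-known FIFOs; for brevity */
        mkfifo(FIFO2, 0666);     /* we ignore "already exists" errors here   */

        int rfd = open(FIFO1, O_RDONLY);  /* blocks until a client opens to write */
        int wfd = open(FIFO2, O_WRONLY);  /* blocks until a client opens to read  */

        ssize_t n = read(rfd, buf, sizeof(buf) - 1);   /* read one request */
        if (n > 0) {
            buf[n] = '\0';
            char reply[160];
            snprintf(reply, sizeof(reply), "server got: %s", buf);
            write(wfd, reply, strlen(reply));          /* send the response */
        }
        close(rfd);              /* the client is expected to unlink() the FIFOs */
        close(wfd);
        return 0;
    }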

UNIX/Linux Process Management Commands

●	ps (Process Status): Gives a snapshot of the current processes.
	○	ps by itself shows processes owned by the current user.
○​ ps -e shows every process running on the system.
○​ ps -l provides a long format with more details.
●​ top: Displays real-time information about the top CPU-using processes and periodically
updates it. It's a great tool for seeing system load, memory usage, and which processes
are most active at any given moment.

Lecture 12: Threads - 1


This lecture covers more process management techniques and introduces the concept of
threads.

More Process Management

●​ Foreground vs. Background: A foreground process is one you are currently interacting
with in your terminal. A background process runs without tying up your terminal. You can
start a process in the background by adding an ampersand (&) to the end of the
command.
●​ <Ctrl-Z>: Suspends the current foreground process.
●​ bg: Puts a suspended process into the background to resume its execution.
●​ fg: Brings a background or suspended process into the foreground.
●​ jobs: Lists the status of all background and suspended jobs.
●​ <Ctrl-C>: Sends an interrupt signal (SIGINT) to terminate the current foreground
process.
●	kill: Sends a specified signal to a process to terminate it. The most common uses are kill PID (sends a termination signal) and kill -9 PID (sends a "sure kill" signal that cannot be ignored).

The Thread Concept 💡


Creating a new process with fork() is expensive because it involves copying the entire memory space. Communication between processes also requires special IPC mechanisms. Threads offer a more lightweight solution.

●	A thread, or lightweight process (LWP), is a basic unit of CPU utilization.
●​ A process can have multiple threads of control, allowing it to perform multiple tasks at
once.
●​ What Threads Have on Their Own: Each thread has its own thread ID, program
counter, register set, and stack.
●​ What Threads Share: Threads within the same process share the code section, data
section, open files, and other OS resources.

Advantages of Threads:

●​ Responsiveness: An application can remain responsive to the user even if one of its
threads is blocked (e.g., waiting for I/O).
●​ Resource Sharing: Threads share memory and resources by default, which is more
efficient than setting up complex IPC.
●​ Economy: It's much cheaper and faster to create and switch between threads than
processes.
●​ Multiprocessor Utilization: On a multi-CPU machine, multiple threads can run in
parallel on different processors, increasing concurrency.

Lecture 13: Threads - 2


This lecture delves deeper into thread implementation models and the standard Pthreads library.

User vs. Kernel Threads

●​ User Threads: Managed by a thread library at the user level without kernel support.
They are fast to create and manage. However, if one user thread makes a blocking
system call (like reading a file), the entire process blocks, including all other threads
within it.
●​ Kernel Threads: Supported directly by the operating system. They are slower to create
and manage than user threads. But if one kernel thread blocks, the kernel can schedule
another thread from the same process to run.

Multi-threading Models

These models describe how user threads are mapped to kernel threads.

●​ Many-to-One: Many user-level threads are mapped to a single kernel thread. It's
efficient but suffers from the blocking problem mentioned above.
●​ One-to-One: Each user-level thread is mapped to a kernel thread. This provides true
concurrency but has the overhead of creating a kernel thread for every user thread.
●​ Many-to-Many: Multiplexes many user-level threads over a smaller or equal number of
kernel threads. This model offers a balance of efficiency and concurrency.

POSIX Threads (Pthreads)


Pthreads is a POSIX standard API for creating and synchronizing threads. It is a specification,
not an implementation. Key functions include:

●	pthread_create(): Creates a new thread.
●	pthread_join(): Causes the calling thread to wait for another thread to terminate. This is similar to the wait() system call for processes.
●​ pthread_exit(): Terminates the calling thread.
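
A minimal sketch tying these three calls together (the worker function and its message are illustrative); compile with something like gcc prog.c -o prog -lpthread:

    #include <pthread.h>
    #include <stdio.h>

    /* Thread start routine: receives the last argument of pthread_create(). */
    void *worker(void *arg)
    {
        printf("Hello from thread %d\n", *(int *)arg);
        pthread_exit(NULL);                       /* terminate this thread */
    }

    int main(void)
    {
        pthread_t tid;
        int id = 1;

        pthread_create(&tid, NULL, worker, &id);  /* create a new thread */
        pthread_join(tid, NULL);                  /* wait for it to exit */
        printf("Thread joined.\n");
        return 0;
    }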

Lecture 14: Short Term Scheduler/Dispatcher


This lecture introduces the core concepts of how the OS decides which process gets to use the
CPU.

Basic Concepts

●	Process execution consists of a cycle of CPU execution (a "CPU burst") and I/O wait. Processes alternate between these two states.
●	The CPU Scheduler (or short-term scheduler) selects a process from the ready queue to be executed when the CPU becomes idle.
●	The Dispatcher is the module that gives control of the CPU to the process selected by the scheduler. This involves switching context, switching to user mode, and jumping to the proper location in the program to restart it. The time this takes is called dispatch latency.

Preemptive vs. Non-Preemptive Scheduling

●​ Non-Preemptive: Once the CPU has been allocated to a process, it keeps the CPU until
it either terminates or switches to the waiting state (e.g., for I/O).
●​ Preemptive: The CPU can be taken away from a running process. This happens, for
example, when a higher-priority process enters the ready queue or when a process's
time slice expires.

Scheduling Criteria

To compare scheduling algorithms, we use several criteria:

●	CPU Utilization: Keep the CPU as busy as possible.
●	Throughput: The number of processes completed per time unit.
●​ Turnaround Time: The total time from submission of a process to its completion.
●​ Waiting Time: The amount of time a process spends waiting in the ready queue.
●​ Response Time: The time from a request's submission until the first response is
produced (important for interactive systems).

First-Come, First-Served (FCFS) Scheduling

This is the simplest scheduling algorithm. The process that requests the CPU first is allocated the CPU first. It is implemented with a FIFO queue. FCFS is non-preemptive. Its major drawback is that the average waiting time can be long, especially if a long process arrives before several short ones (the convoy effect).
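
A small worked illustration (the burst times are the common textbook values, chosen here for concreteness): suppose P1, P2, and P3 arrive at time 0 in that order with CPU bursts of 24, 3, and 3 ms. Under FCFS the waiting times are 0, 24, and 27 ms, for an average of (0 + 24 + 27)/3 = 17 ms. If the arrival order were P2, P3, P1 instead, the waiting times would be 0, 3, and 6 ms, an average of only 3 ms. The long job at the head of the queue is what produces the convoy effect.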

Lecture 15: Process Scheduling Algorithms - 1


This lecture covers more sophisticated scheduling algorithms.

Shortest-Job-First (SJF) Scheduling

●	This algorithm assigns the CPU to the process with the smallest next CPU burst. It is provably optimal in terms of minimizing average waiting time.
●	The main challenge is knowing the length of the next CPU burst. This is often predicted using an exponential average of the process's previous burst lengths (see the formula below).
●	SJF can be:
	○	Non-Preemptive: Lets the currently running process finish its CPU burst.
	○	Preemptive: If a new process arrives with a CPU burst shorter than the remaining time of the current process, it preempts it. This is called Shortest-Remaining-Time-First (SRTF) scheduling.
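
The standard prediction formula: if t(n) is the measured length of the nth CPU burst and τ(n) is the prediction for that burst, the next prediction is

    τ(n+1) = α · t(n) + (1 − α) · τ(n),   where 0 ≤ α ≤ 1.

The parameter α controls how heavily recent actual behavior outweighs past history; α = 1/2 is a common choice, α = 0 ignores measurements entirely, and α = 1 uses only the most recent burst.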

Priority Scheduling

●​ A priority is associated with each process, and the CPU is allocated to the process with
the highest priority. (Often, the smallest integer represents the highest priority).
●​ Like SJF, it can be preemptive or non-preemptive.
●	A major problem is starvation or indefinite blocking, where a low-priority process may never get to run.
●	A solution to starvation is aging, which involves gradually increasing the priority of processes that have been waiting in the system for a long time.

Lecture 16: Process Scheduling Algorithms - 2

Round-Robin (RR) Scheduling

●	This algorithm is designed specifically for time-sharing systems.
●	It is similar to FCFS, but with preemption added. A small unit of time called a time quantum (or time slice) is defined.
●	The ready queue is treated as a circular queue. The scheduler gives each process the CPU for up to one time quantum. If the process is still running after the quantum expires, it is preempted and placed at the end of the ready queue.
●	The performance of RR depends heavily on the size of the time quantum q (see the worked example below):
	○	If q is very large, RR becomes FCFS.
	○	If q is very small, it appears as though each process has its own slower processor, but the overhead from frequent context switches becomes high.
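
A worked illustration using the same burst times as the FCFS example (24, 3, and 3 ms for P1, P2, P3, all arriving at time 0) and a quantum of q = 4 ms: the schedule is P1 (0–4), P2 (4–7), P3 (7–10), then P1 runs to completion (10–30). The waiting times are 6, 4, and 7 ms, so the average is 17/3 ≈ 5.67 ms, a large improvement over the 17 ms average that FCFS gives for the same workload.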

Multilevel Queue Scheduling

●	This algorithm is used when processes can be easily classified into different groups, such as foreground (interactive) and background (batch) processes.
●	The ready queue is partitioned into several separate queues, each with its own scheduling algorithm (e.g., RR for foreground, FCFS for background) and priority.
●	Scheduling must also be done among the queues, often using fixed-priority preemptive scheduling (serve all from the foreground queue first).

Lecture 17: UNIX Process Management & Scheduling


Multilevel Feedback Queue Scheduling

●	This is a more flexible version of the multilevel queue. It allows a process to move between queues.
●​ The idea is to separate processes based on their CPU burst characteristics. For
instance, if a process uses too much CPU time, it can be moved to a lower-priority
queue.
●​ If a process waits too long in a lower-priority queue, it can be moved to a higher-priority
queue to prevent starvation (a form of aging).

Algorithm Evaluation

How do we decide which scheduling algorithm is best for a particular system?

1.​ Analytic Evaluation: Uses a given algorithm and system workload to produce a formula
or number (like average waiting time) to assess performance.
○​ Deterministic Modeling: Calculates performance for a specific, predetermined
workload. It's simple and fast but only applies to that specific case.
○​ Queuing Models: Uses mathematical formulas (like Little's formula) to analyze
queues and server utilization. Often relies on simplifying assumptions.
2.​ Simulations: Involve programming a model of the computer system and running it with
simulated data to gather performance statistics. More accurate but can be expensive to
build and run.
3.​ Implementation: The most accurate way is to code the algorithm and run it in a real
operating system to see how it performs under real conditions. This is costly and
complex but provides the truest evaluation.

Lectures 18 & 19: The Critical Section Problem


These lectures introduce the fundamental challenge of keeping shared data consistent when
accessed by multiple processes.

Process Synchronization and Race Conditions

●	When concurrent processes access shared data, the final state of the data can depend on the specific order in which their instructions are executed. This is called a race condition.
●​ For example, if two processes try to increment a shared counter (counter++), the
operation might be implemented in machine language as three separate steps: load,
increment, store. If these steps are interleaved between the two processes, the final
value of the counter can be incorrect.
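
To make the hazard concrete, here is the usual three-instruction expansion (the register names are illustrative) and one interleaving that loses an update:

    /* counter++ compiles to roughly: */
    register1 = counter;
    register1 = register1 + 1;
    counter   = register1;

    /* With counter initially 5 and two processes P0 and P1 both
       incrementing it, one bad interleaving is:
         P0: register1 = counter         (register1 = 5)
         P1: register2 = counter         (register2 = 5)
         P0: register1 = register1 + 1   (register1 = 6)
         P1: register2 = register2 + 1   (register2 = 6)
         P0: counter = register1         (counter = 6)
         P1: counter = register2         (counter = 6; one increment lost) */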

The Critical Section Problem

●​ Critical Section: A segment of code in a process where shared resources are accessed
or modified.
●​ The Problem: To design a protocol that processes can use to cooperate, ensuring that
when one process is executing in its critical section, no other process is allowed to
execute in its critical section.
●​ A solution must satisfy three requirements:
○​ Mutual Exclusion: If one process is in its critical section, no other process can
be in its critical section.
○​ Progress: If no process is in its critical section and some processes want to
enter, the selection of the next process to enter cannot be postponed indefinitely.
○​ Bounded Waiting: There must be a limit on the number of times other processes
are allowed to enter their critical sections after a process has made a request to
enter its critical section and before that request is granted.
●​ Software Solutions for 2 Processes:
○​ Algorithm 1 (Turn Variable): Uses a shared variable turn to strictly alternate
which process can enter its critical section. It ensures mutual exclusion but fails
the progress requirement because it forces strict alternation even if one process
doesn't need to enter.
○​ Algorithm 2 (Flag Array): Uses a boolean array flag[2] where flag[i] =
true indicates that process Pi is ready to enter. This solution can lead to a
deadlock where both processes set their flags to true and then wait indefinitely
for the other's flag to become false, thus failing the progress requirement.

Lecture 20: Critical Section Problems and Solutions


This lecture provides a correct software solution and extends the problem to handle n
processes.

A Correct 2-Process Solution (Peterson's Algorithm)

●​ Algorithm 3: This solution combines the ideas of the previous two algorithms, using
both a shared integer turn and a boolean array flag[2].
●	A process Pi indicates its readiness by setting flag[i] = true and gives the other process Pj priority by setting turn = j. It then waits only if Pj is also ready (flag[j] == true) and it is Pj's turn (turn == j). A sketch of the protocol follows this list.
●	This algorithm correctly satisfies all three conditions: mutual exclusion, progress, and bounded waiting.
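
A minimal sketch of the entry and exit sections (the function names enter_region/leave_region are illustrative; atomicity and memory-ordering caveats on modern hardware are outside the lecture's scope):

    int turn;                 /* whose turn it is to enter            */
    int flag[2];              /* flag[i] == 1 means Pi wants to enter */

    void enter_region(int i)  /* entry section for process i */
    {
        int j = 1 - i;        /* the other process               */
        flag[i] = 1;          /* declare interest                */
        turn = j;             /* give the other process priority */
        while (flag[j] && turn == j)
            ;                 /* busy-wait while Pj is ahead     */
    }

    void leave_region(int i)  /* exit section */
    {
        flag[i] = 0;          /* no longer interested */
    }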

The Bakery Algorithm (n-Process Solution) 🥖


This elegant solution, created by Leslie Lamport, solves the critical section problem for n processes. It is based on the system used in bakeries, where customers take a number and are served in order.

1.​ Taking a Number: Before entering its critical section, a process Pi takes a ticket
number that is one greater than the maximum of all currently held ticket numbers.
2.​ Waiting: The process then waits until its number is the smallest among all processes
that want to enter their critical section.
3.​ Tie-Breaking: If two processes receive the same number, the one with the smaller
process ID gets to go first.
4.​ Entering: Once a process has the smallest number, it enters its critical section.
5.​ Exiting: Upon exiting, the process resets its number to 0, indicating it is no longer
interested in entering.

This algorithm guarantees mutual exclusion, progress, and bounded waiting for n processes in a fair, first-come, first-served manner. A sketch of the entry and exit sections follows.
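
A minimal sketch of the algorithm for process i (the value of N and the function names are illustrative; as with Peterson's algorithm, hardware memory-ordering issues are ignored here):

    #define N 8                              /* number of processes (assumed) */

    int choosing[N];   /* choosing[k] == 1 while Pk is taking a ticket */
    int number[N];     /* number[k] == 0 means Pk is not interested    */

    void enter_critical_section(int i)
    {
        /* Take a ticket one greater than the current maximum. */
        choosing[i] = 1;
        int m = 0;
        for (int k = 0; k < N; k++)
            if (number[k] > m)
                m = number[k];
        number[i] = 1 + m;
        choosing[i] = 0;

        for (int k = 0; k < N; k++) {
            while (choosing[k])
                ;                            /* wait while Pk picks a number    */
            while (number[k] != 0 &&
                   (number[k] <  number[i] ||
                    (number[k] == number[i] && k < i)))
                ;                            /* smaller (ticket, ID) goes first */
        }
    }

    void exit_critical_section(int i)
    {
        number[i] = 0;                       /* release the ticket */
    }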
