Advanced Operating Systems EBook
Dr. A. Karunamurthy
Mr. R. Ramakrishnan
Mr. V. Udhayakumar
Mr. P. Rajapandian
ADVANCED OPERATING SYSTEMS
Authors
DR. A. KARUNAMURTHY
MR. R. RAMAKRISHNAN
MR. V. UDHAYAKUMAR
MR. P. RAJAPANDIAN
Published by Chendur Publishing House
Copyright © 2024 by CHENDUR PUBLISHING HOUSE
All Rights Reserved
First Edition: 2024
M.R.P: Rs. 600
International Standard Book Number (ISBN): 978-81-971220-2-6
The International Standard Book Number, commonly known as ISBN, is a
unique numeric identifier assigned to every commercially published book.
Serving as a universal method of identifying books, ISBNs are crucial for
book distribution, inventory management, and sales tracking worldwide.
Consisting of either 10 or 13 digits, ISBNs typically denote specific elements
such as the book's edition, publisher, and format. They facilitate efficient
cataloguing and accessibility across libraries, bookstores, and online
retailers, enabling streamlined book searches and purchases by providing a
standardized identification system for books across diverse markets globally.
Trademark Notice:
A trademark notice is a symbol or phrase used to assert ownership and alert
the public about the legal protection of a trademark. It typically consists of
the symbols ™ (for an unregistered trademark) or ® (for a registered
trademark) placed next to the trademarked name, logo, or slogan. The use of
these notices serves as a cautionary measure to inform others that the mark
is either actively claimed as a trademark (with ™) or officially registered with
the appropriate government authority (with ®). Including these notices helps
establish rights, discourage unauthorized use, and reinforce the owner's claim
to the mark, contributing to safeguarding the brand's identity and value while
deterring potential infringement.
Chendur Publishing House also publishes its books in a variety of
electronic formats. Some content that appears in print may not be available
in electronic formats. For more information visit our publication website:
www.chendurph.com
PREFACE
Welcome to the enlightening journey through the realm of
Advanced Operating Systems! This syllabus is meticulously
crafted to provide a comprehensive understanding of the intricate
workings and advanced concepts that underpin modern operating
systems.
Chapter 1 serves as the gateway to this exploration, offering an in-
depth Introduction to Operating Systems. Here, you will embark
on a voyage to uncover the definition and purpose of operating
systems, unravel the multifaceted functions and components that
constitute their essence, and trace the fascinating history and
evolution of these pivotal systems. Moreover, you will delve into
the diverse types of operating systems, from real-time to batch
processing, multi-user to embedded, gaining insights into their
unique characteristics and applications. The chapter culminates
with an exploration of various operating system architectures,
shedding light on the underlying structures that enable efficient
resource management and system operation.
In Chapter 2, we delve into the intricacies of Process and Thread
Management. You will delve into the foundational concepts of
processes and their control blocks, and explore a myriad of
process scheduling algorithms, including FCFS, SJF, and Round
Robin, among others. Multithreading and thread management take
center stage as you uncover techniques for optimizing resource
utilization and enhancing system responsiveness. Additionally,
you will unravel the complexities of thread synchronization,
communication, and inter-process communication mechanisms,
essential for building robust and efficient concurrent systems.
Memory Management takes center stage in Chapter 3, where you
will embark on a deep dive into the intricacies of virtual memory,
address translation, paging, segmentation, and memory allocation
techniques. With a focus on memory protection, sharing, and
management in multiprocessor systems, you will gain insights
into the challenges and strategies involved in efficiently managing
memory resources in modern computing environments.
Chapter 4 delves into the critical domain of File Systems and
Storage. Here, you will explore fundamental concepts of file
system organization, implementation, and data structures. From
disk management and storage technologies to file system security
and access control, you will gain a comprehensive understanding
of the complexities involved in managing and securing data
storage. Additionally, you will be introduced to emerging storage
technologies such as solid-state drives (SSDs) and storage
virtualization, paving the way for future advancements in storage
systems.
The final chapter, Chapter 5, navigates the dynamic landscape of
Distributed Systems and Virtualization. You will unravel the
challenges and opportunities presented by distributed operating
systems, exploring communication and synchronization
mechanisms, distributed file systems, and virtualization
technologies. From virtual machines and hypervisors to cloud
computing and virtualization technologies, you will gain insights
into the transformative impact of distributed systems and
virtualization on modern computing infrastructures.
This syllabus aims to provide you with a comprehensive
understanding of Advanced Operating Systems, equipping you
with the knowledge and skills to navigate the complexities of
modern computing environments. Whether you are a student,
researcher, or practitioner in the field, we invite you to embark on
this enlightening journey and explore the fascinating world of
Advanced Operating Systems.
Authors
Dr. A. Karunamurthy
Mr. R. Ramakrishnan
Mr. V. Udhayakumar
Mr. P. Rajapandian
ACKNOWLEDGEMENT
I wish to express my sincere gratitude to
Dr. V.S.K. Venkatachalapathy, our esteemed Director Cum
Principal at Sri Manakula Vinayagar Engineering College. His
unwavering support was instrumental in the successful
completion of my book.
I extend my thanks to all the faculty members of the Department
of MCA at Sri Manakula Vinayagar Engineering College for the
valuable information they provided in their respective fields.
I would also like to express my gratitude to the management,
staff, and non-teaching staff for their significant contributions to
this book.
Special thanks go to my parents and friends for their unwavering
support, motivation, and encouragement throughout the process
of completing this book and the associated course.
Finally, I express my sincere thanks to The Almighty for the
successful completion of my book.
Authors
Dr. A. Karunamurthy
Mr. R. Ramakrishnan
Mr. V. Udhayakumar
Mr. P. Rajapandian
CONTENT
Chapter-1. Introduction to Operating Systems
1.1 Definition and Purpose of an Operating System
1.2 Functions and Components of an Operating System
1.3 History and Evolution of Operating Systems
1.4 Types of Operating Systems
1.4.1 Real-time Operating Systems
1.4.2 Batch Operating Systems
1.4.3 Multi-user Operating Systems
1.4.4 Others
1.5 Operating System Architectures
Chapter-2 Process and Thread Management
2.1 Process Concept and Process Control Block
2.2 Process Scheduling Algorithms
2.2.1 First Come, First Served (FCFS)
2.2.2 Shortest Job First (SJF)
2.2.3 Round Robin
2.2.4 Others
2.3 Multithreading and Thread Management
2.4 Thread Synchronization and Communication
2.5 Interprocess Communication (IPC) Mechanisms
Chapter-3 Memory Management
3.1 Virtual Memory and Address Translation
3.2 Paging and Segmentation
3.3 Memory Allocation Techniques
3.3.1 Buddy System
3.3.2 Slab Allocation
3.3.3 Others
3.4 Memory Protection and Sharing
3.5 Memory Management in Multiprocessor Systems
Chapter-4 File Systems and Storage
4.1 File System Concepts and Organization
4.2 File System Implementation and Data Structures
4.3 Disk Management and Storage Technologies
4.4 File System Security and Access Control
4.5 Introduction to Solid-State Drives (SSDs) and Storage Virtualization
Chapter-5 Distributed Systems and Virtualization
5.1 Distributed Operating Systems and Their Challenges
5.2 Communication and Synchronization in Distributed Systems
5.3 Distributed File Systems and Naming
5.4 Virtual Machines and Hypervisors
5.5 Cloud Computing and Virtualization Technologies
SYLLABUS
Chapter-1 Introduction to Operating Systems
Definition and purpose of an operating system - Functions and components
of an operating system - History and evolution of operating systems - Types
of operating systems (real-time, batch, multi-user, etc.) - Operating system
architectures
Chapter-2: Process and Thread Management
Process concept and process control block - Process scheduling algorithms
(e.g., FCFS, SJF, Round Robin) - Multithreading and thread management -
Thread synchronization and communication - Inter process communication
(IPC) mechanisms
Chapter-3: Memory Management
Virtual memory and address translation - Paging and segmentation - Memory
allocation techniques (e.g., buddy system, slab allocation) - Memory
protection and sharing - Memory management in multiprocessor systems.
Chapter-4: File Systems and Storage
File system concepts and organization - File system implementation and data
structures - Disk management and storage technologies - File system security
and access control - Introduction to solid-state drives (SSDs) and storage
virtualization
Chapter-5: Distributed Systems and Virtualization
Distributed operating systems and their challenges - Communication and
synchronization in distributed systems - Distributed file systems and naming
- Virtual machines and hypervisors - Cloud computing and virtualization
technologies.
INTRODUCTION TO
OPERATING SYSTEMS
1.1 Definition:
1. Process Management
For example, when you use a web browser like Chrome, there is a
process running for that browser program.
3. Network Management
• Storage allocation
• Free space management
• Disk scheduling
The I/O management system offers several functions for handling devices and their input and output.
The command interpreter's function is quite simple: get the next command statement and execute it. The command statements deal with process management, I/O handling, secondary storage management, main memory management, file system access, protection, and networking.
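As a sketch, the heart of a command interpreter is a read-execute loop. The following C program is a minimal, hypothetical shell (single-word commands only; real shells add argument parsing, built-in commands, job control, and much more):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        char line[256];
        for (;;) {
            printf("> ");                         /* prompt */
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                            /* end of input: exit */
            line[strcspn(line, "\n")] = '\0';     /* strip trailing newline */
            if (line[0] == '\0')
                continue;                         /* empty command */
            pid_t pid = fork();                   /* create a child process */
            if (pid == 0) {
                execlp(line, line, (char *)NULL); /* run the command */
                perror("exec");                   /* reached only on failure */
                exit(1);
            }
            waitpid(pid, NULL, 0);                /* wait for the command */
        }
        return 0;
    }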
• No OS (up to the 1940s):
o Before the 1940s there was no operating system. Lacking an OS,
people had to type the instructions for each task manually in
machine language (a language of 0s and 1s). At that time it was
very hard for users to carry out even a simple task, and it was very
time-consuming and not user-friendly, because machine language
requires a deep level of understanding that not everyone had.
• Batch Processing Systems (1940s to 1950s):
o With the passage of time, batch processing systems came onto the
market. Users now had the facility to write their programs on
punch cards and hand them to the computer operator. The operator
made different batches of jobs of similar types and then served the
batches (groups of jobs) to the CPU one by one. The CPU executed
the jobs of one batch and then moved on to the jobs of the next
batch in a sequential manner.
• Multiprogramming Systems (1950s to 1960s):
o Multiprogramming was the operating system concept with which
the real revolution began. It gives the user the facility to load
multiple programs into memory, providing a specific portion of
memory to each program. When one program is waiting for an I/O
operation (which takes much time), the OS permits the CPU to
switch from that program to another program resident in memory.
An operating system performs all the basic tasks like managing files,
processes, and memory. The operating system thus acts as the manager of all
the resources, i.e. a resource manager, and becomes an interface between the
user and the machine. It is one of the most essential pieces of software present
on any device.
These types of OSs serve real-time systems, in which the time interval
required to process and respond to inputs is very small. This time interval is
called the response time. Real-time systems are used where the timing
requirements are very strict, such as in missile systems, air traffic control
systems, and robots.
Advantages of RTOS
Disadvantages of RTOS
• Limited tasks: Very few tasks run at the same time, and the system
concentrates on only a few applications in order to avoid errors.
• Heavy use of system resources: The system resources required are
sometimes substantial and expensive as well.
• Complex algorithms: The algorithms are very complex and
difficult for the designer to write.
• Device drivers and interrupt signals: An RTOS needs specific device
drivers and interrupt signals so that it can respond to interrupts as
quickly as possible.
• Thread priority: It is not advisable to set thread priorities, as these
systems rarely switch between tasks.
This type of operating system does not interact with the computer
directly. There is an operator who takes similar jobs having the same
requirements and groups them into batches. It is the responsibility of the
operator to sort the jobs with similar needs.
There are two types of Multi-Tasking Systems which are listed below.
• Pre-emptive Multi-Tasking
• Cooperative Multi-Tasking
Each task is given some time to execute so that all the tasks work
smoothly. Each user gets a share of CPU time, as they all use a single system.
These systems are also known as multitasking systems. The tasks can come
from a single user or from different users. The time that each task gets to
execute is called a quantum. After this time interval is over, the OS switches
to the next task.
Advantages of Time-Sharing OS
Disadvantages of Time-Sharing OS
• Reliability problem.
• One must take care of the security and integrity of user programs
and data.
• Data communication problem.
• High overhead: Time-sharing systems have a higher overhead than
other operating systems due to the scheduling, context switching,
and other costs that come with supporting multiple users.
• Complexity: Time-sharing systems are complex and require
advanced software to manage multiple users simultaneously. This
complexity increases the chance of bugs and errors.
• Security risks: With multiple users sharing resources, the risk of
security breaches increases, so time-sharing systems require careful
security management.
1. Simple Structure
2. Monolithic Structure
3. Layered Approach Structure
4. Micro-Kernel Structure
5. Exo-Kernel Structure
6. Virtual Machines
1. Simple Structure
• There are four layers that make up the MS-DOS operating system,
and each has its own set of features.
2. Monolithic Structure
This is an old style of operating system that was used in banks to carry out
simple tasks like batch processing and time-sharing, which allows
numerous users at different terminals to access the operating system.
and reduces the time required for address allocation for new
processes.
• Work duties are separated since each layer has its own
functionality, and there is some amount of abstraction.
• Debugging is simpler because the lower layers are examined first,
followed by the top layers.
5. Exokernel:
Because the OS sits between the programs and the actual hardware,
it will always have an effect on the functionality, performance, and
breadth of the apps that are developed on it. The exokernel operating
system attempts to solve this issue by rejecting the idea that an operating
system must offer abstractions upon which to base applications. The goal
is to impose as few abstractions as possible on developers, while still
allowing them the freedom to use abstractions when necessary. In the
exokernel architecture, a single tiny kernel moves all hardware
abstractions into untrusted libraries known as library operating systems.
Exokernels differ from micro- and monolithic kernels in that their primary
objective is to avoid forced abstraction.
Disadvantages of the exokernel approach include:
• A decline in consistency
• Exokernel interfaces have a complex architecture.
Disk systems are the fundamental problem with the virtual machine
technique. Suppose that the physical machine has only three disk drives but
needs to host seven virtual machines. Clearly, it is impossible to assign a
disk drive to every virtual machine, because the software that creates the
virtual machines itself requires a sizable amount of disk space to provide
virtual memory and spooling. The solution is the provision of virtual disks.
The result is that users get their own virtual machines, on which they can
then run any of the operating systems or software packages available on the
underlying machine. The virtual machine software is concerned with
multiplexing numerous virtual machines onto one physical machine; it does
not need to take any user-support software into account. With this
configuration, the challenge of building an interactive system for several
users can be broken into two manageable pieces.
PROCESS AND THREAD MANAGEMENT
The active program currently running on the operating system is known as a process. The
process is the base of all computing. Although a process is closely related to program code,
it is not the same as the code itself.
Stack
The process stack stores temporary information such as method or function arguments, the
return address, and local variables.
Heap
This is the memory that is dynamically allocated to the process while it is running.
Text
This consists of the compiled program code, together with the current activity as represented
by the value of the program counter and the contents of the processor's registers.
Data
This section contains the global and static variables.
2.1.1 Process Control Block (PCB)
The PCB is a data structure used by an operating system to manage and regulate how processes
are carried out. In operating systems, managing processes and scheduling them properly play
the most significant role in the efficient usage of memory and other system resources. The
process control block stores all the details about the corresponding process, such as its current
status, its program counter, its memory use, its open files, and details about CPU scheduling.
The PCB helps here, as it enables the OS to actively monitor each process and redirect system
resources to it accordingly. The OS creates a PCB for every process that is created, and it
contains all the important information about the process. All this information is afterward
used by the OS to manage processes and run them efficiently.
Process State: The state of the process is stored in the PCB, which helps in managing and
scheduling the processes. A process can be in one of several states, such as
"running," "waiting," "ready," or "terminated."
Process ID: The OS assigns a unique identifier to every process as soon as it is created,
known as the process ID; this helps to distinguish between processes.
Program Counter: When a context switch occurs, the program counter value, which points
to the next instruction to be executed, is saved in the PCB; this helps in resuming the
execution of the process from where it left off.
CPU Registers: The CPU registers of the process help to restore its state, so the PCB stores
a copy of them.
Memory Information: Information like the base address and total memory allocated to a
process is stored in the PCB, which helps in efficient memory allocation to processes.
Accounting Information: Information such as CPU time and memory usage helps the OS
monitor the performance of the process.
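As an illustration, a PCB can be pictured as a C structure. The field names below are hypothetical; a real kernel (for example, Linux's task_struct) stores far more:

    #include <stdint.h>

    enum proc_state { READY, RUNNING, WAITING, TERMINATED };

    /* Illustrative process control block; real kernels keep many more fields. */
    struct pcb {
        int             pid;             /* unique process identifier       */
        enum proc_state state;           /* running, waiting, ready, ...    */
        uint64_t        program_counter; /* next instruction to execute     */
        uint64_t        registers[16];   /* saved CPU register contents     */
        uint64_t        base_address;    /* start of the process's memory   */
        uint64_t        memory_limit;    /* total memory allocated          */
        int             open_files[16];  /* descriptors of open files       */
        uint64_t        cpu_time_used;   /* accounting information          */
    };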
Since the PCB stores all the information about a process, it lets the operating system carry
out tasks such as process scheduling and context switching:
• The PCB helps in scheduling processes and ensures that CPU resources are allocated
efficiently.
• The resource-utilization information held in the PCB supports efficient resource
utilization and resource sharing.
• The CPU register and stack pointer information helps the OS save the process state,
which enables context switching.
There are also costs:
• Storing a PCB for each and every process uses a significant amount of memory, and a
large number of processes may exist simultaneously in the OS, so PCBs add extra
memory usage.
• PCBs add some complexity to process management, which makes it tougher to scale
the system further.
The act of determining which process in the ready state should be moved to the running
state is known as process scheduling.
The prime aim of the process scheduling system is to keep the CPU busy all the time and to
deliver minimum response time for all programs. To achieve this, the scheduler must apply
appropriate rules for swapping processes in and out of the CPU.
First Come First Served is just like a FIFO (First In First Out) queue data structure: the
data element added to the queue first is the first one to leave it. FCFS is used in batch
systems. It is easy to understand and implement programmatically using a queue data
structure, where a new process enters through the tail of the queue and the scheduler
selects the process at the head of the queue.
A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.
The Gantt chart for such an example shows the waiting time for each process.
FCFS has drawbacks. If a process with very low priority is being executed, such as a
long-running daily backup, and suddenly a high-priority process arrives, such as an interrupt
needed to avoid a system crash, the high-priority process has to wait; in such a case the
system may crash simply because of improper process scheduling.
Parallel utilization of resources is also not possible, which leads to the Convoy Effect and
hence poor resource (CPU, I/O, etc.) utilization. The Convoy Effect is a situation where many
processes that need a resource for a short time are blocked by one process holding that
resource for a long time. This essentially leads to poor utilization of resources and hence
poor performance.
Here are simple formulas for calculating various times for given processes:
Completion Time: The time at which the execution of a process completes, measured from
time zero.
Turnaround Time: The time taken to complete after arrival; in simple words, the difference
between the completion time and the arrival time.
Waiting Time: The total time the process has to wait before its execution begins; the
difference between the turnaround time and the burst time of the process.
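These formulas can be applied mechanically. The following C sketch computes the times under FCFS for the burst values used in this chapter's examples (P1 = 21, P2 = 3, P3 = 6, P4 = 2, all arriving at time 0); it reproduces the 18.75 ms average waiting time mentioned in the next section:

    #include <stdio.h>

    int main(void) {
        int burst[]   = {21, 3, 6, 2};   /* burst times from the example  */
        int arrival[] = {0, 0, 0, 0};    /* all processes arrive at t = 0 */
        int n = 4, clock = 0;
        double total_wait = 0;

        for (int i = 0; i < n; i++) {    /* FCFS: run in arrival order    */
            if (clock < arrival[i]) clock = arrival[i];
            int completion = clock + burst[i];
            int turnaround = completion - arrival[i]; /* completion - arrival */
            int waiting    = turnaround - burst[i];   /* turnaround - burst   */
            printf("P%d: completion=%d turnaround=%d waiting=%d\n",
                   i + 1, completion, turnaround, waiting);
            total_wait += waiting;
            clock = completion;
        }
        printf("average waiting time = %.2f ms\n", total_wait / n);
        return 0;
    }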
Shortest Job First scheduling runs the process with the shortest burst
time or duration first.
This is the best approach to minimize waiting time. It is used in batch systems.
It is of two types:
• Non-Pre-emptive
• Pre-emptive
To implement it successfully, the burst time of each process should be known to the
processor in advance, which is not practically feasible all the time.
This scheduling algorithm is optimal when all the jobs/processes are available at the same
time (either the arrival time is 0 for all, or the arrival time is the same for all).
Consider the processes below, available in the ready queue for execution, with arrival
time 0 for all and the given burst times.
In the Gantt chart for this example, process P4 is picked up first as it has the shortest
burst time; then come P2, then P3, and at last P1.
Scheduling the same set of processes using the First Come First Served algorithm gives an
average waiting time of 18.75 ms, whereas with SJF the average waiting time comes out
to 4.5 ms.
If the arrival times of the processes differ, meaning that not all processes are available in
the ready queue at time 0 and some jobs arrive later, then a process with a short burst time
may have to wait for the current process's execution to finish, because in non-pre-emptive
SJF the arrival of a process with a short duration does not halt the execution of the job
already running.
This leads to the problem of starvation, where a shorter process has to wait for a long time
until the current longer process gets executed. This can persist if shorter jobs keep arriving,
but it can be solved using the concept of aging.
In pre-emptive Shortest Job First scheduling, jobs are put into the ready queue as they arrive,
but when a process with a shorter burst time arrives, the existing process is pre-empted and
the shorter job is executed first.
The average waiting time for pre-emptive Shortest Job First scheduling is less than that of
both non-pre-emptive SJF scheduling and FCFS scheduling.
In this example, P1 arrives first, so its execution starts immediately; but just after 1 ms,
process P2 arrives with a burst time of 3 ms, which is less than P1's remaining time, so
P1 (1 ms done, 20 ms left) is pre-empted and P2 is executed.
As P2 is executing, after 1 ms, P3 arrives, but its burst time is greater than P2's remaining
time, so the execution of P2 continues. After another millisecond, P4 arrives with a burst
time of 2 ms; P2 (2 ms done, 1 ms left) still has the shortest remaining time, so P2 continues
and completes. P4 is then executed, followed by P3, and at last P1.
Pre-emptive SJF is also known as Shortest Remaining Time First, because at any given
point of time, the job with the shortest remaining time is executed first.
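The following C sketch simulates SRTF one time unit at a time, always running the arrived process with the least remaining time. The burst and arrival values follow the worked example above (P1 arrives at 0 with 21 ms, P2 at 1 with 3 ms, P3 at 2 with 6 ms, P4 at 3 with 2 ms; the arrival times are inferred from the narration):

    #include <stdio.h>

    int main(void) {
        int burst[]   = {21, 3, 6, 2};
        int arrival[] = {0, 1, 2, 3};
        int remaining[4], n = 4, done = 0;
        for (int i = 0; i < n; i++) remaining[i] = burst[i];

        for (int t = 0; done < n; t++) {
            int pick = -1;               /* arrived process with least remaining time */
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick < 0) continue;      /* CPU idle this tick */
            if (--remaining[pick] == 0) {/* run for one time unit */
                done++;
                int tat = t + 1 - arrival[pick];
                printf("P%d: completion=%d turnaround=%d waiting=%d\n",
                       pick + 1, t + 1, tat, tat - burst[pick]);
            }
        }
        return 0;
    }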
The Round Robin (RR) scheduling algorithm is mainly designed for time-sharing systems.
This algorithm is similar to FCFS scheduling, but in Round Robin scheduling pre-emption
is added, which enables the system to switch between processes.
Some important characteristics of the Round Robin (RR) Algorithm are as follows:
In this algorithm, the time slice (quantum) should be the minimum amount that is assigned
to a specific task to be processed, though it may vary for different operating systems.
Important terms
1. Completion Time
The time at which a process completes its execution.
2. Turnaround Time
The difference between the completion time and the arrival time. The formula is:
Turnaround Time = Completion Time – Arrival Time
3. Waiting Time (W.T.)
The difference between the turnaround time and the burst time, calculated as:
Waiting Time = Turnaround Time – Burst Time
In this example, arrival time is not mentioned, so it is taken as 0 for all processes.
Note: If arrival time is not given in a problem statement, it is taken as 0 for all processes;
if it is given, the problem is solved accordingly.
Explanation
The value of the time quantum in this example is 5. Let us now calculate the turnaround
time and waiting time:
Process   Burst Time   Turnaround Time   Waiting Time
P1        21           32 - 0 = 32       32 - 21 = 11
P2        3            8 - 0 = 8         8 - 3 = 5
P3        6            21 - 0 = 21       21 - 6 = 15
P4        2            15 - 0 = 15       15 - 2 = 13
The average waiting time is calculated by adding the waiting times of all processes and
dividing the sum by the number of processes: (11 + 5 + 15 + 13) / 4 = 11.
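The following C sketch simulates Round Robin with a quantum of 5 for the same four processes (all arriving at time 0) and reproduces the table above:

    #include <stdio.h>

    int main(void) {
        int burst[] = {21, 3, 6, 2};     /* P1..P4, all arriving at t = 0 */
        int remaining[4], n = 4, quantum = 5, done = 0, clock = 0;
        double total_wait = 0;
        for (int i = 0; i < n; i++) remaining[i] = burst[i];

        while (done < n) {
            for (int i = 0; i < n; i++) {       /* cycle through the queue */
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                clock += slice;                 /* run for one time slice  */
                remaining[i] -= slice;
                if (remaining[i] == 0) {        /* process has finished    */
                    int tat = clock;            /* arrival time is 0       */
                    printf("P%d: completion=%d turnaround=%d waiting=%d\n",
                           i + 1, clock, tat, tat - burst[i]);
                    total_wait += tat - burst[i];
                    done++;
                }
            }
        }
        printf("average waiting time = %.2f\n", total_wait / n);
        return 0;
    }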
A thread is an execution unit that consists of its own program counter, a stack, and a set of
registers. The program counter keeps track of which instruction to execute next, the set of
registers holds its current working variables, and the stack contains the history of execution.
Threads are also known as lightweight processes. They are a popular way to improve the
performance of an application through parallelism: threads represent a software approach
to improving operating system performance by reducing the overhead of a full, classical
process.
The CPU switches rapidly back and forth among the threads, giving the illusion that the
threads are running in parallel.
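A minimal POSIX threads example in C makes this concrete: two threads run within one process, and their output interleaves as the CPU switches between them (the function and thread names are illustrative; compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function with its own stack and registers. */
    static void *worker(void *arg) {
        const char *name = arg;
        for (int i = 0; i < 3; i++)
            printf("%s: iteration %d\n", name, i);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "thread-1");
        pthread_create(&t2, NULL, worker, "thread-2");
        pthread_join(t1, NULL);    /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }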
As each thread has its own resources for execution, multiple tasks can be executed in
parallel by increasing the number of threads. It is important to note that each thread belongs
to exactly one process, and no thread exists outside a process. Each thread represents a
separate flow of control. Threads have been used successfully in the implementation of
network servers and web servers, and they provide a suitable foundation for the parallel
execution of applications on shared-memory multiprocessors.
The figure below shows the working of a single-threaded and a multithreaded process.
Before moving on further, let us first understand the difference between a process and a thread.
Process vs. Thread
• A process means any program in execution; a thread is a segment of a process.
• A process takes more time to terminate; a thread takes less time to terminate.
• Communication between processes needs more time than communication between threads.
• If a process gets blocked, the remaining processes can continue their execution; but if a
user-level thread gets blocked, all of its peer threads also get blocked.
Advantages of Thread
• Responsiveness
• Scalability. One thread runs on one CPU. In Multithreaded processes, threads can be
distributed over a series of processors to scale.
• Enhanced throughput of the system. For example, if a process is divided into multiple
threads and the function of each thread is considered one job, then the number of jobs
completed per unit of time increases, which leads to an increase in the throughput of the
system.
Types of Thread
1. User Threads
2. Kernel Threads
User threads are above the kernel and without kernel support. These are the threads that
application programmers use in their programs.
Kernel threads are supported within the kernel of the OS itself. All modern OSs support
kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to
service multiple kernel system calls simultaneously.
Let us now understand the basic differences between user-level threads and kernel-level
threads:
• User-level threads are not recognized by the operating system; kernel-level threads are
recognized by it.
• A context switch between user-level threads requires no hardware support; for
kernel-level threads, hardware support is needed.
• User-level threads are designed as dependent threads; kernel-level threads are designed
as independent threads.
• If one user-level thread performs a blocking operation, the entire process is blocked;
if one kernel-level thread performs a blocking operation, another thread can continue
execution.
• Examples of user-level threads: Java threads, POSIX threads. Examples of kernel-level
threads: Windows, Solaris.
Multithreading Models
The user threads must be mapped to kernel threads by one of the following strategies:
• In the many-to-one model, many user-level threads are all mapped onto a single kernel
thread.
• Thread management is handled by the thread library in user space, which is efficient.
• The many-to-one model is used when the operating system does not support kernel
threads: the user-level thread library does all the management itself.
The one-to-one model creates a separate kernel thread to handle each user thread.
Most implementations of this model place a limit on how many threads can be created.
Linux, and Windows from 95 through XP, implement the one-to-one model for threads.
This model provides more concurrency than the many-to-one model.
The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads, combining the best features of the one-to-one and
many-to-one models:
• Users can create any number of threads.
• A blocking kernel system call does not block the entire process.
• Processes can be split across multiple processors.
Thread libraries provide programmers with an API for the creation and management of
threads. A thread library may be implemented either in user space or in kernel space. The
former involves API functions implemented solely within user space, with no kernel
support; the latter involves system calls and requires a kernel with thread library support.
1. POSIX Pthreads: may be provided as either a user-level or a kernel-level library.
2. Win32 threads: provided as a kernel-level library on Windows systems.
3. Java threads: since Java generally runs on a Java Virtual Machine, the implementation
of threads is based on whatever OS and hardware the JVM is running on, i.e. either
Pthreads or Win32 threads depending on the system.
Multithreading Issues
Below we have mentioned a few issues related to multithreading. As the old saying goes,
all good things come at a price.
Thread Cancellation
Thread cancellation means terminating a thread before it has finished working. There are
two approaches: asynchronous cancellation, which terminates the target thread immediately,
and deferred cancellation, which allows the target thread to periodically check whether it
should be cancelled.
Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has occurred.
Now, when a multithreaded process receives a signal, to which thread should it be delivered?
It can be delivered to all threads or to a single thread.
fork() is a system call executed in the kernel through which a process creates a copy of itself.
The problem in a multithreaded process is: if one thread forks, should the entire process be
copied, or only the forking thread?
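A minimal C illustration of fork(); as a point of reference, POSIX specifies that the child of a multithreaded process contains a replica of only the thread that called fork():

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();              /* create a copy of this process */
        if (pid == 0) {
            printf("child:  pid=%d\n", getpid());
        } else {
            printf("parent: pid=%d, child=%d\n", getpid(), pid);
            wait(NULL);                  /* wait for the child to finish */
        }
        return 0;
    }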
Security Issues
Yes, there can be security issues because of the extensive sharing of resources between multiple
threads.
There are many other issues that you might face in a multithreaded process, but appropriate
solutions are available for them. Pointing out some issues here was just to show both sides
of the coin.
Process synchronization means sharing system resources among processes in such a way that
concurrent access to shared data is handled properly, minimizing the chance of inconsistent
data. Maintaining data consistency demands mechanisms to ensure the synchronized
execution of cooperating processes.
Process synchronization was introduced to handle problems that arise when multiple
processes execute concurrently. Some of the problems are discussed below.
A Critical Section is a code segment that accesses shared variables and has to be executed as
an atomic action. It means that in a group of cooperating processes, at a given point of time,
only one process must be executing its critical section. If any other process also wants to
execute its critical section, it must wait until the first one finishes.
A solution to the critical section problem must satisfy the following three conditions:
Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section at a
given point in time.
Progress
If no process is in its critical section, and one or more processes want to execute their critical
section, then one of them must be allowed to enter it.
Bounded Waiting
After a process makes a request to enter its critical section, there is a limit on how many other
processes may enter their critical sections before this process's request is granted. Once that
limit is reached, the system must grant the process permission to enter its critical section.
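In practice, mutual exclusion is commonly enforced with a mutex. A C sketch using POSIX threads, in which incrementing a shared counter is the critical section (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                        /* shared data */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);              /* enter critical section */
            counter++;                              /* the critical section   */
            pthread_mutex_unlock(&lock);            /* exit critical section  */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* always 200000 with the lock */
        return 0;
    }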
Synchronization Hardware
Many systems provide hardware support for critical-section code. The critical section
problem could be solved easily in a single-processor environment if we could disable
interrupts while a shared variable or resource is being modified.
In this manner, we could be sure that the current sequence of instructions would be
allowed to execute in order without pre-emption. Unfortunately, this solution is not
feasible in a multiprocessor environment: disabling interrupts on a multiprocessor can be
time-consuming, as the message has to be passed to all the processors.
This message-transmission lag delays the entry of threads into their critical sections, and
system efficiency decreases.
Synchronization is one of the essential parts of inter-process communication. Typically,
it is provided by inter-process communication control mechanisms, but sometimes it can
also be handled by the communicating processes themselves.
The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion: -
It is generally required that only one process or thread can enter the critical section at a time.
This helps in synchronization and creates a stable state by avoiding race conditions.
Semaphore: -
A semaphore is a type of variable that controls access to shared resources by several
processes. Semaphores are further divided into two types, as follows:
1. Binary Semaphore
2. Counting Semaphore
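A short C sketch of a counting semaphore using the POSIX semaphore API; the initial value of 2 means at most two threads can use the resource at once (an illustrative scenario; compile with -pthread):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t sem;

    static void *user(void *arg) {
        long id = (long)arg;
        sem_wait(&sem);                  /* P(): acquire one unit  */
        printf("thread %ld using the resource\n", id);
        sleep(1);                        /* simulate some work     */
        sem_post(&sem);                  /* V(): release the unit  */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        sem_init(&sem, 0, 2);            /* counting semaphore, value 2 */
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, user, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&sem);
        return 0;
    }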
Barrier: -
A barrier typically does not allow an individual process to proceed until all the processes
reach it. It is used by many parallel languages, and collective routines impose barriers.
Spinlock: -
A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock
waits in a loop, repeatedly checking whether the lock is available. This is known as busy
waiting: even though the process is active, it does not perform any useful operation (or task).
We will now discuss some different approaches to inter-process communication which are as
follows:
1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect Communication
6. Message Passing
7. FIFO
Pipe: -
A pipe is a data channel that is unidirectional in nature, meaning the data in this type of
channel can be moved in only a single direction at a time. Still, one can use two channels
of this type so that data can be sent and received between two processes. A pipe typically
uses the standard methods for input and output. Pipes are used in all types of POSIX
systems and in different versions of Windows operating systems as well.
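A minimal C example of a pipe: the parent writes a message into the write end and its child reads it from the read end of the same unidirectional channel:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        pipe(fd);                        /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {               /* child: reads from the pipe */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n < 0) n = 0;
            buf[n] = '\0';
            printf("child received: %s\n", buf);
            return 0;
        }
        close(fd[0]);                    /* parent: writes to the pipe */
        write(fd[1], "hello", 5);
        close(fd[1]);
        wait(NULL);
        return 0;
    }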
Shared Memory: -
Shared memory is memory that can be used or accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each other.
Shared memory is supported by almost all POSIX and Windows operating systems.
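A C sketch of POSIX shared memory: the region name /demo_shm is illustrative, and any process that maps the same name sees the same bytes (on some systems, link with -lrt):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";             /* illustrative name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);                        /* size the region   */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        strcpy(p, "hello via shared memory");       /* visible to every  */
        printf("wrote: %s\n", p);                   /* process mapping   */
        munmap(p, 4096);                            /* the same name     */
        close(fd);
        shm_unlink(name);                           /* remove the region */
        return 0;
    }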
Message Queue: -
In general, several different processes are allowed to read and write data to a message
queue. The messages are stored in the queue until their recipients retrieve them. In short,
the message queue is very helpful in inter-process communication and is used by all
operating systems.
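A C sketch of a POSIX message queue; the queue name /demo_mq and the attribute values are illustrative (on some systems, link with -lrt):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

        mq_send(mq, "hello", strlen("hello") + 1, 0); /* stays queued   */

        char buf[64];                                 /* until received */
        mq_receive(mq, buf, sizeof buf, NULL);
        printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/demo_mq");
        return 0;
    }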
Message Passing: -
Message passing is a mechanism that allows processes to synchronize and communicate
with each other. Using message passing, processes can communicate without resorting to
shared variables. The inter-process communication mechanism usually provides two
operations:
o send (message)
o receive (message)
Direct Communication: -
In this type of communication, a link is established between each pair of communicating
processes, and between every pair of processes only one link can exist.
Indirect Communication
Indirect communication can only be established when processes share a common mailbox,
and each pair of processes may share several communication links. These shared links can
be unidirectional or bi-directional.
FIFO: -
A FIFO, or named pipe, is a pipe that has a name in the file system; unlike an ordinary pipe,
it allows two unrelated processes to communicate.
o Socket: -
A socket acts as an endpoint for sending or receiving data in a network. It works both for
data sent between processes on the same computer and for data sent between different
computers on the same network. Hence, it is used by several types of operating systems.
o File: -
A file is a data record or a document stored on disk that can be retrieved on demand by the
file server. Importantly, several processes can access the same file as required.
o Signal: -
As the name implies, signals are used for inter-process communication in a minimal way.
They are system messages sent by one process to another. Usually they are not used for
sending data but for sending remote commands between processes.
There are numerous reasons to use inter-process communication for sharing data between
processes.
MEMORY MANAGEMENT
Virtual memory and address translation - Paging and segmentation - Memory allocation
techniques (e.g., buddy system, slab allocation) - Memory protection and sharing - Memory
management in multiprocessor systems.
Virtual memory and address translation are two fundamental concepts in modern operating
systems. They work together to allow programs to access more memory than is physically
available by creating an illusion of a larger address space for each program. Let's delve deeper
into each concept:
Virtual Memory:
• Imagine an individual workspace on a big table. You access and use only that specific
section, unaware of the entire table surface. Similarly, virtual memory creates a private
address space for each program, independent of other programs or the actual physical
memory available.
• This virtual address space is typically much larger than the available physical memory
(RAM).
• Programs use virtual addresses to access their data and code. These addresses are not
real physical locations in RAM but rather references within the virtual space.
• The operating system acts as a translator, mapping these virtual addresses to the actual
physical locations in RAM at runtime.
Address Translation:
• This mapping between virtual and physical addresses is achieved through a mechanism
called address translation.
• The key players in this process are:
▪ Memory Management Unit (MMU): A hardware component within the CPU
that handles address translation.
▪ Page Tables: Data structures kept in RAM that contain the mapping
information between virtual and physical addresses. Each entry in a page table
maps a portion of the virtual address space (called a page) to a physical page in
RAM.
• When a program attempts to access a memory location using a virtual address, the
MMU intercepts the request and performs the following steps:
▪ Page number extraction: The virtual address is split into two parts: page
number and page offset.
▪ Page table lookup: The page number is used as an index to search the page
table for the corresponding page table entry (PTE).
▪ Physical address generation: The PTE contains the information needed to find
the actual physical location of the page in RAM. This information is combined
with the page offset to form the final physical address.
▪ Memory access: The MMU uses the final physical address to access the
requested data or code in RAM.
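The arithmetic behind these steps is simple bit manipulation. A C sketch assuming 4 KiB pages (12 offset bits) and a toy single-level page table (the table contents are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE  4096u    /* 4 KiB pages: 12 offset bits */
    #define PAGE_SHIFT 12

    int main(void) {
        /* toy page table: virtual page number -> physical frame number */
        uint32_t page_table[8] = {5, 2, 7, 0, 1, 3, 6, 4};

        uint32_t vaddr  = 0x3ABC;                         /* example address  */
        uint32_t vpn    = vaddr >> PAGE_SHIFT;            /* page number      */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);        /* page offset      */
        uint32_t frame  = page_table[vpn];                /* table lookup     */
        uint32_t paddr  = (frame << PAGE_SHIFT) | offset; /* physical address */

        printf("vaddr=0x%x -> page=%u offset=0x%x -> paddr=0x%x\n",
               vaddr, vpn, offset, paddr);
        return 0;
    }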
The benefits of virtual memory include:
• Increases program size: Allows programs to be larger than the available physical
memory, enabling complex applications to run smoothly.
• Memory protection: Isolates each program's memory space, preventing accidental
access or corruption of data by other programs or the operating system.
• Efficient memory utilization: Pages that are not actively used can be swapped out to
disk, freeing up physical memory for other programs. This enhances overall system
performance.
• Translation Lookaside Buffer (TLB): A small cache within the MMU that stores
recently used address translations, speeding up subsequent accesses to the same pages.
• Demand paging: Only pages that are actually needed are brought into RAM from disk,
minimizing unnecessary data transfers.
Understanding virtual memory and address translation is crucial for comprehending how
operating systems manage memory efficiently and provide a secure and robust environment
for running multiple programs simultaneously.
Memory management is a core concern for operating systems (OS), and two key techniques
employed are paging and segmentation. Both provide solutions for managing memory for
multiple programs while ensuring efficient memory utilization and access protection. Let's
explore each concept in detail:
Paging:
• Imagine a large textbook representing a program's address space. Paging divides this
book into equal-sized chapters called pages.
• These pages are independent units and can be stored anywhere in the available physical
memory (RAM).
• The OS maintains a page table, a data structure mapping virtual addresses (chapters) to
physical addresses (page locations in RAM).
• When a program accesses data using a virtual address, the MMU intercepts the request.
It looks up the corresponding page in the page table and translates it to the actual
physical address in RAM.
Benefits of Paging:
Segmentation:
• Instead of equal-sized chapters, segmentation divides the program into logical units
based on functionality, like code, data, stack, etc. These segments are of variable sizes
and can overlap in the virtual address space.
• Similar to paging, a segment table maps virtual segment addresses to physical memory
locations.
• When accessing data, the MMU translates the virtual segment address and offset within
the segment to the corresponding physical location in RAM.
Benefits of Segmentation:
• Logical Grouping: Segments provide a natural way to group related program parts,
simplifying memory management for programmers.
• Protection and Access Control: Different segments can have varying access
permissions, enhancing security.
• Efficient Module Loading: Only needed segments are loaded into RAM, saving
memory.
Comparison:
Hybrids:
Some OSes combine the benefits of both techniques. Paged segmentation divides segments
into fixed-size pages, offering the flexibility of segmentation with reduced internal
fragmentation.
Visualizing both techniques helps clarify their different approaches to memory management.
Understanding their strengths and weaknesses allows developers and researchers to choose the
most appropriate technique for specific OS needs.
Memory allocation, the art of assigning memory blocks to processes in an operating system
(OS), lies at the heart of efficient system performance. Different allocation techniques cater to
specific needs, and understanding them is crucial for optimizing memory usage and preventing
fragmentation. Let's explore two popular techniques:
3.3.1 Buddy System: Imagine a large warehouse filled with cardboard boxes of various
sizes. The buddy system operates on a similar principle, dividing memory into power-
of-2 sized blocks (512, 1024, 2048, etc.). These blocks are called buddies and form a
hierarchical tree structure.
Allocation:
▪ When a process requests memory, the system searches for the smallest block
that can fit the need (the block's size is at least as large as the requested
memory).
▪ If a suitable block isn't readily available, the system splits a larger block (buddy)
into two equal halves, creating smaller buddies, until a block matching the
process's size is found.
▪ This splitting process continues until the smallest possible block (smallest
buddy) is reached.
Deallocation:
▪ When a process releases memory, its allocated block is merged back with its
buddy to form a larger block.
▪ This merging continues recursively until the block reaches its highest possible
size in the tree hierarchy.
Benefits:
▪ Minimal fragmentation: The power-of-2 sizes and buddy merging mechanism
minimize internal fragmentation, maximizing memory utilization.
▪ Efficient allocation: Finding suitable blocks is fast due to the hierarchical
structure.
▪ Scalability: The system can handle diverse memory requests with varying sizes.
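The core arithmetic of the buddy system is easy to sketch in C: a request is rounded up to the next power of two, and a block's buddy is found by flipping one bit of its offset (offsets are relative to the start of the managed region; illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    /* Round a request up to the next power of two: the block size used. */
    static uint32_t round_up_pow2(uint32_t n) {
        uint32_t size = 1;
        while (size < n) size <<= 1;
        return size;
    }

    /* A block's buddy is found by flipping the bit at the size position. */
    static uint32_t buddy_of(uint32_t offset, uint32_t size) {
        return offset ^ size;
    }

    int main(void) {
        uint32_t request = 600;
        uint32_t size = round_up_pow2(request);       /* 600 -> 1024 */
        printf("request %u -> block size %u\n", request, size);
        printf("buddy of block at offset 2048 (size 1024) is at %u\n",
               buddy_of(2048, 1024));                 /* 2048 ^ 1024 = 3072 */
        return 0;
    }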
3.3.2 Slab Allocation: Picture a factory producing different types of objects (car parts, toys,
etc.). Slab allocation treats memory as a factory floor, dividing it into fixed-size slabs
dedicated to specific object types (data structures, network buffers, etc.). Each slab
stores multiple instances of the object it's meant for.
Allocation:
▪ When a process requests objects of a specific type, the system allocates them
from the corresponding slab, keeping track of available and used slots within the
slab.
▪ If the slab is full, a new slab for that object type is created.
Deallocation:
▪ When a process releases an object, its slot in the slab is simply marked free and
can be reused for the next request of that type.
Benefits:
▪ Allocation is fast and internal fragmentation is minimal, since each slab holds
objects of exactly one size.
▪ The optimal choice between the buddy system and slab allocation depends on the
specific needs of the system.
Memory Protection:
Imagine each program in the OS as a walled city with its own streets (memory addresses) and
buildings (data and code). Memory protection ensures that each program's city remains under
its control, preventing unwanted access from other programs or the OS itself. This is achieved
through two key mechanisms:
(a) Address Space Separation: Each program is assigned a virtual address space, a separate
map of its memory addresses that exists independently of the actual physical memory layout.
This creates an illusion of each program having all the memory it needs, even if it's
physically shared with other programs.
(b) Access Control Mechanisms: The hardware and OS work together to enforce access
restrictions within each virtual address space. Techniques like:
▪ Memory Management Unit (MMU): Translates virtual addresses to physical
addresses and checks access permissions (read, write, execute) before granting access.
▪ Protection Rings: Implement a hierarchical system of privilege levels, restricting
access to certain memory regions based on program privileges.
▪ Page Tables: Data structures storing access permissions for each page of memory
within a program's virtual address space.
▪ System Stability: Prevents programs from crashing each other or the OS by accessing
unauthorized memory.
▪ Security: Protects sensitive data from unauthorized access, enhancing overall system
security.
▪ Resource Management: Ensures fair and efficient allocation of memory resources
among programs.
Memory Sharing:
Despite the separation, programs sometimes need to communicate and share data. Memory
sharing mechanisms come into play here, allowing controlled access to specific portions of a
program's memory by other programs or the OS:
1. Shared Memory Segments: Specific regions of a program's virtual address space can be
marked as shared, allowing other programs or the OS to access them directly using their
own virtual addresses.
Important Caveats:
▪ Sharing memory introduces additional complexity and security concerns. Proper access
control mechanisms must be implemented to avoid data corruption or leaks.
▪ IPC techniques can be slower than direct memory sharing, making them less suitable
for scenarios requiring high-speed data exchange.
Understanding the balance between memory protection and sharing is essential for designing
secure and efficient operating systems. Choosing the appropriate mechanism depends on the
specific needs of the application and communication requirements between programs.
Multiprocessor systems, with their multiple CPUs working in tandem, bring new challenges to
the already complex world of memory management in operating systems. Let's explore how
memory is juggled in this parallel environment, considering both benefits and complexities:
Challenges:
▪ Shared Memory vs. Distributed Memory: Choosing between a single shared memory
pool accessible by all CPUs or allocating dedicated memory blocks to each CPU
(distributed memory) affects performance and complexity.
▪ Cache Coherence: Maintaining consistency of data across multiple caches when
different CPUs access the same memory location requires efficient protocols to avoid
stale data and ensure data integrity.
▪ Inter-processor Communication: CPUs need to coordinate and synchronize access to
shared resources and data, impacting performance and overhead.
Multiprocessor memory can be organized in several ways:
1. Uniform Memory Access (UMA): All CPUs share a single physical memory space,
offering fast access and simplicity. However, scalability can be limited due to potential
bottlenecks.
2. Non-Uniform Memory Access (NUMA): Memory is physically distributed among the
CPUs; each CPU accesses its own local memory faster than memory attached to another
CPU.
3. Distributed Shared Memory (DSM): Each CPU has its own local memory, but a
virtual shared address space provides the illusion of a single memory pool. Data
synchronization becomes crucial in this approach.
Worked Example:
Imagine two CPUs in a multiprocessor system using the UMA architecture. Both need to
access data stored in the same memory page.
1. Page Fault: When CPU1 accesses the page for the first time, a page fault occurs, and
the page is loaded into main memory and assigned a "color" specific to CPU1.
2. Cache Coherence: If CPU2 needs to access the same page, it checks its cache and the
main memory. If not present, a coherence protocol like MESI updates CPU1's cache
state, invalidating its local copy, and fetches the latest data from main memory for both
CPUs.
FILE SYSTEMS AND STORAGE
File system concepts and organization, File system implementation and data structures, Disk
management and storage technologies, File system security and access control, Introduction to
solid-state drives (SSDs) and storage virtualization.
A file system is the foundation for organizing and accessing data on storage devices like hard
disk drives (HDDs) and solid-state drives (SSDs). It provides a structured mechanism for
storing, retrieving, and managing files and folders, making it vital for any operating system.
Let's dive deeper into the key concepts and organization of file systems:
Hierarchical Structure:
Imagine a tree, where the root is the main drive (e.g., C: on Windows, / on macOS/Linux).
Branches extending from the root are directories (folders), named containers for files and other
subdirectories. This nesting creates a hierarchical structure, allowing you to group related files
logically.
Directories can have multiple layers, forming "subdirectories" within parent directories. This
organization helps to break down large sets of files into manageable units and simplifies
navigation.
Files are the fundamental units of storage, holding data like documents, images, applications,
etc. Each file has a unique name within its directory and associated metadata.
Metadata is like a file's passport, containing information beyond the data itself, such as:
• Size: The file's length in bytes.
• Timestamps: Creation, modification, and last-access times.
• Permissions: Who may read, write, or execute the file.
• Attributes: Additional information specific to the file type (e.g., image resolution, audio
bitrate).
Naming Conventions:
File and directory names vary depending on the file system. Many systems allow alphanumeric
characters, underscores, and hyphens, while others restrict symbols or enforce case sensitivity.
Extensions typically identify the file type (e.g., .docx for Word documents).
Understanding naming conventions ensures proper organization and prevents naming conflicts.
Different file systems exist with varying features and functionalities. Some common types
include:
• FAT (File Allocation Table): Simple and widely used, but limited to small files and lacks
security features.
• NTFS (New Technology File System): Robust and flexible, supports large files,
security, and journaling.
• ext4 (Fourth Extended File System): Popular in Linux systems, efficient and supports
large files and advanced features.
• HFS+ (Hierarchical File System Plus): Used in macOS, supports journaling and
security.
• ZFS (Zettabyte File System): Focuses on data integrity, fault tolerance, and scalability
for large datasets.
Beyond the file system type, a few related concepts are worth knowing:
• Pathnames: A unique identifier for a file, specifying its location within the hierarchy
(e.g., C:\Users\Bard\Documents\report.txt).
• Symbolic links: Shortcuts to files located elsewhere in the file system, creating multiple
access points.
• Hidden files: System files or user-defined files kept invisible for easier management.
File systems bridge the gap between the logical world of files and folders you see and the
physical world of sectors on storage devices. This involves intricate data structures and
algorithms to manage data efficiently and reliably. Let's delve deeper into the fascinating world
of file system implementation:
Disk Formatting:
Before diving into data structures, it's essential to understand disk formatting. Imagine
formatting a disk as creating a blank canvas on which the file system paints its structures. This
process partitions the disk into sectors (fixed-size chunks) and prepares them for data storage.
Data Structures:
• Inodes (Unix-like): These data structures hold crucial information about each file, including size, location of data blocks, timestamps, permissions, and access control lists. Think of them as detailed file passports residing in a dedicated table called the "inode table" (a small sketch follows this list).
• File Allocation Table (FAT): Used in simpler file systems like FAT32, the FAT keeps track of which sectors belong to each file. Like a puzzle map, it chains a file's sectors together, representing the file's data layout.
• Superblock: This dedicated block stores vital information about the entire file system,
such as block size, total blocks, free blocks, and location of important structures like
the inode table or FAT.
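As a small illustration of the metadata an inode holds, the POSIX stat() call exposes it to programs; the file name example.txt below is just a placeholder:
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
int main(void) {
    struct stat sb;
    // Fetch the inode metadata for a file ("example.txt" is a placeholder)
    if (stat("example.txt", &sb) == -1) {
        perror("stat");
        return 1;
    }
    printf("Inode number : %lu\n", (unsigned long)sb.st_ino);
    printf("Size (bytes) : %lld\n", (long long)sb.st_size);
    printf("Permissions  : %o\n", (unsigned)(sb.st_mode & 0777));
    printf("Last modified: %s", ctime(&sb.st_mtime));
    return 0;
}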
Data structures alone aren't enough. File systems need allocation strategies to efficiently place
file data on the disk. Here are some common approaches:
• Contiguous allocation: Each file occupies a single run of consecutive blocks, giving fast sequential access but suffering from external fragmentation.
• Linked allocation: Each block points to the next, eliminating external fragmentation at the cost of slow random access.
• Indexed allocation: An index block lists all of a file's data blocks, supporting efficient random access.
Beyond placing data, file systems also need mechanisms for reliability:
• Journaling: To ensure data consistency and recover from unexpected events like crashes, journaling file systems maintain a log of operations before committing them to disk. This way, data remains intact even if a write operation is interrupted (a toy sketch follows below).
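As a toy sketch of the journaling idea (not any real file system's on-disk format, and the file names are arbitrary examples), an intent record is written and flushed before the change itself, so a crash in between leaves enough information to redo or discard the operation on recovery:
#include <stdio.h>
int main(void) {
    // 1. Append the intended operation to the journal and flush it
    FILE *journal = fopen("journal.log", "a");
    if (!journal) { perror("fopen journal"); return 1; }
    fprintf(journal, "BEGIN write data.txt \"hello\"\n");
    fflush(journal);
    // 2. Apply the operation to the real file
    FILE *data = fopen("data.txt", "w");
    if (!data) { perror("fopen data"); return 1; }
    fprintf(data, "hello\n");
    fclose(data);
    // 3. Mark the entry committed; on recovery, any BEGIN without a
    //    matching COMMIT is either replayed or discarded
    fprintf(journal, "COMMIT\n");
    fclose(journal);
    return 0;
}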
Performance Considerations:
File system implementation must optimize data access and minimize overhead. Here are some
crucial factors:
• Caching: Frequently accessed data is stored in memory (cache) for faster future
retrieval.
• Prefetching: Anticipating data needs and reading ahead on the disk can accelerate
future accesses.
• Fragmentation minimization: Allocation strategies and defragmentation tools aim to
reduce scattered file fragments, improving read/write performance.
Disk management and storage technologies are the silent conductors behind the data symphony,
ensuring efficient organization, access, and protection of your precious information. Let's
explore the key instruments in this orchestra:
Disk Partitioning:
Imagine having a single large room for all your belongings. It's chaotic, isn't it? Similarly, a
single unpartitioned disk can become messy and hard to manage. Disk partitioning divides the
physical disk into logical sections called partitions, like creating separate rooms for different
purposes.
• Organized storage: Separate partitions for operating systems, applications, and personal
data keeps things tidy and reduces the risk of accidental deletion.
• Improved performance: Spreading data across partitions can optimize disk access for
different types of files.
• Enhanced security: Isolating critical system files in a separate partition can improve
system integrity and data protection.
RAID, or Redundant Array of Independent Disks, is a data storage technology that combines
multiple physical disks into a single logical unit. It's like having multiple bodyguards for your
precious data, ensuring its safety even if one bodyguard (disk) goes down.
Imagine you have important documents stored on a single hard drive. If that drive fails, those
documents are gone forever. That's where RAID comes in.
With RAID, you distribute copies of your data across multiple hard drives. Think of it like
storing the same documents in multiple safes. Even if one safe gets lost or broken, your
documents are still safe in the other safes.
There are different types of RAID levels, each offering different levels of redundancy and
performance:
RAID 0:
(a) Stripes data across multiple disks. This means data is split into chunks and stored on
different disks simultaneously.
(b) Offers the best performance as multiple disks can be accessed simultaneously for
reading and writing.
(c) However, it provides no redundancy. If any disk fails, all data stored on it is lost.
RAID 1:
(a) Mirrors data on two disks. This means an exact copy of your data is stored on both
disks.
(b) Offers excellent redundancy. If one disk fails, the other disk takes over, and you don't
lose any data.
(c) Performance is slightly slower than RAID 0, as every write must be performed on both disks.
RAID 5:
(a) Distributes data and parity information across multiple disks. Parity information is like
a checksum that allows you to reconstruct the data if any disk fails.
(b) Offers good redundancy and performance. You can lose one disk without losing any data, and performance remains good since multiple disks can be read in parallel (a parity sketch follows below).
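To see why parity works, here is a small standalone sketch (illustrative only, not any particular RAID implementation) that rebuilds a lost block by XOR-ing the parity block with the surviving data blocks:
#include <stdio.h>
#define BLOCK_SIZE 4
#define DATA_DISKS 3
int main(void) {
    // Example data blocks on three disks
    unsigned char disk[DATA_DISKS][BLOCK_SIZE] = {
        {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}
    };
    unsigned char parity[BLOCK_SIZE] = {0};
    // Parity is the XOR of the corresponding bytes on every data disk
    for (int d = 0; d < DATA_DISKS; d++)
        for (int i = 0; i < BLOCK_SIZE; i++)
            parity[i] ^= disk[d][i];
    // Suppose disk 1 fails: rebuild it by XOR-ing parity with the survivors
    unsigned char rebuilt[BLOCK_SIZE];
    for (int i = 0; i < BLOCK_SIZE; i++)
        rebuilt[i] = parity[i] ^ disk[0][i] ^ disk[2][i];
    for (int i = 0; i < BLOCK_SIZE; i++)
        printf("rebuilt[%d] = %d (original %d)\n", i, rebuilt[i], disk[1][i]);
    return 0;
}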
RAID 6:
(a) Similar to RAID 5, but adds an extra layer of parity information. This means you can
lose two disks without losing any data.
(b) Offers the best redundancy among common RAID levels. However, performance is
slightly slower than RAID 5 due to the additional calculations required for the extra
parity information.
Choosing the right RAID level depends on your needs. If you prioritize performance and can
afford to lose data if one disk fails, RAID 0 might be a good option. If you prioritize redundancy
and don't mind slightly slower performance, RAID 1 or 5 might be better choices. RAID 6 is
the best option for maximum redundancy, but it comes with the slowest performance.
RAID is a valuable tool for protecting your data. By using RAID, you can ensure that your data
is safe even if a hard drive fails. However, it's important to remember that RAID is not a backup
solution. You should always have a backup of your data on a separate storage device, in case
of multiple drive failures or other disasters.
Volume Management:
Volume management goes beyond simple partitions, allowing you to dynamically resize,
extend, and shrink existing volumes without losing data. Imagine having walls in your rooms
that you can adjust without knocking everything down!
This provides flexibility for adapting to changing storage needs, such as increasing the size of
your system partition or reclaiming unused space from data-hungry applications.
Imagine a bustling city where multiple businesses (servers) all need access to a shared
warehouse (storage) full of crucial resources (data). That's essentially what a Storage Area
Network (SAN) does! It's a dedicated high-speed network designed to connect servers to shared
storage devices, like disk arrays and tape libraries, offering centralized data management and
improved access.
Think of it as a highway built just for data delivery, separate from the regular traffic network
used by applications and users. This dedicated access translates to several benefits:
Centralized Management:
Imagine running around managing individual warehouses for each business in the city.
Exhausting, right? With a SAN, you have one control center for all your storage needs. This
simplifies administration, provisioning, and maintenance of storage resources.
Increased Scalability:
Need to expand your storage capacity? With a SAN, adding more disks or storage arrays is like
building new warehouses in your city. You can easily scale your storage to meet growing data
demands without disrupting existing operations.
Improved Performance:
Dedicated data highways mean faster delivery! SANs typically use high-speed protocols like
Fibre Channel, significantly boosting data transfer rates compared to traditional storage
options. This translates to quicker server access and potentially smoother application
performance.
Imagine a fire in one warehouse. With traditional, scattered storage, disaster could strike
multiple businesses. A SAN allows for data replication across its storage pool, so if one device
fails, data remains accessible from other locations, enabling faster recovery from unforeseen
events.
A SAN lets you mix and match different storage types (disk, tape, etc.) within a single pool,
providing tailored solutions for various needs. Additionally, centralized access control and
security measures on the SAN can help protect your valuable data from unauthorized access or
breaches.
However, SANs also come with trade-offs:
• Cost: Setting up and maintaining a SAN can be more expensive than traditional storage options due to specialized hardware and software requirements.
• Complexity: Implementing and managing a SAN requires skilled personnel and
technical expertise.
• Integration: Ensuring seamless integration with existing server infrastructure and
applications can be challenging.
Overall, SANs offer a powerful and flexible solution for organizations with large data storage
needs and demanding performance requirements. While the initial investment and technical
complexity might be higher, the long-term benefits of centralized management, scalability, and
performance often outweigh the drawbacks.
Keeping your files safe and secure is paramount in today's digital world. This is where file
system security and access control come in, acting as the gatekeepers of your digital fortress.
Let's delve into these crucial concepts:
Imagine entering a building with different levels of access depending on your role. Similarly, users and groups in a file system are assigned specific permissions, such as read, write, and execute.
File Ownership:
Every file has a designated "owner," typically the user who created it. The owner possesses
additional privileges like changing permissions and modifying ownership itself. Ownership
allows for granular control over sensitive files by granting exclusive access to specific users.
Access Control Lists (ACLs):
These are detailed lists specifying who can access a file and what they can do with it. Think of
them as personalized passports for each file, outlining user access rights beyond basic
permissions. ACLs provide fine-grained control and can be customized to cater to specific user
needs and security requirements.
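As a small, hedged illustration of basic permission bits (simpler than full ACLs), the POSIX chmod() call sets the owner, group, and other permissions on a file; report.txt is a placeholder name:
#include <stdio.h>
#include <sys/stat.h>
int main(void) {
    // Owner may read and write, group may read, others get no access;
    // "report.txt" is a placeholder file name
    if (chmod("report.txt", S_IRUSR | S_IWUSR | S_IRGRP) == -1) {
        perror("chmod");
        return 1;
    }
    printf("Permissions on report.txt set to rw-r-----\n");
    return 0;
}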
Encryption:
Consider putting your valuables in a safe for extra protection. Similarly, file encryption
scrambles the data content using an algorithm, making it unreadable without the decryption
key. This provides an additional layer of security, especially for sensitive data like financial
records or personal information.
Just like security cameras in a building, file system auditing and logging track users' access
attempts and file modifications. This provides valuable insights into user activity and can help
identify potential security breaches or unauthorized access attempts.
Security Considerations:
Implementing robust file system security means combining the mechanisms above (permissions, ownership, ACLs, encryption, and auditing) in proportion to the sensitivity of the data and the threats you expect.
Solid-State Drives (SSDs):
Imagine ditching your dusty record player for a sleek, lightning-fast MP3 player. That's the revolutionary leap SSDs bring to storage technology. Instead of clunky, spinning disks, SSDs rely on flash memory chips to store data, offering immense performance gains:
Lower power consumption: Reduced energy usage translates to longer battery life for laptops
and improved efficiency for servers.
Faster access times: With no moving parts to position, data can be read or written almost immediately, dramatically cutting boot and application load times.
Greater durability: The absence of spinning platters and moving heads gives better resistance to shock and vibration.
Internally, an SSD is built from several cooperating components:
• Flash Memory Chips: Store data electrically in interconnected cells, accessed directly
by the controller.
• Controller: The brain of the SSD, managing data flow, wear leveling, and error
correction.
• DRAM Cache: Temporary buffer for storing frequently accessed data, accelerating
performance.
• NAND Flash Types: Depending on the type (SLC, MLC, TLC, QLC), SSDs offer
varying levels of performance, endurance, and cost.
At its heart, storage virtualization abstracts the physical characteristics of individual storage
devices (HDDs, SSDs, etc.) and presents a logical view of a single, centralized storage pool.
This virtual pool masks the complexities of underlying hardware, offering users and
applications a seamless, unified storage experience.
The key components and techniques that make this possible include:
• Hardware: Specialized hardware like storage controllers and SAN fabrics facilitate
communication between servers and the virtual storage pool.
• Software: Virtualization software manages the pool, allocating storage space, handling
data movement, and ensuring data integrity.
• Data Replication: Copies of data are maintained on different physical devices for
redundancy and fault tolerance.
• Thin Provisioning: Virtual volumes are allocated dynamically, with logical capacity exceeding the physical capacity until actual data usage catches up (a toy sketch follows this list).
• Tiered Storage: Data is automatically distributed across storage tiers with varying
performance and cost characteristics based on access frequency.
• SAN (Storage Area Network): Dedicated high-speed network for block-level access to
shared storage resources.
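As a toy sketch of thin provisioning (illustrative only, with made-up block counts), a virtual volume can promise more logical blocks than physically exist and back them with real space only on first write:
#include <stdio.h>
#include <string.h>
#define LOGICAL_BLOCKS 8   // size promised to the user
#define PHYSICAL_BLOCKS 4  // space actually available right now
int map[LOGICAL_BLOCKS];   // logical block -> physical block, -1 = unbacked
int used = 0;              // physical blocks handed out so far
// Back a logical block with physical space only on first write
int write_block(int logical) {
    if (map[logical] == -1) {
        if (used == PHYSICAL_BLOCKS) {
            printf("Out of physical space! Time to add disks.\n");
            return -1;
        }
        map[logical] = used++;   // allocate on demand
    }
    printf("Logical block %d -> physical block %d\n", logical, map[logical]);
    return 0;
}
int main(void) {
    memset(map, -1, sizeof(map));  // all logical blocks start unbacked
    write_block(0);
    write_block(5);
    write_block(0);   // already backed: no new physical block consumed
    printf("Physical blocks in use: %d of %d\n", used, PHYSICAL_BLOCKS);
    return 0;
}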
Storage virtualization is a powerful tool that reshapes the data landscape. By understanding its core principles, benefits, and various types, you can leverage this technology to optimize your storage resources, improve IT efficiency, and unlock new possibilities for your data center.
Chapter-5 Distributed Operating System
DISTRIBUTED SYSTEMS
AND VIRTUALIZATION
Distributed operating systems and their challenges - Communication and synchronization in distributed
systems - Distributed file systems and naming - Virtual machines and hypervisors - Cloud computing
and virtualization technologies.
Peer-to-Peer System
The nodes play an important role in this system. Tasks are evenly distributed among the nodes, and the nodes can share data and resources with each other as needed. As with other distributed architectures, they communicate over a network.
The Peer-to-Peer System is known as a "Loosely Coupled System". This model suits computer network applications that contain a large number of processors which share neither memory nor a clock. Each processor has its own local memory, and processors interact with one another over a variety of communication media, such as telephone lines or high-speed buses.
Three-tier
The information about the client is saved in the intermediate tier rather than in the client, which
simplifies development. This type of architecture is most commonly used in online
applications.
N-tier
When a server or application has to transmit requests to other enterprise services on the
network, n-tier systems are used.
There are various features of the distributed operating system. Some of them are as follows:
Openness
It means that the system's services are openly exposed through well-defined interfaces. These interfaces specify only the syntax of a service: the type of function, its return type, its parameters, and so on. They are written using an Interface Definition Language (IDL).
Scalability
It refers to the fact that the system's efficiency should not degrade as new nodes are added. Ideally, a system with 1000 nodes should perform comparably to a system with 100 nodes.
Resource Sharing
Its most essential feature is that it allows users to share resources. They can also share resources
in a secure and controlled manner. Printers, files, data, storage, web pages, etc., are examples
of shared resources.
[99]
Advanced Operating System
Flexibility
A distributed OS gains flexibility from its modular design, which lets it deliver a more advanced range of high-level services. The quality and completeness of the kernel or microkernel simplify the implementation of such services.
Transparency
It is the most important feature of the distributed operating system. The primary purpose of a distributed operating system is to hide the fact that resources are shared. Transparency also implies that users should be unaware that the resources they are accessing are shared. Furthermore, the system should appear to the user as a single, independent unit.
Heterogeneity
The components of distributed systems may differ and vary in operating systems, networks,
programming languages, computer hardware, and implementations by different developers.
Fault Tolerance
Fault tolerance means that users can continue their work even when software or hardware components fail.
Challenges
There are various disadvantages of the distributed operating system. Some of them are
as follows:
• Scheduling: The system must decide which jobs to execute, when to execute them, and where. A scheduler has limitations, which can lead to underutilized hardware and unpredictable runtimes.
• Security: It is hard to implement adequate security in a DOS, since both the nodes and the connections between them must be secured.
• Data management: The database connected to a DOS is relatively complicated and hard to manage compared to a single-user system.
• Complexity: The underlying software is extremely complex and not as well understood as that of other systems.
• Latency: The more widely distributed a system is, the more communication latency can be expected. As a result, teams and developers must trade off availability, consistency, and latency.
• Cost: These systems aren't widely deployed because they are thought to be too expensive.
• Monitoring: Gathering, processing, presenting, and monitoring hardware usage metrics for big clusters can be a real challenge.
5.2 Communication and synchronization in distributed systems:
Distributed System is a collection of computers connected via a high-speed
communication network. In the distributed system, the hardware and software
components communicate and coordinate their actions by message passing.
Each node in distributed systems can share its resources with other nodes. So,
there is a need for proper allocation of resources to preserve the state of
resources and help coordinate between the several processes.
To resolve such conflicts, synchronization is used. Synchronization in
distributed systems is achieved via clocks. The physical clocks are used to adjust
the time of nodes.
Each node in the system can share its local time with other nodes in the system.
The time is set based on UTC (Coordinated Universal Time). UTC is used as a reference clock for the nodes in the system. Clock synchronization can be achieved in two ways: External and Internal Clock Synchronization.
Centralized is the approach in which a time server is used as a reference. The single time server propagates its time to the nodes, and all the nodes adjust their time accordingly. It depends on a single time server, so if that node fails, the whole system will lose synchronization. Examples of centralized algorithms are the Berkeley Algorithm, Passive Time Server, Active Time Server, etc.
Distributed is the approach in which no centralized time server is present. Instead, each node adjusts its time by taking the average of the differences between its local time and the times of other nodes. Distributed algorithms overcome issues of centralized algorithms such as scalability and the single point of failure. Examples of distributed algorithms are the Global Averaging Algorithm, the Localized Averaging Algorithm, NTP (Network Time Protocol), etc.
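As a toy sketch of the averaging idea behind Berkeley-style algorithms (the offsets are made-up example values, and a real implementation must also account for message delays), a coordinator averages the reported clock offsets and tells each node how much to adjust:
#include <stdio.h>
#define NUM_NODES 4
int main(void) {
    // Offsets (in ms) of each node's clock relative to the coordinator;
    // node 0 is the coordinator itself, so its offset is 0
    int offset[NUM_NODES] = {0, 25, -10, 40};
    // Coordinator averages all offsets, including its own
    int sum = 0;
    for (int i = 0; i < NUM_NODES; i++)
        sum += offset[i];
    int avg = sum / NUM_NODES;
    // Each node is told to adjust by (average - its own offset)
    for (int i = 0; i < NUM_NODES; i++)
        printf("Node %d adjusts clock by %d ms\n", i, avg - offset[i]);
    return 0;
}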
5.3 Distributed file systems and naming:
Names identify objects and resources in a distributed system, and their design involves several characteristics and trade-offs.
Naming Characteristics - Size:
• Fixed-size:
1. Examples: IP addresses, memory addresses, phone numbers (sort of).
2. Pros: Easier to handle.
3. Cons: Finite range.
• Infinite:
1. Example: Email addresses (sort of).
2. Pros: Allows for an indefinite range and lets name spaces be integrated.
3. Cons: Can be harder to deal with.
Naming Characteristics - Presentation:
• User-oriented: Formatted in a way that users can understand and use. Examples: 'google.com', 'print server'.
• Machine-oriented: Formatted for machines to process efficiently. Example: 216.58.198.110.
Naming Characteristics - Purity:
• A pure name has no internal structure; it can only be compared for equality with another name.
• An impure name has some structure that says something about the object. Examples: IP addresses (network and host IDs), absolute file paths (which give location), room numbers like 'S4.01'.
• An impure name can be physically or organisationally oriented.
• Physically-oriented: Some physical layout is encoded in the name (such as 'room S4.01').
• Organisationally-oriented: Some information about how objects and resources are organised is included (such as file paths).
Naming Characteristics - Scope:
• Global scope: The name alone identifies the object, wherever it is used. Examples: google.com, a book's ISBN.
• Namespace-specific: The same name may be used in another namespace for a different object. Example: 'room 4-12' has relevance in one building, but not worldwide.
Naming Characteristics - Context:
Consider paths within a naming hierarchy.
• Context-dependent
A path may resolve to a different identifier depending on the context.
Example
phone extension numbers.
• Absolute:
A path resolves to the same identifier, regardless of context.
Example – a full postal address.
VIRTUAL MACHINE:
• A virtual machine is designed to mimic the function of a physical computing device or
server using cloud-based parts known as virtual hardware devices.
• These virtual hardware devices mimic physical computing components to function as a
traditional computer and server setup.
• Using virtual RAM, a virtual desktop, and a cloud-based CPU, the virtual machine can
perform many of the tasks a traditional machine does without the need for as much
physical equipment to power it. The virtual machine is stored within a part of the host
computer separately from other resources. The VM software and any apps that run
together with it are known as guests.
• There can be two types of virtual machines. While process VMs only run a single
process at a time, system VMs work to replicate an entire operating system and any
required applications to give the same functions as a desktop. Several process VMs may
run at once within the system VM setup to make a virtual desktop setup possible.
• While many platforms exist to build and run a virtual machine, each with its own
features and tools, there are some common traits and characteristics that each
virtualization vendor possesses.
• These typically include the ability to build and run several virtual machines
simultaneously on the same host but keep them apart for the average user’s purposes.
• These machines are isolated so they run separately, with the users of one virtual
machine unable to access or view another running on the same host. This sandbox, as
it’s known, allows operating systems and apps to run independently.
• One of the best reasons for virtualization within an enterprise can be the ability to add
more machines to the same server easily. Deploying and managing a new machine is
much simpler when using virtual resources than physical ones, as the administrator
doesn’t need any new equipment.
• Resources can simply be reallocated to allow another virtual machine to run on the
same server. To create a new virtual machine, most administrators will use a hypervisor.
This allows for easy creation and management of multiple virtual machines as needed.
HYPERVISOR
A hypervisor is a mandatory component that makes it possible for the virtual machine
to run and is sometimes called a virtual machine monitor. A hypervisor’s main job is to
decouple the hardware powering the network from the operating system and other
software running on the virtual machine.
It isolates each virtual machine, allowing them to run separate operating systems
simultaneously and protecting them from potential security issues. It also allows each
virtual hardware setup to share the physical resources that power the machines that keep
them running by dynamically sharing resources like RAM and bandwidth.
With enough historical data gathered from the system, the software can begin to predict
how the need for resources changes through the day and identify patterns, automating
the movement of resources to and from the cloud to the VMs that need them. By setting
protocols for each operating system to follow, the hypervisor keeps the virtual machine
from crashing while this is happening. Proper allocation of resources keeps the virtual
machine running and prevents any issues with programs competing for resources like
RAM.
The hypervisor keeps each virtual machine separate so that if one crashes or experiences
a fatal error, the others continue to run without issue. Through sandboxing, users of a
VM can try using new apps or operating systems without fear of them crashing the
whole system. With cybercrime being a critical issue for enterprises, this also ensures
that if a malware attack is effective on one machine, the others will be protected from
the same fate. This separation between virtual machines enforced by the hypervisor
provides an extra layer of security by preventing one machine from causing an issue
with another one.
Three modules exist within the hypervisor. First, the dispatcher routes each virtual machine's requests to the allocator or the interpreter for execution. The allocator then responds to the dispatcher's commands by determining the resources needed and allocating them. Last, the interpreter module holds stored routines that are executed based on the allocator's commands (a toy sketch follows).
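As a toy sketch of this three-module structure (purely illustrative, not how any real hypervisor is written), a dispatcher routes each request to the allocator or the interpreter:
#include <stdio.h>
typedef enum { REQ_ALLOCATE, REQ_EXECUTE } RequestKind;
// Allocator: determines and grants the resources a VM asked for
void allocator(int vm_id, int amount) {
    printf("Allocator: granting %d MB to VM %d\n", amount, vm_id);
}
// Interpreter: runs a stored routine on behalf of a VM
void interpreter(int vm_id, int routine) {
    printf("Interpreter: running routine %d for VM %d\n", routine, vm_id);
}
// Dispatcher: routes each VM request to the right module
void dispatcher(int vm_id, RequestKind kind, int arg) {
    if (kind == REQ_ALLOCATE)
        allocator(vm_id, arg);
    else
        interpreter(vm_id, arg);
}
int main(void) {
    dispatcher(1, REQ_ALLOCATE, 512); // VM 1 asks for 512 MB
    dispatcher(2, REQ_EXECUTE, 7);    // VM 2 triggers stored routine 7
    return 0;
}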
• Type 1, also called native or bare metal. This type of hypervisor runs on the host
directly, using simple programming that doesn’t require its own operating system to
function. With a Type 1 hypervisor running, the host machine must be dedicated to this
task.
• Type 2, a hosted hypervisor. This software runs through an app instead of on the host
itself by using the host computer’s operating system to carry out its commands. A Type
2 hypervisor may function slower than a Type 1, as all of its commands must first pass through the host operating system.
• Host Machine: The machine on which the virtual machine is going to be built is
known as Host Machine.
• Guest Machine: The virtual machine is referred to as a Guest Machine.
Virtualization has a prominent impact on Cloud Computing. In cloud computing, users store data in the cloud, but with virtualization they gain the extra benefit of sharing the underlying infrastructure. Cloud vendors manage the required physical resources, but they charge significant amounts for these services, which affects every user and organization. Virtualization helps users and organizations obtain the services a company needs through external (third-party) providers, which helps reduce costs. This is how virtualization works in cloud computing.
Benefits of Virtualization
• More efficient utilization: Several virtual machines can share one physical host, so hardware is not left idle.
• Easier deployment and management: New machines can be created and resources reallocated without buying new equipment, as discussed above.
• Isolation: Each virtual machine runs in its own sandbox, so a crash or compromise in one does not spread to the others.
Drawbacks of Virtualization
• High Initial Investment: Clouds require a very high initial investment, although they do help reduce companies' costs over time.
• Learning New Infrastructure: As companies shift from servers to the cloud, they need highly skilled staff who can work with the cloud easily; this means hiring new staff or training current staff.
• Risk of Data: Hosting data on third-party resources can put that data at risk, since it has a greater chance of being attacked by a hacker or cracker.
Types of Virtualization
Application Virtualization
Network Virtualization
Desktop Virtualization
Storage Virtualization
Server Virtualization
Data virtualization
• Desktop Virtualization: The user's desktop environment runs on a central server rather than on the local machine and is accessed remotely. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates, and patches.
• Storage Virtualization: Storage virtualization is an array of servers managed by a virtual storage system. The servers aren't aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
• Server Virtualization: This is a kind of virtualization in which server resources are masked. The central (physical) server is divided into multiple virtual servers by changing the identity numbers and processors, so each virtual server can run its own operating system in an isolated manner, while each sub-server still knows the identity of the central server. This increases performance and reduces operating cost by redeploying main server resources as sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructure costs, and so on.
• Data Virtualization: Data is collected from various sources and managed in a single place, without users needing to know the technical details of how the data is collected, stored, and formatted. The data is arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services. Many big companies provide such services, like Oracle, IBM, Atscale, Cdata, etc.
Example Lab programs
UNIX Commands
Shell Programming
These demonstrations illustrate how each command works and what results you can expect
when using them in a UNIX-like operating system. Remember to replace placeholders like
source_file, destination_directory, and file_to_delete with actual file names or directory paths
when using these commands.
Shell Programming
Aim: To create a shell script that takes two numbers as input and outputs their sum.
Algorithm:
✓ Start
✓ Accept two numbers as input from the user.
✓ Add the two numbers together.
✓ Print the result.
✓ Stop
Program:
#!/bin/bash
# Prompt the user to enter the first number
echo "Enter the first number:"
read num1
# Prompt the user to enter the second number
echo "Enter the second number:"
read num2
# Add the two numbers together
sum=$((num1 + num2))
# Print the result
echo "The sum of $num1 and $num2 is: $sum"
Output:
Enter the first number:
5
Enter the second number:
7
The sum of 5 and 7 is: 12
Result: The shell script successfully takes two numbers as input from the user, calculates their
sum, and prints the result. This demonstrates a basic example of shell programming for
performing arithmetic operations.
Aim: Create a C program using UNIX system calls to demonstrate file I/O operations.
Algorithm:
✓ Start
✓ Open a file using the open() system call.
✓ Write some data to the file using the write() system call.
✓ Close the file descriptor.
✓ Open the file again for reading.
✓ Read the data from the file using the read() system call.
✓ Close the file descriptor.
✓ Print the data read from the file.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#define BUF_SIZE 1024
int main() {
    int fd;
    ssize_t num_read;
    char buffer[BUF_SIZE];
    // Open file for writing
    fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
    if (fd == -1) {
        perror("open");
        exit(EXIT_FAILURE);
    }
    // Write data to the file
    if (write(fd, "Hello, World!\n", 14) == -1) {
        perror("write");
        exit(EXIT_FAILURE);
    }
    // Close the file descriptor
    close(fd);
    // Open the file again for reading
    fd = open("example.txt", O_RDONLY);
    if (fd == -1) {
        perror("open");
        exit(EXIT_FAILURE);
    }
    // Read the data back from the file
    num_read = read(fd, buffer, BUF_SIZE);
    if (num_read == -1) {
        perror("read");
        exit(EXIT_FAILURE);
    }
    close(fd);
    // Print the data read from the file
    write(STDOUT_FILENO, buffer, num_read);
    return 0;
}
Output:
Hello, World!
Result: The program successfully demonstrates file I/O operations using UNIX system calls.
It opens a file, writes data to it, reads the data back, and prints it to the console.
Aim: To solve the Producer-Consumer problem using semaphores for process synchronization.
Algorithm:
✓ Start
✓ Initialize semaphores for empty and full buffer.
✓ Create producer and consumer processes.
✓ Implement producer function:
✓ Wait on empty semaphore (decrement).
✓ Produce an item and add it to the buffer.
✓ Signal full semaphore (increment).
✓ Implement consumer function:
✓ Wait on full semaphore (decrement).
✓ Consume an item from the buffer.
✓ Signal empty semaphore (increment).
✓ Repeat the producer and consumer processes.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>
#define BUFFER_SIZE 5
#define NUM_ITEMS 5
sem_t empty, full;
int buffer[BUFFER_SIZE];
int in = 0, out = 0;
void *producer(void *arg) {
    for (int item = 1; item <= NUM_ITEMS; item++) {
        // Produce item
        sleep(1);
        // Wait for an empty buffer slot
        sem_wait(&empty);
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        printf("Produced item: %d\n", item);
        // Signal that a slot is now full
        sem_post(&full);
    }
    return NULL;
}
void *consumer(void *arg) {
    for (int i = 0; i < NUM_ITEMS; i++) {
        // Wait for a full buffer slot
        sem_wait(&full);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        printf("Consumed item: %d\n", item);
        // Signal that a slot is now empty
        sem_post(&empty);
    }
    return NULL;
}
int main() {
    pthread_t producer_thread, consumer_thread;
    // Initialize semaphores: all slots start empty
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    // Create producer and consumer threads
    pthread_create(&producer_thread, NULL, producer, NULL);
    pthread_create(&consumer_thread, NULL, consumer, NULL);
    // Wait for both threads to finish
    pthread_join(producer_thread, NULL);
    pthread_join(consumer_thread, NULL);
    // Destroy semaphores
    sem_destroy(&empty);
    sem_destroy(&full);
    return 0;
}
Output:
Produced item: 1
Consumed item: 1
Produced item: 2
Consumed item: 2
Produced item: 3
Consumed item: 3
Produced item: 4
Consumed item: 4
Produced item: 5
Consumed item: 5
Result: The program successfully solves the Producer-Consumer problem using semaphores
for process synchronization. It ensures that the producer produces items only when the buffer
has space, and the consumer consumes items only when the buffer is not empty. This prevents
issues such as race conditions and buffer overflow/underflow.
Aim: Implement the Dining Philosophers problem using the Resource Allocation Graph
approach to avoid deadlock.
Algorithm:
✓ Start
✓ Each philosopher is represented as a thread.
✓ Use mutex locks to represent the forks. Each fork is initially available.
✓ When a philosopher wants to eat, they need to acquire both the left and right forks.
✓ Use a resource allocation graph to ensure that no philosopher holds a fork while
waiting for another fork.
✓ Implement a solution to prevent deadlock by ensuring that a philosopher only picks
up both forks if both are available.
✓ After eating, the philosopher releases both forks.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
#define NUM_PHILOSOPHERS 5
pthread_mutex_t forks[NUM_PHILOSOPHERS];
void *philosopher(void *arg) {
int id = *((int *) arg);
int left_fork = id;
int right_fork = (id + 1) % NUM_PHILOSOPHERS;
while (1) {
    // Thinking
    printf("Philosopher %d is thinking\n", id);
    sleep(1);
    // Pick up left fork
    pthread_mutex_lock(&forks[left_fork]);
    printf("Philosopher %d picks up left fork %d\n", id, left_fork);
    // Only keep the left fork if the right fork is also available;
    // otherwise put the left fork back and think again. Never holding
    // one fork while blocking on the other keeps the resource
    // allocation graph free of cycles, so deadlock cannot occur.
    if (pthread_mutex_trylock(&forks[right_fork]) != 0) {
        pthread_mutex_unlock(&forks[left_fork]);
        continue;
    }
    printf("Philosopher %d picks up right fork %d\n", id, right_fork);
    // Eating
    printf("Philosopher %d is eating\n", id);
    sleep(1);
    // Put down right fork
    pthread_mutex_unlock(&forks[right_fork]);
    printf("Philosopher %d puts down right fork %d\n", id, right_fork);
    // Put down left fork
    pthread_mutex_unlock(&forks[left_fork]);
    printf("Philosopher %d puts down left fork %d\n", id, left_fork);
}
}
int main() {
pthread_t philosophers[NUM_PHILOSOPHERS];
int philosopher_ids[NUM_PHILOSOPHERS];
// Initialize mutex locks for each fork
for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_mutex_init(&forks[i], NULL);
}
// Create philosopher threads
for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
philosopher_ids[i] = i;
pthread_create(&philosophers[i], NULL, philosopher, &philosopher_ids[i]);
}
// Join philosopher threads
for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_join(philosophers[i], NULL);
}
// Destroy mutex locks
for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_mutex_destroy(&forks[i]);
}
return 0;
}
Output:
Philosopher 0 is thinking
Philosopher 0 picks up left fork 0
Philosopher 0 picks up right fork 1
Philosopher 0 is eating
Philosopher 0 puts down right fork 1
Philosopher 0 puts down left fork 0
Philosopher 1 is thinking
Philosopher 1 picks up left fork 1
Philosopher 1 picks up right fork 2
Philosopher 1 is eating
Philosopher 1 puts down right fork 2
Philosopher 1 puts down left fork 1
Result: The program simulates the Dining Philosophers problem using mutex locks to
represent the forks and implements the Resource Allocation Graph approach to avoid deadlock.
Each philosopher successfully picks up both forks only if they are both available and releases
them after eating, preventing deadlock.
Aim: To demonstrate dynamic memory allocation and deallocation in C.
Algorithm:
✓ Start
✓ Read the size of the array from the user.
✓ Allocate memory for the array dynamically using malloc().
✓ Read the array elements from the user.
✓ Print the array elements.
✓ Free the dynamically allocated memory using free().
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
int main() {
    int n;
    // Read the desired array size
    printf("Enter the size of the array: ");
    scanf("%d", &n);
    // Allocate memory for n integers
    int *arr = (int *)malloc(n * sizeof(int));
    if (arr == NULL) {
        printf("Memory allocation failed\n");
        return 1;
    }
    // Read the array elements
    printf("Enter %d elements:\n", n);
    for (int i = 0; i < n; i++) {
        scanf("%d", &arr[i]);
    }
    // Print the array elements
    printf("Array elements are:\n");
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
    // Free the dynamically allocated memory
    free(arr);
    return 0;
}
Output:
Enter the size of the array: 5
Enter 5 elements:
1
2
3
4
5
Array elements are:
1 2 3 4 5
Result: The program demonstrates dynamic memory allocation and deallocation in C. It asks the
user for the size of an array, allocates memory for the array dynamically, initializes the array
elements, prints them, and finally deallocates the dynamically allocated memory to prevent
memory leaks.
Aim: Implement the Least Recently Used (LRU) page replacement algorithm.
Algorithm:
✓ Start
✓ Maintain a data structure, such as a queue or a list, to keep track of the pages currently in
memory and their access history.
✓ When a page needs to be replaced:
✓ Identify the page that has not been accessed for the longest time (least recently used).
✓ Remove that page from memory and replace it with the new page.
✓ Update the access history of pages whenever a page is accessed.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#define MAX_FRAMES 3
typedef struct Node {
int data;
struct Node *next;
} Node;
Node *head = NULL;
int frame_count = 0;
void display() {
Node *temp = head;
while (temp != NULL) {
printf("%d ", temp->data);
temp = temp->next;
}
printf("\n");
}
void add_page(int page) {
if (frame_count < MAX_FRAMES) {
// Add page to an empty frame
Node *new_node = (Node *)malloc(sizeof(Node));
new_node->data = page;
new_node->next = head;
head = new_node;
frame_count++;
} else {
// Replace the least recently used page
Node *temp = head;
Node *prev = NULL;
while (temp->next != NULL) {
prev = temp;
temp = temp->next;
}
prev->next = NULL;
free(temp);
// Add the new page to the front
Node *new_node = (Node *)malloc(sizeof(Node));
new_node->data = page;
new_node->next = head;
head = new_node;
}
}
void access_page(int page) {
Node *temp = head;
Node *prev = NULL;
while (temp != NULL && temp->data != page) {
prev = temp;
temp = temp->next;
}
if (temp != NULL) {
// Move accessed page to the front
if (prev != NULL) {
prev->next = temp->next;
temp->next = head;
head = temp;
}
} else {
// Page fault: Page not found in memory
printf("Page %d not found in memory. Adding page...\n", page);
add_page(page);
}
}
int main() {
// Access pages
access_page(1);
display();
access_page(2);
display();
access_page(3);
display();
access_page(4);
display();
access_page(2);
display();
access_page(1);
display();
return 0;
}
Output:
Page 1 not found in memory. Adding page...
1
Page 2 not found in memory. Adding page...
2 1
Page 3 not found in memory. Adding page...
3 2 1
Page 4 not found in memory. Adding page...
4 3 2
2 4 3
Page 1 not found in memory. Adding page...
1 2 4
Result: The program implements the LRU page replacement algorithm with a linked list: each accessed page moves to the front of the list, and when memory is full the page at the tail (the least recently used) is evicted.

Aim: To implement the SCAN (elevator) disk scheduling algorithm.
Algorithm:
✓ Start
✓ Read the disk request queue, the starting head position, and the number of cylinders.
✓ Sort the requests in ascending order.
✓ Move the head in one direction, servicing every request along the way, until the last cylinder is reached.
✓ Reverse direction and service the remaining requests.
✓ Compute and print the total head movement.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#define MAX_REQUESTS 100
void sort_requests(int n, int requests[]) {
// Bubble sort algorithm to sort disk requests in ascending order
for (int i = 0; i < n - 1; i++) {
for (int j = 0; j < n - i - 1; j++) {
if (requests[j] > requests[j + 1]) {
// Swap elements
int temp = requests[j];
requests[j] = requests[j + 1];
requests[j + 1] = temp;
}
}
}
}
void scan(int n, int requests[], int start, int max_cylinder) {
int total_movement = 0;
int current_cylinder = start;
int direction = 1; // 1: Right, -1: Left
// Sort requests
sort_requests(n, requests);
    // Find the index of the first request at or beyond the start position
    int idx = 0;
    while (idx < n && requests[idx] < start)
        idx++;
    // Sweep right, servicing every request up to the largest one
    for (int i = idx; i < n; i++) {
        total_movement += requests[i] - current_cylinder;
        current_cylinder = requests[i];
        printf("Servicing request at cylinder %d\n", current_cylinder);
    }
    // If requests remain on the left, go to the disk edge and reverse
    if (idx > 0) {
        total_movement += (max_cylinder - 1) - current_cylinder;
        current_cylinder = max_cylinder - 1;
        direction = -1;
        for (int i = idx - 1; i >= 0; i--) {
            total_movement += current_cylinder - requests[i];
            current_cylinder = requests[i];
            printf("Servicing request at cylinder %d\n", current_cylinder);
        }
    }
    printf("Total head movement: %d cylinders\n", total_movement);
}
int main() {
    // Example request queue, starting head position, and total cylinders
    int requests[MAX_REQUESTS] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = 8;
    scan(n, requests, 53, 200);
    return 0;
}
Output:
Servicing request at cylinder 65
Servicing request at cylinder 67
Servicing request at cylinder 98
Servicing request at cylinder 122
Servicing request at cylinder 124
Servicing request at cylinder 183
Servicing request at cylinder 37
Servicing request at cylinder 14
Total head movement: 331 cylinders
Result: The program implements the SCAN (elevator) disk scheduling algorithm. The head services all requests in one direction, reverses at the disk edge, and services the remaining requests, reducing back-and-forth head movement.
ADVANCED OPERATING SYSTEMS
Dr. A. Karunamurthy, MCA., M.Tech., Ph.D., is an Associate Professor at Sri Manakula Vinayagar Engineering College (Autonomous), Pondicherry. He received his Ph.D. from Bharathiar University, Coimbatore, and his M.Tech. from Manonmaniam Sundaranar University, Tirunelveli. He pursued his P.G. at Pondicherry University. He is an academician in Computer Science and Engineering with more than a decade of accomplished teaching experience. He has published over 37 articles in refereed National and International journals and has participated in numerous conferences and workshops on academic platforms. He is an active member of ISTE, CSI, IAENG & IAEPT. Furthermore, he pursued postdoctoral research at the Singapore Institute of Technology (SIT).
₹ 600