
ADVANCED OPERATING SYSTEMS

Dr. A. Karunamurthy
Mr. R. Ramakrishnan
Mr. V. Udhayakumar
Mr. P. Rajapandian
ADVANCED OPERATING SYSTEMS

Authors
DR. A. KARUNAMURTHY
MR. R. RAMAKRISHNAN
MR. V. UDHAYAKUMAR
MR. P. RAJAPANDIAN

Published by

CHENDUR PUBLISHING HOUSE


(SB RESEARCH DEVELOPMENT AND INNOVATION TEAM)
Chendur Publishing House is an authorized organization certified by the
Government of India.
UDYAM REGISTRATION NUMBER: UDYAM-TN-34-0004497
GSTIN: 33APLPB3691N2ZK
Address: No-6, Parasakthi Nagar, Camproad, Selaiyur, Chennai-600073,
Tamil Nadu, India.
Ph: 8838702557
Website: https://chendurph.com
Email ID: chendurpublishinghouse@gmail.com

Copyright © 2024 by CHENDUR PUBLISHING HOUSE
All Rights Reserved
First Edition: 2024
M.R.P: Rs. 600
International Standard Book Number (ISBN): 978-81-971220-2-6
The International Standard Book Number, commonly known as ISBN, is a
unique numeric identifier assigned to every commercially published book.
Serving as a universal method of identifying books, ISBNs are crucial for
book distribution, inventory management, and sales tracking worldwide.
Consisting of either 10 or 13 digits, ISBNs typically denote specific elements
such as the book's edition, publisher, and format. They facilitate efficient
cataloguing and accessibility across libraries, bookstores, and online
retailers, enabling streamlined book searches and purchases by providing a
standardized identification system for books across diverse markets globally.
Trademark Notice:
A trademark notice is a symbol or phrase used to assert ownership and alert
the public about the legal protection of a trademark. It typically consists of
the symbols ™ (for an unregistered trademark) or ® (for a registered
trademark) placed next to the trademarked name, logo, or slogan. The use of
these notices serves as a cautionary measure to inform others that the mark
is either actively claimed as a trademark (with ™) or officially registered with
the appropriate government authority (with ®). Including these notices helps
establish rights, discourage unauthorized use, and reinforce the owner's claim
to the mark, contributing to safeguarding the brand's identity and value while
deterring potential infringement.
Chendur Publishing House also publishes its books in a variety of
electronic formats. Some content that appears in print may not be available
in electronic formats. For more information visit our publication website:
www.chendurph.com

PREFACE
Welcome to the enlightening journey through the realm of
Advanced Operating Systems! This syllabus is meticulously
crafted to provide a comprehensive understanding of the intricate
workings and advanced concepts that underpin modern operating
systems.
Chapter 1 serves as the gateway to this exploration, offering an in-
depth Introduction to Operating Systems. Here, you will embark
on a voyage to uncover the definition and purpose of operating
systems, unravel the multifaceted functions and components that
constitute their essence, and trace the fascinating history and
evolution of these pivotal systems. Moreover, you will delve into
the diverse types of operating systems, from real-time to batch
processing, multi-user to embedded, gaining insights into their
unique characteristics and applications. The chapter culminates
with an exploration of various operating system architectures,
shedding light on the underlying structures that enable efficient
resource management and system operation.
In Chapter 2, we delve into the intricacies of Process and Thread
Management. You will study the foundational concepts of
processes and their control blocks, and explore a myriad of
process scheduling algorithms, including FCFS, SJF, and Round
Robin, among others. Multithreading and thread management take
center stage as you uncover techniques for optimizing resource
utilization and enhancing system responsiveness. Additionally,
you will unravel the complexities of thread synchronization,
communication, and inter-process communication mechanisms,
essential for building robust and efficient concurrent systems.

Memory Management takes center stage in Chapter 3, where you
will embark on a deep dive into the intricacies of virtual memory,
address translation, paging, segmentation, and memory allocation
techniques. With a focus on memory protection, sharing, and
management in multiprocessor systems, you will gain insights
into the challenges and strategies involved in efficiently managing
memory resources in modern computing environments.
Chapter 4 delves into the critical domain of File Systems and
Storage. Here, you will explore fundamental concepts of file
system organization, implementation, and data structures. From
disk management and storage technologies to file system security
and access control, you will gain a comprehensive understanding
of the complexities involved in managing and securing data
storage. Additionally, you will be introduced to emerging storage
technologies such as solid-state drives (SSDs) and storage
virtualization, paving the way for future advancements in storage
systems.
The final chapter, Chapter 5, navigates the dynamic landscape of
Distributed Systems and Virtualization. You will unravel the
challenges and opportunities presented by distributed operating
systems, exploring communication and synchronization
mechanisms, distributed file systems, and virtualization
technologies. From virtual machines and hypervisors to cloud
computing and virtualization technologies, you will gain insights
into the transformative impact of distributed systems and
virtualization on modern computing infrastructures.
This syllabus aims to provide you with a comprehensive
understanding of Advanced Operating Systems, equipping you
with the knowledge and skills to navigate the complexities of
modern computing environments. Whether you are a student,

researcher, or practitioner in the field, we invite you to embark on
this enlightening journey and explore the fascinating world of
Advanced Operating Systems.
Authors
Dr. A. Karunamurthy
Mr. R. Ramakrishnan
Mr. V. Udhayakumar
Mr. P. Rajapandian

ACKNOWLEDGEMENT
I wish to express my sincere gratitude to
Dr. V.S.K. Venkatachalapathy, our esteemed Director cum
Principal at Sri Manakula Vinayagar Engineering College. His
unwavering support was instrumental in the successful
completion of this book.
I extend my thanks to all the faculty members of the Department
of MCA at Sri Manakula Vinayagar Engineering College for the
valuable information they provided in their respective fields.
I would also like to express my gratitude to the management,
staff, and non-teaching staff for their significant contributions to
this book.
Special thanks go to my parents and friends for their unwavering
support, motivation, and encouragement throughout the process
of completing this book and the associated course.
Finally, I express my sincere thanks to the Almighty for the
successful completion of this book.

Authors
Dr. A. Karunamurthy
Mr. R. Ramakrishnan
Mr. V. Udhayakumar
Mr. P. Rajapandian

CONTENTS
Chapter-1. Introduction to Operating Systems
1.1 Definition and Purpose of an Operating System 2
1.2 Functions and Components of an Operating System 2
1.3 History and Evolution of Operating Systems 17
1.4 Types of Operating Systems 20
1.4.1 Real-time Operating Systems 21
1.4.2 Batch Operating Systems 23
1.4.3 Multi-user Operating Systems 25
1.4.4 Others 28
1.5 Operating System Architectures 33
Chapter-2 Process and Thread Management
2.1 Process Concept and Process Control Block 45
2.2 Process Scheduling Algorithms 48
2.2.1 First Come, First Served (FCFS) 49
2.2.2 Shortest Job First (SJF) 51
2.2.3 Round Robin 53
2.2.4 Others 54
2.3 Multithreading and Thread Management 56
2.4 Thread Synchronization and Communication 63
2.5 Interprocess Communication (IPC) Mechanisms 65
Chapter-3 Memory Management
3.1 Virtual Memory and Address Translation 70
3.2 Paging and Segmentation 72
3.3 Memory Allocation Techniques 75

3.3.1 Buddy System 75
3.3.2 Slab Allocation 76
3.3.3 Others 76
3.4 Memory Protection and Sharing 77
3.5 Memory Management in Multiprocessor Systems 78
Chapter-4 File Systems and Storage
4.1 File System Concepts and Organization 83
4.2 File System Implementation and Data Structures 84
4.3 Disk Management and Storage Technologies 86
4.4 File System Security and Access Control 91
4.5 Introduction to Solid-State Drives (SSDs) and Storage Virtualization 93
Chapter-5 Distributed Systems and Virtualization
5.1 Distributed Operating Systems and Their Challenges 97
5.2 Communication and Synchronization in Distributed Systems 101
5.3 Distributed File Systems and Naming 102
5.4 Virtual Machines and Hypervisors 104
5.5 Cloud Computing and Virtualization Technologies 106

SYLLABUS
Chapter-1 Introduction to Operating Systems
Definition and purpose of an operating system - Functions and components
of an operating system - History and evolution of operating systems - Types
of operating systems (real-time, batch, multi-user, etc.) - Operating system
architectures
Chapter-2: Process and Thread Management
Process concept and process control block - Process scheduling algorithms
(e.g., FCFS, SJF, Round Robin) - Multithreading and thread management -
Thread synchronization and communication - Interprocess communication
(IPC) mechanisms
Chapter-3: Memory Management
Virtual memory and address translation - Paging and segmentation - Memory
allocation techniques (e.g., buddy system, slab allocation) - Memory
protection and sharing - Memory management in multiprocessor systems.
Chapter-4: File Systems and Storage
File system concepts and organization - File system implementation and data
structures - Disk management and storage technologies - File system security
and access control - Introduction to solid-state drives (SSDs) and storage
virtualization
Chapter-5: Distributed Systems and Virtualization
Distributed operating systems and their challenges - Communication and
synchronization in distributed systems - Distributed file systems and naming
- Virtual machines and hypervisors - Cloud computing and virtualization
technologies.

INTRODUCTION TO OPERATING SYSTEMS

Definition and purpose of an operating system - Functions and
components of an operating system - History and evolution of operating
systems - Types of operating systems (real-time, batch, multi-user, etc.) -
Operating system architectures

1.1 Definition:

An operating system lies in the category of system software. It basically
manages all the resources of the computer. An operating system acts as an
interface between the software and the different parts of the computer, or the
computer hardware. The operating system is designed in such a way that
it can manage the overall resources and operations of the computer.

An operating system is a fully integrated set of specialized programs that
handle all the operations of the computer. It controls and monitors the
execution of all other programs that reside in the computer, which
includes application programs and other system software. Examples of
operating systems are Windows, Linux, macOS, etc.

An Operating System (OS) is a collection of software that manages
computer hardware resources and provides common services for computer
programs. The operating system is the most important type of system
software in a computer system.


Purpose of Operating System:

The operating system helps in improving the use of both the computer's
software and hardware. Without an OS, it would be very difficult for any
application to be user-friendly. The operating system provides the user with
an interface that makes any application attractive and user-friendly. The
operating system comes with a large number of device drivers that make OS
services reachable to the hardware environment. Each and every application
present in the system requires the operating system. The operating system
works as a communication channel between system hardware and system
software, and it helps an application use the hardware without the application
knowing about the actual hardware configuration. It is one of the most
important parts of the system, and hence it is present in every device,
whether large or small.

1.2 Functions of the Operating System

• Resource Management: The operating system manages and
allocates memory, CPU time, and other hardware resources among
the various programs and processes running on the computer.
• Process Management: The operating system is responsible for
starting, stopping, and managing processes and programs. It also
controls the scheduling of processes and allocates resources to them.
• Memory Management: The operating system manages the
computer’s primary memory and provides mechanisms for
optimizing memory usage.


• Security: The operating system provides a secure environment for
the user, applications, and data by implementing security policies and
mechanisms such as access controls and encryption.
• Job Accounting: It keeps track of time and resources used by various
jobs or users.
• File Management: The operating system is responsible for
organizing and managing the file system, including the creation,
deletion, and manipulation of files and directories.
• Device Management: The operating system manages input/output
devices such as printers, keyboards, mice, and displays. It provides
the necessary drivers and interfaces to enable communication
between the devices and the computer.
• Networking: The operating system provides networking capabilities
such as establishing and managing network connections, handling
network protocols, and sharing resources such as printers and files
over a network.
• User Interface: The operating system provides a user interface that
enables users to interact with the computer system. This can be a
Graphical User Interface (GUI), a Command-Line Interface (CLI), or
a combination of both.
• Backup and Recovery: The operating system provides mechanisms
for backing up data and recovering it in case of system failures, errors,
or disasters.
• Virtualization: The operating system provides virtualization
capabilities that allow multiple operating systems or applications to
run on a single physical machine. This can enable efficient use of
resources and flexibility in managing workloads.
• Performance Monitoring: The operating system provides tools for
monitoring and optimizing system performance, including
identifying bottlenecks, optimizing resource usage, and analysing
system logs and metrics.
• Time-Sharing: The operating system enables multiple users to share
a computer system and its resources simultaneously by providing
time-sharing mechanisms that allocate resources fairly and
efficiently.
• System Calls: The operating system provides a set of system calls
that enable applications to interact with the operating system and
access its resources. System calls provide a standardized interface
between applications and the operating system, enabling portability
and compatibility across different hardware and software platforms
(a minimal example follows this list).
• Error-detecting Aids: These contain methods that include the
production of dumps, traces, error messages, and other debugging and
error-detecting methods.
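
As a concrete illustration of system calls, the following C sketch performs
file I/O entirely through POSIX system calls (open, read, write, close). It is
a minimal example, assuming a POSIX system and an existing file named
example.txt; it is not drawn from the book itself.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("example.txt", O_RDONLY);      /* system call: open  */
        if (fd < 0) { perror("open"); return 1; }

        char buf[128];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)  /* system call: read  */
            write(STDOUT_FILENO, buf, (size_t)n);    /* system call: write */

        close(fd);                                   /* system call: close */
        return 0;
    }

Each call crosses from user mode into the kernel, which performs the
privileged work and returns the result to the application.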

Objectives of Operating Systems

• Convenient to Use: One of the objectives is to make the computer
system more convenient to use in an efficient manner.
• User Friendly: To make the computer system more interactive with
a more convenient interface for the users.
• Easy Access: To provide easy access to users for using resources by
acting as an intermediary between the hardware and its users.


• Management of Resources: For managing the resources of a
computer in a better and faster way.
• Controls and Monitoring: By keeping track of who is using which
resource, granting resource requests, and mediating conflicting
requests from different programs and users.
• Fair Sharing of Resources: Providing efficient and fair sharing of
resources between the users and programs.

Types of Operating Systems

• Batch Operating System: A Batch Operating System is a type of
operating system that does not interact with the computer directly.
There is an operator who takes similar jobs having the same
requirements and groups them into batches.
• Time-sharing Operating System: Time-sharing Operating System
is a type of operating system that allows many users to share computer
resources (maximum utilization of the resources).
• Distributed Operating System: Distributed Operating System is a
type of operating system that manages a group of different computers
and makes them appear to be a single computer. These operating
systems are designed to operate on a network of computers. They allow
multiple users to access shared resources and communicate with each
other over the network. Examples include Microsoft Windows Server
and various distributions of Linux designed for servers.
• Network Operating System: Network Operating System is a type of
operating system that runs on a server and provides the capability to
manage data, users, groups, security, applications, and other
networking functions.
• Real-time Operating System: Real-time Operating System is a type
of operating system that serves a real-time system and the time
interval required to process and respond to inputs is very small. These
operating systems are designed to respond to events in real time. They
are used in applications that require quick and deterministic
responses, such as embedded systems, industrial control systems, and
robotics.
• Multiprocessing Operating System: Multiprocessing Operating
Systems use multiple CPUs within a single computer system to boost
performance. The CPUs are linked together so that a job can be
divided among them and executed more quickly.
• Single-User Operating Systems: Single-User Operating Systems
are designed to support a single user at a time. Examples include
Microsoft Windows for personal computers and Apple macOS.
• Multi-User Operating Systems: Multi-User Operating Systems are
designed to support multiple users simultaneously. Examples include
Linux and Unix.
• Embedded Operating Systems: Embedded Operating Systems are
designed to run on devices with limited resources, such as
smartphones, wearable devices, and household appliances. Examples
include Google’s Android and Apple’s iOS.


• Cluster Operating Systems: Cluster Operating Systems are
designed to run on a group of computers, or a cluster, to work together
as a single system. They are used for high-performance computing
and for applications that require high availability and reliability.
Examples include Rocks Cluster Distribution and OpenMPI.

Components of Operating System

• An operating system is a large and complex system that can only be
created by partitioning it into small pieces. Each piece should be a well-
defined part of the system, with carefully defined inputs, outputs, and
functions.
• Although Windows, Mac, UNIX, Linux, and other OS do not have
the same structure, most operating systems share similar OS system
components, such as file, memory, process, I/O device management.
• The components of an operating system play a key role to make a
variety of computer system parts work together.
• There are the following components of an operating system, such as:
1. Process Management
2. File Management
3. Network Management
4. Main Memory Management
5. Secondary Storage Management
6. I/O Device Management
7. Security Management
8. Command Interpreter System


Operating system components also help ensure correct computation by
detecting CPU and memory hardware errors.

Fig. 1.1: Operating System Components

1. Process Management

The process management component is a procedure for managing the
many processes running simultaneously on the operating system. Every
running software application program has one or more processes
associated with it.

For example, when you use a browser like Chrome, there is a
process running for that browser program.

Process management keeps processes running efficiently. It also manages
the memory allocated to them and shuts them down when needed. The
execution of a process must be sequential, so at most one instruction
is executed on behalf of the process at any time.


Fig. 1.2: Operating System Process Lifecycle

Functions of process management

Here are the functions of process management in the operating
system:

• Process creation and deletion.
• Process suspension and resumption.
• Process synchronization.
• Process communication.
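
To make process creation concrete, here is a minimal C sketch (an
illustration, not from the book) using the POSIX fork() and waitpid()
calls: the parent creates a child process, the child runs and exits, and the
parent synchronizes by waiting for it.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                  /* process creation */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                      /* child process */
            printf("child: pid=%d\n", (int)getpid());
            _exit(0);                        /* process deletion: child exits */
        }
        waitpid(pid, NULL, 0);               /* synchronization: parent waits */
        printf("parent: child %d finished\n", (int)pid);
        return 0;
    }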
2. File Management

A file is a set of related information defined by its creator. It
commonly represents programs (both source and object forms) and data.
Data files can be alphabetic, numeric, or alphanumeric.


Fig. 1.3: Process state transition

Function of file management

The operating system has the following important activities in connection
with file management:

• File and directory creation and deletion.
• Manipulation of files and directories.
• Mapping files onto secondary storage.
• Backing up files on stable storage media.
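
The sketch below (an illustration, not from the book; the file name
notes.txt is arbitrary) shows file creation, manipulation, and deletion
through the standard C library, which relies in turn on the OS's
file-management services.

    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("notes.txt", "w");   /* file creation */
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "hello, file system\n");  /* file manipulation (write) */
        fclose(f);

        if (remove("notes.txt") != 0) {      /* file deletion */
            perror("remove");
            return 1;
        }
        return 0;
    }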

3. Network Management

Network management is the process of administering and managing
computer networks. It includes performance management, provisioning
of networks, fault analysis, and maintaining the quality of service.


Fig. 1.4: Peer-to-peer (P2P) network.

A distributed system is a collection of computers or processors that do
not share memory or a clock. In this type of system, all the
processors have their own local memory, and the processors communicate
with each other over different communication media, such as fibre
optics or telephone lines.

The computers in the network are connected through a
communication network, which can be configured in many different ways.
The network can be fully or partially connected, and network management
helps users design routing and connection strategies that overcome
connection and security issues.

Functions of Network management

Network management provides the following functions:

• Distributed systems help you combine computing resources that vary
in size and function. They may involve minicomputers,
microprocessors, and many general-purpose computer systems.
• A distributed system also offers the user access to the various
resources the network shares.


• It helps access shared resources, which speeds up computation and
offers data availability and reliability.
4. Main Memory Management

Main memory is a large array of words or bytes, each with its own
address. Memory management is conducted through a
sequence of reads and writes to specific memory addresses. A program
must be mapped to absolute addresses and loaded into memory in order
to execute. The selection of a memory management method depends on
several factors.

However, it is mainly based on the hardware design of the system.
Each algorithm requires corresponding hardware support. Main memory
offers fast storage that can be accessed directly by the CPU. It is costly
and hence has a lower storage capacity. However, for a program to be
executed, it must be in the main memory.

Fig. 1.5: Memory Management with Swapping


Functions of Memory management

An operating system performs the following functions for memory
management:

• It helps you keep track of primary memory.
• It determines which parts of memory are in use and by whom, and
which parts are free.
• In a multiprogramming system, the OS decides which process
will get memory and how much.
• It allocates memory when a process requests it.
• It also de-allocates memory when a process no longer
requires it or has terminated.
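
As a small illustration (not from the book), the C program below requests
memory from the allocator at run time and releases it when done; behind
the scenes, the OS grants and reclaims the underlying memory.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Request memory at run time (allocation on demand). */
        int *arr = malloc(1000 * sizeof *arr);
        if (arr == NULL) {
            perror("malloc");
            return 1;
        }
        for (int i = 0; i < 1000; i++)
            arr[i] = i;
        printf("arr[999] = %d\n", arr[999]);

        /* Release it when no longer required (de-allocation). */
        free(arr);
        return 0;
    }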
5. Secondary-Storage Management
The most important task of a computer system is to execute programs.
These programs access data from main memory during execution.
Main memory is too small to store all data and programs
permanently, so the computer system offers secondary storage to back
up main memory.

Fig. 1.6: Computer System Model


Modern computers use hard drives or SSDs as the primary storage
for both programs and data. However, secondary storage management
also covers storage devices such as USB flash drives and CD/DVD
drives. Programs like assemblers and compilers are stored on the disk
until loaded into memory, and the disk is then used as the source
and destination of processing.

Functions of Secondary storage management

Here are some major functions of secondary storage management in the
operating system:

• Storage allocation
• Free space management
• Disk scheduling

6. I/O Device Management

One of the important roles of an operating system is to hide the
peculiarities of specific hardware devices from the user.

Fig. 1.7: Von Neumann Architecture


Functions of I/O management

The I/O management system offers the following functions:

• It offers a buffer-caching system.
• It provides general device-driver code.
• It provides drivers for particular hardware devices.
• Only the device driver knows the individual peculiarities of the
specific device to which it is assigned.
7. Security Management

The various processes in an operating system need to be protected from
one another's activities. Various mechanisms therefore ensure that
processes may use files, memory, the CPU, and other hardware
resources only with proper authorization from the operating system.

Security refers to a mechanism for controlling the access of programs,
processes, or users to the resources defined by the computer system,
together with some means of enforcement.

Fig. 1.8: Network security server


For example, memory-addressing hardware ensures that a process
can execute only within its own address space. The timer ensures
that no process can retain control of the CPU without eventually
relinquishing it. Finally, no process is allowed to perform its own I/O
directly, which helps to preserve the integrity of the various peripheral
devices.

Security can improve reliability by detecting latent errors at the
interfaces between component subsystems. Early detection of interface
errors can prevent the contamination of a healthy subsystem by a
malfunctioning one. An unprotected resource cannot be defended
against misuse by an unauthorized or incompetent user.
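
A small OS-level example of such an authorization mechanism (an
illustration assuming a POSIX system; secret.txt is a hypothetical file) is
restricting a file's permission bits so that only its owner may read or
write it:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        /* Owner may read and write; group and others get no access. */
        if (chmod("secret.txt", S_IRUSR | S_IWUSR) != 0) {
            perror("chmod");
            return 1;
        }
        return 0;
    }

Afterwards, any access attempt by another (non-privileged) user is
refused by the kernel's file-system permission check.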

8. Command Interpreter System

One of the most important components of an operating system is its
command interpreter. The command interpreter is the primary interface
between the user and the rest of the system.

Fig. 1.9: Decision-making process

Many commands are given to the operating system by control
statements. A program that reads and interprets control statements is
automatically executed when a new job is started in a batch system or a
user logs in to a time-shared system.

This program is variously called:

• the control-card interpreter,
• the command-line interpreter,
• the shell (in UNIX), and so on.

Its function is quite simple: get the next command statement and
execute it. The command statements deal with process management, I/O
handling, secondary storage management, main memory management,
file system access, protection, and networking. A minimal sketch of such
an interpreter follows.
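
The following C sketch (illustrative, not from the book) captures that
loop: read a command, create a process to run it, and wait for it to
finish. For simplicity it handles single-word commands only.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char line[256];
        for (;;) {
            printf("mysh> ");                            /* prompt */
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                                   /* EOF ends the shell */
            line[strcspn(line, "\n")] = '\0';
            if (line[0] == '\0')
                continue;
            if (strcmp(line, "exit") == 0)
                break;

            pid_t pid = fork();
            if (pid == 0) {
                execlp(line, line, (char *)NULL);        /* run the command */
                perror("exec");                          /* reached only on error */
                _exit(127);
            } else if (pid > 0) {
                waitpid(pid, NULL, 0);                   /* wait for completion */
            } else {
                perror("fork");
            }
        }
        return 0;
    }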

1.3 History and Evolution of Operating Systems


An operating system is a type of software that acts as an interface
between the user and the hardware. It is responsible for handling various
critical functions of the computer or any other machine. Tasks
handled by the OS include file management, task management, garbage
management, memory management, process management, disk
management, I/O management, peripherals management, etc.

Evolution of Operating Systems


Operating systems have evolved over the years, going through several
changes before reaching their current form. These changes in the operating
system are known as the evolution of operating systems. The OS improves
with the invention of new technology: it absorbs the features of each new
technology and thereby becomes more powerful. Let us see the evolution of
operating systems year by year in detail:


• No OS (up to the 1940s):
o Before the 1940s, there was no operating system.
Lacking an OS, people had to manually type the instructions
for each task in machine language (a language of 0s and 1s).
At that time it was very hard for users to accomplish even a
simple task, and it was very time consuming and not
user-friendly, because machine language requires a deep level
of understanding that not everyone had.
• Batch Processing Systems (1940s to 1950s):
o In time, batch processing systems came onto the
market. Users now wrote their programs on punch cards and
handed them to the computer operator. The operator grouped
jobs of similar type into batches and served the batches
(groups of jobs) one by one to the CPU. The CPU executed
the jobs of one batch and then jumped to the jobs of the next
batch in sequence.
• Multiprogramming Systems (1950s to 1960s):
o Multiprogramming was the operating system where the
actual revolution began. It gave users the facility to load
multiple programs into memory, with a specific portion of
memory given to each program. When one program is
waiting for an I/O operation (which takes much time), the
OS permits the CPU to switch from that program to another
program (the first in the ready queue), so program execution
continues without interruption.
• Time-Sharing Systems (1960s to 1970s):
o Time-sharing systems are an extended version of
multiprogramming systems. One extra feature was added: to
prevent any single program from using the CPU for a long
time, every program is given access to the CPU after a certain
interval of time. The OS switches from one program to
another after a certain interval of time so that every
program can get access to the CPU and complete its work.
• Introduction of GUI (1970s to 1980s):
o With time, Graphical User Interfaces (GUIs) arrived. For the
first time the OS became more user-friendly and changed
the way people interact with computers. A GUI provides the
computer system with visual elements that make the user's
interaction with the computer more comfortable and user-
friendly. Users can simply click on visual elements rather than
typing commands. Features of the GUI in Microsoft
Windows include icons, menus, and windows.
• Networked Systems (1980s to 1990s):
o In the 1980s, the popularity of computer networks was at its
peak, and a special type of operating system was needed to
manage network communication. Operating systems like
Novell NetWare and Windows NT were developed for this
purpose; they gave users the facility to work in a collaborative
environment and made file sharing and remote access very easy.


• Mobile Operating Systems (Late 1990s to Early 2000s):
o The invention of smartphones created a big revolution in the
software industry. To handle the operation of smartphones, a
special type of operating system was developed, examples
being iOS and Android. These operating systems were
optimized over time and became more powerful.
• AI Integration (2010s and ongoing):
o In time, artificial intelligence came into the picture.
Operating systems integrate AI features such as Siri, Google
Assistant, and Alexa and have become more powerful and
efficient in many ways. These AI features bring entirely new
capabilities such as voice commands, predictive text, and
personalized recommendations.

1.4 Types of Operating Systems

An operating system performs all the basic tasks like managing files,
processes, and memory. The operating system thus acts as the manager of
all the resources, i.e., a resource manager, and becomes an
interface between the user and the machine. It is one of the most essential
pieces of software present in a device.

An operating system is a type of software that works as an interface
between the system programs and the hardware. There are several types of
operating systems, many of which are mentioned below. Let's have
a look at them.


There are several types of Operating Systems. They are:

1. Batch Operating System
2. Multi-Programming System
3. Multi-Processing System
4. Multi-Tasking Operating System
5. Time-Sharing Operating System
6. Distributed Operating System
7. Network Operating System
8. Real-Time Operating System
1.4.1 Real-Time Operating System

These types of OSs serve real-time systems. The time interval required to
process and respond to inputs is very small. This time interval is called
response time.

Real-time systems are used when there are time requirements that are
very strict like missile systems, air traffic control systems, robots, etc.

Fig. 1.16: Real-Time Operating System



Types of Real-Time Operating Systems

• Hard Real-Time Systems: Hard real-time OSs are meant for
applications where time constraints are very strict and even the
shortest possible delay is not acceptable. These systems are built for
life-saving functions, like automatic parachutes or airbags, which must
be ready immediately in case of an accident. Virtual memory is rarely
found in these systems.
• Soft Real-Time Systems: These OSs are for applications where the
time constraint is less strict.

Advantages of RTOS

• Maximum Utilization: Maximum utilization of devices and
systems, and thus more output from all the resources.
• Task Shifting: The time taken to switch tasks in these
systems is very small. For example, in older systems it takes about
10 microseconds to shift from one task to another, and in the
latest systems it takes about 3 microseconds.
• Focus on Applications: The focus is on running applications, with
less importance given to applications that are in the queue.
• Use in Embedded Systems: Since the size of programs is small,
an RTOS can also be used in embedded systems, such as in
transport and others.
• Error Free: These types of systems are error-free.
• Memory Allocation: Memory allocation is best managed in these
types of systems.


Disadvantages of RTOS

• Limited Tasks: Very few tasks run at the same time, and
concentration is kept on a few applications to avoid errors.
• Heavy Use of System Resources: The system resources needed are
sometimes not so good, and they are expensive as well.
• Complex Algorithms: The algorithms are very complex and
difficult for the designer to write.
• Device Drivers and Interrupt Signals: It needs specific device
drivers and interrupt signals so it can respond to interrupts as
quickly as possible.
• Thread Priority: It is not good to set thread priorities, as these
systems switch tasks very rarely.

Examples of Real-Time Operating Systems: Scientific experiments,
medical imaging systems, industrial control systems, weapon systems,
robots, air traffic control systems, etc.

1.4.2 Batch Operating System

This type of operating system does not interact with the computer
directly. There is an operator who takes similar jobs having the same
requirements and groups them into batches. It is the responsibility of the
operator to sort jobs with similar needs.


Fig. 1.10: Batch Operating System

Advantages of Batch Operating System

• It is generally very difficult to guess or know the time required for
a job to complete, but the processors of batch systems know how
long a job will take while it is in the queue.
• Multiple users can share the batch systems.
• The idle time for the batch system is very low.
• It is easy to manage large, repetitive work in batch systems.

Disadvantages of Batch Operating System

• The computer operators should be well acquainted with batch
systems.
• Batch systems are hard to debug.
• They are sometimes costly.
• The other jobs will have to wait for an unknown time if any
job fails.


Examples of Batch Operating Systems: Payroll Systems, Bank
Statements, etc.

1.4.3 Multi-Programming Operating System

A multiprogramming operating system can be simply described as one in
which more than one program is present in main memory and any one of
them can be in execution at a given time. This is basically used for better
utilization of resources.

Fig. 1.11: Multiprogramming

Advantages of Multi-Programming Operating System

• Multiprogramming increases the throughput of the system.
• It helps in reducing the response time.

Disadvantages of Multi-Programming Operating System

• There is no facility for user interaction with system resources
while the system is running.


1.4.3.1 Multi-Processing Operating System

A multi-processing operating system is a type of operating system in
which more than one CPU is used for the execution of processes. It
improves the throughput of the system.

Fig. 1.12: Multiprocessing

Advantages of Multi-Processing Operating System

• It increases the throughput of the system.
• As it has several processors, if one processor fails, work can
proceed on another processor.

Disadvantages of Multi-Processing Operating System

• Due to the multiple CPUs, it can be more complex and somewhat
difficult to understand.


1.4.3.2 Multi-Tasking Operating System

A multitasking operating system is simply a multiprogramming
operating system with the added facility of a round-robin scheduling
algorithm. It can run multiple programs simultaneously.

There are two types of Multi-Tasking Systems which are listed below.

• Pre-emptive Multi-Tasking
• Cooperative Multi-Tasking

Fig. 1.13: Multitasking

Advantages of Multi-Tasking Operating System

• Multiple programs can be executed simultaneously in a multi-
tasking operating system.
• It comes with proper memory management.


Disadvantages of Multi-Tasking Operating System

• The system can overheat when several heavy programs run at once.


1.4.4 Others
1.4.4.1 Time-Sharing Operating Systems

Each task is given some time to execute so that all the tasks work
smoothly. Each user gets a share of CPU time on a single shared system.
These systems are also known as multitasking systems. The tasks can be
from a single user or from different users. The time that each task gets to
execute is called the quantum. After this time interval is over, the OS
switches over to the next task. A small simulation of this policy follows
the figure below.

Fig. 1.14: Time-Sharing
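
To illustrate the quantum idea, here is a small C simulation (illustrative
only, with made-up burst times): three tasks take turns, each running for
at most a fixed quantum of ticks before the OS moves on to the next.

    #include <stdio.h>

    #define QUANTUM 2   /* ticks a task may run before the OS switches */

    int main(void) {
        int burst[] = {5, 3, 8};   /* remaining ticks for three tasks */
        int n = 3, finished = 0, clock = 0;

        while (finished < n) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0)
                    continue;                       /* task already done */
                int run = burst[i] < QUANTUM ? burst[i] : QUANTUM;
                printf("t=%2d: task %d runs for %d tick(s)\n", clock, i, run);
                clock += run;
                burst[i] -= run;
                if (burst[i] == 0) {
                    finished++;
                    printf("t=%2d: task %d finished\n", clock, i);
                }
            }
        }
        return 0;
    }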

Advantages of Time-Sharing OS

• Each task gets an equal opportunity.
• Fewer chances of duplication of software.


• CPU idle time can be reduced.


• Resource Sharing: Time-sharing systems allow multiple users to
share hardware resources such as the CPU, memory, and peripherals,
reducing the cost of hardware and increasing efficiency.
• Improved Productivity: Time-sharing allows users to work
concurrently, thereby reducing the waiting time for their turn to use
the computer. This increased productivity translates to more work
getting done in less time.
• Improved User Experience: Time-sharing provides an interactive
environment that allows users to communicate with the computer in
real time, providing a better user experience than batch processing.

Disadvantages of Time-Sharing OS

• Reliability problems.
• Care must be taken of the security and integrity of user
programs and data.
• Data communication problems.
• High Overhead: Time-sharing systems have a higher overhead than
other operating systems due to the need for scheduling, context
switching, and other overheads that come with supporting multiple
users.
• Complexity: Time-sharing systems are complex and require
advanced software to manage multiple users simultaneously. This
complexity increases the chance of bugs and errors.
• Security Risks: With multiple users sharing resources, the risk of
security breaches increases. Time-sharing systems require careful
management of user access, authentication, and authorization to
ensure the security of data and software.

Examples of Time-Sharing OS with explanation

• IBM VM/CMS: IBM VM/CMS is a time-sharing operating


system that was first introduced in 1972. It is still in use today,
providing a virtual machine environment that allows multiple
users to run their own instances of operating systems and
applications.
• TSO (Time Sharing Option): TSO is a time-sharing operating
system that was first introduced in the 1960s by IBM for the IBM
System/360 mainframe computer. It allowed multiple users to
access the same computer simultaneously, running their own
applications.
• Windows Terminal Services: Windows Terminal Services is a
time-sharing operating system that allows multiple users to access
a Windows server remotely. Users can run their own applications
and access shared resources, such as printers and network storage,
in real-time.

1.4.4.2 Distributed Operating System

These types of operating system are a recent advancement in the
world of computer technology and are being widely accepted all over the
world, and at a great pace. Various autonomous interconnected
computers communicate with each other over a shared communication
network. The independent systems possess their own memory unit and CPU.
These are referred to as loosely coupled systems or distributed systems.
These systems' processors differ in size and function. The major benefit
of working with these types of operating system is that a user can
access files or software that are not actually present on his own system
but on some other system connected within the network; that is, remote
access is enabled among the devices connected in that network.

Advantages of Distributed Operating System

• Failure of one system will not affect the rest of the network
communication, as all systems are independent of each other.
• Electronic mail increases the data exchange speed.
• Since resources are being shared, computation is highly fast and
durable.
• The load on the host computer reduces.
• These systems are easily scalable, as many systems can be easily
added to the network.
• Delay in data processing reduces.

Disadvantages of Distributed Operating System

• Failure of the main network will stop the entire communication.
• The languages used to establish distributed systems are not yet
well defined.
• These types of systems are not readily available, as they are very
expensive. Moreover, the underlying software is highly
complex and not yet well understood.


Examples of Distributed Operating Systems: LOCUS, etc.

1.4.4.3 Network Operating System

These systems run on a server and provide the capability to manage
data, users, groups, security, applications, and other networking
functions. These types of operating systems allow shared access to files,
printers, security, applications, and other networking functions over a
small private network. One more important aspect of network operating
systems is that all the users are well aware of the underlying
configuration, of all other users within the network, of their individual
connections, etc., which is why these computers are popularly known as
tightly coupled systems.

Fig. 1.15: Network Operating System


Advantages of Network Operating System

• Highly stable centralized servers.
• Security concerns are handled through servers.
• New technologies and hardware upgrades are easily
integrated into the system.
• Server access is possible remotely from different locations and
types of systems.

Disadvantages of Network Operating System

• Servers are costly.
• Users have to depend on a central location for most operations.
• Maintenance and updates are required regularly.

Examples of Network Operating Systems: Microsoft Windows Server
2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell
NetWare, BSD, etc.

1.5 Operating System Architectures

An operating system is what enables user application
programs to communicate with the hardware of the machine. The operating
system should be built with the utmost care because it is such a complicated
structure, and it should be simple to use and modify. Developing the
operating system in parts is a simple approach to accomplish this. Each of
these parts needs to have distinct inputs, outputs, and functionalities.


This section discusses the many sorts of structures used to implement
operating systems, as listed below, as well as how and why they work. It
also defines the operating system structure.

1. Simple Structure
2. Monolithic Structure
3. Layered Approach Structure
4. Micro-Kernel Structure
5. Exo-Kernel Structure
6. Virtual Machines

Operating system structure can be thought of as the strategy for
connecting and incorporating various operating system components within
the kernel. Operating systems are implemented using many types of
structures.

1. Simple Structure

It is the most straightforward operating system structure, but it lacks
definition and is only appropriate for use with small and restricted
systems. Since the interfaces and levels of functionality in this structure
are not well separated, application programs are able to access basic I/O
routines, which may result in unauthorized access to I/O procedures.

This organizational structure is used by the MS-DOS operating system:

• There are four layers that make up the MS-DOS operating system,
and each has its own set of features.
• These layers include ROM BIOS device drivers, MS-DOS device
drivers, application programs, and system programs.
• The MS-DOS operating system benefits from layering because
each level can be defined independently and, when necessary, can
interact with the others.
• If the system is built in layers, it will be simpler to design,
manage, and update. Because of this, simple structures can be
used to build constrained systems that are less complex.
• When a user program fails, the operating system as a whole crashes.
• Because MS-DOS systems have a low level of abstraction,
programs and I/O procedures are visible to end users, giving them
the potential for unwanted access.

Fig. 1.17: Decision-making process

Advantages of Simple Structure:

• Because there are only a few interfaces and levels, it is simple to
develop.
• Because there are fewer layers between the hardware and the
applications, it offers superior performance.


Disadvantages of Simple Structure:

• The entire operating system breaks if just one user program
malfunctions.
• Since the layers are interconnected, and in communication with
one another, there is no abstraction or data hiding.
• The operating system's operations are accessible to layers, which
can result in data tampering and system failure.

2. Monolithic Structure

The monolithic operating system controls all aspects of the operating
system's operation, including file management, memory management,
device management, and process operations.

The core of a computer's operating system is called the kernel.
All other system components are provided with fundamental
services by the kernel. The operating system and the hardware use it as
their main interface. Because the operating system is built as a single
piece, the kernel can directly access all of the system's resources.

The monolithic operating system is often referred to as the monolithic
kernel. Multiple programming techniques such as batch processing and
time-sharing increase the processor's usability. Working on top of the
operating system and in complete command of all hardware, the
monolithic kernel performs the role of a virtual computer.


This is an old style of operating system that was used in banks to carry
out simple tasks like batch processing and time-sharing, which allows
numerous users at different terminals to access the operating system.

The following diagram represents the monolithic structure:

Fig. 1.18: monolithic structure

Advantages of Monolithic Structure:

• Because layering is unnecessary and the kernel alone is
responsible for managing all operations, it is easy to design and
execute.
• Because functions like memory management, file
management, process scheduling, etc. are implemented in the
same address space, the monolithic kernel runs rather quickly
compared to other systems. Utilizing the same address space speeds up
and reduces the time required for address allocation for new
processes.

Disadvantages of Monolithic Structure:

• The monolithic kernel's services are interconnected in address
space and have an impact on one another, so if any of them
malfunctions, the entire system does as well.
• It is not adaptable, so launching a new service is difficult.
3. Layered Structure

The OS is separated into layers or levels in this kind of arrangement.
Layer 0 (the lowest layer) contains the hardware, and layer N (the highest
layer) contains the user interface. These layers are organized
hierarchically, with the top-level layers making use of the capabilities of
the lower-level ones.

The functionalities of each layer are separated in this method, and
abstraction is also an option. Because layered structures are hierarchical,
debugging is simpler: all lower-level layers are debugged before
the upper layer is examined. As a result, only the present layer has to be
reviewed, since all the lower layers have already been examined.

The image below shows how OS is organized into layers:


Fig. 1.19: Kernel and User Space Interaction in a Layered OS

Advantages of Layered Structure:

• Work duties are separated since each layer has its own
functionality, and there is some amount of abstraction.
• Debugging is simpler because the lower layers are examined first,
followed by the top layers.

Disadvantages of Layered Structure:

• Performance is compromised in layered structures due to
layering.
• Construction of the layers requires careful design because upper
layers only make use of lower layers' capabilities.
4. Micro-Kernel Structure

The operating system is created using a micro-kernel framework that
strips the kernel of any unnecessary parts. Those optional kernel
components are implemented instead as system and user-level programs,
which is why the systems developed this way are called micro-kernels.


Each micro-kernel component is created separately and is kept apart from
the others. As a result, the system is more trustworthy and secure. If one
component malfunctions, the remaining operating system is unaffected
and continues to function normally.

Advantages of Micro-Kernel Structure:

• It enables portability of the operating system across platforms.
• Due to the isolation of each micro-kernel component, it is reliable
and secure.
• The reduced size of micro-kernels allows for successful testing.
• The remaining operating system remains unaffected and keeps
running properly even if a component or micro-kernel fails.

Disadvantages of Micro-Kernel Structure:

• The performance of the system is decreased by increased inter-
module communication.
• The construction of such a system is complicated.

5. Exokernel:

An operating system called exokernel was created at MIT with the
goal of offering application-level management of hardware resources.
The exokernel architecture's goal is to enable application-specific
customization by separating resource management from protection.
Exokernels tend to be minimal in size due to their limited operability.


Because the OS sits between the programs and the actual hardware,
it will always have an effect on the functionality, performance, and
breadth of the apps that are developed on it. By rejecting the idea that an
operating system must offer abstractions upon which to base applications,
the exokernel operating system makes an effort to solve this issue. The
goal is to place as few restrictions as possible on developers' use of
abstractions while still allowing them the freedom to use abstractions
when necessary. In the exokernel architecture, a single tiny kernel moves
all hardware abstractions into untrusted libraries known as library
operating systems. Exokernels differ from micro- and monolithic kernels
in that their primary objective is to prevent forced abstraction.

Exokernel operating systems have a number of features, including:

• Enhanced application control support.
• It splits management and security apart.
• A secure transfer of abstractions is made to an untrusted library
operating system.
• It exposes a low-level interface.
• Library operating systems provide compatibility and
portability.

Advantages of Exokernel Structure:

• Application performance is enhanced by it.
• Accurate resource allocation and revocation enable more
effective utilisation of hardware resources.


• New operating systems can be tested and developed more easily.
• Every user-space program is permitted to utilise its own
customised memory management.

Disadvantages of Exokernel Structure:

• A decline in consistency
• Exokernel interfaces have a complex architecture.

6. Virtual Machines (VMs)

The hardware of our personal computer, including the CPU, disk
drives, RAM, and NIC (Network Interface Card), is abstracted by a
virtual machine into a variety of execution contexts based on our
needs, giving us the impression that each execution environment is a
separate computer. VirtualBox is an example of this.

Using CPU scheduling and virtual memory techniques, an operating
system allows us to execute multiple processes simultaneously while
giving the impression that each one is using a separate processor and its
own virtual memory. System calls and a file system are examples of extra
functionalities that a process can have that the bare hardware is unable
to give. Instead of offering these extra features, the virtual machine
method just offers an interface that is identical to that of the underlying
bare hardware. A virtual duplicate of the computer system underneath is
made available to each process.

We can develop a virtual machine for a variety of reasons, all of which
are fundamentally connected to the capacity to share the same underlying
hardware while concurrently supporting various execution environments,
i.e., various operating systems.

Disk systems are the fundamental problem with the virtual machine
technique. Imagine that the physical machine has only three disk
drives but needs to host seven virtual machines. Clearly, it is
impossible to assign a disk drive to every virtual machine, because the
program that creates virtual machines itself requires a sizable amount of
disk space in order to offer virtual memory and spooling. The solution is
the provision of virtual disks.

The result is that each user gets a private virtual machine and can then
run any of the operating systems or software packages available on the
underlying machine. The virtual machine software is concerned with
multiplexing numerous virtual machines onto a single physical machine; it
does not need to take any user-support software into account. With this
configuration, the challenge of building an interactive system for several
users can be broken into two manageable pieces.

Advantages of Virtual Machines:

• Because each virtual machine is completely isolated from every other
virtual machine, there are no security issues.
• A virtual machine may offer an instruction-set architecture that differs
from that of the real computer.
• Simple availability, easy accessibility, and convenient recovery.


Disadvantages of Virtual Machines:

• Depending on the workload, running numerous virtual machines
simultaneously on one host computer may adversely affect any one of them.
• When it comes to hardware access, virtual machines are less efficient
than physical ones.


PROCESS AND THREAD MANAGEMENT

Chapter-2 Process and Thread Management


Process concept and process control block - Process scheduling algorithms (e.g., FCFS, SJF,
Round Robin) - Multithreading and thread management - Thread synchronization and
communication - Inter process communication (IPC) mechanism.

2.1 Introduction to Processes

A program that is actively running on the operating system is known as a
process. The process is the basis of all computing. Although a process is
closely related to program code, it is not the same as the code: a process
is a program in execution.

• A process is actively running software or computer code. Every procedure
must be carried out in a precise order.
• A process is an entity that represents the fundamental unit of work to
be implemented in any system.
• When a program is loaded into memory, it may be divided into four
components (stack, heap, text, and data) to form a process.

Fig. 2.1: Layered OS Structure

[45]
Advanced Operating System

Stack
The process stack stores temporary information such as method or function arguments, the
return address, and local variables.
Heap
This is the memory that is dynamically allocated to a process while it is
running.
Text
This consists of the compiled program code, i.e. the executable
instructions; the current activity is represented by the value of the
program counter and the contents of the processor's registers.
Data
This section contains the global and static variables.
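As a minimal, illustrative C sketch (all variable and function names here are invented for the example), the following program shows which section each kind of object lives in:

```c
#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;        /* data section: global/static variables */

int add(int a, int b)          /* this function's machine code lives in  */
{                              /* the text section                       */
    int sum = a + b;           /* stack: local variable of this call     */
    return sum;
}

int main(void)
{
    int local = 5;                            /* stack                   */
    int *buffer = malloc(100 * sizeof(int));  /* heap: dynamic memory    */

    global_counter = add(local, 2);
    printf("result = %d\n", global_counter);

    free(buffer);
    return 0;
}
```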
2.1.1 Process Control Block (PCB)

It is a data structure that an operating system uses to manage and
regulate how processes are carried out. In operating systems, managing
processes and scheduling them properly play the most significant role in
the efficient usage of memory and other system resources. The process
control block stores all the details about its corresponding process, such
as its current status, program counter, memory use, open files, and CPU
scheduling details.

Fig. 2.2: Process Control Block Architecture


The PCB is helpful in doing that, as it helps the OS to actively monitor
each process and direct system resources to it accordingly. The OS creates
a PCB for every process that is created, and it contains all the important
information about the process. All this information is afterwards used by
the OS to manage processes and run them efficiently.

Fig. 2.3: Process Control Block Architecture

Primary Terminologies Related to Process Control Block:

Process State: The state of the process is stored in the PCB which helps to manage the
processes and schedule them. There are different states for a process which are
“running,” “waiting,” “ready,” or “terminated.”

Process ID: The OS assigns a unique identifier to every process as soon as it is created
which is known as Process ID, this helps to distinguish between processes.

Program Counter: While running processes when the context switch occurs the last
instruction to be executed is stored in the program counter which helps in resuming the
execution of the process from where it left off.

CPU Registers: The PCB stores a copy of the process's CPU registers, which
are used to restore the state of the process when it resumes execution.


Memory Information: The information like the base address or total memory
allocated to a process is stored in PCB which helps in efficient memory allocation to
the processes.

Process Scheduling Information: The priority of the processes or the algorithm of


scheduling is stored in the PCB to help in making scheduling decisions of the OS.

Accounting Information: Information such as CPU time consumed, memory
usage, etc. helps the OS to monitor the performance of the process.
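The exact layout of a PCB differs from one operating system to another, but a simplified sketch in C conveys the idea; every field name below is illustrative rather than taken from a real kernel:

```c
#include <stddef.h>
#include <stdint.h>

typedef enum { READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* Simplified, illustrative process control block */
typedef struct pcb {
    int          pid;              /* unique process ID                 */
    proc_state_t state;            /* current process state             */
    uint64_t     program_counter;  /* next instruction to execute       */
    uint64_t     registers[16];    /* saved CPU registers               */
    void        *base_address;     /* memory information                */
    size_t       memory_size;
    int          priority;         /* scheduling information            */
    uint64_t     cpu_time_used;    /* accounting information            */
    struct pcb  *next;             /* link for the scheduler's queues   */
} pcb_t;
```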

Advantages of Using Process Control Block

• As the PCB stores all the information about a process, it lets the
operating system carry out tasks such as process scheduling and context
switching.
• Using PCBs helps in scheduling the processes and ensures that CPU
resources are allocated efficiently.
• The resource-utilization information about a process recorded in the PCB
supports efficient resource utilization and resource sharing.
• The saved CPU registers and stack pointers allow the OS to save the
process state, which enables context switching.

Disadvantages of using Process Control Block

• Storing a PCB for each and every process uses a significant amount of
memory, and a large number of processes may exist in the OS
simultaneously, so using PCBs adds extra memory usage.
• Maintaining PCBs adds some complexity to the OS, which reduces
scalability and makes it tougher to scale the system further.

2.2 What is Process Scheduling?

The act of determining which process in the ready state should be moved to
the running state is known as process scheduling.
The prime aim of the process scheduling system is to keep the CPU busy at
all times and to deliver minimum response time for all programs. To
achieve this, the scheduler must apply appropriate rules for swapping
processes in and out of the CPU.


2.2.1 First Come First Serve Scheduling


In the "First come first serve" scheduling algorithm, as the name suggests, the process which
arrives first, gets executed first, or we can say that the process which requests the CPU first,
gets the CPU allocated first.

Fig. 2.4: First Come First Serve Scheduling

• First Come First Serve is just like a FIFO (First In First Out) queue
data structure: the data element added to the queue first is the one that
leaves the queue first.
• It is used in batch systems.
• It is easy to understand and implement programmatically, using a queue
data structure where a new process enters through the tail of the queue
and the scheduler selects the process at the head of the queue.
• A perfect real-life example of FCFS scheduling is buying tickets at a
ticket counter.

Calculating Average Waiting Time


For every scheduling algorithm, Average waiting time is a crucial parameter to judge its
performance.
AWT or average waiting time is the average of the waiting times of the
processes in the queue, waiting for the scheduler to pick them for
execution. The lower the average waiting time, the better the scheduling
algorithm.
Consider the processes P1, P2, P3, and P4 given in the table below,
arriving for execution in that order, each with arrival time 0 and the
given burst time. Let's find the average waiting time using the FCFS
scheduling algorithm.

Process | Burst Time (ms)
P1 | 21
P2 | 3
P3 | 6
P4 | 2


For the given processes, P1 is provided with the CPU resources first,
hence the waiting time for P1 is 0. P1 requires 21 ms for completion,
hence the waiting time for P2 is 21 ms. Similarly, the waiting time for P3
is the execution time of P1 plus the execution time of P2, i.e.
(21 + 3) ms = 24 ms, and for P4 it is the sum of the execution times of
P1, P2, and P3, i.e. 30 ms.

The average waiting time is therefore (0 + 21 + 24 + 30) / 4 = 18.75 ms.
The GANTT chart above represents the waiting time of each process.
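The same calculation can be sketched in a few lines of C. This assumes, as in the example above, that every process arrives at time 0:

```c
#include <stdio.h>

int main(void)
{
    int burst[] = {21, 3, 6, 2};            /* P1..P4, as in the example */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    /* In FCFS, each process waits for the sum of all earlier bursts. */
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}
```

Running it prints waiting times of 0, 21, 24, and 30 ms and an average of 18.75 ms, matching the hand calculation.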

Problems with FCFS Scheduling


Below we have a few shortcomings or problems with the FCFS scheduling algorithm:
1. It is a non-pre-emptive algorithm, which means process priority doesn't
matter.

If a process with very low priority is being executed, such as a routine
daily backup that takes a long time, and some high-priority process
suddenly arrives, such as an interrupt needed to avoid a system crash, the
high-priority process will have to wait. In such a case the system may
crash simply because of improper process scheduling.

2. Not optimal Average Waiting Time.

3. Resources utilization in parallel is not possible, which leads to Convoy Effect, and
hence poor resource (CPU, I/O etc) utilization.

What is Convoy Effect?

Convoy Effect is a situation where many processes, who need to use a resource for short time
are blocked by one process holding that resource for a long time.

This essentially leads to poor utilization of resources and hence poor performance.


Here are simple formulae for calculating the various times for given
processes:

Completion Time: The time at which a process completes its execution.

Turnaround Time: The time taken to complete after arrival. In simple
words, it is the difference between the completion time and the arrival
time: Turnaround Time = Completion Time – Arrival Time.

Waiting Time: The total time the process has to wait before its execution
begins. It is the difference between the turnaround time and the burst
time: Waiting Time = Turnaround Time – Burst Time.

2.2.2 Shortest Job First (SJF) Scheduling

• Shortest Job First scheduling executes the process with the shortest
burst time or duration first.
• This is the best approach to minimize waiting time.
• It is used in batch systems.
• It is of two types:
  • Non-pre-emptive
  • Pre-emptive
• To implement it successfully, the burst time of each process should be
known to the processor in advance, which is not always practically
feasible.
• This scheduling algorithm is optimal if all the jobs/processes are
available at the same time (either arrival time is 0 for all, or arrival
time is the same for all).

Non-Pre-Emptive Shortest Job First

Consider the processes below, available in the ready queue for execution,
all with arrival time 0 and the given burst times (as before: P1 = 21 ms,
P2 = 3 ms, P3 = 6 ms, P4 = 2 ms).


Fig. 2.5: SJF (Gantt Chart)

As you can see in the GANTT chart above, the process P4 will be picked up first as it has the
shortest burst time, then P2, followed by P3 and at last P1.

We scheduled the same set of processes using the First Come First Serve
algorithm in the previous section and got an average waiting time of
18.75 ms, whereas with SJF the average waiting time comes out to 4.5 ms.

Problem with Non Pre-emptive SJF

If the arrival times of the processes differ, meaning all processes are
not available in the ready queue at time 0 and some jobs arrive later,
then a process with a short burst time may have to wait for the current
process's execution to finish, because in non-pre-emptive SJF the arrival
of a short job does not halt the currently executing job.

This leads to the problem of starvation, where a shorter process has to
wait for a long time until the current longer process finishes. This can
persist if shorter jobs keep arriving, but it can be solved using the
concept of aging.

Pre-emptive Shortest Job First

In Pre-emptive Shortest Job First Scheduling, jobs are put into ready queue as they arrive, but
as a process with short burst time arrives, the existing process is pre-empted or removed from
execution, and the shorter job is executed first.


As P1 arrives first, its execution starts immediately, but just after 1 ms
process P2 arrives with a burst time of 3 ms, which is less than the
remaining time of P1, so P1 (1 ms done, 20 ms left) is pre-empted and P2
is executed.

While P2 is executing, P3 arrives at 2 ms, but its burst time is greater
than P2's remaining time, so P2 continues. After another millisecond, P4
arrives with a burst time of 2 ms; P2 (2 ms done, 1 ms left) still has the
shortest remaining time, so it keeps the CPU and completes at 4 ms. P4
then has the shortest remaining time and runs from 4 ms to 6 ms, followed
by P3 from 6 ms to 12 ms, and finally P1 from 12 ms to 32 ms.

The waiting times are 11 ms for P1 (12 - 1), 0 ms for P2, 4 ms for P3
(6 - 2), and 1 ms for P4 (4 - 3), so the average waiting time is
(11 + 0 + 4 + 1) / 4 = 4.0 ms.

The average waiting time for pre-emptive shortest job first scheduling is
less than that of both non-pre-emptive SJF scheduling and FCFS scheduling.

The Pre-emptive SJF is also known as Shortest Remaining Time First, because at any given
point of time, the job with the shortest remaining time is executed first.
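The shortest-remaining-time-first policy can be sketched as a millisecond-by-millisecond simulation in C; the arrival and burst times below are the ones from the worked example, and the code is an illustration rather than real scheduler code:

```c
#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {21, 3, 6, 2};       /* values from the example above */
    int n = 4, remaining[4], completion[4], done = 0, t = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        int pick = -1;
        /* choose the arrived process with the shortest remaining time */
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }  /* CPU idle */
        remaining[pick]--;                /* run for one millisecond */
        t++;
        if (remaining[pick] == 0) { completion[pick] = t; done++; }
    }

    double total_wait = 0;
    for (int i = 0; i < n; i++) {
        int wait = completion[i] - arrival[i] - burst[i];
        printf("P%d: completion=%d wait=%d\n", i + 1, completion[i], wait);
        total_wait += wait;
    }
    printf("average waiting time = %.2f ms\n", total_wait / n);
    return 0;
}
```

It reports completion times of 32, 4, 12, and 6 ms and an average waiting time of 4.0 ms, matching the figures above.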

2.2.3 Round Robin Scheduling

Round Robin (RR) scheduling algorithm is mainly designed for time-sharing systems. This
algorithm is similar to FCFS scheduling, but in Round Robin (RR) scheduling, pre-emption is
added which enables the system to switch between processes.

• A fixed time, called a quantum, is allotted to each process for
execution.
• Once a process has executed for the given time period, it is pre-empted
and another process executes for its time period.
• Context switching is used to save the states of pre-empted processes.
• This algorithm is simple and easy to implement, and most importantly it
is starvation-free, as all processes get a fair share of the CPU.
• It is important to note that the length of the time quantum is generally
from 10 to 100 milliseconds.

Some important characteristics of the Round Robin (RR) Algorithm are as follows:

• The Round Robin scheduling algorithm belongs to the category of
pre-emptive algorithms.
• This algorithm is one of the oldest, easiest, and fairest algorithms.
• This algorithm is a real-time algorithm in the sense that it responds to
an event within a specific time limit.
• The time slice should be the minimum that is assigned to a specific task
that needs to be processed, though it may vary for different operating
systems.
• This is a hybrid model and is clock-driven in nature.
• This is a widely used scheduling method in traditional operating
systems.

Important terms

1. Completion Time: the time at which a process completes its execution.

2. Turnaround Time: the difference between completion time and arrival
time. The formula is: Turnaround Time = Completion Time – Arrival Time.

3. Waiting Time (W.T.): the difference between turnaround time and burst
time, calculated as: Waiting Time = Turnaround Time – Burst Time.

Let us now cover an example for the same:

Fig. 2.6: Round Robin


In the above diagram, arrival time is not mentioned so it is taken as 0 for all processes.

Note: If arrival time is not given for any problem statement, then it is taken as 0 for all
processes; if it is given then the problem can be solved accordingly.

Explanation

The value of time quantum in the above example is 5. Let us now calculate the Turnaround
time and waiting time for the above example:

Processes | Burst Time | Turn Around Time = Completion Time – Arrival Time | Waiting Time = Turn Around Time – Burst Time
P1 | 21 | 32 – 0 = 32 | 32 – 21 = 11
P2 | 3 | 8 – 0 = 8 | 8 – 3 = 5
P3 | 6 | 21 – 0 = 21 | 21 – 6 = 15
P4 | 2 | 15 – 0 = 15 | 15 – 2 = 13

Average waiting time is calculated by adding the waiting times of all
processes and then dividing by the number of processes:

average waiting time = sum of waiting times / number of processes

average waiting time = (11 + 5 + 15 + 13) / 4 = 44 / 4 = 11 ms
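A minimal C sketch of the same Round Robin run (quantum 5, the four processes from the table, all arriving at time 0) confirms these numbers:

```c
#include <stdio.h>

int main(void)
{
    int burst[] = {21, 3, 6, 2};   /* P1..P4 from the example       */
    int n = 4, quantum = 5, t = 0, done = 0;
    int remaining[4], completion[4];
    int queue[64], head = 0, tail = 0;

    for (int i = 0; i < n; i++) {
        remaining[i] = burst[i];
        queue[tail++] = i;         /* all processes arrive at t = 0 */
    }

    while (done < n) {
        int p = queue[head++];
        int slice = remaining[p] < quantum ? remaining[p] : quantum;
        t += slice;                /* run p for up to one quantum   */
        remaining[p] -= slice;
        if (remaining[p] == 0) {
            completion[p] = t;
            done++;
        } else {
            queue[tail++] = p;     /* pre-empted: back of the queue */
        }
    }

    double total_wait = 0;
    for (int i = 0; i < n; i++) {
        int wait = completion[i] - burst[i];   /* arrival time is 0 */
        printf("P%d: turnaround=%d wait=%d\n", i + 1, completion[i], wait);
        total_wait += wait;
    }
    printf("average waiting time = %.2f ms\n", total_wait / n);
    return 0;
}
```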

Advantages of Round Robin Scheduling Algorithm

Some advantages of the Round Robin scheduling algorithm are as follows:

• While performing this scheduling algorithm, a particular time quantum is
allocated to the different jobs.
• In terms of average response time, this algorithm gives the best
performance.
• With the help of this algorithm, all the jobs get a fair allocation of
CPU time.
• In this algorithm there are no issues of starvation or the convoy
effect.
• This algorithm deals with all processes without any priority.


• This algorithm is cyclic in nature.
• The newly created process is added to the end of the ready queue.
• A round-robin scheduler generally employs time-sharing, giving each job
a time slot or quantum.
• Each process gets a chance to be rescheduled after a particular quantum
of time.

Disadvantages of Round Robin Scheduling Algorithm

Some disadvantages of the Round Robin scheduling algorithm are as follows:

• This algorithm spends more time on context switches.
• For a small quantum, scheduling is time-consuming.
• This algorithm gives a larger waiting time and response time.
• It has low throughput.
• If the time quantum is small, the Gantt chart becomes very large.

2.3 What are Threads?

A thread is an execution unit that consists of its own program counter, a
stack, and a set of registers, where the program counter keeps track of
which instruction to execute next, the set of registers holds its current
working variables, and the stack contains the history of its execution.

• Threads are also known as lightweight processes. Threads are a popular
way to improve the performance of an application through parallelism.
Threads mainly represent a software approach to improving operating system
performance by reducing overhead; a thread is broadly equivalent to a
classical process.
• The CPU switches rapidly back and forth among the threads, giving the
illusion that the threads are running in parallel.
• As each thread has its own independent resources for execution, multiple
tasks can be executed in parallel by increasing the number of threads.
• It is important to note that each thread belongs to exactly one process,
and no thread exists outside a process. Each thread represents a separate
flow of control. Threads have been used successfully in the implementation
of network servers and web servers, and they provide a suitable foundation
for the parallel execution of applications on shared-memory
multiprocessors.

The figure below shows the working of a single-threaded and a
multithreaded process:

Before moving on further let us first understand the difference between a process and thread.

Process | Thread
A process simply means any program in execution. | A thread simply means a segment of a process.
The process consumes more resources. | The thread consumes fewer resources.
The process requires more time for creation. | The thread requires comparatively less time for creation.
The process is a heavyweight process. | The thread is known as a lightweight process.
The process takes more time to terminate. | The thread takes less time to terminate.
Processes have independent data and code segments. | A thread shares the data segment, code segment, files, etc. with its peer threads.
The process takes more time for context switching. | The thread takes less time for context switching.
Communication between processes needs more time than communication between threads. | Communication between threads needs less time than communication between processes.
If a process gets blocked, the remaining processes can continue their execution. | If a user-level thread gets blocked, all of its peer threads also get blocked.


Fig. 2.7: Threads

Advantages of Thread

Some advantages of thread are given below:

• Responsiveness.
• Resource sharing, hence allowing better utilization of resources.
• Economy: creating and managing threads becomes easier.
• Scalability: one thread runs on one CPU, and in multithreaded processes
threads can be distributed over a series of processors to scale.
• Smooth context switching: context switching refers to the procedure
followed by the CPU to change from one task to another.
• Enhanced throughput of the system. For example, suppose a process is
divided into multiple threads and the function of each thread is
considered one job; then the number of jobs completed per unit of time
increases, which in turn increases the throughput of the system.


Types of Thread

There are two types of threads:

1. User Threads

2. Kernel Threads

User threads are above the kernel and without kernel support. These are the threads that
application programmers use in their programs.

Kernel threads are supported within the kernel of the OS itself. All modern OSs support
kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to
service multiple kernel system calls simultaneously.

Let us now understand the basic difference between User level Threads and Kernel level
threads:

User Level Threads | Kernel Level Threads
These threads are implemented by users. | These threads are implemented by the operating system.
These threads are not recognized by the operating system. | These threads are recognized by the operating system.
Context switching requires no hardware support. | Hardware support is needed.
These threads are mainly designed as dependent threads. | These threads are mainly designed as independent threads.
If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution.
Examples: Java threads, POSIX threads. | Examples: Windows, Solaris.
Implementation is done by a thread library and is easy. | Implementation is done by the operating system and is complex.
These threads are generic in nature and can run on any operating system. | These threads are specific to the operating system.

Multithreading Models

The user threads must be mapped to kernel threads, by one of the following strategies:

• Many to One Model

• One to One Model

• Many to Many Model

Many to One Model

• In the many to one model, many user-level threads are all mapped onto a single kernel
thread.

• Thread management is handled by the thread library in user space, which is efficient in
nature.

• When the operating system does not support kernel-level threads,
user-level thread libraries use this many-to-one relationship model.


Fig. 2.8: Multithreading

One to One Model

• The one-to-one model creates a separate kernel thread to handle each
user thread.
• Most implementations of this model place a limit on how many threads can
be created.
• Linux, and Windows from 95 through XP, implement the one-to-one model
for threads.
• This model provides more concurrency than the many-to-one model.

Fig. 2.9: one-to-one model

Many to Many Model

• The many-to-many model multiplexes any number of user threads onto an
equal or smaller number of kernel threads, combining the best features of
the one-to-one and many-to-one models.
• Users can create any number of threads.
• A blocking kernel system call does not block the entire process.
• Processes can be split across multiple processors.

Fig. 2.10: many to many model

What are Thread Libraries?

Thread libraries provide programmers with API for the creation and management of threads.
Thread libraries may be implemented either in user space or in kernel space. The user space
involves API functions implemented solely within the user space, with no kernel support. The
kernel space involves system calls and requires a kernel with thread library support.

Three types of thread libraries

1. POSIX Pthreads: may be provided as either a user-level or kernel-level
library, as an extension to the POSIX standard.

2. Win32 threads: provided as a kernel-level library on Windows systems.

3. Java threads: since Java generally runs on a Java Virtual Machine, the
implementation of threads is based on whatever OS and hardware the JVM is
running on, i.e. either Pthreads or Win32 threads depending on the system.
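As a minimal POSIX Pthreads sketch (compile with -pthread; the worker function is illustrative), creating and joining a few threads looks like this:

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function with its own argument. */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("hello from thread %d\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[4];
    int ids[4];

    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);   /* wait for all threads */
    return 0;
}
```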

Multithreading Issues

Below we mention a few issues related to multithreading. As the old saying
goes, all good things come at a price.

Thread Cancellation

Thread cancellation means terminating a thread before it has finished its
work. There are two approaches: asynchronous cancellation, which
terminates the target thread immediately, and deferred cancellation, which
allows the target thread to periodically check whether it should be
cancelled.

Signal Handling

Signals are used in UNIX systems to notify a process that a particular
event has occurred. When a multithreaded process receives a signal, to
which thread should it be delivered? It can be delivered to all threads or
to a single thread.

fork() System Call

fork() is a system call executed in the kernel through which a process
creates a copy of itself. The problem in a multithreaded process is: if
one thread forks, should all the threads of the process be copied, or only
the forking thread?

Security Issues

Yes, there can be security issues because of the extensive sharing of resources between multiple
threads.

There are many other issues that you might face in a multithreaded process, but there are
appropriate solutions available for them. Pointing out some issues here was just to study both
sides of the coin.

2.4 THREAD AND PROCESS SYNCHRONIZATION:

Process synchronization means sharing system resources among processes in
such a way that concurrent access to shared data is handled properly,
minimizing the chance of inconsistent data. Maintaining data consistency
demands mechanisms to ensure the synchronized execution of cooperating
processes.

Process synchronization was introduced to handle problems that arise
during the execution of multiple processes. Some of the problems are
discussed below.

Critical Section Problem

A Critical Section is a code segment that accesses shared variables and has to be executed as
an atomic action. It means that in a group of cooperating processes, at a given point of time,
only one process must be executing its critical section. If any other process also wants to
execute its critical section, it must wait until the first one finishes.


Fig. 2.11: Critical Section Problem

Solution to Critical Section Problem

A solution to the critical section problem must satisfy the following three conditions:

Mutual Exclusion

Out of a group of cooperating processes, only one process can be in its critical section at a
given point of time.

Progress

If no process is in its critical section, and one or more processes want
to execute their critical sections, then one of them must be allowed to
enter its critical section.

Bounded Waiting

After a process makes a request for getting into its critical section, there is a limit for how many
other processes can get into their critical section, before this process's request is granted. So,
after the limit is reached, system must grant the process permission to get into its critical
section.
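In practice, mutual exclusion on a critical section is usually obtained with a lock. A minimal Pthreads sketch (compile with -pthread; the shared counter is purely illustrative):

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared variable        */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* enter critical section */
        counter++;                            /* critical section       */
        pthread_mutex_unlock(&lock);          /* exit critical section  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* always 200000          */
    return 0;
}
```

Without the mutex, the two threads' increments could interleave and the final count would be unpredictable.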

Synchronization Hardware

• Many systems provide hardware support for critical-section code. The
critical-section problem could be solved easily in a single-processor
environment if we could disallow interrupts from occurring while a shared
variable or resource is being modified.
• In this manner, we could be sure that the current sequence of
instructions would be allowed to execute in order without pre-emption.
Unfortunately, this solution is not feasible in a multiprocessor
environment.
• Disabling interrupts in a multiprocessor environment can be
time-consuming, as the message must be passed to all the processors.
• This message-transmission lag delays the entry of threads into their
critical sections, and system efficiency decreases.

2.5 Inter Process Communication (IPC)

Inter-process communication is used for exchanging useful information
between numerous threads in one or more processes (or programs).

To understand inter process communication, you can consider the following given diagram that
illustrates the importance of inter-process communication:

Fig. 2.12: Inter Process Communication

Role of Synchronization in Inter Process Communication

It is one of the essential parts of inter-process communication.
Typically, synchronization is provided by the inter-process communication
control mechanisms, but sometimes it can also be handled by the
communicating processes themselves.

These are the following methods that used to provide the synchronization:

1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion: -

It is generally required that only one process or thread can enter the
critical section at a time. This helps in synchronization and creates a
stable state, avoiding race conditions.

[65]
Advanced Operating System

Semaphore: -

Semaphore is a type of variable that usually controls the access to the shared resources by
several processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore

2. Counting Semaphore
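A counting semaphore can be sketched with the POSIX semaphore API on Linux (compile with -pthread); the two-slot resource here is purely illustrative:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t slots;                 /* counts free resource slots     */

static void *user(void *arg)
{
    int id = *(int *)arg;
    sem_wait(&slots);               /* acquire: blocks if count is 0  */
    printf("thread %d is using a resource\n", id);
    sem_post(&slots);               /* release: wake a waiting thread */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    int ids[5];

    sem_init(&slots, 0, 2);         /* counting semaphore: 2 resources */
    for (int i = 0; i < 5; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, user, &ids[i]);
    }
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}
```

A binary semaphore is simply the special case where the initial count is 1.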

Barrier: -

A barrier typically does not allow an individual process to proceed until
all the processes have reached it. It is used by many parallel languages,
and collective routines impose barriers.

Spinlock: -

Spinlock is a type of lock, as its name implies. A process trying to
acquire a spinlock waits in a loop, repeatedly checking whether the lock
is available. This is known as busy waiting because, even though the
process is active, it does not perform any useful operation (or task)
while it spins.

Approaches to Inter-Process Communication:

We will now discuss some different approaches to inter-process communication which are as
follows:

Fig. 2.13: Approaches for IPC


These are a few different approaches for inter-process communication:

1. Pipes

2. Shared Memory

3. Message Queue

4. Direct Communication

5. Indirect communication

6. Message Passing

7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe: -

The pipe is a type of data channel that is unidirectional in nature,
meaning the data in this channel can move in only a single direction at a
time. Still, two channels of this type can be used together so that a
process can both send and receive data. A pipe typically uses the standard
methods for input and output. Pipes are used in all types of POSIX systems
and in the various versions of the Windows operating system as well.
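A minimal POSIX sketch of a pipe carrying data from a child process to its parent:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    pipe(fd);
    if (fork() == 0) {              /* child: writes into the pipe  */
        close(fd[0]);
        write(fd[1], "hello parent", 13);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                   /* parent: reads from the pipe  */
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    return 0;
}
```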

Shared Memory: -

It can be referred to as a type of memory that can be used or accessed by
multiple processes simultaneously. It is primarily used so that processes
can communicate with each other. Shared memory is supported by almost all
POSIX and Windows operating systems.
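A minimal System V shared-memory sketch, in which a parent and its child communicate through the same segment (error handling omitted for brevity):

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Create a private 4 KB segment, shared with the child after fork(). */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    char *mem = shmat(shmid, NULL, 0);    /* attach to our address space */

    if (fork() == 0) {                    /* child writes into the segment */
        strcpy(mem, "written by the child");
        _exit(0);
    }
    wait(NULL);                           /* wait until the child is done  */
    printf("parent reads: %s\n", mem);

    shmdt(mem);                           /* detach and remove the segment */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}
```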

Message Queue: -

In general, several different processes are allowed to read and write data
to the message queue. The messages are stored in the queue until their
recipients retrieve them. In short, message queues are very helpful in
inter-process communication and are used by all operating systems.

To understand the concept of Message queue and Shared memory in more detail, let's take a
look at its diagram given below:

[67]
Advanced Operating System

Fig. 2.14: Approaches to Interprocess Communication

Message Passing: -

It is a mechanism that allows processes to synchronize and communicate
with each other. Using message passing, processes can communicate with
each other without resorting to shared variables.

Usually, the inter-process communication mechanism provides two operations that are as
follows:

o send (message)

o receive (message)

Note: The size of the message can be fixed or variable.

Direct Communication: -

In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one link
can exist.

Indirect Communication

Indirect communication can only be established when processes share a
common mailbox, and each pair of communicating processes may share several
communication links. These shared links can be unidirectional or
bidirectional.


FIFO: -

It is a type of general communication between two unrelated processes. It
can also be considered full-duplex, meaning that one process can
communicate with another process and vice versa.

Some other different approaches

o Socket: -

It acts as an endpoint for sending or receiving data over a network. It
works for data sent between processes on the same computer as well as
between different computers on the same network. Hence, it is used by
several types of operating systems.

o File: -

A file is a type of data record or a document stored on the disk and can be acquired on demand
by the file server. Another most important thing is that several processes can access that file as
required or needed.

o Signal: -

As the name implies, signals are a type of inter-process communication
used in a minimal way. Typically, they are system messages sent from one
process to another. Therefore, they are used not for sending data but for
sending remote commands between processes.

Why do we need inter-process communication?

There are numerous reasons for using inter-process communication to share
data. Here are some of the most important ones:

• It helps to speed up modularity.
• Computational speedup.
• Privilege separation.
• Convenience.
• It helps processes communicate with each other and synchronize their
actions.

MEMORY MANAGEMENT

Chapter-3 Memory Management

Virtual memory and address translation - Paging and segmentation - Memory allocation
techniques (e.g., buddy system, slab allocation) - Memory protection and sharing - Memory
management in multiprocessor systems.

3.1 VIRTUAL MEMORY AND ADDRESS TRANSLATION

Virtual memory and address translation are two fundamental concepts in modern operating
systems. They work together to allow programs to access more memory than is physically
available by creating an illusion of a larger address space for each program. Let's delve deeper
into each concept:

Virtual Memory:

• Imagine an individual workspace on a big table. You access and use only that specific
section, unaware of the entire table surface. Similarly, virtual memory creates a private
address space for each program, independent of other programs or the actual physical
memory available.
• This virtual address space is typically much larger than the available physical memory
(RAM).
• Programs use virtual addresses to access their data and code. These addresses are not
real physical locations in RAM but rather references within the virtual space.
• The operating system acts as a translator, mapping these virtual addresses to the actual
physical locations in RAM at runtime.

Address Translation:

• This mapping between virtual and physical addresses is achieved through a mechanism
called address translation.
• The key players in this process are:
▪ Memory Management Unit (MMU): A hardware component within the CPU
that handles address translation.
▪ Page Tables: Data structures kept in RAM that contain the mapping
information between virtual and physical addresses. Each entry in a page table


maps a portion of the virtual address space (called a page) to a physical page in
RAM.
• When a program attempts to access a memory location using a virtual address, the
MMU intercepts the request and performs the following steps:
▪ Page number extraction: The virtual address is split into two parts: page
number and page offset.
▪ Page table lookup: The page number is used as an index to search the page
table for the corresponding page table entry (PTE).
▪ Physical address generation: The PTE contains the information needed to find
the actual physical location of the page in RAM. This information is combined
with the page offset to form the final physical address.
▪ Memory access: The MMU uses the final physical address to access the
requested data or code in RAM.
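These steps can be sketched in C for a hypothetical machine with 4 KB pages and a single-level page table; the table contents are invented for illustration:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u                /* 4 KB pages              */
#define PAGE_SHIFT  12                   /* log2(PAGE_SIZE)         */
#define NUM_PAGES   16                   /* tiny single-level table */

/* page_table[v] holds the physical frame number for virtual page v */
static uint32_t page_table[NUM_PAGES] = { 3, 7, 1, 9 /* rest are 0 */ };

uint32_t translate(uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;        /* page number       */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);    /* page offset       */
    uint32_t frame  = page_table[page];           /* page table lookup */
    return (frame << PAGE_SHIFT) | offset;        /* physical address  */
}

int main(void)
{
    uint32_t v = 0x1234;                 /* page 1, offset 0x234 */
    printf("virtual 0x%x -> physical 0x%x\n", v, translate(v));
    return 0;
}
```

Here virtual address 0x1234 (page 1, offset 0x234) maps to physical address 0x7234, because virtual page 1 is mapped to frame 7 in this illustrative table.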

Benefits of Virtual Memory:

• Increases program size: Allows programs to be larger than the available physical
memory, enabling complex applications to run smoothly.
• Memory protection: Isolates each program's memory space, preventing accidental
access or corruption of data by other programs or the operating system.
• Efficient memory utilization: Pages that are not actively used can be swapped out to
disk, freeing up physical memory for other programs. This enhances overall system
performance.

Additional Layers of Optimization:

• Translation Lookaside Buffer (TLB): A small cache within the MMU that
stores recently used address translations, speeding up subsequent accesses
to the same pages.
• Demand paging: Only pages that are actually needed are brought into RAM from disk,
minimizing unnecessary data transfers.

Understanding virtual memory and address translation is crucial for comprehending how
operating systems manage memory efficiently and provide a secure and robust environment
for running multiple programs simultaneously.

3.2 PAGING AND SEGMENTATION:


Memory management is a core concern for operating systems (OS), and two key techniques
employed are paging and segmentation. Both provide solutions to managing memory for
multiple programs while ensuring efficient memory utilization and access protection. Let's
explore each concept in detail with the help of visuals:

Paging:

• Imagine a large textbook representing a program's address space. Paging divides this
book into equal-sized chapters called pages.
• These pages are independent units and can be stored anywhere in the available physical
memory (RAM).
• The OS maintains a page table, a data structure mapping virtual addresses (chapters) to
physical addresses (page locations in RAM).

Fig. 3.1: Paging

• When a program accesses data using a virtual address, the MMU intercepts the request.
It looks up the corresponding page in the page table and translates it to the actual
physical address in RAM.


Fig. 3.2: Representation of Paging

Benefits of Paging:

• Reduced Fragmentation: Paging eliminates external fragmentation, since
any free frame can hold any page; only the partially used last page of a
process wastes space (internal fragmentation).
• Efficient Memory Utilization: Unused pages can be swapped to disk, freeing up RAM
for other programs.
• Protection: Program memory is independent, enhancing security and preventing
conflicts.

Segmentation:

• Instead of equal-sized chapters, segmentation divides the program into logical units
based on functionality, like code, data, stack, etc. These segments are of variable sizes
and can overlap in the virtual address space.
• Similar to paging, a segment table maps virtual segment addresses to physical memory
locations.


Fig. 3.3: Representation of Segmentation

• When accessing data, the MMU translates the virtual segment address and offset within
the segment to the corresponding physical location in RAM.

Benefits of Segmentation:

• Logical Grouping: Segments provide a natural way to group related program parts,
simplifying memory management for programmers.
• Protection and Access Control: Different segments can have varying access
permissions, enhancing security.
• Efficient Module Loading: Only needed segments are loaded into RAM, saving
memory.

Comparison:

FEATURE | PAGING | SEGMENTATION
Unit size | Fixed (pages) | Variable (segments)
Flexibility | Less flexible | More flexible
Fragmentation | Internal only | External
Protection | Mostly based on page boundaries | More granular control within segments
Performance | Faster address translation | Slower due to segment table lookup


Hybrids:

Some OSes combine the benefits of both techniques. Paged segmentation
divides segments into fixed-size pages, offering the flexibility of
segmentation with reduced external fragmentation.

Visualizing both techniques helps clarify their different approaches to memory management.
Understanding their strengths and weaknesses allows developers and researchers to choose the
most appropriate technique for specific OS needs.

3.3 MEMORY ALLOCATION TECHNIQUES:

Memory allocation, the art of assigning memory blocks to processes in an operating system
(OS), lies at the heart of efficient system performance. Different allocation techniques cater to
specific needs, and understanding them is crucial for optimizing memory usage and preventing
fragmentation. Let's explore two popular techniques:

3.3.1 Buddy System: Imagine a large warehouse filled with cardboard boxes of various
sizes. The buddy system operates on a similar principle, dividing memory into power-
of-2 sized blocks (512, 1024, 2048, etc.). These blocks are called buddies and form a
hierarchical tree structure.
Allocation:
▪ When a process requests memory, the system searches for the smallest block
that can fit the need (the block's size is at least as large as the requested
memory).
▪ If a suitable block isn't readily available, the system splits a larger block (buddy)
into two equal halves, creating smaller buddies, until a block matching the
process's size is found.
▪ This splitting process continues until the smallest possible block (smallest
buddy) is reached.
Deallocation:
▪ When a process releases memory, its allocated block is merged back with its
buddy to form a larger block.
▪ This merging continues recursively until the block reaches its highest possible
size in the tree hierarchy.


Benefits:
▪ Minimal fragmentation: The power-of-2 sizes and buddy merging mechanism
minimize internal fragmentation, maximizing memory utilization.
▪ Efficient allocation: Finding suitable blocks is fast due to the hierarchical
structure.
▪ Scalability: The system can handle diverse memory requests with varying sizes.
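Two details make a buddy allocator cheap to implement: a request is rounded up to the next power-of-two block size, and a block's buddy address differs from its own by exactly one bit, so it can be found with an XOR. A minimal sketch of both calculations (the 512-byte minimum block is taken from the example sizes above; the rest is illustrative):

```c
#include <stdint.h>
#include <stdio.h>

#define MIN_BLOCK 512u                  /* smallest buddy block          */

/* Round a request up to the next power-of-2 block size. */
static uint32_t block_size(uint32_t request)
{
    uint32_t size = MIN_BLOCK;
    while (size < request)
        size <<= 1;                     /* block sizes double each level */
    return size;
}

/* A block's buddy differs from it in exactly one address bit. */
static uint32_t buddy_of(uint32_t offset, uint32_t size)
{
    return offset ^ size;
}

int main(void)
{
    uint32_t size = block_size(700);    /* -> 1024 bytes                 */
    printf("700-byte request gets a %u-byte block\n", size);
    /* The buddy of the 1024-byte block at offset 2048 is at 3072;
       freeing both lets them merge into one 2048-byte block. */
    printf("buddy of offset 2048 is %u\n", buddy_of(2048, size));
    return 0;
}
```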
3.3.2 Slab Allocation: Picture a factory producing different types of objects (car parts, toys,
etc.). Slab allocation treats memory as a factory floor, dividing it into fixed-size slabs
dedicated to specific object types (data structures, network buffers, etc.). Each slab
stores multiple instances of the object it's meant for.

Allocation:

▪ When a process requests objects of a specific type, the system allocates
them from the corresponding slab, keeping track of available and used
slots within the slab.
▪ If the slab is full, a new slab for that object type is created.

Deallocation:

▪ When an object is no longer needed, it is marked as free within its
slab.
▪ If all objects in a slab are free, the entire slab can be recycled and
used for other purposes.

Benefits:

▪ Cache-friendliness: Objects within a slab are typically placed
contiguously in memory, improving cache efficiency for related data.
▪ Reduced overhead: Managing allocation and deallocation within a slab is
faster than handling individual memory blocks.
▪ Simplified memory organization: Grouping similar objects simplifies
memory management tasks for specific data types.
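A minimal sketch of the core idea in C: one slab holding fixed-size objects tracked with a free list. A real implementation would manage many slabs per object type, alignment, and object constructors; everything below is illustrative:

```c
#include <stdio.h>
#include <string.h>

#define OBJ_SIZE   64                 /* every object in this slab is 64 B */
#define SLAB_OBJS  32                 /* objects per slab                  */

struct slab {
    unsigned char memory[SLAB_OBJS][OBJ_SIZE];
    int free_list[SLAB_OBJS];         /* stack of free slot indexes        */
    int free_count;
};

static void slab_init(struct slab *s)
{
    s->free_count = SLAB_OBJS;
    for (int i = 0; i < SLAB_OBJS; i++)
        s->free_list[i] = i;          /* initially every slot is free      */
}

static void *slab_alloc(struct slab *s)
{
    if (s->free_count == 0)
        return NULL;                  /* slab full: a real allocator would
                                         create a new slab here            */
    return s->memory[s->free_list[--s->free_count]];
}

static void slab_free(struct slab *s, void *obj)
{
    int slot = ((unsigned char *)obj - &s->memory[0][0]) / OBJ_SIZE;
    s->free_list[s->free_count++] = slot;   /* mark the slot free again   */
}

int main(void)
{
    struct slab s;
    slab_init(&s);
    void *a = slab_alloc(&s);
    strcpy(a, "object data");
    printf("%s (free slots left: %d)\n", (char *)a, s.free_count);
    slab_free(&s, a);
    return 0;
}
```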

3.3.3 Choosing the Right Technique:

▪ The optimal choice between buddy systems and slab allocation depends on the
specific needs of the system:


▪ General-purpose memory allocation: Buddy systems are often preferred due
to their flexibility and minimal fragmentation.
▪ Performance-critical applications: Slab allocation can be advantageous for its
cache-friendliness and speed of object handling.
▪ Data-intensive applications: Slab allocation excels when managing large
numbers of similar objects due to its efficient organization and reduced
overhead.
3.4 MEMORY PROTECTION AND SHARING:
In the bustling city of an operating system, where programs are citizens and memory is
their territory, memory protection and sharing play crucial roles in maintaining order and
efficiency. Let's dive deeper into these concepts with the help of some visuals:

Memory Protection:

Imagine each program in the OS as a walled city with its own streets (memory addresses) and
buildings (data and code). Memory protection ensures that each program's city remains under
its control, preventing unwanted access from other programs or the OS itself. This is achieved
through two key mechanisms:

(a) Address Space Separation: Each program is assigned a virtual address space, a separate
map of its memory addresses that exists independent of the actual physical memory layout.
This creates an illusion of each program having all the memory it needs, even if it's
physically shared with other programs.
(b) Access Control Mechanisms: The hardware and OS work together to enforce access
restrictions within each virtual address space. Techniques like:
▪ Memory Management Unit (MMU): Translates virtual addresses to physical
addresses and checks access permissions (read, write, execute) before granting access.
▪ Protection Rings: Implement a hierarchical system of privilege levels, restricting
access to certain memory regions based on program privileges.
▪ Page Tables: Data structures storing access permissions for each page of memory
within a program's virtual address space.

Benefits of Memory Protection:

▪ System Stability: Prevents programs from crashing each other or the OS by accessing
unauthorized memory.


▪ Security: Protects sensitive data from unauthorized access, enhancing overall system
security.
▪ Resource Management: Ensures fair and efficient allocation of memory resources
among programs.

Memory Sharing:

Despite the separation, programs sometimes need to communicate and share data. Memory
sharing mechanisms come into play here, allowing controlled access to specific portions of a
program's memory by other programs or the OS:

1. Shared Memory Segments: Specific regions of a program's virtual address space can be
marked as shared, allowing other programs or the OS to access them directly using their
own virtual addresses.

Fig. 3.4: Memory Sharing

2. Inter-Process Communication (IPC): Techniques like pipes, semaphores,
and message queues facilitate controlled communication and data exchange
between programs without directly sharing memory space.


Benefits of Memory Sharing:

▪ Efficiency: Enables efficient data exchange between programs, avoiding
unnecessary data copying.
▪ Collaboration: Allows programs to work together and share resources, enhancing
functionality.
▪ Resource Optimization: Reduces duplication of data across programs, maximizing
memory utilization.

Important Caveats:

▪ Sharing memory introduces additional complexity and security concerns. Proper access
control mechanisms must be implemented to avoid data corruption or leaks.
▪ IPC techniques can be slower than direct memory sharing, making them less suitable
for scenarios requiring high-speed data exchange.

Understanding the balance between memory protection and sharing is essential for designing
secure and efficient operating systems. Choosing the appropriate mechanism depends on the
specific needs of the application and communication requirements between programs.

3.5 MEMORY MANAGEMENT IN MULTIPROCESSOR SYSTEMS:

Multiprocessor systems, with their multiple CPUs working in tandem, bring new challenges to
the already complex world of memory management in operating systems. Let's explore how
memory is juggled in this parallel environment, considering both benefits and complexities:

Challenges:

▪ Shared Memory vs. Distributed Memory: Choosing between a single shared memory
pool accessible by all CPUs or allocating dedicated memory blocks to each CPU
(distributed memory) affects performance and complexity.
▪ Cache Coherence: Maintaining consistency of data across multiple caches when
different CPUs access the same memory location requires efficient protocols to avoid
stale data and ensure data integrity.
▪ Inter-processor Communication: CPUs need to coordinate and synchronize access to
shared resources and data, impacting performance and overhead.


Benefits:

▪ Parallelism: Multiple CPUs can work on independent tasks simultaneously,
significantly improving system performance for parallel applications.
▪ Increased Scalability: Adding more CPUs can further boost processing power and
handle larger workloads.
▪ Fault Tolerance: If one CPU fails, others can continue functioning, enhancing system
reliability.

Memory Management Strategies:

1. Uniform Memory Access (UMA): All CPUs share a single physical memory space,
offering fast access and simplicity. However, scalability can be limited due to potential
bottlenecks.

Fig. 3.5: Uniform Memory Access(UMA)

2. Non-Uniform Memory Access (NUMA): Memory is physically distributed
across multiple nodes, with CPUs closer to their local memory experiencing
faster access. This scales better than UMA but introduces cache coherence
complexities.

Fig. 3.6: Non-Uniform Memory Access (NUMA)


3. Distributed Shared Memory (DSM): Each CPU has its own local memory, but a
virtual shared address space provides the illusion of a single memory pool. Data
synchronization becomes crucial in this approach.

Fig. 3.7: Distributed Shared Memory(DSM)

Efficient Techniques:

▪ Page coloring: Assigning different "colors" to memory pages accessed by
different CPUs helps track data coherence and reduce cache invalidation
overhead.
▪ Lock-free algorithms: Implementing concurrency techniques like lock-free
algorithms for critical sections can minimize synchronization delays and improve
performance.
▪ Memory coherence protocols: Protocols like MESI (Modified, Exclusive, Shared,
Invalid) maintain data consistency across caches in various states, ensuring data
integrity.

Memory Management in Action:

Imagine two CPUs in a multiprocessor system using UMA architecture. Both need to access
data stored in the same memory page.

1. Page Fault: When CPU1 accesses the page for the first time, a page fault occurs, and
the page is loaded into main memory and assigned a "color" specific to CPU1.


2. Cache Coherence: If CPU2 needs to access the same page, it checks its cache and the
main memory. If not present, a coherence protocol like MESI updates CPU1's cache
state, invalidating its local copy, and fetches the latest data from main memory for both
CPUs.

Complexity and Trade-offs:

Optimizing memory management in multiprocessor systems involves balancing
performance, scalability, and data integrity. Choosing the right
architecture, employing efficient techniques, and addressing concurrency
considerations are all crucial elements.

FILE SYSTEMS AND STORAGE

Chapter-4 File Systems and Storage

File system concepts and organization, File system implementation and data structures, Disk
management and storage technologies, File system security and access control, Introduction to
solid-state drives (SSDs) and storage virtualization.

4.1 FILE SYSTEM CONCEPTS AND ORGANIZATION:

A file system is the foundation for organizing and accessing data on storage devices like hard
disk drives (HDDs) and solid-state drives (SSDs). It provides a structured mechanism for
storing, retrieving, and managing files and folders, making it vital for any operating system.
Let's dive deeper into the key concepts and organization of file systems:

Hierarchical Structure:

Imagine a tree, where the root is the main drive (e.g., C: on Windows, / on macOS/Linux).
Branches extending from the root are directories (folders), named containers for files and other
subdirectories. This nesting creates a hierarchical structure, allowing you to group related files
logically.

Directories can have multiple layers, forming "subdirectories" within parent directories. This
organization helps to break down large sets of files into manageable units and simplifies
navigation.

Files and Metadata:

Files are the fundamental units of storage, holding data like documents, images, applications,
etc. Each file has a unique name within its directory and associated metadata.

Metadata is like a file's passport, containing information beyond the data itself, such as:

• File size: Number of bytes it occupies.
• Creation and modification times: When the file was created and last
edited.
• File type: Identifies the format of the data (e.g., .txt, .jpg, .exe).
• Permissions: Who can access, read, write, or execute the file.


• Attributes: Additional information specific to the file type (e.g., image resolution, audio
bitrate).
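On POSIX systems, much of this metadata can be read with the stat() system call; a short sketch:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char *argv[])
{
    struct stat st;

    /* Retrieve the metadata of the file named on the command line. */
    if (argc < 2 || stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("size: %lld bytes\n", (long long)st.st_size);
    printf("mode: %o (permission bits)\n", st.st_mode & 0777);
    printf("last modified: %s", ctime(&st.st_mtime));
    return 0;
}
```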

Naming Conventions:

File and directory names vary depending on the file system. Many systems allow alphanumeric
characters, underscores, and hyphens, while others restrict symbols or enforce case sensitivity.

Extensions typically identify the file type (e.g., .docx for Word documents).

Understanding naming conventions ensures proper organization and prevents naming conflicts.

4.1.1 Types of File Systems:

Different file systems exist with varying features and functionalities. Some common types
include:

• FAT (File Allocation Table): Simple and widely used, but limited to small files and lacks
security features.
• NTFS (New Technology File System): Robust and flexible, supports large files,
security, and journaling.
• ext4 (Fourth Extended File System): Popular in Linux systems, efficient and supports
large files and advanced features.
• HFS+ (Hierarchical File System Plus): Used in macOS, supports journaling and
security.
• ZFS (Zettabyte File System): Focuses on data integrity, fault tolerance, and scalability
for large datasets.

4.1.2 Additional Organizational Features:

• Pathnames: A unique identifier for a file, specifying its location within the hierarchy
(e.g., C:\Users\Bard\Documents\report.txt).
• Symbolic links: Shortcuts to files located elsewhere in the file system, creating multiple
access points.
• Hidden files: System files or user-defined files kept invisible for easier management.

4.2 FILE SYSTEM IMPLEMENTATION AND DATA STRUCTURES:

File systems bridge the gap between the logical world of files and folders you see and the
physical world of sectors on storage devices. This involves intricate data structures and


algorithms to manage data efficiently and reliably. Let's delve deeper into the fascinating world
of file system implementation:

Disk Formatting:

Before diving into data structures, it's essential to understand disk formatting. Imagine
formatting a disk as creating a blank canvas on which the file system paints its structures. This
process partitions the disk into sectors (fixed-size chunks) and prepares them for data storage.

Data Structures:

• Inodes (Unix-like): These data structures hold crucial information about each file,
including size, location of data blocks, timestamps, permissions, and access control
lists. Think of them as detailed file passports residing in a dedicated table called the
"inode table" (see the struct sketch after this list).
• File Allocation Table (FAT): Used in simpler file systems like FAT32, the FAT keeps
track of which sectors belong to each file. Like a puzzle map, it chains a file's sectors
together, representing the file's data layout.
• Superblock: This dedicated block stores vital information about the entire file system,
such as block size, total blocks, free blocks, and location of important structures like
the inode table or FAT.
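
To make the inode idea concrete, here is an illustrative C struct showing the kind of fields a simplified Unix-style inode records. The field set is a teaching sketch, not any real on-disk format:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define NUM_DIRECT 12   /* direct block pointers, as in classic Unix designs */

/* Hypothetical, simplified inode layout */
struct inode {
    uint16_t mode;                  /* file type and permission bits */
    uint16_t uid;                   /* owner user ID */
    uint16_t links;                 /* hard-link count */
    uint32_t size;                  /* file size in bytes */
    time_t   atime, mtime, ctime;   /* access/modify/change timestamps */
    uint32_t direct[NUM_DIRECT];    /* block numbers of the file's data */
    uint32_t indirect;              /* block holding further block numbers */
};

int main(void) {
    printf("Illustrative inode occupies %zu bytes\n", sizeof(struct inode));
    return 0;
}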

File Allocation Strategies:

Data structures alone aren't enough. File systems need allocation strategies to efficiently place
file data on the disk. Here are some common approaches:

• Contiguous allocation: Ideally, files reside in consecutive sectors, minimizing seek
time and maximizing performance. However, fragmentation (unused gaps) can become
an issue.
• Linked allocation: Each block stores a pointer to the next block containing the file's
data, forming a chain (see the sketch after this list). While flexible for dynamic file
growth, it can be slow due to multiple disk seeks.
• Indexed allocation: An index table maps logical file blocks to physical disk sectors,
offering flexibility and better fragmentation management compared to linked
allocation.


• Journaling: To ensure data consistency and recover from unexpected events like
crashes, journaling file systems maintain a log of operations before committing them to
disk. This way, data remains intact even if interrupted during write operations.
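
As referenced above, here is a toy C sketch of the chain idea behind linked allocation and the FAT: the table holds, for each block, the number of the next block of the same file. The table contents and the end-of-chain marker are invented for illustration:

#include <stdio.h>

#define EOC 0xFFFFFFFFu   /* hypothetical end-of-chain marker */

/* Walk a FAT-style chain: fat[i] is the block that follows block i
   in the file, or EOC at the end of the file. */
void print_chain(const unsigned fat[], unsigned first_block) {
    for (unsigned b = first_block; b != EOC; b = fat[b])
        printf("block %u -> ", b);
    printf("end\n");
}

int main(void) {
    /* toy table: the file starts at block 2, continues at 5, then 3 */
    unsigned fat[8] = {EOC, EOC, 5, EOC, EOC, 3, EOC, EOC};
    print_chain(fat, 2);   /* prints: block 2 -> block 5 -> block 3 -> end */
    return 0;
}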

Performance Considerations:

File system implementation must optimize data access and minimize overhead. Here are some
crucial factors:

• Caching: Frequently accessed data is stored in memory (cache) for faster future
retrieval.
• Prefetching: Anticipating data needs and reading ahead on the disk can accelerate
future accesses.
• Fragmentation minimization: Allocation strategies and defragmentation tools aim to
reduce scattered file fragments, improving read/write performance.

4.3 DISK MANAGEMENT AND STORAGE TECHNOLOGIES:

Disk management and storage technologies are the silent conductors behind the data symphony,
ensuring efficient organization, access, and protection of your precious information. Let's
explore the key instruments in this orchestra:

Disk Partitioning:

Imagine having a single large room for all your belongings. It's chaotic, isn't it? Similarly, a
single unpartitioned disk can become messy and hard to manage. Disk partitioning divides the
physical disk into logical sections called partitions, like creating separate rooms for different
purposes.

Here are some benefits of partitioning:

• Organized storage: Separate partitions for operating systems, applications, and personal
data keeps things tidy and reduces the risk of accidental deletion.
• Improved performance: Spreading data across partitions can optimize disk access for
different types of files.
• Enhanced security: Isolating critical system files in a separate partition can improve
system integrity and data protection.


4.3.1 RAID (Redundant Array of Independent Disks):

RAID, or Redundant Array of Independent Disks, is a data storage technology that combines
multiple physical disks into a single logical unit. It's like having multiple bodyguards for your
precious data, ensuring its safety even if one bodyguard (disk) goes down.

Here's how RAID works:

Imagine you have important documents stored on a single hard drive. If that drive fails, those
documents are gone forever. That's where RAID comes in.

With RAID, you distribute your data, along with redundant copies or parity information,
across multiple hard drives. Think of it like storing the same documents in multiple safes:
even if one safe gets lost or broken, your documents are still safe in the others.

There are different types of RAID levels, each offering different levels of redundancy and
performance:

RAID 0:

(a) Stripes data across multiple disks. This means data is split into chunks and stored on
different disks simultaneously (see the sketch after the figure).
(b) Offers the best performance as multiple disks can be accessed simultaneously for
reading and writing.
(c) However, it provides no redundancy. If any disk fails, all data stored on it is lost.

Fig. 4.1: RAID 0 data striping
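
A tiny C sketch of the round-robin striping idea, assuming a hypothetical three-disk array; it only computes where each logical block would land:

#include <stdio.h>

int main(void) {
    int num_disks = 3;   /* hypothetical array size */
    /* in RAID 0, logical block i goes to disk i % num_disks */
    for (int block = 0; block < 6; block++)
        printf("logical block %d -> disk %d, stripe %d\n",
               block, block % num_disks, block / num_disks);
    return 0;
}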


RAID 1:

(a) Mirrors data on two disks. This means an exact copy of your data is stored on both
disks.
(b) Offers excellent redundancy. If one disk fails, the other disk takes over, and you don't
lose any data.
(c) Write performance is slightly slower than RAID 0, as every write must be performed
on both disks; reads, however, can be served by either disk.

Fig. 4.2: RAID 1 data mirroring

RAID 5:

(a) Distributes data and parity information across multiple disks. Parity information is like
a checksum that allows you to reconstruct the data if any disk fails (a worked sketch
follows the figures).
(b) Offers good redundancy and performance. You can lose one disk without losing any
data, and performance is still good as multiple disks can be accessed for reading.

Fig. 4.3: RAID 5 data distribution and parity


Fig. 4.4: RAID 5 data distribution and parity
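
As mentioned above, parity can be sketched with XOR. The toy C example below assumes a stripe holding one byte from each of three data disks; if any one byte is lost, it can be rebuilt from the other two plus the parity:

#include <stdio.h>

int main(void) {
    /* one stripe: a byte from each of three data disks (made-up values) */
    unsigned char d0 = 0x5A, d1 = 0x3C, d2 = 0xF0;

    /* parity is the XOR of all data bytes in the stripe */
    unsigned char parity = d0 ^ d1 ^ d2;

    /* if the disk holding d1 fails, d1 is recoverable from the survivors */
    unsigned char rebuilt_d1 = d0 ^ d2 ^ parity;

    printf("original d1 = 0x%02X, rebuilt d1 = 0x%02X\n", d1, rebuilt_d1);
    return 0;
}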

RAID 6:

(a) Similar to RAID 5, but adds an extra layer of parity information. This means you can
lose two disks without losing any data.
(b) Offers the best redundancy among common RAID levels. However, performance is
slightly slower than RAID 5 due to the additional calculations required for the extra
parity information.

Fig. 4.5: RAID 6 data distribution and dual parity

Choosing the right RAID level depends on your needs. If you prioritize performance and can
afford to lose data if one disk fails, RAID 0 might be a good option. If you prioritize redundancy
and don't mind slightly slower performance, RAID 1 or 5 might be better choices. RAID 6 is
the best option for maximum redundancy, but it comes with the slowest performance.

RAID is a valuable tool for protecting your data. By using RAID, you can ensure that your data
is safe even if a hard drive fails. However, it's important to remember that RAID is not a backup
solution. You should always have a backup of your data on a separate storage device, in case
of multiple drive failures or other disasters.


Volume Management:

Volume management goes beyond simple partitions, allowing you to dynamically resize,
extend, and shrink existing volumes without losing data. Imagine having walls in your rooms
that you can adjust without knocking everything down!

This provides flexibility for adapting to changing storage needs, such as increasing the size of
your system partition or reclaiming unused space from data-hungry applications.

Storage Area Networks (SANs):

Imagine a bustling city where multiple businesses (servers) all need access to a shared
warehouse (storage) full of crucial resources (data). That's essentially what a Storage Area
Network (SAN) does! It's a dedicated high-speed network designed to connect servers to shared
storage devices, like disk arrays and tape libraries, offering centralized data management and
improved access.

Think of it as a highway built just for data delivery, separate from the regular traffic network
used by applications and users. This dedicated access translates to several benefits:

Centralized Management:

Imagine running around managing individual warehouses for each business in the city.
Exhausting, right? With a SAN, you have one control center for all your storage needs. This
simplifies administration, provisioning, and maintenance of storage resources.

Fig. 4.6: Centralized Management


Increased Scalability:

Need to expand your storage capacity? With a SAN, adding more disks or storage arrays is like
building new warehouses in your city. You can easily scale your storage to meet growing data
demands without disrupting existing operations.

Improved Performance:

Dedicated data highways mean faster delivery! SANs typically use high-speed protocols like
Fibre Channel, significantly boosting data transfer rates compared to traditional storage
options. This translates to quicker server access and potentially smoother application
performance.

Enhanced Disaster Recovery:

Imagine a fire in one warehouse. With traditional, scattered storage, disaster could strike
multiple businesses. A SAN allows for data replication across its storage pool, so if one device
fails, data remains accessible from other locations, enabling faster recovery from unforeseen
events.

Flexibility and Security:

A SAN lets you mix and match different storage types (disk, tape, etc.) within a single pool,
providing tailored solutions for various needs. Additionally, centralized access control and
security measures on the SAN can help protect your valuable data from unauthorized access or
breaches.

However, deploying a SAN also comes with certain considerations:

• Cost: Setting up and maintaining a SAN can be more expensive than traditional storage
options due to specialized hardware and software requirements.
• Complexity: Implementing and managing a SAN requires skilled personnel and
technical expertise.
• Integration: Ensuring seamless integration with existing server infrastructure and
applications can be challenging.

Overall, SANs offer a powerful and flexible solution for organizations with large data storage
needs and demanding performance requirements. While the initial investment and technical
complexity might be higher, the long-term benefits of centralized management, scalability, and
performance often outweigh the drawbacks.


4.4 FILE SYSTEM SECURITY AND ACCESS CONTROL:

Keeping your files safe and secure is paramount in today's digital world. This is where file
system security and access control come in, acting as the gatekeepers of your digital fortress.
Let's delve into these crucial concepts:

User and Group Permissions:

Imagine entering a building with different levels of access depending on your role. Similarly,
users and groups in a file system are assigned specific permissions like:

• Read: Ability to view the file content.
• Write: Ability to modify the file content.
• Execute: Ability to run the file (applicable to programs).

These permissions can be assigned individually to users or collectively to groups, simplifying
access management for groups of users with similar needs.
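
On Unix-like systems, these permissions correspond to mode bits that can be changed with one call. A minimal sketch using the POSIX chmod() interface; "report.txt" is a placeholder filename that must already exist:

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    /* owner read+write, group read, others nothing (mode 0640) */
    if (chmod("report.txt", S_IRUSR | S_IWUSR | S_IRGRP) == -1) {
        perror("chmod");
        return 1;
    }
    printf("Permissions set to rw-r-----\n");
    return 0;
}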

File Ownership:

Every file has a designated "owner," typically the user who created it. The owner possesses
additional privileges like changing permissions and modifying ownership itself. Ownership
allows for granular control over sensitive files by granting exclusive access to specific users.

Access Control Lists (ACLs):

These are detailed lists specifying who can access a file and what they can do with it. Think of
them as personalized passports for each file, outlining user access rights beyond basic
permissions. ACLs provide fine-grained control and can be customized to cater to specific user
needs and security requirements.

Encryption:

Consider putting your valuables in a safe for extra protection. Similarly, file encryption
scrambles the data content using an algorithm, making it unreadable without the decryption
key. This provides an additional layer of security, especially for sensitive data like financial
records or personal information.


Auditing and Logging:

Just like security cameras in a building, file system auditing and logging track users' access
attempts and file modifications. This provides valuable insights into user activity and can help
identify potential security breaches or unauthorized access attempts.

Security Considerations:

Implementing robust file system security requires careful consideration of factors like:

• Password Strength: Enforcing strong password policies discourages unauthorized
access attempts.
• User Authentication: Multi-factor authentication adds an extra layer of security beyond
simple passwords.
• Vulnerability Management: Keeping software and firmware updated helps patch
security vulnerabilities that attackers might exploit.

4.5 INTRODUCTION TO SOLID-STATE DRIVES (SSDS) AND STORAGE VIRTUALIZATION:

Solid-State Drives (SSDs): Redefining Storage Speed

Imagine ditching your dusty record player for a sleek, lightning-fast MP3 player. That's the
revolutionary leap SSDs bring to storage technology. Instead of clunky, spinning disks, SSDs
rely on flash memory chips to store data, offering immense performance gains:

• Blazing-fast read/write speeds: Up to 100x faster than traditional HDDs for random
access, meaning applications load instantly and file transfers take seconds.
• Minimal access time: No need to wait for a disk to spin, resulting in instant
responsiveness and snappy performance.
• Improved reliability: No moving parts make SSDs less prone to physical damage and
data corruption.

• Lower power consumption: Reduced energy usage translates to longer battery life for
laptops and improved efficiency for servers.


Fig. 4.7: Solid State Drive (SSD)

SSD structure: Flash memory chips connected to a controller

• Flash Memory Chips: Store data electrically in interconnected cells, accessed directly
by the controller.
• Controller: The brain of the SSD, managing data flow, wear leveling, and error
correction.
• DRAM Cache: Temporary buffer for storing frequently accessed data, accelerating
performance.
• NAND Flash Types: Depending on the type (SLC, MLC, TLC, QLC), SSDs offer
varying levels of performance, endurance, and cost.

Storage Virtualization: Unifying the Digital Landscape

Storage virtualization transcends the limitations of physical storage devices by creating a
unified pool of accessible resources. Imagine scattered islands of data transformed into a vast,
interconnected continent, readily accessible by multiple servers and applications. This
transformative technology unlocks a range of benefits and revolutionizes data management.

Understanding the Core Concept:

At its heart, storage virtualization abstracts the physical characteristics of individual storage
devices (HDDs, SSDs, etc.) and presents a logical view of a single, centralized storage pool.
This virtual pool masks the complexities of underlying hardware, offering users and
applications a seamless, unified storage experience.


Key Technologies and Strategies:

Several technologies and strategies enable storage virtualization:

• Hardware: Specialized hardware like storage controllers and SAN fabrics facilitate
communication between servers and the virtual storage pool.
• Software: Virtualization software manages the pool, allocating storage space, handling
data movement, and ensuring data integrity.
• Data Replication: Copies of data are maintained on different physical devices for
redundancy and fault tolerance.
• Thin Provisioning: Virtual volumes are allocated logically and may promise more
space than physically exists; physical capacity is consumed only as data is actually
written.
• Tiered Storage: Data is automatically distributed across storage tiers with varying
performance and cost characteristics based on access frequency.

Benefits of Storage Virtualization:

• Centralized Management: Simplified administration and provisioning of storage
resources from a single console.
• Increased Scalability: Seamlessly add or remove storage capacity without significant
downtime.
• Improved Resource Utilization: Shared storage eliminates the need for dedicated disks
on each server, maximizing utilization.
• Enhanced Disaster Recovery: Data replication and redundancy ensure fast recovery
from hardware failures.
• Greater Flexibility: Dynamic allocation and customizable virtual disks provide
flexibility for diverse storage needs.
• Cost Reduction: Efficient resource utilization and centralized management often lead
to cost savings.

Types of Storage Virtualization:

Several types of storage virtualization cater to different needs:

• SAN (Storage Area Network): Dedicated high-speed network for block-level access to
shared storage resources.


• NAS (Network Attached Storage): File-level sharing of storage resources over a
standard network.
• DAS (Direct Attached Storage): Traditional physical connection of storage to a single
server.
• Converged Infrastructure: Integrates storage, compute, and networking resources into
a single, virtualized platform.

Choosing the Right Solution:

Selecting the optimal storage virtualization solution depends on several factors:

• Infrastructure: Existing hardware, network capabilities, and application requirements.
• Budget: Cost of hardware, software, and implementation.
• Performance and Capacity Needs: Required data access speeds and total storage
capacity.
• Security and Compliance: Data protection and regulatory considerations.

Storage virtualization is a powerful tool that reshapes the data landscape. By understanding its
core principles, benefits, and various types, you can leverage this technology to optimize your
storage resources, improve IT efficiency, and unlock new possibilities for your data center.


CHAPTER-5
DISTRIBUTED SYSTEMS
AND VIRTUALIZATION
Distributed operating systems and their challenges - Communication and synchronization in distributed
systems - Distributed file systems and naming - Virtual machines and hypervisors - Cloud computing
and virtualization technologies.

5.1 DISTRIBUTED OPERATING SYSTEM

A distributed operating system (DOS) manages a collection of independent, networked
computers and presents them to users as a single coherent system.

Distributed systems use many central processors to serve multiple real-time applications
and users. As a result, data processing jobs are distributed between the processors.

A DOS connects multiple computers via a single communication channel, and each of these
systems has its own processor and memory. These CPUs communicate via high-speed buses
or telephone lines. Individual systems that communicate via a single channel are regarded as
a single entity; they are also known as loosely coupled systems.

Fig. 5.1: Distributed Operating System


Types of Distributed Operating System:


➢ Client-Server Systems
➢ Peer-to-Peer Systems
➢ Middleware
➢ Three-tier
➢ N-tier
Client-Server System
This type of system requires the client to request a resource, after which the server gives
the requested resource. When a client connects to a server, the server may serve multiple
clients at the same time.
Client-Server Systems are also referred to as "Tightly Coupled Operating Systems".
This system is primarily intended for multiprocessors and homogeneous multicomputers.
Client-Server Systems function as a centralized server since they approve all requests
issued by client systems.
Server systems can be divided into two parts:
1. Compute Server System
This system provides an interface to which the client sends requests to execute an action. The
server carries out the action and then sends the response and result back to the client.

2. File Server System


It provides a file system interface for clients, allowing them to execute actions like file creation,
updating, deletion, and more.

Middleware

Middleware enables the interoperability of all applications running on different operating
systems. Those programs are capable of transferring data to one another by using these
services.

Peer-to-Peer System

The nodes play an important role in this system. The task is evenly distributed among
the nodes. Additionally, these nodes can share data and resources as needed. Once
again, they require a network to connect.


The Peer-to-Peer System is known as a "Loosely Coupled System". This concept is used
in computer network applications since they contain a large number of processors that
do not share memory or clocks.
Each processor has its own local memory, and they interact with one another via a
variety of communication methods like telephone lines or high-speed buses.

Three-tier

The information about the client is saved in the intermediate tier rather than in the client, which
simplifies development. This type of architecture is most commonly used in online
applications.

N-tier

When a server or application has to transmit requests to other enterprise services on the
network, n-tier systems are used.

Features of Distributed Operating System:

There are various features of the distributed operating system. Some of them are as follows:

Openness

It means that the system's services are openly exposed through interfaces. Furthermore, these
interfaces only give the service syntax: for example, the type of a function, its return type, its
parameters, and so on. Interface Definition Languages (IDL) are used to create these interfaces.

Scalability

It refers to the fact that the system's efficiency should not vary as new nodes are added to the
system. Furthermore, the performance of a system with 100 nodes should be the same as that
of a system with 1000 nodes.

Resource Sharing

Its most essential feature is that it allows users to share resources. They can also share resources
in a secure and controlled manner. Printers, files, data, storage, web pages, etc., are examples
of shared resources.


Flexibility

A DOS gains flexibility from its modular design, which lets it deliver a more advanced range
of high-level services. The quality and completeness of the kernel or microkernel simplify the
implementation of such services.

Transparency

It is the most important feature of the distributed operating system. The primary purpose of a
distributed operating system is to hide the fact that resources are shared. Transparency also
implies that the user should be unaware that the resources he is accessing are shared;
furthermore, the system should appear to the user as a single, independent unit.

Heterogeneity

The components of distributed systems may differ and vary in operating systems, networks,
programming languages, computer hardware, and implementations by different developers.

Fault Tolerance

Fault tolerance is the ability of the system to let users continue their work even if the software
or hardware fails.

Challenges

There are various disadvantages of the distributed operating system. Some of them are
as follows:

The system must decide which jobs must be executed, when they must be executed, and
where they must be executed. A scheduler has limitations, which can lead to
underutilized hardware and unpredictable runtimes.
It is hard to implement adequate security in DOS since the nodes and connections must
be secured.
The database connected to a DOS is relatively complicated and hard to manage in
contrast to a single-user system.
The underlying software is extremely complex and is not understood very well
compared to other systems.
The more widely distributed a system is, the more communication latency can be
expected. As a result, teams and developers must choose between availability,
consistency, and latency.


These systems aren't widely available because they're thought to be too expensive.
Gathering, processing, presenting, and monitoring hardware usage metrics for big clusters
can be a real issue.
5.2 COMMUNICATION AND SYNCHRONIZATION IN DISTRIBUTED SYSTEMS:
A distributed system is a collection of computers connected via a high-speed
communication network. In a distributed system, the hardware and software
components communicate and coordinate their actions by message passing.

Each node in a distributed system can share its resources with other nodes, so
there is a need for proper allocation of resources to preserve the state of
resources and help coordinate the several processes.

To resolve such conflicts, synchronization is used. Synchronization in
distributed systems is achieved via clocks: physical clocks are used to adjust
the time of nodes, and each node in the system can share its local time with the
other nodes. The time is set based on UTC (Coordinated Universal Time), which
is used as the reference clock for the nodes in the system. Clock synchronization
can be achieved in two ways:
External and Internal Clock Synchronization.

External clock synchronization is the one in which an external reference clock is
present. It is used as a reference, and the nodes in the system can set and adjust their
time accordingly.
Internal clock synchronization is the one in which each node shares its time with
other nodes, and all the nodes set and adjust their times accordingly.

There are 2 types of clock synchronization algorithms: Centralized and Distributed.

Centralized is the one in which a time server is used as a reference. The single time
server propagates its time to the nodes, and all the nodes adjust their time accordingly.
It is dependent on a single time server, so if that node fails, the whole system will lose
synchronization. Examples of centralized algorithms are the Berkeley Algorithm, the
Passive Time Server, the Active Time Server, etc.


Distributed is the one in which no centralized time server is present. Instead, the
nodes adjust their time by using their local time and then taking the average of the
differences in time with other nodes. Distributed algorithms overcome the issues of
centralized algorithms, such as scalability and single-point failure. Examples of
distributed algorithms are the Global Averaging Algorithm, the Localized Averaging
Algorithm, NTP (Network Time Protocol), etc.
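
A toy C sketch of the averaging idea used by Berkeley-style algorithms, assuming the coordinator has already polled each node's clock offset (the offsets below are invented):

#include <stdio.h>

int main(void) {
    /* offsets (ms) of each node's clock relative to the coordinator;
       index 0 is the coordinator itself */
    int offset[] = {0, -15, 25, 10};
    int n = sizeof offset / sizeof offset[0];

    /* average the offsets, then tell each node how far to adjust */
    long sum = 0;
    for (int i = 0; i < n; i++) sum += offset[i];
    long avg = sum / n;

    for (int i = 0; i < n; i++)
        printf("node %d adjusts its clock by %ld ms\n", i, avg - offset[i]);
    return 0;
}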

Centralized clock synchronization algorithms suffer from two major drawbacks:


They are subject to a single-point failure. If the time-server node fails, the clock
synchronization operation cannot be performed. This makes the system unreliable.
Ideally, a distributed system should be more reliable than its individual nodes. If one
goes down, the rest should continue to function correctly.
From a scalability point of view, it is generally not acceptable to get all the time requests
serviced by a single-time server. In a large system, such a solution puts a heavy burden
on that one process.
Note:
Distributed algorithms overcome these drawbacks as there is no centralized time-server
present. Instead, a simple method for clock synchronization may be to equip each node of the
system with a real-time receiver so that each node’s clock can be independently synchronized
in real-time. Multiple real-time clocks (one for each node) are normally used for this purpose.

5.3 DISTRIBUTED FILE SYSTEMS AND NAMING


Naming:
Names are data items used to refer to resources, in order to access them, specify their use,
describe them, look up details about them, etc. Names may be typed, depending on the object
they refer to.
Naming Terms:
• Namespace
The range of names supported by a schema or system.
• Naming domain
The name space with a single authority.
• Naming scheme
The structure or syntax from which names are constructed.
Naming Characteristics - Size:

• Fixed-size:
1. Examples: IP addresses, memory addresses, phone numbers (sort of).
2. Pros: Easier to handle.
3. Cons: Finite range.
• Infinite:
1. Example: Email addresses (sort of).
2. Pros: Allows for an indefinite range; allows integration of name spaces.
3. Cons: It can be harder to deal with.
Naming Characteristics - Presentation:
• User-oriented: Formatted in a way that users can understand and use.
Examples: 'google.com', 'print server'.
• Machine-oriented: Formatted for efficient processing by software rather than people.
Example: 216.58.198.110.
Naming Characteristics - Purity:
• A pure name has no structure to it; it can only be used to compare equality with another
name.
• An impure name has some structure; the name itself says something about the object.
Examples: IP addresses (network and host IDs), absolute file paths (which give location),
room numbers like 'S4.01'.
• An impure name can be physically- or organisationally-oriented.
• Physically-oriented:
Some physical layout is encoded in the name (such as 'room S4.01').


• Organisationally-oriented:
Some information about how objects and resources are organised is included (such as
file paths).
Naming Characteristics - Scope:
• Global scope:
The name alone identifies the object it refers to, regardless of where the name is used.
Example: google.com, a book's ISBN.
• Namespace-specific:
The same name may be used in another namespace for a different object.
Example: 'room 4-12' has relevance in one building, but not worldwide.
Naming Characteristics - Context:
Consider paths within a naming hierarchy.
• Context-dependent:
A path may resolve to a different identifier depending on the context.
Example: phone extension numbers.
• Absolute:
A path resolves to the same identifier, regardless of context.
Example: a full postal address.
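
On POSIX systems, this distinction shows up directly in pathnames: a relative path is context-dependent (it resolves against the current working directory), while realpath() turns it into an absolute, context-independent name. A minimal sketch; the relative path is a placeholder:

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void) {
    char resolved[PATH_MAX];
    /* "docs/report.txt" is a placeholder relative path */
    if (realpath("docs/report.txt", resolved) == NULL) {
        perror("realpath");
        return 1;
    }
    printf("Absolute name: %s\n", resolved);
    return 0;
}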

5.4 WHY USE A VIRTUAL MACHINE AND HYPERVISOR?

VIRTUAL MACHINE:
• A virtual machine is designed to mimic the function of a physical computing device or
server using cloud-based parts known as virtual hardware devices.
• These virtual hardware devices mimic physical computing components to function as a
traditional computer and server setup.
• Using virtual RAM, a virtual desktop, and a cloud-based CPU, the virtual machine can
perform many of the tasks a traditional machine does without the need for as much
physical equipment to power it. The virtual machine is stored within a part of the host
computer separately from other resources. The VM software and any apps that run
together with it are known as guests.


• There can be two types of virtual machines. While process VMs only run a single
process at a time, system VMs work to replicate an entire operating system and any
required applications to give the same functions as a desktop. Several process VMs may
run at once within the system VM setup to make a virtual desktop setup possible.
• While many platforms exist to build and run a virtual machine, each with its own
features and tools, there are some common traits and characteristics that each
virtualization vendor possesses.
• These typically include the ability to build and run several virtual machines
simultaneously on the same host but keep them apart for the average user’s purposes.
• These machines are isolated so they run separately, with the users of one virtual
machine unable to access or view another running on the same host. This sandbox, as
it’s known, allows operating systems and apps to run independently.
• One of the best reasons for virtualization within an enterprise can be the ability to add
more machines to the same server easily. Deploying and managing a new machine is
much simpler when using virtual resources than physical ones, as the administrator
doesn’t need any new equipment.
• Resources can simply be reallocated to allow another virtual machine to run on the
same server. To create a new virtual machine, most administrators will use a hypervisor.
This allows for easy creation and management of multiple virtual machines as needed.

HYPERVISOR

A hypervisor is a mandatory component that makes it possible for the virtual machine
to run and is sometimes called a virtual machine monitor. A hypervisor’s main job is to
decouple the hardware powering the network from the operating system and other
software running on the virtual machine.

It isolates each virtual machine, allowing them to run separate operating systems
simultaneously and protecting them from potential security issues. It also allows each
virtual hardware setup to share the physical resources that power the machines that keep
them running by dynamically sharing resources like RAM and bandwidth.

With enough historical data gathered from the system, the software can begin to predict
how the need for resources changes through the day and identify patterns, automating
the movement of resources to and from the cloud to the VMs that need them. By setting


protocols for each operating system to follow, the hypervisor keeps the virtual machine
from crashing while this is happening. Proper allocation of resources keeps the virtual
machine running and prevents any issues with programs competing for resources like
RAM.

The hypervisor keeps each virtual machine separate so that if one crashes or experiences
a fatal error, the others continue to run without issue. Through sandboxing, users of a
VM can try using new apps or operating systems without fear of them crashing the
whole system. With cybercrime being a critical issue for enterprises, this also ensures
that if a malware attack is effective on one machine, the others will be protected from
the same fate. This separation between virtual machines enforced by the hypervisor
provides an extra layer of security by preventing one machine from causing an issue
with another one.

Three modules exist within the hypervisor. First, the dispatcher routes the directions of
each virtual machine instance to the allocator or the interpreter for execution. With these
directions sent, the allocator responds to the dispatcher’s commands to determine the
resources needed and allocates them. Last, the interpreter module has stored routines
that are executed based on the allocator’s commands.

There are two types of hypervisors in popular use:

• Type 1, also called native or bare metal. This type of hypervisor runs on the host
directly, using simple programming that doesn’t require its own operating system to
function. With a Type 1 hypervisor running, the host machine must be dedicated to this
task.
• Type 2, a hosted hypervisor. This software runs through an app instead of on the host
itself by using the host computer's operating system to carry out its commands. A Type
2 hypervisor may function slower than a Type 1, as all of its commands must pass
through the host operating system first.

5.5 CLOUD COMPUTING AND VIRTUALIZATION TECHNOLOGIES


Moving to virtualization is becoming standard for enterprises that want to stay remote in a post-
COVID-19 world. Not only will virtualizing machines and the user desktop give employees
more flexibility, but executives can also see lowered operational expenses and better reliability
from a virtual setup.


Virtualization in Cloud Computing


Virtualization is a technique for separating a service from the underlying physical
delivery of that service. It is the process of creating a virtual version of something,
such as computer hardware.
It was initially developed during the mainframe era. It involves using specialized
software to create a virtual or software-created version of a computing resource rather
than the actual version of the same resource. With the help of Virtualization, multiple
operating systems and applications can run on the same machine and its same hardware
at the same time, increasing the utilization and flexibility of hardware. In other words,
one of the main cost-effective, hardware-reducing, and energy-saving techniques used
by cloud providers is Virtualization. Virtualization allows sharing of a single physical
instance of a resource or an application among multiple customers and organizations at
one time. It does this by assigning a logical name to physical storage and providing a
pointer to that physical resource on demand. The term virtualization is often
synonymous with hardware virtualization, which plays a fundamental role in efficiently
delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing. Moreover,
virtualization technologies provide a virtual environment for not only executing
applications but also for storage, memory, and networking.
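
The "logical name plus pointer" idea above can be pictured as an indirection table. The toy C sketch below, with an invented table, resolves virtual block numbers to physical locations on demand:

#include <stdio.h>

/* Toy storage-virtualization map: each virtual block number resolves
   on demand to a (physical device, physical block) pair. */
struct mapping { int device; int block; };

int main(void) {
    /* hypothetical table maintained by the virtualization layer */
    struct mapping table[4] = { {0, 17}, {1, 3}, {0, 42}, {2, 8} };

    for (int vblock = 0; vblock < 4; vblock++)
        printf("virtual block %d -> device %d, block %d\n",
               vblock, table[vblock].device, table[vblock].block);
    return 0;
}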

Fig. 5.2: Virtualization

• Host Machine: The machine on which the virtual machine is going to be built is
known as Host Machine.
• Guest Machine: The virtual machine is referred to as a Guest Machine.


Work of Virtualization in Cloud Computing

Virtualization has a prominent impact on cloud computing. Users store their data in the cloud,
and with the help of virtualization they gain the extra benefit of sharing the underlying
infrastructure. Cloud vendors take care of the required physical resources, but they charge
significant amounts for these services, which affects every user and organization.
Virtualization lets users and organizations have the services a company requires maintained
by external (third-party) providers, which helps reduce costs to the company. This is how
virtualization works in cloud computing.

Benefits of Virtualization

• More flexible and efficient allocation of resources.
• Enhances development productivity.
• Lowers the cost of IT infrastructure.
• Remote access and rapid scalability.
• High availability and disaster recovery.
• Pay-per-use of the IT infrastructure on demand.
• Enables running multiple operating systems.

Drawback of Virtualization

• High Initial Investment: Moving to the cloud requires a high initial investment,
though it helps reduce a company's costs over time.
• Learning New Infrastructure: As companies shift from their own servers to the
cloud, they need highly skilled staff who can work with the cloud easily, which
means hiring new staff or training current staff.
• Risk of Data: Hosting data on third-party resources can put the data at risk, as it
may be exposed to attack by hackers or crackers.

Types of Virtualization

 Application Virtualization

 Network Virtualization

 Desktop Virtualization

 Storage Virtualization


 Server Virtualization

 Data virtualization

• Application Virtualization: Application virtualization helps a user to have


remote access to an application from a server. The server stores all personal
information and other characteristics of the application but can still run on a
local workstation through the internet. An example of this would be a user who
needs to run two different versions of the same software. Technologies that use
application virtualization are hosted applications and packaged applications.
• Network Virtualization: The ability to run multiple virtual networks, each with a
separate control and data plane, co-existing together on top of one physical
network. Each can be managed by individual parties that are potentially
confidential to each other. Network virtualization provides a facility to create
and provision virtual networks, logical switches, routers, firewalls, load
balancers, Virtual Private Networks (VPN), and workload security within days
or even weeks.

Fig. 5.3: Types of Virtualization

• Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely


stored on a server in the data center. It allows the user to access their desktop
virtually, from any location by a different machine. Users who want specific
operating systems other than Windows Server will need to have a virtual desktop.


The main benefits of desktop virtualization are user mobility, portability, and easy
management of software installation, updates, and patches.
• Storage Virtualization: Storage virtualization presents an array of servers
managed by a virtual storage system. The servers aren't aware of exactly where
their data is stored and instead function more like worker bees in a hive. It allows
storage from multiple sources to be managed and utilized as a single repository.
Storage virtualization software maintains smooth operations, consistent
performance, and a continuous suite of advanced functions despite changes,
breakdowns, and differences in the underlying equipment.
• Server Virtualization: This is a kind of virtualization in which the masking of
server resources takes place. Here, the central server (physical server) is divided
into multiple virtual servers by changing the identity numbers and processors, so
each virtual server can run its own operating system in an isolated manner, while
each sub-server knows the identity of the central server. It increases performance
and reduces operating costs by deploying main server resources into sub-server
resources. It's beneficial in virtual migration, reducing energy consumption,
reducing infrastructural costs, etc.

Fig. 5.4: Server Virtualization

• Data Virtualization: This is the kind of virtualization in which data is collected
from various sources and managed in a single place, without exposing technical
details such as how the data is collected, stored, and formatted. The data is then
arranged logically, so that its virtual view can be accessed remotely by the
interested stakeholders and users through various cloud services. Many big
companies provide such services, such as Oracle, IBM, AtScale, and CData.


Example Lab programs for Advanced Operating System

UNIX Commands

Shell Programming

UNIX System Calls

CPU Scheduling Algorithms

Process Synchronization (Semaphores, Producer-Consumer Problem)

Deadlock Avoidance (Dining Philosophers, Banker's Algorithm)

Memory Allocation and Management

Page Replacement Techniques

Disk Scheduling Algorithms


DEMONSTRATION UNIX COMMANDS


1. ls (List) Command:
Aim: To list files and directories in the current directory or specified directory.
Demonstration: ls
Expected Result : List of files and directories in the current directory.
2. cd (Change Directory) Command:
Aim: To change the current working directory.
Demonstration: cd /path/to/directory
Expected Result: The current working directory changes to /path/to/directory.
3. mkdir (Make Directory) Command:
Aim: To create a new directory.
Demonstration: mkdir new_directory
Expected Result: Create a new directory named "new_directory".
4. cp (Copy) Command:
Aim: To copy files and directories.
Demonstration: cp source_file destination_directory
Expected Result: Copy the source file to the destination directory.
5. mv (Move) Command:
Aim: To move or rename files and directories.
Demonstration: mv source_file destination_directory
Expected Result : Move the source file to the destination directory or rename it.
6. rm (Remove) Command:
Aim: To delete files and directories.
Demonstration: rm file_to_delete
Expected Result: Delete the specified file.

These demonstrations illustrate how each command works and what results you can expect
when using them in a UNIX-like operating system. Remember to replace placeholders like
source_file, destination_directory, and file_to_delete with actual file names or directory paths
when using these commands.


Shell Programming
Aim: To create a shell script that takes two numbers as input and outputs their sum.

Algorithm:

✓ Start
✓ Accept two numbers as input from the user.
✓ Add the two numbers together.
✓ Print the result.
✓ Stop
Program:
#!/bin/bash
# Prompt the user to enter the first number
echo "Enter the first number:"
read num1
# Prompt the user to enter the second number
echo "Enter the second number:"
read num2
# Add the two numbers together
sum=$((num1 + num2))
# Print the result
echo "The sum of $num1 and $num2 is: $sum"

Output:
Enter the first number:
5

Enter the second number:

7
The sum of 5 and 7 is: 12

Result: The shell script successfully takes two numbers as input from the user, calculates their
sum, and prints the result. This demonstrates a basic example of shell programming for
performing arithmetic operations.


UNIX System Calls

Aim: Create a C program using UNIX system calls to demonstrate file I/O operations.
Algorithm:
✓ Start
✓ Open a file using the open() system call.
✓ Write some data to the file using the write() system call.
✓ Close the file descriptor.
✓ Open the file again for reading.
✓ Read the data from the file using the read() system call.
✓ Close the file descriptor.
✓ Print the data read from the file.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#define BUF_SIZE 1024
int main() {
int fd;
ssize_t num_read;
char buffer[BUF_SIZE];
// Open file for writing
fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR |
S_IWUSR);
if (fd == -1) {
perror("open");
exit(EXIT_FAILURE);
}
// Write data to the file
if (write(fd, "Hello, World!\n", 14) == -1) {
perror("write");
exit(EXIT_FAILURE);
}

// Close the file


if (close(fd) == -1) {
perror("close");
exit(EXIT_FAILURE);
}
// Open file for reading
fd = open("example.txt", O_RDONLY);
if (fd == -1) {
perror("open");
exit(EXIT_FAILURE);
}
// Read data from the file
num_read = read(fd, buffer, BUF_SIZE);
if (num_read == -1) {
perror("read");
exit(EXIT_FAILURE);
}
// Close the file
if (close(fd) == -1) {
perror("close");
exit(EXIT_FAILURE);
}
// Print the data read from the file
printf("Data read from file: %.*s", (int)num_read, buffer);
return 0;
}
Output:
Data read from file: Hello, World!

Result: The program successfully demonstrates file I/O operations using UNIX system calls.
It opens a file, writes data to it, reads the data back, and prints it to the console.


CPU Scheduling Algorithms


Aim: Implement the First Come First Serve (FCFS) CPU scheduling algorithm.
Algorithm:
✓ Start
✓ Read the number of processes and their burst times.
✓ Sort the processes based on their arrival time.
✓ Execute the processes in the order of their arrival time (FCFS).
✓ Calculate waiting time and turnaround time for each process.
✓ Calculate average waiting time and average turnaround time.
✓ Stop
Program
#include <stdio.h>
void fcfs(int n, int burst_time[]) {
int waiting_time[n], turnaround_time[n];
float avg_waiting_time = 0, avg_turnaround_time = 0;
// Calculate waiting time and turnaround time for the first process
waiting_time[0] = 0;
turnaround_time[0] = burst_time[0];
// Calculate waiting time and turnaround time for the rest of the processes
for (int i = 1; i < n; i++) {
waiting_time[i] = waiting_time[i - 1] + burst_time[i - 1];
turnaround_time[i] = waiting_time[i] + burst_time[i];
}
// Calculate average waiting time and average turnaround time
for (int i = 0; i < n; i++) {
avg_waiting_time += waiting_time[i];
avg_turnaround_time += turnaround_time[i];
}
avg_waiting_time /= n;
avg_turnaround_time /= n;
// Print the results
printf("Process\t Burst Time\t Waiting Time\t Turnaround Time\n");
for (int i = 0; i < n; i++) {


printf("P%d\t %d\t\t %d\t\t %d\n", i + 1, burst_time[i], waiting_time[i],
turnaround_time[i]);
}
printf("Average Waiting Time: %.2f\n", avg_waiting_time);
printf("Average Turnaround Time: %.2f\n", avg_turnaround_time);
}
int main() {
int n;
printf("Enter the number of processes: ");
scanf("%d", &n);
int burst_time[n];
printf("Enter burst time for each process:\n");
for (int i = 0; i < n; i++) {
printf("Process %d: ", i + 1);
scanf("%d", &burst_time[i]);
}
fcfs(n, burst_time);
return 0;
}
Output:
Enter the number of processes: 3
Enter burst time for each process:
Process 1: 10
Process 2: 5
Process 3: 8
Process Burst Time Waiting Time Turnaround Time
P1 10 0 10
P2 5 10 15
P3 8 15 23
Average Waiting Time: 8.33
Average Turnaround Time: 16.00
Result: The program calculates and displays the waiting time and turnaround time for each
process using the First Come First Serve (FCFS) CPU scheduling algorithm. It also calculates
the average waiting time and average turnaround time for all processes.

Process Synchronization (Semaphores, Producer-Consumer Problem)

Aim: Implement the Producer-Consumer problem using semaphores for process
synchronization.

Algorithm:
✓ Start
✓ Initialize semaphores for empty and full buffer.
✓ Create producer and consumer processes.
✓ Implement producer function:
✓ Wait on empty semaphore (decrement).
✓ Produce an item and add it to the buffer.
✓ Signal full semaphore (increment).
✓ Implement consumer function:
✓ Wait on full semaphore (decrement).
✓ Consume an item from the buffer.
✓ Signal empty semaphore (increment).
✓ Repeat the producer and consumer processes.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>
#define BUFFER_SIZE 5
sem_t empty, full;
int buffer[BUFFER_SIZE];
int in = 0, out = 0;
void *producer(void *arg) {
int item = 1;
while (1) {
// Produce item


sleep(1);
// Wait for empty buffer
sem_wait(&empty);

// Produce item and add it to the buffer
buffer[in] = item;
printf("Produced item: %d\n", item++);
in = (in + 1) % BUFFER_SIZE;
// Signal that the buffer is full
sem_post(&full);
}
}
void *consumer(void *arg) {
while (1) {
// Wait for full buffer
sem_wait(&full);
// Consume item from the buffer
int item = buffer[out];
printf("Consumed item: %d\n", item);
out = (out + 1) % BUFFER_SIZE;
// Signal that the buffer is empty
sem_post(&empty);
}
}
int main() {
// Initialize semaphores
sem_init(&empty, 0, BUFFER_SIZE);
sem_init(&full, 0, 0);
// Create producer and consumer threads
pthread_t producer_thread, consumer_thread;
pthread_create(&producer_thread, NULL, producer, NULL);
pthread_create(&consumer_thread, NULL, consumer, NULL);
// Join threads
pthread_join(producer_thread, NULL);

pthread_join(consumer_thread, NULL);

// Destroy semaphores
sem_destroy(&empty);
sem_destroy(&full);
return 0;
}
Output:
Produced item: 1
Consumed item: 1
Produced item: 2
Consumed item: 2
Produced item: 3
Consumed item: 3
Produced item: 4
Consumed item: 4
Produced item: 5
Consumed item: 5
Result: The program successfully solves the Producer-Consumer problem using semaphores
for process synchronization. It ensures that the producer produces items only when the buffer
has space, and the consumer consumes items only when the buffer is not empty. This prevents
issues such as race conditions and buffer overflow/underflow.


Deadlock Avoidance (Dining Philosophers, Banker's Algorithm)

Aim: Implement the Dining Philosophers problem, using resource ordering to avoid
deadlock.

Algorithm:
✓ Start
✓ Each philosopher is represented as a thread.
✓ Use mutex locks to represent the forks. Each fork is initially available.
✓ When a philosopher wants to eat, they need to acquire both the left and right forks.
✓ Impose a global ordering on the forks: each philosopher always acquires the
lower-numbered fork first and the higher-numbered fork second.
✓ This ordering makes a circular wait impossible, so no deadlock can occur.
✓ After eating, the philosopher releases both forks.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
#define NUM_PHILOSOPHERS 5
pthread_mutex_t forks[NUM_PHILOSOPHERS];
void *philosopher(void *arg) {
int id = *((int *) arg);
int left_fork = id;
int right_fork = (id + 1) % NUM_PHILOSOPHERS;
// Always acquire the lower-numbered fork first. This global ordering
// on the forks makes a circular wait impossible, avoiding deadlock.
int first_fork = (left_fork < right_fork) ? left_fork : right_fork;
int second_fork = (left_fork < right_fork) ? right_fork : left_fork;
while (1) {
// Thinking
printf("Philosopher %d is thinking\n", id);
// Pick up the lower-numbered fork
pthread_mutex_lock(&forks[first_fork]);
printf("Philosopher %d picks up fork %d\n", id, first_fork);
// Pick up the higher-numbered fork
pthread_mutex_lock(&forks[second_fork]);
printf("Philosopher %d picks up fork %d\n", id, second_fork);
// Eating
printf("Philosopher %d is eating\n", id);
// Put down both forks
pthread_mutex_unlock(&forks[second_fork]);
printf("Philosopher %d puts down fork %d\n", id, second_fork);
pthread_mutex_unlock(&forks[first_fork]);
printf("Philosopher %d puts down fork %d\n", id, first_fork);
}
}
int main() {
pthread_t philosophers[NUM_PHILOSOPHERS];
int philosopher_ids[NUM_PHILOSOPHERS];
// Initialize mutex locks for each fork
for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_mutex_init(&forks[i], NULL);
}
// Create philosopher threads
for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
philosopher_ids[i] = i;
pthread_create(&philosophers[i], NULL, philosopher, &philosopher_ids[i]);
}
// Join philosopher threads
for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_join(philosophers[i], NULL);
}
// Destroy mutex locks
for (int i = 0; i < NUM_PHILOSOPHERS; i++) {
pthread_mutex_destroy(&forks[i]);
}

return 0;
}
Output:
Philosopher 0 is thinking
Philosopher 0 picks up fork 0
Philosopher 0 picks up fork 1
Philosopher 0 is eating
Philosopher 0 puts down fork 1
Philosopher 0 puts down fork 0
Philosopher 1 is thinking
Philosopher 1 picks up fork 1
Philosopher 1 picks up fork 2
Philosopher 1 is eating
Philosopher 1 puts down fork 2
Philosopher 1 puts down fork 1
Result: The program simulates the Dining Philosophers problem using mutex locks to
represent the forks and avoids deadlock by imposing a global ordering on fork acquisition.
Because every philosopher picks up the lower-numbered fork first, a circular wait can never
form, and each philosopher eventually eats and releases both forks.


Memory Allocation and Management


Aim: Demonstrate dynamic memory allocation and deallocation in C.
Algorithm:
Allocate memory dynamically using the malloc() function.
Use the allocated memory for storing data.
Deallocate the memory when it is no longer needed using the free() function.
Program
#include <stdio.h>
#include <stdlib.h>
int main() {
int n;
int *arr;
// Ask the user for the size of the array
printf("Enter the size of the array: ");
scanf("%d", &n);
// Allocate memory dynamically for the array
arr = (int *)malloc(n * sizeof(int));
if (arr == NULL) {
printf("Memory allocation failed.\n");
return 1;
}
// Initialize the array elements
printf("Enter %d elements:\n", n);
for (int i = 0; i < n; i++) {
scanf("%d", &arr[i]);
}
// Print the array elements
printf("Array elements are:\n");
for (int i = 0; i < n; i++) {
printf("%d ", arr[i]);
}
printf("\n");
// Deallocate the dynamically allocated memory
free(arr);

return 0;
}
Output:
Enter the size of the array: 5
Enter 5 elements:
1
2
3
4
5
Array elements are:
1 2 3 4 5

Result: The program demonstrates dynamic memory allocation and deallocation in C. It asks the
user for the size of an array, allocates memory for the array dynamically, initializes the array
elements, prints them, and finally deallocates the dynamically allocated memory to prevent
memory leaks.


Page Replacement Techniques

Aim: Implement the Least Recently Used (LRU) page replacement algorithm.
Algorithm:
✓ Start
✓ Maintain a data structure, such as a queue or a list, to keep track of the pages currently in
memory and their access history.
✓ When a page needs to be replaced:
✓ Identify the page that has not been accessed for the longest time (least recently used).
✓ Remove that page from memory and replace it with the new page.
✓ Update the access history of pages whenever a page is accessed.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#define MAX_FRAMES 3
typedef struct Node {
int data;
struct Node *next;
} Node;
Node *head = NULL;
int frame_count = 0;
void display() {
Node *temp = head;
while (temp != NULL) {
printf("%d ", temp->data);
temp = temp->next;
}
printf("\n");
}
void add_page(int page) {
if (frame_count < MAX_FRAMES) {
// Add page to an empty frame
Node *new_node = (Node *)malloc(sizeof(Node));

new_node->data = page;
new_node->next = head;
head = new_node;
frame_count++;
} else {
// Replace the least recently used page
Node *temp = head;
Node *prev = NULL;
while (temp->next != NULL) {
prev = temp;
temp = temp->next;
}
prev->next = NULL;
free(temp);
// Add the new page to the front
Node *new_node = (Node *)malloc(sizeof(Node));
new_node->data = page;
new_node->next = head;
head = new_node;
}
}
void access_page(int page) {
Node *temp = head;
Node *prev = NULL;
while (temp != NULL && temp->data != page) {
prev = temp;
temp = temp->next;
}
if (temp != NULL) {
// Move accessed page to the front
if (prev != NULL) {
prev->next = temp->next;
temp->next = head;
head = temp;

}
} else {
// Page fault: Page not found in memory
printf("Page %d not found in memory. Adding page...\n", page);
add_page(page);
}
}
int main() {
// Access pages
access_page(1);
display();
access_page(2);
display();
access_page(3);
display();
access_page(4);
display();
access_page(2);
display();
access_page(1);
display();
return 0;
}
Output:
Page 1 not found in memory. Adding page...
1
Page 2 not found in memory. Adding page...
2 1
Page 3 not found in memory. Adding page...
3 2 1
Page 4 not found in memory. Adding page...
4 3 2
2 4 3
Page 1 not found in memory. Adding page...
1 2 4

Result: The program implements the Least Recently Used (LRU) page replacement technique
using a linked list, keeping the most recently used page at the head. When all frames are full,
the page at the tail (the least recently used) is evicted to make room for the new page.

Disk Scheduling Algorithms


Aim: Implement the SCAN (Elevator) disk scheduling algorithm.
Algorithm:
✓ Start
✓ Arrange the pending disk requests in ascending order.
✓ Determine the direction of movement of the disk head.
✓ Traverse the disk in the determined direction, servicing requests along the way.
✓ When reaching the end of the disk in one direction, change direction and
continue servicing requests until all requests are completed.
✓ Stop
Program:
#include <stdio.h>
#include <stdlib.h>
#define MAX_REQUESTS 100
void sort_requests(int n, int requests[]) {
// Bubble sort algorithm to sort disk requests in ascending order
for (int i = 0; i < n - 1; i++) {
for (int j = 0; j < n - i - 1; j++) {
if (requests[j] > requests[j + 1]) {
// Swap elements
int temp = requests[j];
requests[j] = requests[j + 1];
requests[j + 1] = temp;
}
}
}
}
void scan(int n, int requests[], int start, int max_cylinder) {
int total_movement = 0;
int current_cylinder = start;
int direction = 1; // 1: Right, -1: Left
// Sort requests
sort_requests(n, requests);
// Find the position of the start cylinder in the sorted requests
int start_index = n; // stays n if every request lies below the start cylinder
for (int i = 0; i < n; i++) {
if (requests[i] >= start) {
start_index = i;
break;
}
}
// Scan in the determined direction
printf("Disk head movement: %d -> ", start);
if (direction == 1) {
// Scan to the right, servicing requests along the way
for (int i = start_index; i < n; i++) {
printf("%d -> ", requests[i]);
total_movement += abs(requests[i] - current_cylinder);
current_cylinder = requests[i];
}
// Travel on to the last cylinder, then reverse direction
printf("%d -> ", max_cylinder);
total_movement += abs(max_cylinder - current_cylinder);
current_cylinder = max_cylinder;
// Service the remaining requests while moving left
for (int i = start_index - 1; i >= 0; i--) {
printf("%d -> ", requests[i]);
total_movement += abs(requests[i] - current_cylinder);
current_cylinder = requests[i];
}
} else {
// Scan to the left, servicing requests along the way
for (int i = start_index - 1; i >= 0; i--) {
printf("%d -> ", requests[i]);
total_movement += abs(requests[i] - current_cylinder);
current_cylinder = requests[i];
}
// Travel on to cylinder 0, then reverse direction
printf("0 -> ");
total_movement += current_cylinder;
current_cylinder = 0;
// Service the remaining requests while moving right
for (int i = start_index; i < n; i++) {
printf("%d -> ", requests[i]);
total_movement += abs(requests[i] - current_cylinder);
current_cylinder = requests[i];
}
}
printf("\nTotal head movement: %d\n", total_movement);
}
int main() {
int n, start, max_cylinder;
int requests[MAX_REQUESTS];
// Input the number of requests, starting cylinder, and maximum cylinder
printf("Enter the number of disk requests: ");
scanf("%d", &n);
printf("Enter the starting cylinder: ");
scanf("%d", &start);
printf("Enter the maximum cylinder: ");
scanf("%d", &max_cylinder);
// Input the disk requests
printf("Enter the disk requests:\n");
for (int i = 0; i < n; i++) {
scanf("%d", &requests[i]);
}
// Perform SCAN disk scheduling
scan(n, requests, start, max_cylinder);
return 0;
}
Output:
Enter the number of disk requests: 5
Enter the starting cylinder: 50
Enter the maximum cylinder: 200
Enter the disk requests:
95
180
34
119
11
Disk head movement: 50 -> 95 -> 119 -> 180 -> 200 -> 34 -> 11 ->
Total head movement: 339
Result: The program demonstrates the SCAN (Elevator) disk scheduling algorithm. It takes the
disk requests, the starting cylinder, and the maximum cylinder as input and calculates the total
head movement required to service all the requests, simulating the movement of the disk head in
the chosen direction and servicing requests along the way. In the sample run the head sweeps
right from cylinder 50 to the end of the disk (200 - 50 = 150 cylinders), then reverses and travels
back to the lowest pending request at cylinder 11 (200 - 11 = 189 cylinders), for a total movement
of 339.
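For a rightward initial sweep, the total can also be checked in closed form: the head covers (max_cylinder - start) on the way out, plus (max_cylinder - lowest request) on the way back whenever a request lies below the starting cylinder. A minimal verification sketch follows; scan_movement is our own hypothetical helper, not part of the lab program:
#include <stdio.h>
// Closed-form SCAN head movement for a rightward initial sweep:
// out to the last cylinder, then back to the lowest pending request.
int scan_movement(int start, int max_cylinder, int min_request) {
    int movement = max_cylinder - start;        // outward sweep
    if (min_request < start)
        movement += max_cylinder - min_request; // return sweep
    return movement;
}
int main() {
    // Values from the sample run: start = 50, max = 200, lowest request = 11
    printf("%d\n", scan_movement(50, 200, 11)); // prints 339
    return 0;
}
The printed 339 agrees with the simulated total above, which is a quick way to sanity-check the traversal logic.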
ADVANCED OPERATING SYSTEMS
Dr. A. Karunamurthy, MCA., M.Tech., Ph.D., is an Associate Professor at Sri Manakula Vinayagar Engineering College (Autonomous), Pondicherry. He received his Ph.D. from Bharathiyar University, Coimbatore, and his M.Tech. from Manonmaniam Sundaranar University, Tirunelveli; he pursued his P.G. at Pondicherry University. He is an academician in Computer Science and Engineering with more than a decade of accomplished teaching experience. He has published over 37 articles in national and international refereed journals and has participated in numerous conferences and workshops on academic platforms. He is an active member of ISTE, CSI, IAENG & IAEPT. Furthermore, he pursued postdoctoral research at the Singapore Institute of Technology (SIT).
Mr. R. Ramakrishnan, MCA., M.Phil., M.Tech., (Ph.D.), holds multiple academic qualifications, including an M.C.A. from Madurai Kamaraj University, an M.Phil. from Periyar University, and an M.Tech. from Pondicherry University, and is currently pursuing his Ph.D. (VMRF). He serves as an Associate Professor at Sri Manakula Vinayagar Engineering College (SMVEC). With 22 years of experience in education and 10 years in industry, he possesses a rich background in both teaching and practical application. He has authored over 30 articles in national and international journals. Additionally, he is a member of prestigious organizations such as ISTE, CSI, and IAENG. His expertise lies in Programming and Software Engineering, and he has successfully organized national-level seminars and symposiums.
Mr. V. Udhayakumar, MCA., M.Tech., (Ph.D.), is an Associate Professor at Sri Manakula Vinayagar Engineering College (Autonomous), Pondicherry. He received his MCA from Bharathidasan University, Trichy, and his M.Tech. from PRIST University, Thanjavur, where he is also pursuing his Ph.D. He is an academician in Computer Science and Engineering with more than 18 years of accomplished teaching experience (UG & PG). He has published over 19 articles in international refereed journals and has participated in numerous conferences and workshops on academic platforms. He is an active member of ISTE.
Mr. P. Rajapandian, M.C.A., M.Phil., (Ph.D.), is an academician in Computer Science and Engineering with more than two decades of accomplished teaching experience, currently serving as an Associate Professor at Sri Manakula Vinayagar Engineering College (An Autonomous Institution), Puducherry. He completed his M.Phil. in Computer Science and is pursuing his Ph.D. at Pondicherry Central University. He has published over 15 articles in national and international journals and has participated in numerous conferences and workshops on the academic platform. He is an active member of CSI, CSTA, IACSIT, and IAENG.