
PLATEAU STATE UNIVERSITY (PLASU), BOKKOS

OPERATING SYSTEM II

CSC 336

INTRODUCTION: An Operating System (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. When you start using a computer system, it is the Operating System (OS) that acts as an interface between you and the computer hardware. The operating system is low-level software, categorized as system software, that supports a computer's basic functions, such as memory management, task scheduling, and controlling peripherals.

WHAT IS AN OPERATING SYSTEM?

An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is software that performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.

Generally, a Computer System consists of the following components:

Computer Users: The people who use the overall computer system.

Application Software: The software that users use directly to perform different activities. This software is simple and easy to use, like browsers, Word, Excel, different editors, games, etc. It is usually written in high-level languages, such as Python, Java, and C++.

System Software: Software that is more complex in nature and closer to the computer hardware. It is usually written in low-level languages like assembly language and includes Operating Systems (Microsoft Windows, macOS, and Linux), compilers, assemblers, etc.

Computer Hardware: Includes the monitor, keyboard, CPU, disks, memory, etc.

So now let's put it in simple words:

If we consider the computer hardware as the body of the computer system, then we can say the Operating System is its soul, which brings it alive, i.e., makes it operational. We can never use a computer system if it does not have an Operating System installed on it.

EXAMPLES OF OPERATING SYSTEM

There are plenty of Operating Systems available, both paid and free (open source). The following are a few of the most popular Operating Systems:

Windows: One of the most popular commercial operating systems, developed and marketed by Microsoft. It has different versions in the market, like Windows 8, Windows 10, etc., and most of them are paid.

Linux: A Unix-like and much-loved operating system, first released on September 17, 1991 by Linus Torvalds. Today it has 30+ variants available, like Fedora, openSUSE, CentOS, Ubuntu, etc. Most of them are available free of charge, though you can get their enterprise versions by paying a nominal license fee.

macOS: A Unix-based operating system developed and marketed by Apple Inc. since 2001.

iOS: A mobile operating system created and developed by Apple Inc. exclusively for its mobile devices, like the iPhone and iPad.

Android: A mobile operating system based on a modified version of the Linux kernel and other open-source software, designed primarily for touchscreen mobile devices such as smartphones and tablets.

Some other old but popular Operating Systems include Solaris, VMS, OS/400, AIX, z/OS, etc.

FUNCTIONS OF OPERATING SYSTEM

The following are some of the important functions of an operating system, which we will look at in more detail later:

Process Management

I/O Device Management

File Management

Network Management

Main Memory Management

Secondary Storage Management


Security Management

Command Interpreter System

Control over system performance

Job Accounting

Error Detection and Correction

Coordination between other software and users

Many other important tasks

BRIEF HISTORY OF OPERATING SYSTEM

Operating systems have been evolving through the years. In the 1950s, computers were limited to running one program at a time, like a calculator, but in the following decades computers began to include more and more software programs, sometimes called libraries, that formed the basis for today's operating systems.

The first operating system was created by General Motors in 1956 to run a single IBM mainframe computer, the IBM 704. IBM was the first computer manufacturer to develop operating systems and distribute them with its computers, in the 1960s.

Here are a few facts about operating system evolution:

The Stanford Research Institute developed the oN-Line System (NLS) in the late 1960s, which was the first operating system that resembled the desktop operating systems we use today.

Microsoft bought QDOS (Quick and Dirty Operating System) in 1981 and branded it as the Microsoft Disk Operating System (MS-DOS). The last standalone version of MS-DOS was released in 1994.

Unix grew out of a mid-1960s joint effort by the Massachusetts Institute of Technology, AT&T Bell Labs, and General Electric. That joint project was named MULTICS, which stands for Multiplexed Information and Computing Service; Unix itself was subsequently developed at Bell Labs, starting in 1969.

FreeBSD is also a popular Unix derivative, originating from the BSD project at Berkeley. All modern Macintosh computers run macOS (formerly OS X), which incorporates code derived from FreeBSD.

Windows 95 is a consumer-oriented graphical user interface-based operating system built on

top of MS-DOS. It was released on August 24, 1995 by Microsoft as part of its Windows 9x
family of operating systems.

Solaris is a proprietary Unix operating system originally developed by Sun Microsystems in 1991. After Oracle acquired Sun in 2010, it was renamed Oracle Solaris.

OVERVIEW OF OPERATING SYSTEM


An operating system is software that enables applications to interact with a computer's hardware. The software that contains the core components of the operating system is called the kernel.

The primary purposes of an Operating System are to enable applications (software) to interact with a computer's hardware and to manage a system's hardware and software resources.

Some popular Operating Systems include Linux, Windows, VMS, OS/400, AIX, z/OS, etc. Today, operating systems are found in almost every device: mobile phones, personal computers, mainframe computers, automobiles, TVs, toys, etc.

DEFINITIONS

We can have a number of definitions of an Operating System. Let's go through a few of them:

An Operating System is the low-level software that supports a computer's basic functions, such as

scheduling tasks and controlling peripherals.

We can refine this definition as follows:

An operating system is a program that acts as an interface between the user and the computer

hardware and controls the execution of all kinds of programs.

Following is another definition taken from Wikipedia:

An operating system (OS) is system software that manages computer hardware, software resources,

and provides common services for computer programs.

ARCHITECTURE OF OPERATING SYSTEM

The following is the generic architecture diagram of an Operating System:

[Figure: generic architecture of an Operating System]

OPERATING SYSTEM GENERATIONS

Operating systems have been evolving over the years. We can categorise this evolution into different generations, briefly described below:

0TH GENERATION

The term 0th generation refers to the period of the development of computing when Charles Babbage invented the Analytical Engine and, later, John Atanasoff created a computer in 1940. The hardware component technology of this period was the electronic vacuum tube. There was no operating system available for the computers of this generation, and computer programs were written in machine language. The computers of this generation were inefficient and dependent on the varying competencies of the individual programmers acting as operators.

FIRST GENERATION (1951-1956)

The first generation marked the beginning of commercial computing, including the introduction of Eckert and Mauchly's UNIVAC I in early 1951 and, a bit later, the IBM 701.

System operation was performed with the help of expert operators and, for a time, without the benefit of an operating system, though programs began to be written in higher-level, procedure-oriented languages, and thus the operator's routine expanded. Later, mono-programmed operating systems were developed, which eliminated some of the human intervention in running jobs and provided programmers with a number of desirable functions. These systems still continued to operate under the control of a human operator, who followed a number of steps to execute a program.

Programming languages like FORTRAN, developed by John W. Backus in 1956, appeared in this period.

SECOND GENERATION (1956-1964)

The second generation of computer hardware was most notably characterised by transistors replacing vacuum tubes as the hardware component technology. The first batch operating system, GM-NAA I/O (often referred to as GMOS), was developed by General Motors for the IBM 704. It was a single-stream batch processing system: it collected all similar jobs in groups, or batches, and then submitted them to the machine on punch cards. The operating system cleaned up after completing one job and then read and initiated the next job from the punch cards.

Researchers began to experiment with multiprogramming and multiprocessing in their computing services, called time-sharing systems. A noteworthy example is the Compatible Time-Sharing System (CTSS), developed at MIT during the early 1960s.

THIRD GENERATION (1964-1979)

The third generation officially began in April 1964 with IBM's announcement of its System/360 family of computers. Hardware technology began to use integrated circuits (ICs), which yielded significant advantages in both speed and economy.

Operating system development continued with the introduction and widespread adoption of multiprogramming. The idea of taking fuller advantage of the computer's data-channel I/O capabilities also continued to develop.

Another advance, which led to the personal computers of the fourth generation, was the emergence of minicomputers, beginning with the DEC PDP-1. The third generation was an exciting time indeed for the development of both computer hardware and the accompanying operating systems.

FOURTH GENERATION (1979 – PRESENT)

The fourth generation is characterised by the appearance of the personal computer and the workstation. The component technology of the third generation was replaced by very-large-scale integration (VLSI). Many of the operating systems we are using today, like Windows, Linux, macOS, etc., were developed in the fourth generation.

FUNCTIONS OF OPERATING SYSTEM


The following are some of the important functions of an operating system.

Memory Management

Processor Management

Device Management

File Management

Network Management

Security

Control over system performance

Job accounting

Error detecting aids

Coordination between other software and users

Memory Management

Memory management refers to the management of primary memory or main memory. Main memory is a large array of words or bytes, where each word or byte has its own address.

Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management -

Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.

In multiprogramming, the OS decides which process will get memory, when, and how much.

Allocates memory when a process requests it.

De-allocates memory when a process no longer needs it or has been terminated.
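The allocation and de-allocation activities above can be sketched in code. The following is a minimal, hypothetical first-fit allocator (the class and method names are invented for illustration; real memory managers handle paging, protection, and fragmentation far more carefully):

```python
class MemoryManager:
    """Toy model of OS primary-memory bookkeeping (hypothetical)."""

    def __init__(self, size):
        self.allocations = {}          # pid -> (start, length): who uses what
        self.free_list = [(0, size)]   # (start, length) holes not in use

    def allocate(self, pid, length):
        """First-fit: give the process the first hole large enough."""
        for i, (start, hole) in enumerate(self.free_list):
            if hole >= length:
                self.allocations[pid] = (start, length)
                if hole == length:
                    del self.free_list[i]
                else:
                    self.free_list[i] = (start + length, hole - length)
                return start
        return None  # no hole big enough: request denied

    def deallocate(self, pid):
        """Return a terminated process's memory to the free list."""
        start, length = self.allocations.pop(pid)
        self.free_list.append((start, length))
        self.free_list.sort()
```

For example, after allocating 30 units to process A and 50 to process B in a 100-unit memory, freeing A lets a later 20-unit request reuse A's old region.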

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management -

Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.

Allocates the processor (CPU) to a process.

De-allocates the processor when a process no longer requires it.
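Process scheduling can be illustrated with a toy round-robin simulation, a common time-slicing policy. This is only a sketch of the idea, not how any particular OS implements it; the function name and job format are assumptions:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.
    bursts: dict of pid -> CPU time still needed.
    Returns the order in which processes finish."""
    ready = deque(bursts.items())      # the traffic controller's ready queue
    finished = []
    while ready:
        pid, remaining = ready.popleft()    # allocate the CPU to a process
        remaining -= quantum                # run for one time slice
        if remaining > 0:
            ready.append((pid, remaining))  # preempted: back of the queue
        else:
            finished.append(pid)            # done: de-allocate the CPU
    return finished
```

With bursts P1=3, P2=1, P3=2 and a quantum of 1, the shortest job (P2) finishes first even though it was submitted second.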

Device Management

An Operating System manages device communication via the devices' respective drivers. It does the following activities for device management -

Keeps track of all devices. The program responsible for this task is known as the I/O controller.

Decides which process gets the device, when, and for how much time.

Allocates devices in an efficient way.

De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.

An Operating System does the following activities for file management -

Keeps track of information, location, uses, status, etc. The collective facilities are often known as the file system.

Decides who gets the resources.

Allocates the resources.

De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs -

Security - By means of password and similar other techniques, it prevents unauthorized

access to programs and data.

Control over system performance - Recording delays between request for a service and

response from the system.

Job accounting - Keeping track of time and resources used by various jobs and users.
Error detecting aids - Production of dumps, traces, error messages, and other debugging and

error detecting aids.

Coordination between other software and users - Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer system.

COMPONENTS OF OPERATING SYSTEM

There are various components of an Operating System that perform well-defined tasks. Though most Operating Systems differ in structure, logically they have similar components. Each component must be a well-defined portion of the system that appropriately describes its functions, inputs, and outputs.

The following are the 8 components of an Operating System:

1. Process Management

2. I/O Device Management

3. File Management

4. Network Management

5. Main Memory Management

6. Secondary Storage Management

7. Security Management

8. Command Interpreter System

The following sections explain all the above components in more detail:

Process Management

A process is a program, or a fraction of a program, that is loaded in main memory. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The process management component manages the multiple processes running simultaneously on the Operating System.

A program in running state is called a process.

The operating system is responsible for the following activities in connection with process

management:

Create, load, execute, suspend, resume, and terminate processes.

Switch the system among the multiple processes in main memory.

Provide communication mechanisms so that processes can communicate with each other.

Provide synchronization mechanisms to control concurrent access to shared data, keeping shared data consistent.

Allocate/de-allocate resources properly to prevent or avoid deadlock situations.
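The create/suspend/resume/terminate activities above can be sketched as a small state machine over a hypothetical process table. All names here are invented for illustration; real kernels track far more per-process state (registers, open files, scheduling data, etc.):

```python
class Process:
    """Minimal process control block: just a pid and a state."""
    def __init__(self, pid):
        self.pid = pid
        self.state = "new"

class ProcessManager:
    """Toy sketch of the OS process-management component."""
    def __init__(self):
        self.table = {}                      # the OS process table

    def create(self, pid):
        proc = Process(pid)                  # create and load the process
        proc.state = "ready"                 # ready to be scheduled
        self.table[pid] = proc
        return proc

    def dispatch(self, pid):
        self.table[pid].state = "running"    # CPU allocated to the process

    def suspend(self, pid):
        self.table[pid].state = "waiting"    # e.g. blocked on I/O

    def resume(self, pid):
        self.table[pid].state = "ready"      # eligible for the CPU again

    def terminate(self, pid):
        self.table[pid].state = "terminated"
        del self.table[pid]                  # reclaim its table entry
```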

I/O Device Management

One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. I/O Device Management provides an abstraction level over hardware devices and keeps the details away from applications, to ensure proper use of devices, to prevent errors, and to provide users with a convenient and efficient programming environment.

The following are the tasks of the I/O Device Management component:

Hide the details of hardware devices.

Manage main memory for the devices using caching, buffering, and spooling.

Maintain and provide custom drivers for each device.
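Spooling, mentioned above, can be sketched with a simple queue: output is buffered in main memory so programs never wait on a slow device. The class below is an invented illustration, not a real spooler:

```python
from collections import deque

class PrintSpooler:
    """Toy spooler: jobs are buffered in memory and sent to the
    (slow) device one at a time, so programs return immediately."""

    def __init__(self):
        self.queue = deque()   # jobs spooled in main memory
        self.printed = []      # what the device has actually produced

    def submit(self, job):
        self.queue.append(job)   # program hands off the job and continues

    def run_device(self):
        while self.queue:        # the device drains the spool at its own pace
            self.printed.append(self.queue.popleft())
```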

File Management

File management is one of the most visible services of an operating system. Computers can store information in several different physical forms; magnetic tape, disk, and drum are the most common forms.

A file is defined as a set of correlated information, and it is defined by the creator of the file. Mostly, files represent data, source and object forms, and programs. Data files can be of any type, like alphabetic, numeric, and alphanumeric.

A file is a sequence of bits, bytes, lines or records whose meaning is defined by its creator and user. The operating system implements the abstract concept of the file by managing mass storage devices, such as tapes and disks. Files are normally organized into directories to ease their use. These directories may contain files and other directories, and so on.

The operating system is responsible for the following activities in connection with file management:

File creation and deletion

Directory creation and deletion

The support of primitives for manipulating files and directories

Mapping files onto secondary storage

File backup on stable (nonvolatile) storage media
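These file-management activities surface to programs as system calls. A minimal sketch using Python's `os` module, working inside a scratch directory so nothing on the real system is disturbed:

```python
import os
import tempfile

root = tempfile.mkdtemp()                     # scratch area for the demo

os.mkdir(os.path.join(root, "docs"))          # directory creation
path = os.path.join(root, "docs", "notes.txt")
with open(path, "w") as f:                    # file creation
    f.write("hello")                          # file manipulation (write)

exists_before = os.path.exists(path)          # OS tracks the file's existence
os.remove(path)                               # file deletion
os.rmdir(os.path.join(root, "docs"))          # directory deletion
exists_after = os.path.exists(path)
```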

Network Management

The definition of network management is often broad, as network management involves several

different components. Network management is the process of managing and administering a computer

network. A computer network is a collection of various types of computers connected with each other.

Network management comprises fault analysis, maintaining the quality of service, provisioning of

networks, and performance management.

Network management is the process of keeping your network healthy for an efficient communication

between different computers.

Following are the features of network management:

Network administration

Network maintenance

Network operation

Network provisioning

Network security

Main Memory Management

Memory is a large array of words or bytes, each with its own address. It is a repository of quickly

accessible data shared by the CPU and I/O devices.

Main memory is a volatile storage device which means it loses its contents in the case of system

failure or as soon as system power goes down.


The main motivation behind Memory Management is to maximize memory utilization on the

computer system.

The operating system is responsible for the following activities in connection with memory management:

Keep track of which parts of memory are currently being used and by whom.

Decide which processes to load when memory space becomes available.

Allocate and deallocate memory space as needed.

Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs, together with the

data they access, must be in main memory during execution. Since the main memory is too small to

permanently accommodate all data and program, the computer system must provide secondary storage

to backup main memory.

Most modern computer systems use disks as the principal on-line storage medium, for both programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as both the source and destination of their processing.

The operating system is responsible for the following activities in connection with disk management:

Free space management

Storage allocation

Disk scheduling
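Free-space management is often done with a bitmap, one bit per disk block, marking whether the block is free. The sketch below is a hypothetical illustration of that idea, not a real file system's allocator:

```python
class DiskFreeSpace:
    """Free-space management with a bitmap: one flag per disk block,
    True meaning the block is free."""

    def __init__(self, nblocks):
        self.bitmap = [True] * nblocks   # all blocks start out free

    def allocate(self, n):
        """Find n free blocks, mark them used, return their indexes."""
        free = [i for i, b in enumerate(self.bitmap) if b][:n]
        if len(free) < n:
            return None                  # not enough free space on disk
        for i in free:
            self.bitmap[i] = False
        return free

    def release(self, blocks):
        """Return blocks to the free pool (e.g. after file deletion)."""
        for i in blocks:
            self.bitmap[i] = True
```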

Security Management

The operating system is primarily responsible for all the tasks and activities that happen in the computer system. The various processes in an operating system must be protected from each other's activities. For that purpose, various mechanisms can be used to ensure that the files, memory segments, CPU, and other resources can be operated on only by those processes that have gained proper authorization from the operating system.

Security Management refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by the computer system. It specifies the controls to be imposed, together with some means of enforcement.

For example, memory-addressing hardware ensures that a process can execute only within its own address space. The timer ensures that no process can gain control of the CPU without relinquishing it. Finally, no process is allowed to do its own I/O, to protect the integrity of the various peripheral devices.

Command Interpreter System

One of the most important components of an operating system is its command interpreter. The command interpreter is the primary interface between the user and the rest of the system.

The Command Interpreter System executes a user command by calling one or more underlying system programs or system calls.

The Command Interpreter System allows human users to interact with the Operating System and provides a convenient programming environment to the users.

Many commands are given to the operating system by control statements. A program that reads and interprets control statements is automatically executed. This program is called the shell; a few examples are the Windows DOS command window, Bash on Unix/Linux, and the C shell on Unix/Linux.

Other Important Activities

An Operating System is a complex software system. Apart from the above-mentioned components and responsibilities, there are many other activities performed by the Operating System. A few of them are listed below:

Security - By means of password and similar other techniques, it prevents unauthorized

access to programs and data.

Control over system performance - Recording delays between request for a service and

response from the system.

Job accounting - Keeping track of time and resources used by various jobs and users.

Error detecting aids - Production of dumps, traces, error messages, and other debugging and

error detecting aids.

Coordination between other software and users - Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer system.

TYPES OF OPERATING SYSTEM

Operating systems have existed since the very first computer generation, and they keep evolving with time. We will discuss some of the important types of operating systems which are most commonly used.

Batch operating system

The users of a batch operating system do not interact with the computer directly. Each user prepares

his job on an off-line device like punch cards and submits it to the computer operator. To speed up

processing, jobs with similar needs are batched together and run as a group. The programmers leave

their programs with the operator and the operator then sorts the programs with similar requirements

into batches.

The problems with Batch Systems are as follows -

Lack of interaction between the user and the job.

CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.

Difficult to provide the desired priority.
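The operator's sorting of jobs with similar requirements into batches can be sketched as a simple grouping step. The job format and function name here are assumptions for illustration only:

```python
from itertools import groupby

def batch_jobs(jobs):
    """Group submitted jobs by their shared requirement (e.g. which
    compiler they need), the way an operator sorts card decks into batches."""
    key = lambda job: job["needs"]
    return {req: [j["name"] for j in group]
            for req, group in groupby(sorted(jobs, key=key), key=key)}
```

All FORTRAN jobs then run as one batch, so the FORTRAN compiler is loaded only once instead of once per job.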

Time-sharing operating systems

Time-sharing is a technique which enables many people, located at various terminals, to use a

particular computer system at the same time. Time-sharing or multitasking is a logical extension of

multiprogramming. Processor's time which is shared among multiple users simultaneously is termed

as time-sharing.

The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is that in

case of Multiprogrammed batch systems, the objective is to maximize processor use, whereas in

Time-Sharing Systems, the objective is to minimize response time.

Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user receives an immediate response. For example, in transaction processing, the processor executes each user program in a short burst, or quantum, of computation. That is, if n users are present, then each user gets a time quantum in turn. When the user submits a command, the response time is a few seconds at most.


The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.

Advantages of Timesharing operating systems are as follows -

Provides the advantage of quick response.

Avoids duplication of software.

Reduces CPU idle time.

Disadvantages of Time-sharing operating systems are as follows -

Problem of reliability.

Question of security and integrity of user programs and data.

Problem of data communication.

Distributed operating System

Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data processing jobs are distributed among the processors accordingly.

The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.

The advantages of distributed systems are as follows -

With resource sharing facility, a user at one site may be able to use the resources available at

another.

Speeds up the exchange of data between sites, e.g. via electronic mail.

If one site fails in a distributed system, the remaining sites can potentially continue operating.

Better service to the customers.

Reduction of the load on the host computer.

Reduction of delays in data processing.

Network operating System


A Network Operating System runs on a server and provides the server the capability to manage data, users, groups, security, applications, and other networking functions. The primary purpose of a network operating system is to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN) or a private network, with access to other networks.

Examples of network operating systems include Microsoft Windows Server 2003, Microsoft Windows

Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.

The advantages of network operating systems are as follows -

Centralized servers are highly stable.

Security is server managed.

Upgrades to new technologies and hardware can be easily integrated into the system.

Remote access to servers is possible from different locations and types of systems.

The disadvantages of network operating systems are as follows -

High cost of buying and running a server.

Dependency on a central location for most operations.

Regular maintenance and updates are required.

Real Time operating System

A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is much lower than in online processing.

Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data; real-time systems can be used as a control device in a dedicated application. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

There are two types of real-time operating systems.

Hard Real-Time Systems

Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems,
secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual

memory is almost never found.

Soft Real-Time Systems

Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.

OPERATING SYSTEM SERVICES

An Operating System provides services to both the users and to the programs.

It provides programs an environment to execute.

It provides users the services to execute the programs in a convenient manner.

Following are a few common services provided by an operating system -

Program execution

I/O operations

File System manipulation

Communication

Error Detection

Resource Allocation

Protection

Program execution

Operating systems handle many kinds of activities, from user programs to system programs like the printer spooler, name servers, file servers, etc. Each of these activities is encapsulated as a process.

A process includes the complete execution context (code to execute, data to manipulate, registers, OS

resources in use). Following are the major activities of an operating system with respect to program
management -

Loads a program into memory.

Executes the program.

Handles program's execution.

Provides a mechanism for process synchronization.

Provides a mechanism for process communication.

Provides a mechanism for deadlock handling.
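From a user program's point of view, the program-execution service is reached through system calls such as those wrapped by Python's `subprocess` module. A minimal sketch: ask the OS to load and run another program (here, a second Python interpreter) and collect its exit status and output:

```python
import subprocess
import sys

# Ask the OS to load and execute a separate program as a new process,
# capturing its output and exit status when it terminates.
result = subprocess.run(
    [sys.executable, "-c", "print('child process ran')"],
    capture_output=True,
    text=True,
)
```

`result.returncode` is the child's exit status (0 on success) and `result.stdout` is whatever it printed, showing the OS loading, executing, and cleaning up a process on the program's behalf.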

I/O Operation

An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.

An Operating System manages the communication between user and device drivers.

I/O operation means read or write operation with any file or any specific I/O device.

Operating system provides the access to the required I/O device when required.

File system manipulation

A file represents a collection of related information. Computers can store files on the disk (secondary

storage), for long-term storage purpose. Examples of storage media include magnetic tape, magnetic

disk and optical disk drives like CD, DVD. Each of these media has its own properties like speed,

capacity, data transfer rate and data access methods.

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. The following are the major activities of an operating system with respect to file management -

A program needs to read a file or write a file.

The operating system gives the program permission for the operation on the file. Permission varies: read-only, read-write, denied, and so on.

Operating System provides an interface to the user to create/delete files.

Operating System provides an interface to the user to create/delete directories.

Operating System provides an interface to create the backup of file system.


Communication

In the case of distributed systems, which are collections of processors that do not share memory, peripheral devices, or a clock, the operating system manages communication between all the processes. Multiple processes communicate with one another through communication lines in the network.

The OS handles routing and connection strategies, and the problems of contention and security.

Following are the major activities of an operating system with respect to communication -

Two processes often require data to be transferred between them

Both the processes can be on one computer or on different computers, but are connected

through a computer network.

Communication may be implemented by two methods, either by Shared Memory or by

Message Passing.
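The two methods above can be sketched in a few lines. The Python snippet below is not part of the original notes: it illustrates message passing through a shared channel, with threads standing in for processes so the example stays self-contained, and the names `sender` and `receiver` are our own.

```python
import queue
import threading

# A Queue stands in for an OS-provided message channel; threads stand in
# for processes so the sketch stays self-contained.
channel = queue.Queue()
received = []

def sender():
    channel.put("hello")            # "send" a message into the channel

def receiver():
    received.append(channel.get())  # "receive" (blocks until a message arrives)

t_recv = threading.Thread(target=receiver)
t_send = threading.Thread(target=sender)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
```

With shared memory, both parties would instead read and write a common region; message passing, as here, copies data through the OS (or a channel object) instead.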

Error handling

Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the memory

hardware. Following are the major activities of an operating system with respect to error handling -

The OS constantly checks for possible errors.

The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management

In case of a multi-user or multi-tasking environment, resources such as main memory, CPU cycles and

file storage must be allocated to each user or job. Following are the major activities of an operating

system with respect to resource management -

The OS manages all kinds of resources using schedulers.

CPU scheduling algorithms are used for better utilization of CPU.

Protection

Considering a computer system having multiple users and concurrent execution of multiple processes,

the various processes must be protected from each other's activities.

Protection refers to a mechanism or a way to control the access of programs, processes, or users to the

resources defined by a computer system. Following are the major activities of an operating system
with respect to protection -

The OS ensures that all access to system resources is controlled.

The OS ensures that external I/O devices are protected from invalid access attempts.

The OS provides authentication features for each user by means of passwords.

PROPERTIES OF OPERATING SYSTEM

The following are the different properties of an Operating System.

1. Batch processing

2. Multitasking

3. Multiprogramming

4. Interactivity

5. Real Time System

6. Distributed Environment

7. Spooling

Batch processing

Batch processing is a technique in which an Operating System collects the programs and data together

in a batch before processing starts. An operating system does the following activities related to batch

processing -

The OS defines a job which has predefined sequence of commands, programs and data as a

single unit.

The OS keeps a number of jobs in memory and executes them without any manual intervention.

Jobs are processed in the order of submission, i.e., first come first served fashion.

When a job completes its execution, its memory is released and the output for the job gets

copied into an output spool for later printing or processing.


ADVANTAGES

Batch processing shifts much of the operator's work to the computer.

Increased performance, as a new job gets started as soon as the previous job is finished, without

any manual intervention.

DISADVANTAGES

Programs are difficult to debug.

A job could enter an infinite loop.

Due to lack of protection scheme, one batch job can affect pending jobs.

Multitasking

Multitasking is when multiple jobs are executed by the CPU simultaneously by switching between

them. Switches occur so frequently that the users may interact with each program while it is running.

An OS does the following activities related to multitasking -

The user gives instructions to the operating system or to a program directly, and receives an

immediate response.

The OS handles multitasking in the way that it can handle multiple operations/executes

multiple programs at a time.

Multitasking Operating Systems are also known as Time-sharing systems.

These Operating Systems were developed to provide interactive use of a computer system at a

reasonable cost.
A time-shared operating system uses the concept of CPU scheduling and multiprogramming to

provide each user with a small portion of a time-shared CPU.

Each user has at least one separate program in memory.

A program that is loaded into memory and is executing is commonly referred to as a process .

When a process executes, it typically executes for only a very short time before it either

finishes or needs to perform I/O.

Since interactive I/O typically runs at slower speeds, it may take a long time to complete.

During this time, a CPU can be utilized by another process.

The operating system allows the users to share the computer simultaneously. Since each action

or command in a time-shared system tends to be short, only a little CPU time is needed for

each user.

As the system switches CPU rapidly from one user/program to the next, each user is given the

impression that he/she has his/her own CPU, whereas actually one CPU is being shared among

many users.

Multiprogramming

Sharing the processor, when two or more programs reside in memory at the same time, is referred to as

multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming

increases CPU utilization by organizing jobs so that the CPU always has one to execute.

The following figure shows the memory layout for a multiprogramming system.
An OS does the following activities related to multiprogramming.

The operating system keeps several jobs in memory at a time.

This set of jobs is a subset of the jobs kept in the job pool.

The operating system picks and begins to execute one of the jobs in the memory.

Multiprogramming operating systems monitor the state of all active programs and system

resources using memory management programs to ensure that the CPU is never idle, unless

there are no jobs to process.

ADVANTAGES

High and efficient CPU utilization.

User feels that many programs are allotted CPU almost simultaneously.

DISADVANTAGES

CPU scheduling is required.

To accommodate many jobs in memory, memory management is required.

INTERACTIVITY

Interactivity refers to the ability of users to interact with a computer system. An Operating system
does the following activities related to interactivity -

Provides the user an interface to interact with the system.

Manages input devices to take inputs from the user. For example, keyboard.

Manages output devices to show outputs to the user. For example, Monitor.

The response time of the OS needs to be short, since the user submits and waits for the result.

Real Time System

Real-time systems are usually dedicated, embedded systems. An operating system does the following

activities related to real-time system activity.

In such systems, Operating Systems typically read from and react to sensor data.

The Operating system must guarantee response to events within fixed periods of time to ensure

correct performance.

Distributed Environment

A distributed environment refers to multiple independent CPUs or processors in a computer system.

An operating system does the following activities related to distributed environment -

The OS distributes computation logics among several physical processors.

The processors do not share memory or a clock. Instead, each processor has its own local

memory.

The OS manages the communications between the processors. They communicate with each

other through various communication lines.

Spooling

Spooling is an acronym for simultaneous peripheral operations on line. Spooling refers to putting data

of various I/O jobs in a buffer. This buffer is a special area in memory or hard disk which is accessible

to I/O devices.

An operating system does the following activities related to spooling -

Handles I/O device data spooling as devices have different data access rates.

Maintains the spooling buffer which provides a waiting station where data can rest while the

slower device catches up.


Enables parallel computation, since spooling allows a computer to perform I/O in

parallel with computation. It becomes possible for the computer to read data from a tape, write data to

disk and print output on a line printer while it is doing its computing task.

Advantages

The spooling operation uses a disk as a very large buffer.

Spooling is capable of overlapping I/O operation for one job with processor operations for

another job.

OPERATING SYSTEM- PROCESSES

PROCESS

A process is basically a program in execution. The execution of a process must progress in a

sequential fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the

system.

To put it in simple terms, we write our computer programs in a text file and when we execute this

program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and it becomes a process, it can be divided into four

sections - stack, heap, text and data. The following image shows a simplified layout of a process
inside main memory -

COMPONENT & DESCRIPTION

Stack : The process Stack contains the temporary data such as method/function parameters,

return address and local variables.

Heap: This is dynamically allocated memory to a process during its run time.

Text: This section contains the compiled program code. The current activity is represented by the

value of the Program Counter and the contents of the processor's registers.

Data: This section contains the global and static variables.

PROGRAM

A program is a piece of code which may be a single line or millions of lines. A computer program is

usually written by a computer programmer in a programming language. For example, here is a simple

program written in C programming language -

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when executed by a

computer. When we compare a program with a process, we can conclude that a process is a dynamic

instance of a computer program.

A part of a computer program that performs a well-defined task is known as an algorithm. A

collection of computer programs, libraries and related data is referred to as software.

PROCESS LIFE CYCLE

When a process executes, it passes through different states. These stages may differ in different

operating systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

Start : This is the initial state when a process is first started/created.

Ready : The process is waiting to be assigned to a processor. Ready processes are waiting to have the

processor allocated to them by the operating system so that they can run. A process may come into this

state after the Start state, or while running, when it is interrupted by the scheduler so that the CPU can

be assigned to some other process.

Running : Once the process has been assigned to a processor by the OS scheduler, the process state is

set to running and the processor executes its instructions.

Waiting : Process moves into the waiting state if it needs to wait for a resource, such as waiting for

user input, or waiting for a file to become available.

Terminated or Exit : Once the process finishes its execution, or it is terminated by the operating

system, it is moved to the terminated state where it waits to be removed from main memory.
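The legal moves between these five states can be written down as a small transition table. The Python sketch below is a minimal illustrative model using the state names given above (lowercased); it is not taken from any particular OS.

```python
# Legal transitions in the five-state process model described above.
ALLOWED = {
    ("start", "ready"),        # process admitted to the ready queue
    ("ready", "running"),      # dispatched by the scheduler
    ("running", "ready"),      # preempted by the scheduler
    ("running", "waiting"),    # blocked on I/O or a resource
    ("waiting", "ready"),      # I/O completed, eligible to run again
    ("running", "terminated"), # finished or killed by the OS
}

def can_transition(src, dst):
    """Return True if src -> dst is a legal state change."""
    return (src, dst) in ALLOWED
```

Note, for example, that a waiting process cannot move directly to running; it must first be made ready and then dispatched.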
Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process.

The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep

track of a process as listed below in the table -

Process State : The current state of the process i.e., whether it is ready, running, waiting, or whatever.

Process privileges : This is required to allow/disallow access to system resources.

Process ID : Unique identification for each of the process in the operating system.

Pointer : A pointer to parent process.

Program Counter : Program Counter is a pointer to the address of the next instruction to be executed

for this process.

CPU registers : The various CPU registers whose contents must be saved and restored when the

process leaves or re-enters the running state.

CPU Scheduling Information : Process priority and other scheduling information which is required

to schedule the process.

Memory management information : This includes the information of page table, memory limits,

Segment table depending on memory used by the operating system.

Accounting information : This includes the amount of CPU used for process execution, time limits,
execution ID etc.

IO status information : This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain different

information in different operating systems. Here is a simplified diagram of a PCB -

The PCB is maintained for a process throughout its lifetime, and is deleted once the process

terminates.
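As a rough illustration, a PCB can be modelled as a record holding the fields listed above. The Python dataclass below is a hypothetical sketch; the field names and types are our own, and real PCB layouts are OS-specific.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block; real layouts are OS-specific
    and the field names here are our own."""
    pid: int                                        # unique process ID
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU scheduling information
    memory_limits: tuple = (0, 0)                   # memory management info
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: int = 0                          # accounting information
```

The OS updates such a record on every state change and context switch, and discards it when the process terminates.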

OPERATING SYSTEM- PROCESS SCHEDULING

DEFINITION

The process scheduling is the activity of the process manager that handles the removal of the running

process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of a Multiprogramming operating systems. Such operating

systems allow more than one process to be loaded into the executable memory at a time and the loaded

process shares the CPU using time multiplexing.

CATEGORIES OF SCHEDULING

There are two categories of scheduling:


1. Non-preemptive: Here the CPU cannot be taken from a process until the process completes its

execution. Switching occurs only when the running process terminates or moves

to a waiting state.

2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of time. A

process may be switched from the running state to the ready state, or from the

waiting state to the ready state. This switching occurs because the OS may preempt

the running process in favour of a higher-priority process.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS

maintains a separate queue for each of the process states and PCBs of all processes in the same

execution state are placed in the same queue. When the state of a process is changed, its PCB is

unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues -

Job queue - This queue keeps all the processes in the system.

Ready queue - This queue keeps a set of all processes residing in main memory, ready and

waiting to execute. A new process is always put in this queue.

Device queues - The processes which are blocked due to unavailability of an I/O device

constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS
scheduler determines how to move processes between the ready queue and the run queue, which can only have
one entry per processor core on the system.

Two-State Process Model

Two-state process model refers to running and non-running states which are described below -

Running : When a new process is created, it enters the system in the running state.

Not Running : Processes that are not running are kept in a queue, waiting for their turn to execute. Each

entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list.

The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if

the process has completed or aborted, it is discarded. In either case, the dispatcher

then selects a process from the queue to execute.
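The two-state model's dispatcher can be sketched with a single FIFO queue. The Python snippet below is a minimal model, and `admit` and `dispatch` are our own names for the queue operations described above.

```python
from collections import deque

not_running = deque()   # the single "Not Running" queue (FIFO of PIDs)

def admit(pid):
    """A new or interrupted process joins the back of the queue."""
    not_running.append(pid)

def dispatch():
    """The dispatcher picks the next process to run, or None if idle."""
    return not_running.popleft() if not_running else None
```

A completed process is simply never re-admitted; an interrupted one is admitted again and waits for its next turn.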

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main

task is to select the jobs to be submitted into the system and to decide which process to run.

Schedulers are of three types -

Long-Term Scheduler

Short-Term Scheduler

Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler . A long-term scheduler determines which programs are admitted to

the system for processing. It selects processes from the queue and loads them into memory for

execution. Process loads into the memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and

processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming

is stable, then the average rate of process creation must be equal to the average departure rate of

processes leaving the system.

On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems, for example, have no long-term scheduler. The long-term scheduler comes into play when a
process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler . Its main objective is to increase system performance in accordance

with the chosen set of criteria. It carries out the change of a process from the ready state to the running state. The CPU

scheduler selects a process among the processes that are ready to execute and allocates CPU to one of

them.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.

Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping . It removes the processes from the memory. It reduces

the degree of multiprogramming. The medium-term scheduler is in-charge of handling the swapped

out-processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot

make any progress towards completion. In this condition, to remove the process from memory and

make space for other processes, the suspended process is moved to the secondary storage. This

process is called swapping , and the process is said to be swapped out or rolled out. Swapping may be

necessary to improve the process mix.

Comparison among Schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the
medium-term scheduler is a process-swapping scheduler.

2. The short-term scheduler is the fastest of the three, the long-term scheduler the slowest, and the
medium-term scheduler lies in between.

3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides
less control over the degree of multiprogramming; the medium-term scheduler reduces the degree of
multiprogramming.

4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term
scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing
systems.

5. The long-term scheduler selects processes from the pool and loads them into memory for execution;
the short-term scheduler selects from among the processes that are ready to execute; the medium-term
scheduler can re-introduce a process into memory so that its execution can be continued.

CONTEXT SWITCHING

Context switching is the mechanism to store and restore the state or context of a CPU in the Process

Control block so that a process execution can be resumed from the same point at a later time. Using

this technique, a context switcher enables multiple processes to share a single CPU. Context switching

is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to execute another, the state from

the current running process is stored into the process control block. After this, the state for the process

to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second
process can start executing.


Context switches are computationally intensive, since register and memory state must be saved and

restored. To reduce context-switching time, some hardware systems employ two or more

sets of processor registers. When the process is switched, the following information is stored for later

use.

Program Counter

Scheduling information

Base and limit register value

Currently used register


Changed State

I/O State information

Accounting information
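Saving and restoring this information can be sketched as a function that copies CPU state into the outgoing PCB and loads it from the incoming one. The Python model below is purely illustrative; plain dicts stand in for hardware registers and PCBs, and a real context switch is done in kernel mode with hardware support.

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the outgoing process's context into its PCB, then load the
    incoming process's context from its PCB. Dicts model the CPU and PCBs."""
    old_pcb["program_counter"] = cpu["pc"]          # save where we stopped
    old_pcb["registers"] = dict(cpu["registers"])   # save register contents
    cpu["pc"] = new_pcb["program_counter"]          # restore the next process
    cpu["registers"] = dict(new_pcb["registers"])
```

After the call, the CPU "sees" only the incoming process's state, and the outgoing process can later be resumed from exactly where it stopped.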

OPERATING SYSTEM SCHEDULING ALGORITHM

A Process Scheduler schedules different processes to be assigned to the CPU based on particular

scheduling algorithms. There are six popular process scheduling algorithms which we are going to

discuss in this chapter -

First-Come, First-Served (FCFS) Scheduling

Shortest-Job-Next (SJN) Scheduling

Priority Scheduling

Shortest Remaining Time

Round Robin(RR) Scheduling

Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive . Non-preemptive algorithms are

designed so that once a process enters the running state, it cannot be preempted until it completes its

allotted time, whereas the preemptive scheduling is based on priority where a scheduler may preempt

a low priority running process anytime when a high priority process enters into a ready state.

FIRST COME FIRST SERVE (FCFS)

As the name suggests, the process coming first in the ready state will be executed first by the CPU

irrespective of the burst time or the priority. This is implemented by using the First In First Out

(FIFO) queue. So, what happens is that, when a process enters into the ready state, then the PCB of

that process will be linked to the tail of the queue and the CPU starts executing the processes by taking

the process from the head of the queue. If the CPU is allocated to a process then it can't be taken back

until it finishes the execution of that process.

Example:
Process Arrival time Burst time

P1 0 ms 18 ms

P2 2 ms 7 ms

P3 2 ms 10 ms

Gantt chart

P1 (0 - 18 ms) | P2 (18 - 25 ms) | P3 (25 - 35 ms)

In the above example, you can see that we have three processes P1, P2, and P3, and they are coming

in the ready state at 0ms, 2ms, and 2ms respectively. So, based on the arrival time, the process P1 will

be executed for the first 18ms. After that, the process P2 will be executed for 7ms and finally, the

process P3 will be executed for 10ms. One thing to be noted here is that if the arrival time of the

processes is the same, then the CPU can select any process.

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 0ms | 18ms |

| P2 | 16ms | 23ms |

| P3 | 23ms | 33ms |

---------------------------------------------

Total waiting time: (0 + 16 + 23) = 39ms

Average waiting time: ( 39/3) = 13ms

Total turnaround time: (18 + 23 + 33) = 74ms

Average turnaround time: (74/3) = 24.66ms

Advantages of FCFS:

It is the most simple scheduling algorithm and is easy to implement.

Disadvantages of FCFS:
This algorithm is non-preemptive so you have to execute the process fully and after that other

processes will be allowed to execute.

Throughput is not efficient.

FCFS suffers from the convoy effect, i.e. if a process with a very high burst time arrives

first, it is executed first even though processes with much shorter burst

times are waiting in the ready state.
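The waiting and turnaround times above can be reproduced with a short simulation. The Python sketch below is not part of the original notes and the `fcfs` helper name is our own; it sorts processes by arrival time and runs each to completion.

```python
def fcfs(processes):
    """First-Come, First-Served: processes is a list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)} in ms."""
    time, stats = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)   # CPU may sit idle until the job arrives
        finish = start + burst       # non-preemptive: runs to completion
        stats[name] = (start - arrival, finish - arrival)
        time = finish
    return stats
```

Running it on the example (`fcfs([("P1", 0, 18), ("P2", 2, 7), ("P3", 2, 10)])`) gives an average waiting time of 13 ms, matching the table.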

SHORTEST JOB FIRST (NON-PREEMPTIVE)

In FCFS, we saw that if a process has a very high burst time and comes first, then another

process with a very low burst time has to wait for its turn. So, to remove this problem, we come up with

a new approach i.e. Shortest Job First or SJF.

In this technique, the process having the minimum burst time at a particular instant of time will be

executed first. It is a non-preemptive approach i.e. if the process starts its execution then it will be

fully executed and then some other process will come.

Example:

Process Arrival time Burst time

P1 3 ms 5 ms

P2 0 ms 4 ms

P3 4 ms 2 ms

P4 5 ms 4 ms

Gantt Chart

P2 (0 - 4 ms) | P3 (4 - 6 ms) | P4 (6 - 10 ms) | P1 (10 - 15 ms)
In the above example, at 0ms, we have only one process i.e. process P2, so the process P2 will be

executed for 4ms. Now, after 4ms, there are two new processes i.e. process P1 and process P3. The

burst time of P1 is 5ms and that of P3 is 2ms. So, amongst these two, the process P3 will be executed

first because its burst time is less than P1. P3 will be executed for 2ms. Now, after 6ms, we have two

processes with us i.e. P1 and P4 (because we are at 6ms and P4 comes at 5ms). Amongst these two,

the process P4 is having a less burst time as compared to P1. So, P4 will be executed for 4ms and after

that P1 will be executed for 5ms. So, the waiting time and turnaround time of these processes will be:

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 7ms | 12ms |

| P2 | 0ms | 4ms |

| P3 | 0ms | 2ms |

| P4 | 1ms | 5ms |

---------------------------------------------

Total waiting time: (7 + 0 + 0 + 1) = 8ms

Average waiting time: ( 8/4) = 2ms

Total turnaround time: (12 + 4 + 2 + 5) = 23ms

Average turnaround time: (23/4) = 5.75ms

Advantages of SJF (non-preemptive):

Short processes will be executed first.

Disadvantages of SJF (non-preemptive):

Long processes may starve if short-burst-time processes keep arriving in the ready state.
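The same calculation can be automated. The Python sketch below is our own illustration (the `sjf` helper name is not from the notes); at each decision point it picks the ready process with the smallest burst time and runs it to completion.

```python
def sjf(processes):
    """Non-preemptive Shortest Job First: processes is a list of
    (name, arrival, burst). Returns {name: (waiting, turnaround)} in ms."""
    pending = list(processes)
    time, stats = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                         # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])  # smallest burst time wins
        name, arrival, burst = job
        time += burst                         # runs to completion (no preemption)
        stats[name] = (time - arrival - burst, time - arrival)
        pending.remove(job)
    return stats
```

On the example above it reproduces the table: an average waiting time of 2 ms and an average turnaround time of 5.75 ms.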

SHORTEST JOB FIRST (PREEMPTIVE)

This is the preemptive approach of the Shortest Job First algorithm. Here, at every instant of time, the

CPU will check for some shortest job. For example, at time 0ms, we have P1 as the shortest process.

So, P1 will execute for 1ms and then the CPU will check if some other process is shorter than P1 or
not. If there is no such process, then P1 will keep on executing for the next 1ms and if there is some

process shorter than P1 then that process will be executed. This will continue until the process gets

executed.

This algorithm is also known as Shortest Remaining Time First i.e. we schedule the process based on

the shortest remaining time of the processes.

Example:

Process Arrival time Burst time

P1 1 ms 6 ms

P2 1 ms 8 ms

P3 2 ms 7 ms

P4 3 ms 3 ms

Gantt Chart

P1 (1 - 3 ms) | P4 (3 - 6 ms) | P1 (6 - 10 ms) | P3 (10 - 17 ms) | P2 (17 - 25 ms)

In the above example, at time 1ms, there are two processes i.e. P1 and P2. Process P1 is having burst

time as 6ms and the process P2 is having 8ms. So, P1 will be executed first. Since it is a preemptive

approach, we have to re-check at every millisecond. At 2ms, we have three processes i.e. P1(5ms

remaining), P2(8ms), and P3(7ms). Out of these three, P1 is having the least burst time, so it will

continue its execution. After 3ms, we have four processes i.e P1(4ms remaining), P2(8ms), P3(7ms),

and P4(3ms). Out of these four, P4 is having the least burst time, so it will be executed. The process

P4 keeps on executing for the next three ms because it is having the shortest burst time. After 6ms, we

have 3 processes i.e. P1(4ms remaining), P2(8ms), and P3(7ms). So, P1 will be selected and executed.

This process of time comparison will continue until we have all the processes executed. So, waiting

and turnaround time of the processes will be:

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------
| P1 | 3ms | 9ms |

| P2 | 16ms | 24ms |

| P3 | 8ms | 15ms |

| P4 | 0ms | 3ms |

---------------------------------------------

Total waiting time: (3 + 16 + 8 + 0) = 27ms

Average waiting time: ( 27/4) = 6.75ms

Total turnaround time: (9 + 24 + 15 + 3) = 51ms

Average turnaround time: (51/4) = 12.75ms

Advantages of SJF (preemptive):

Short processes will be executed first.

Disadvantages of SJF (preemptive):

It may result in starvation if short processes keep on coming.
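A millisecond-by-millisecond simulation reproduces the table above. The Python sketch below is our own illustration (the `srtf` helper name is not from the notes); at every tick it runs the ready process with the least remaining time.

```python
def srtf(processes):
    """Preemptive SJF (Shortest Remaining Time First), in 1 ms ticks.
    processes: list of (name, arrival, burst).
    Returns {name: (waiting, turnaround)} in ms."""
    arrival = {n: a for n, a, _ in processes}
    remaining = {n: b for n, _, b in processes}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                       # nothing has arrived yet
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1             # run the shortest job for 1 ms
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    return {n: (finish[n] - a - b, finish[n] - a) for n, a, b in processes}
```

On the worked example it gives an average waiting time of 6.75 ms and an average turnaround time of 12.75 ms, matching the table.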

ROUND-ROBIN

In this approach of CPU scheduling, we have a fixed time quantum and the CPU will be allocated to a

process for that amount of time only at a time. For example, if we are having three process P1, P2, and

P3, and our time quantum is 2ms, then P1 will be given 2ms for its execution, then P2 will be given

2ms, then P3 will be given 2ms. After one cycle, again P1 will be given 2ms, then P2 will be given

2ms and so on until the processes complete its execution.

It is generally used in the time-sharing environments and there will be no starvation in case of the

round-robin.

Example:

Process Arrival time Burst time

P1 0 ms 10 ms

P2 0 ms 5 ms

P3 0 ms 8 ms
Gantt Chart

P1 (0 - 2) | P2 (2 - 4) | P3 (4 - 6) | P1 (6 - 8) | P2 (8 - 10) | P3 (10 - 12)

P1 (12 - 14) | P2 (14 - 15) | P3 (15 - 17) | P1 (17 - 19) | P3 (19 - 21) | P1 (21 - 23)

(all times in ms)

In the above example, every process will be given 2ms in one turn because we have taken the time

quantum to be 2ms. So process P1 will be executed for 2ms, then process P2 will be executed for 2ms,

then P3 will be executed for 2 ms. Again process P1 will be executed for 2ms, then P2, and so on. The

waiting time and turnaround time of the processes will be:

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 13ms | 23ms |

| P2 | 10ms | 15ms |

| P3 | 13ms | 21ms |

---------------------------------------------

Total waiting time: (13 + 10 + 13) = 36ms

Average waiting time: ( 36/3) = 12ms

Total turnaround time: (23 + 15 + 21) = 59ms

Average turnaround time: (59/3) = 19.66ms

Advantages of round-robin:

No starvation will be there in round-robin because every process will get chance for its

execution.
Used in time-sharing systems.

Disadvantages of round-robin:

We have to perform a lot of context switching here, and the time spent switching is pure overhead
during which no useful work is done.
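The round-robin schedule above can be reproduced with a queue-based simulation. The Python sketch below is our own illustration (the `round_robin` helper name is not from the notes); each process gets one time quantum per turn and rejoins the back of the queue if unfinished.

```python
from collections import deque

def round_robin(processes, quantum):
    """Round robin: processes is a list of (name, arrival, burst).
    Returns {name: (waiting, turnaround)} in ms."""
    arrivals = sorted(processes, key=lambda p: p[1])
    remaining = {n: b for n, _, b in processes}
    queue, finish, time, i = deque(), {}, 0, 0
    while len(finish) < len(processes):
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0])     # admit newly arrived processes
            i += 1
        if not queue:                        # CPU idle: jump to next arrival
            time = arrivals[i][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])  # one time slice
        time += run
        remaining[name] -= run
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0])     # arrivals during the slice go first
            i += 1
        if remaining[name]:
            queue.append(name)               # unfinished: back of the queue
        else:
            finish[name] = time
    return {n: (finish[n] - a - b, finish[n] - a) for n, a, b in processes}
```

On the example (quantum 2 ms) it reproduces an average waiting time of 12 ms, matching the table.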

PRIORITY SCHEDULING (NON-PREEMPTIVE)

In this approach, we have a priority number associated with each process and based on that priority

number the CPU selects one process from a list of processes. The priority number can be anything. It

is just used to identify which process is having a higher priority and which process is having a lower

priority. For example, you can denote 0 as the highest priority process and 100 as the lowest priority

process. Also, the reverse can be true i.e. you can denote 100 as the highest priority and 0 as the

lowest priority.

Example:

Process Arrival time Burst time Priority

P1 0 ms 5 ms 1

P2 1 ms 3 ms 2

P3 2 ms 8 ms 1

P4 3 ms 6 ms 3

NOTE: In this example, we are taking higher priority number as higher priority

Gantt Chart

P1 (0 - 5 ms) | P4 (5 - 11 ms) | P2 (11 - 14 ms) | P3 (14 - 22 ms)

In the above example, at 0ms, we have only one process P1. So P1 will execute for 5ms because we

are using the non-preemptive technique here. After 5ms, there are three processes in the ready state i.e.

process P2, process P3, and process P4. Of these three processes, the process P4 has the

highest priority, so it will be executed for 6ms; after that, process P2 will be executed for 3ms,

followed by the process P3. The waiting and turnaround time of processes will be:
---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 0ms | 5ms |

| P2 | 10ms | 13ms |

| P3 | 12ms | 20ms |

| P4 | 2ms | 8ms |

---------------------------------------------

Total waiting time: (0 + 10 + 12 + 2) = 24ms

Average waiting time: ( 24/4) = 6ms

Total turnaround time: (5 + 13 + 20 + 8) = 46ms

Average turnaround time: (46/4) = 11.5ms

Advantages of priority scheduling (non-preemptive):

Higher priority processes like system processes are executed first.

Disadvantages of priority scheduling (non-preemptive):

It can lead to starvation if higher-priority processes keep arriving in the ready state.

If the priorities of two or more processes are the same, then we have to use some other scheduling

algorithm to break the tie.
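The priority example can also be checked by simulation. The Python sketch below is our own illustration (the `priority_np` helper name is not from the notes); it follows the convention of this example, where a larger number means higher priority, and a dispatched process runs to completion.

```python
def priority_np(processes):
    """Non-preemptive priority scheduling; larger number = higher priority.
    processes: list of (name, arrival, burst, priority).
    Returns {name: (waiting, turnaround)} in ms."""
    pending = list(processes)
    time, stats = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                         # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = max(ready, key=lambda p: p[3])  # highest priority number wins
        name, arrival, burst, _ = job
        time += burst                         # runs to completion
        stats[name] = (time - arrival - burst, time - arrival)
        pending.remove(job)
    return stats
```

On the worked example it gives an average waiting time of 6 ms and an average turnaround time of 11.5 ms, matching the table.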

MULTILEVEL QUEUE SCHEDULING

In multilevel queue scheduling, we divide the processes into several batches or queues and then

each queue is given some priority number. For example, if there are four processes P1, P2, P3, and P4,

then we can put process P1 and P4 in queue1 and process P2 and P3 in queue2. Now, we can assign

some priority to each queue. So, we can take the queue1 as having the highest priority and queue2 as

the lowest priority. So, all the processes of the queue1 will be executed first followed by queue2.

Inside the queue1, we can apply some other scheduling algorithm for the execution of processes of

queue1. Similar is with the case of queue2.

So, multiple queues are maintained for processes that have common characteristics; each queue has its own priority, and some scheduling algorithm is used within each queue.
Example:

Process Arrival time Burst time Queue

P1 0 ms 5 ms 1

P2 0 ms 3 ms 2

P3 0 ms 8 ms 2

P4 0 ms 6 ms 1

Gantt Chart

|  P1  |  P4  |  P2  |  P3  |  P2  |  P3  |
0      5      11     13     15     16     22   (ms)

In the above example, we have two queues, queue1 and queue2. Queue1 has the higher priority and uses the FCFS approach, while queue2 uses the round-robin approach (time quantum = 2 ms).

Since the priority of queue1 is higher, queue1 will be executed first. In queue1, we have two processes, P1 and P4, and we are using FCFS, so P1 will be executed followed by P4. Once the work of queue1 is finished, the processes of queue2 will be executed using the round-robin approach.
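The schedule above can be reproduced with a short sketch: queue1 runs FCFS to completion, then queue2 runs round-robin with a 2 ms quantum. One assumption to note: consecutive slices of the same process (P3 from 16 to 22 ms) appear here as separate 2 ms entries rather than one merged block.

```python
from collections import deque

def fcfs(queue, time):
    """Run each (name, burst) process to completion in arrival order."""
    timeline = []
    for name, burst in queue:
        timeline.append((name, time, time + burst))
        time += burst
    return timeline, time

def round_robin(queue, time, quantum=2):
    """Run processes in fixed-quantum slices until all bursts are exhausted."""
    timeline = []
    ready = deque(queue)
    while ready:
        name, burst = ready.popleft()
        run = min(quantum, burst)
        timeline.append((name, time, time + run))
        time += run
        if burst > run:                      # not finished: go to the back
            ready.append((name, burst - run))
    return timeline, time

t1, t = fcfs([("P1", 5), ("P4", 6)], 0)      # queue1: higher priority, FCFS
t2, _ = round_robin([("P2", 3), ("P3", 8)], t)  # queue2: round-robin, q = 2 ms
print(t1 + t2)
```

The output matches the Gantt chart: P1 0-5, P4 5-11, P2 11-13, P3 13-15, P2 15-16, then P3 alone until 22 ms.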

MULTILEVEL FEEDBACK QUEUE SCHEDULING

Multilevel feedback queue scheduling is similar to multilevel queue scheduling but here the processes

can change their queue also. For example, if a process is in queue1 initially then after partial execution

of the process, it can go into some other queue.

In a multilevel feedback queue, we have a list of queues, each with some priority, and the higher priority queue is always executed first. Let's assume that we have two queues, queue1 and queue2, and we are using round-robin for both: the time quantum for queue1 is 2 ms and for queue2 is 3 ms. Now, if a process starts executing in queue1 and gets fully executed within 2 ms, its priority will not be changed. But if the execution of the process is not completed within the time quantum of queue1, then the priority of that process will be reduced and it will be placed in the lower priority queue, queue2, and this process will continue.

While executing a lower priority queue, if a process comes into the higher priority queue, then the

execution of that lower priority queue will be stopped and the execution of the higher priority queue

will be started. This can lead to starvation because if the process keeps on going into the higher

priority queue then the lower priority queue keeps on waiting for its turn.
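The demotion rule described above can be sketched as follows. This is a simplified model under the stated assumptions (queue1 quantum 2 ms, queue2 quantum 3 ms); the process names and bursts are made up for illustration, and new arrivals during execution are not modelled.

```python
from collections import deque

def mlfq(procs, quanta=(2, 3)):
    """Two-level feedback queue: all processes start in queue1; a process
    that exhausts queue1's quantum is demoted to queue2. Queue1 always
    runs before queue2. Returns (name, queue level, start, end) slices."""
    queues = [deque(procs), deque()]
    timeline, time, level = [], 0, 0
    while any(queues):
        # Always prefer the higher-priority queue when it has work.
        if not queues[level] or (level == 1 and queues[0]):
            level = 0 if queues[0] else 1
            continue
        name, burst = queues[level].popleft()
        run = min(quanta[level], burst)
        timeline.append((name, level + 1, time, time + run))
        time += run
        if burst > run:
            # Demote from queue1 to queue2; queue2 processes rejoin queue2.
            queues[min(level + 1, 1)].append((name, burst - run))
    return timeline

print(mlfq([("A", 5), ("B", 2)]))
```

Here process A uses its full 2 ms quantum in queue1, is demoted, and finishes in queue2 with the larger 3 ms quantum, while B completes within queue1's quantum and is never demoted.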

MULTIPLE-LEVEL QUEUES SCHEDULING

Multiple-level queues are not an independent scheduling algorithm. They make use of other existing

algorithms to group and schedule jobs with common characteristics.

Multiple queues are maintained for processes with common characteristics.

Each queue can have its own scheduling algorithms.

Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue.

The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU

based on the algorithm assigned to the queue.

PAGING: This is a memory management scheme that eliminates the need for contiguous allocation of physical memory. The process of retrieving processes in the form of pages from secondary storage into main memory is known as paging. The basic purpose of paging is to divide each process into pages; additionally, the main memory is divided into frames. This scheme permits the physical address space of a process to be non-contiguous.
In paging, the physical memory is divided into fixed-size blocks called page frames, which are the same size as the pages used by the
process. The process’s logical address space is also divided into fixed-size blocks called pages, which are the same size as the page
frames. When a process requests memory, the operating system allocates one or more page frames to the process and maps the
process’s logical pages to the physical page frames.
The mapping between logical pages and physical page frames is maintained by the page table, which is used by the memory
management unit to translate logical addresses into physical addresses. The page table maps each logical page number to a physical
page frame number.
Terminologies Associated with Memory Management
• Logical Address or Virtual Address: This is an address generated by the CPU and used by a process to access memory. It is called a logical or virtual address because it is not a physical location in memory but a reference to a location within the process's logical address space.
• Logical Address Space or Virtual Address Space: This is the set of all logical addresses generated by a program. It is normally measured in words or bytes and is divided into fixed-size pages in a paging scheme.
• Physical Address: This is an address that corresponds to a physical location in memory. It is the actual address available on the memory unit and is used by the memory controller to access memory.
• Physical Address Space: This is the set of all physical addresses that correspond to the logical addresses in the process's logical address space. It is usually measured in words or bytes and is divided into fixed-size frames in a paging scheme.
In a paging scheme, the logical address space is divided into fixed-size pages, and each page is mapped to a corresponding frame in the physical address space. The operating system keeps a page table for each process, which maps the process's logical addresses to the corresponding physical addresses. When a process accesses memory, the CPU generates a logical address that is translated to a physical address using the page table. The memory controller then uses the physical address to access memory.
Important Features of Paging in Memory Management
• Logical to physical address mapping: In paging, the logical address space of a process is divided into fixed-size pages, and each page is mapped to a corresponding physical frame in main memory. This allows the operating system to manage memory more flexibly, since it can allocate and deallocate frames as needed.
• Fixed page and frame size: Paging uses a fixed page size, which is usually equal to the size of a frame in main memory. This helps simplify the memory management process and improves system performance.
• Page table entries: Each page in the logical address space of a process is represented by a page table entry (PTE), which contains information about the corresponding physical frame in main memory. This includes the frame number, as well as other control bits that are used by the operating system to manage memory.
• Number of page table entries: The number of page table entries in a process's page table is equal to the number of pages in the logical address space of that process.
• Page table stored in main memory: The page table for each process is typically stored in main memory, to allow efficient access and modification by the operating system. However, this can also introduce overhead, because the page table must be updated whenever a process is swapped in or out of main memory.
How Does Paging Work?
Paging is a method used by operating systems to manage memory efficiently. It breaks physical memory into fixed-size blocks called
“frames” and logical memory into blocks of the same size called “pages.” When a program runs, its pages are loaded into any available
frames in the physical memory.
This approach prevents fragmentation issues by keeping memory allocation uniform. Each program has a page table, which the
operating system uses to keep track of where each page is stored in physical memory. When a program accesses data, the system uses
this table to convert the program’s address into a physical memory address.
Paging allows for better memory use and makes it easier to manage. It also supports virtual memory, letting parts of programs be stored
on disk and loaded into memory only when needed. This way, even large programs can run without fitting entirely into main memory.
• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 * 2^20 words = 2^27 words, then Logical Address = log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 * 2^20 words = 2^24 words, then Physical Address = log2(2^24) = 24 bits
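The conversions above are just powers-of-two arithmetic (address bits = log2 of the address-space size in words), which can be checked directly:

```python
import math

# 31-bit logical address -> 2^31 words = 2 G words (1 G = 2^30)
assert 2 ** 31 == 2 * 2 ** 30
# 128 M words = 2^7 * 2^20 = 2^27 words -> 27-bit logical address
assert math.log2(2 ** 7 * 2 ** 20) == 27
# 22-bit physical address -> 2^22 words = 4 M words (1 M = 2^20)
assert 2 ** 22 == 4 * 2 ** 20
# 16 M words = 2^4 * 2^20 = 2^24 words -> 24-bit physical address
assert math.log2(2 ** 4 * 2 ** 20) == 24
print("all size/bit conversions check out")
```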
The mapping from virtual to physical address is done by the Memory Management Unit (MMU) which is a hardware device and this
mapping is known as the paging technique.
• The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
Example
• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)

The address generated by the CPU is divided into
• Page number (p): the number of bits required to represent the pages in the Logical Address Space, i.e. the page number
• Page offset (d): the number of bits required to represent a particular word within a page, i.e. the page size of the Logical Address Space, or the word number within a page (the page offset)
In a paging scheme, the physical address space is divided into fixed-size frames, each of which contains some number of bytes or words. When a process is running, its logical address space is split into fixed-size pages, which are mapped to corresponding frames in the physical address space.
Physical Address is divided into:
To represent a physical address in this scheme, two parts are commonly used:
Frame Number: This is the number of the frame in the physical address space that contains the byte or word being addressed. The number of bits required to represent the frame number depends on the size of the physical address space and the size of each frame. For instance, if the physical address space is 2^20 bytes and each frame is 4 KB (2^12 bytes) in size, then the frame number requires 20 - 12 = 8 bits.
Frame Offset: This is the number of the byte or word within the frame that is being addressed. The number of bits required to represent the frame offset depends on the size of each frame. For instance, if each frame is 4 KB in size, then the frame offset requires 12 bits. So, a physical address in this scheme may be represented as follows:
Physical Address = (Frame Number << Number of Bits in Frame Offset) + Frame Offset, where "<<" represents a bitwise left shift operation.
The Translation Lookaside Buffer (TLB) is a small hardware cache used to speed up the translation of logical addresses to physical addresses.
• The TLB is associative, high-speed memory.
• Each entry in TLB consists of two parts: a tag and a value.
• When this memory is used, then an item is compared with all tags simultaneously. If the item is found, then the corresponding
value is returned.
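The tag/value lookup described above can be sketched with a dictionary standing in for the associative hardware search; a real TLB compares all tags in parallel. The page-table contents and 1 K-word page size here are hypothetical.

```python
page_table = {0: 5, 1: 9, 2: 3}   # page number -> frame number (hypothetical)
tlb = {}                          # cached entries: tag (page) -> value (frame)

def translate(page, offset, page_size=1024):
    """Translate (page, offset) to a physical address, consulting the TLB
    first and falling back to the page table on a miss."""
    if page in tlb:                # TLB hit: no page-table walk needed
        frame = tlb[page]
    else:                          # TLB miss: walk the page table, cache it
        frame = page_table[page]
        tlb[page] = frame
    return frame * page_size + offset

print(translate(1, 100))   # miss, then cached: 9 * 1024 + 100 = 9316
print(translate(1, 200))   # hit: 9 * 1024 + 200 = 9416
```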
Paging is a memory management technique used in operating systems to manage memory and allocate memory to processes. In paging,
memory is divided into fixed-size blocks called pages, and processes are allocated memory in terms of these pages. Each page is of the
same size, and the size is typically a power of 2, such as 4KB or 8 KB.
Important Points About Paging in Operating Systems
• Reduces external fragmentation: Paging helps eliminate external fragmentation by allocating memory in fixed-size blocks (pages), since any free frame can be given to any process. This allows for more efficient use of memory.
• Enables memory to be allocated on demand: Paging enables memory to be allocated on demand, which means memory is only allocated when it is needed. This allows for more efficient use of memory, since only the pages that are actually used by the process need to be allocated in physical memory.
• Protection and sharing of memory: Paging allows for the protection and sharing of memory between processes, as each process has its own page table that maps its logical address space to its physical address space. This permits processes to share data while preventing unauthorized access to each other's memory.
• Internal fragmentation: Paging can result in internal fragmentation, where the last page of a process is only partially filled and part of a frame is wasted. Smaller page sizes reduce this waste at the cost of larger page tables.
• Overhead: Paging involves overhead due to the maintenance of the page table and the translation of logical addresses to physical addresses. The operating system must maintain a page table for each process and perform an address translation whenever a process accesses memory, which can slow down the machine.
What is the Memory Management Unit (MMU)?
A memory management unit (MMU) is a hardware component that converts logical addresses to physical addresses. The logical address is the address generated by the CPU for each page, and the physical address is the real address of the frame where the page is stored.
Whenever a page has to be accessed by the CPU using its logical address, the corresponding physical address is required to access the page. The logical address comprises two parts: the page number and the offset.
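The MMU's job described above can be sketched as follows: split the logical address into page number and offset, look up the frame number in the page table, and recombine. A 4 KB page size (12 offset bits) and the page-table contents are assumptions for illustration.

```python
OFFSET_BITS = 12                       # 4 KB pages -> 12 offset bits
page_table = {0: 2, 1: 7, 2: 4}        # page number -> frame number (hypothetical)

def mmu_translate(logical_address):
    """Translate a logical address: high bits select the page, low bits
    are the offset; the frame number replaces the page number."""
    page = logical_address >> OFFSET_BITS                 # page number
    offset = logical_address & ((1 << OFFSET_BITS) - 1)   # offset within page
    frame = page_table[page]                              # page-table lookup
    return (frame << OFFSET_BITS) | offset

# Page 1, offset 0x034 -> frame 7, same offset
print(hex(mmu_translate(0x1034)))   # 0x7034
```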
Conclusion
In conclusion, paging is a memory management technique that helps computers store data efficiently and retrieve it quickly by breaking memory into small, fixed-size chunks called pages. It handles large amounts of data without the problem of external fragmentation, which improves performance and usability.
