Layered Structure of Operating System
The operating system can be implemented with the help of various structures. The structure of the OS depends mainly on how the various common components of the operating system are interconnected and melded into the kernel. Depending on this, several operating system structures are possible, and the layered structure is one of them.
The layered structure approach breaks up the operating system into different layers and retains much more control over the system. The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface. The layers are designed so that each layer uses only the functions of the layers below it. This simplifies debugging: the lower-level layers are debugged first, so if an error occurs while a particular layer is being debugged, the error must be in that layer, because the layers below it have already been verified.
[Figure: the layers of a layered operating system]
Layer 6: User Programs
Layer 5: I/O Buffer
Layer 4: Process Management
Layer 3: Memory Management
Layer 2: CPU Scheduling
Layer 1: Hardware
This design allows implementers to change the inner workings of a layer and increases modularity: as long as the external interface of the routines does not change, developers have the freedom to change how those routines are implemented.
The main advantage is the simplicity of
construction and debugging. The main
difficulty is defining the various layers.
The main disadvantage of this structure is that
the data needs to be modified and passed on at
each layer, which adds overhead to the system.
Moreover, careful planning of the layers is
necessary as a layer can use only lower-level
layers. UNIX is an example of this structure.
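To make the idea of a stable external interface concrete, here is a minimal C sketch. The names mem_allocate_frame and NUM_FRAMES are hypothetical, not taken from UNIX or any real kernel: a higher layer depends only on the declared prototype, so the memory-management layer can change its internal allocation strategy without its callers noticing.

#include <stdio.h>

/* External interface of the memory-management layer.
 * Higher layers depend only on this prototype. */
int mem_allocate_frame(void);

/* Inner workings, version 1: a simple bump allocator over a fixed frame
 * table. It could later be replaced by a free-list or bitmap allocator
 * without changing the prototype above, so no higher layer would need
 * to be modified. */
#define NUM_FRAMES 8
static int next_free = 0;

int mem_allocate_frame(void) {
    if (next_free >= NUM_FRAMES)
        return -1;              /* out of frames */
    return next_free++;         /* hand out the next unused frame number */
}

/* A higher layer (say, process management) using only the interface. */
int main(void) {
    int frame = mem_allocate_frame();
    if (frame >= 0)
        printf("process loaded into frame %d\n", frame);
    else
        printf("no free frames\n");
    return 0;
}

Replacing the bump allocator with, say, a bitmap allocator would change only the body of mem_allocate_frame; the calling layer compiles and runs unchanged, which is exactly the freedom the layered design is meant to give.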
Why Layering in Operating System?
Layering provides a distinct advantage in an
operating system. All the layers can be defined separately and interact with each other as required. It is also easier to create, maintain, and update the system when it is built in layers, since a change inside one layer does not affect the rest of the layers. Each layer in the operating system can interact only with the layers directly above and below it.
The lowest layer handles the hardware, and the
uppermost layer deals with the user applications.
Architecture of Layered Structure
This type of operating system was created as an
improvement over the early monolithic systems.
In a layered operating system, the OS is split into several layers, and each layer has its own set of functionalities. The implementation of the layers follows these rules:
A particular layer can access all the layers below it, but it cannot access the layers above it. That is, layer n-1 can access all the layers from n-2 down to 0, but it cannot access the nth layer.
Layer 0 deals with allocating the processor, switching between processes when an interrupt occurs or the timer expires, and the basic multiprogramming of the CPU.
Thus, if the user layer wants to interact with the hardware layer, the request has to travel through all the layers from n-1 down to 1. Each layer must be designed and implemented so that it needs only the services provided by the layers below it.
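As a rough illustration (the layer and function names are made up for this sketch, not taken from any real kernel), the following C program models this downward-only call chain: a request issued by the user-program layer is handed down through every intermediate layer until it reaches the hardware layer.

#include <stdio.h>

/* Layer 1: hardware - the only layer allowed to touch the device. */
static void hw_write(const char *data) {
    printf("[hardware]  writing \"%s\" to the device\n", data);
}

/* Layer 2: CPU scheduling - here it simply forwards the request downward. */
static void sched_write(const char *data) {
    printf("[cpu sched] request passes through\n");
    hw_write(data);
}

/* Layer 3: memory management. */
static void mem_write(const char *data) {
    printf("[memory]    request passes through\n");
    sched_write(data);
}

/* Layer 4: process management. */
static void proc_write(const char *data) {
    printf("[process]   request passes through\n");
    mem_write(data);
}

/* Layer 5: I/O buffer - the last layer below the user programs. */
static void io_write(const char *data) {
    printf("[i/o buf]   request passes through\n");
    proc_write(data);
}

/* Layer 6: user programs - may call only the layer directly below. */
int main(void) {
    io_write("hello");   /* the request travels 6 -> 5 -> 4 -> 3 -> 2 -> 1 */
    return 0;
}

Each function calls only the function in the layer directly beneath it, so a request from layer 6 always traverses every intermediate layer before it touches the hardware; this is also the source of the overhead mentioned under the disadvantages below.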
There are six layers in the layered operating system. A diagram demonstrating these layers is shown above, and each layer is described below:
1. Hardware: This layer interacts with the
system hardware and coordinates with all the
peripheral devices used, such as a printer,
mouse, keyboard, scanner, etc. These types
of hardware devices are managed in the
hardware layer.
The hardware layer is the lowest and most privileged layer in the layered operating system architecture, as it works directly with the physical machine.
2. CPU Scheduling: This layer deals with
scheduling the processes for the CPU. Many
scheduling queues are used to handle
processes. When processes enter the system, they are put into the job queue, and the processes that are ready to execute in main memory are kept in the ready queue. This layer is responsible for deciding which processes are allocated the CPU and which have to wait outside it.
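A minimal, hypothetical C sketch of these two queues (tiny fixed-size arrays standing in for real kernel queues): a new process first enters the job queue and is moved to the ready queue once it has been admitted to main memory.

#include <stdio.h>

#define MAX 16

/* Very small FIFO queues standing in for the job queue and ready queue. */
static int job_queue[MAX],   job_count   = 0;
static int ready_queue[MAX], ready_count = 0;

/* A new process entering the system goes into the job queue. */
static void admit_process(int pid) {
    job_queue[job_count++] = pid;
}

/* Move the oldest job into the ready queue once it is loaded in memory. */
static void move_to_ready(void) {
    if (job_count == 0) return;
    ready_queue[ready_count++] = job_queue[0];
    for (int i = 1; i < job_count; i++)      /* shift the remaining jobs */
        job_queue[i - 1] = job_queue[i];
    job_count--;
}

int main(void) {
    admit_process(101);
    admit_process(102);
    move_to_ready();                          /* pid 101 is now ready */
    printf("ready: %d, still in job queue: %d\n",
           ready_queue[0], job_queue[0]);     /* prints 101 and 102 */
    return 0;
}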
3. Memory Management: Memory management deals with main memory, moving processes from disk into primary memory for execution and back again. This is handled by the third layer of the operating system, and all memory management is associated with it. A computer contains various types of memory, such as RAM and ROM.
In the case of RAM, this layer is concerned with swapping processes in and out of memory. While the computer runs, some processes are moved into main memory (RAM) for execution, and when a program such as a calculator exits, it is removed from main memory.
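The following toy C sketch illustrates the swap-in/swap-out idea described above. All names and sizes are invented for illustration; real swapping works on pages or whole process images managed by the kernel.

#include <stdio.h>
#include <stdbool.h>

#define RAM_SLOTS 2   /* pretend main memory holds only two processes */

static int  ram[RAM_SLOTS];
static bool in_use[RAM_SLOTS];

/* Swap a process into main memory; return its slot, or -1 if RAM is full. */
static int swap_in(int pid) {
    for (int i = 0; i < RAM_SLOTS; i++) {
        if (!in_use[i]) {
            ram[i] = pid;
            in_use[i] = true;
            printf("swapped in  pid %d -> slot %d\n", pid, i);
            return i;
        }
    }
    printf("RAM full: pid %d stays on disk\n", pid);
    return -1;
}

/* Swap a process out when it exits (or must make room for another one). */
static void swap_out(int slot) {
    printf("swapped out pid %d <- slot %d\n", ram[slot], slot);
    in_use[slot] = false;
}

int main(void) {
    int calc = swap_in(201);   /* e.g. a calculator program starts */
    swap_in(202);
    swap_in(203);              /* RAM is full, so it stays on disk */
    swap_out(calc);            /* the calculator exits, freeing its slot */
    swap_in(203);              /* now there is room for it */
    return 0;
}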
4. Process Management: This layer is responsible for managing processes, i.e., assigning the processor to a process and deciding which processes stay in the waiting queue. The priorities of the processes are also managed in this layer. The different algorithms used for process scheduling are FCFS (first come, first served), SJF (shortest job first), priority scheduling, round-robin scheduling, etc.
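As a worked example of the first of these algorithms, here is a short C sketch of FCFS scheduling that computes each process's waiting time from hypothetical burst times (the burst values are made up for illustration):

#include <stdio.h>

int main(void) {
    /* Hypothetical CPU burst times (ms) in the order the processes arrived. */
    int burst[] = {24, 3, 3};
    int n = sizeof burst / sizeof burst[0];

    int waiting = 0, total_waiting = 0;
    for (int i = 0; i < n; i++) {
        /* In FCFS each process waits for all the processes ahead of it. */
        printf("P%d: burst=%2d ms, waiting=%2d ms\n", i + 1, burst[i], waiting);
        total_waiting += waiting;
        waiting += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total_waiting / n);
    return 0;
}

With these numbers the waiting times are 0, 24, and 27 ms, giving an average of 17 ms; serving the long process first is exactly why FCFS can perform poorly.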
5. I/O Buffer: I/O devices are very important in
computer systems. They provide users with
the means of interacting with the system.
This layer handles the buffers for the I/O
devices and makes sure that they work
correctly.
Suppose you are typing on the keyboard. A keyboard buffer is attached to the keyboard, and it stores the incoming keystrokes temporarily. Similarly, every input/output device has a buffer attached to it, because I/O devices process and store data much more slowly than the processor. The computer uses these buffers to bridge the speed gap between the processor and the input/output devices.
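A minimal C sketch of such a keyboard-style buffer, written here as a small ring buffer (purely illustrative; in a real driver the characters would be deposited by an interrupt handler):

#include <stdio.h>

#define BUF_SIZE 8

static char buf[BUF_SIZE];
static int  head = 0, tail = 0, count = 0;

/* Called when the device delivers a character. */
static int buffer_put(char c) {
    if (count == BUF_SIZE) return -1;   /* buffer full, character dropped */
    buf[tail] = c;
    tail = (tail + 1) % BUF_SIZE;
    count++;
    return 0;
}

/* Called when a program asks the I/O layer for the next character. */
static int buffer_get(char *c) {
    if (count == 0) return -1;          /* nothing buffered yet */
    *c = buf[head];
    head = (head + 1) % BUF_SIZE;
    count--;
    return 0;
}

int main(void) {
    /* Keystrokes arrive faster than the program reads them... */
    buffer_put('h'); buffer_put('i'); buffer_put('!');

    /* ...but nothing is lost: the program drains the buffer later. */
    char c;
    while (buffer_get(&c) == 0)
        putchar(c);
    putchar('\n');                      /* prints "hi!" */
    return 0;
}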
6. User Programs: This is the highest layer in
the layered operating system. This layer
deals with the many user programs and
applications that run in an operating system,
such as word processors, games, browsers,
etc. You can also call this an application layer
because it is concerned with application
programs.
Advantages of Layered Structure
There are several advantages of the layered
structure of operating system design, such as:
1. Modularity: This design promotes modularity, as each layer performs only the tasks assigned to it.
2. Easy debugging: Because the layers are discrete, debugging is easy. Suppose an error occurs in the CPU scheduling layer; the developer needs to search only that particular layer, unlike a monolithic system, where all the services are lumped together.
3. Easy update: A modification made in a
particular layer will not affect the other
layers.
4. No direct access to hardware: The hardware layer is the innermost layer in the design, so a user can use the services of the hardware but cannot directly access or modify it, unlike the simple structure, in which the user had direct access to the hardware.
5. Abstraction: Every layer is concerned only with its own functions, so the functions and implementations of the other layers are hidden from it.
Disadvantages of Layered Structure
Though this system has several advantages over
the Monolithic and Simple design, there are also
some disadvantages, such as:
1. Complex and careful implementation: Because a layer can use only the services of the layers below it, the arrangement of the layers must be planned carefully. For example, the memory management layer uses the services of the backing storage driver, so that driver must be placed below the memory management layer. Thus, with great modularity comes complex implementation.
2. Slower in execution: If a layer wants to interact with another layer, its request has to travel through all the layers between the two interacting layers. This increases the response time compared with a monolithic system, so increasing the number of layers may lead to a very inefficient design.
3. Functionality: It is not always possible to
divide the functionalities. Many times, they
are interrelated and can't be separated.
4. Communication: There is no direct communication between non-adjacent layers.