Device Management in Operating System
Device management in an operating system means controlling input/output devices such as disks, microphones, keyboards, printers, magnetic tape, USB ports, camcorders, scanners, and other accessories, along with supporting units such as device control channels.
A process may require various resources, including main memory, file access, and access to disk drives. If the resources are available, they are allocated and control is returned to the CPU; otherwise, the process must wait until adequate resources become available. The system has multiple devices, and in order to handle these physical or virtual devices, the operating system requires a separate program known as a device controller. It also determines whether the requested device is available.
Functions of device management in the operating system
There are various functions of device management in the operating system. Some of them are as follows:
1. It keeps track of data, status, location, use, etc. of every device; the file system is the term used for this group of facilities.
2. It enforces the pre-determined policies and decides which process receives the device when and for how
long.
3. It improves the performance of specific devices.
4. It monitors the status of every device, including printers, storage drives, and other devices.
5. It allocates devices and deallocates them effectively. Deallocation happens at two levels: first, the device is temporarily freed when its I/O command completes; second, the device is permanently released when the job is completed.
Input/Output (IO) devices can be classified into several categories based on their functionality and purpose. Here are
some common classifications of IO devices:
1. Input Devices:
Keyboard: Allows users to input alphanumeric characters and commands.
Mouse: Enables users to control the cursor and select objects on the screen.
Touchscreen: Displays visual elements and detects user input through touch.
Scanner: Converts physical documents or images into digital formats.
Microphone: Captures audio and converts it into a digital signal.
Webcam: Captures video and transmits it as a digital signal.
2. Output Devices:
Monitor/Display: Presents visual information and graphics to the user.
Printer: Produces hard copies of digital documents or images.
Speaker: Outputs audio signals and reproduces sound.
Projector: Displays visual content on a larger screen or surface.
Headphones/Earphones: Delivers audio output for personal listening.
3. Storage Devices:
Hard Disk Drive (HDD): Provides non-volatile storage for digital data.
Solid-State Drive (SSD): Similar to an HDD but uses flash memory for storage.
USB Flash Drive: Portable storage device that connects via USB port.
Optical Disc Drive: Reads and writes data to optical discs such as CDs, DVDs, or Blu-ray discs.
Memory Card: Removable storage used in cameras, smartphones, and other devices.
4. Communication Devices:
Network Interface Card (NIC): Connects a device to a network via Ethernet cable.
Modem: Converts digital signals to analog signals for transmission over telephone lines.
Router: Connects multiple devices to a network and directs data traffic.
Wireless Adapter: Enables wireless connectivity to networks (e.g., Wi-Fi or Bluetooth).
Bluetooth Dongle: Adds Bluetooth functionality to devices without built-in support.
5. Other Devices:
Game Controller: Input device specifically designed for gaming purposes.
Barcode Reader: Scans barcodes for product identification or data retrieval.
Biometric Scanner: Captures unique biological features for identification purposes (e.g., fingerprint scanner).
GPS (Global Positioning System): Receives signals from satellites to determine geographic location.
Sensor Devices: Detects and measures physical or environmental conditions (e.g., temperature, light,
motion).
Controllers: IO devices often require a controller to manage their operations and interface with the
computer system. Controllers can be categorized based on the type of device they control, such as:
1. Display Controller: Manages the output to a display device like a monitor or projector.
2. Disk Controller: Controls the transfer of data between the computer system and disk storage
devices like hard drives or SSDs.
3. Network Controller: Facilitates communication between the computer system and a network,
handling tasks such as data transmission and reception.
4. USB Controller: Regulates communication between the computer and USB devices like
keyboards, mice, or flash drives.
5. Audio Controller: Manages audio input and output, including tasks like sound processing and
conversion.
Memory-Mapped IO: Memory-mapped IO allows the IO devices to communicate with the computer
system using memory addresses. In this approach, the IO device is assigned a range of memory
addresses, and reading from or writing to those addresses triggers IO operations. The CPU
communicates with the device by reading from or writing to the corresponding memory addresses. This
technique simplifies the IO process, as IO devices can be accessed using the same instructions and
addressing modes as regular memory.
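As a rough sketch of this idea (not tied to any real hardware), the C fragment below treats a hypothetical device's status and data registers as fixed memory addresses and accesses them through volatile pointers. The addresses 0x40000000 and 0x40000004 and the "ready" bit are invented for illustration; code like this would live inside a driver, not an ordinary user program.

#include <stdint.h>

/* Hypothetical memory-mapped register addresses (invented for illustration). */
#define DEV_STATUS_REG ((volatile uint32_t *)0x40000000u)
#define DEV_DATA_REG   ((volatile uint32_t *)0x40000004u)

#define STATUS_READY   0x1u   /* assumed "data ready" bit in the status register */

/* Read one word from the device once it reports ready.
   Reading and writing these addresses is what actually performs the IO. */
uint32_t mmio_read_word(void)
{
    while ((*DEV_STATUS_REG & STATUS_READY) == 0) {
        /* busy-wait until the device sets its ready bit */
    }
    return *DEV_DATA_REG;     /* a load from this address reads the device */
}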
Direct Memory Access (DMA) Operation: DMA allows for efficient data transfer between IO devices and
memory without involving the CPU for every data transfer. In DMA operation, the IO device and
memory communicate directly, bypassing the CPU. The IO device and DMA controller coordinate the
data transfer, while the CPU can perform other tasks. DMA is particularly useful for high-speed data
transfer, such as large file copying or multimedia streaming.
Interrupts: Interrupts are signals sent by IO devices to the CPU to request attention or indicate a change
in status. When an IO device needs to communicate with the CPU, it triggers an interrupt, temporarily
pausing the CPU's current execution. The CPU then transfers control to an interrupt handler routine,
which handles the specific IO device's request or event. Interrupts allow for efficient handling of IO
operations and can be categorized into:
1. Hardware Interrupts: Generated by external hardware devices to indicate events like IO
completion, error conditions, or device requests.
2. Software Interrupts: Generated by software programs to request specific services from the CPU
or to communicate with IO devices.
3. Interrupt Requests (IRQs): These are specific hardware lines dedicated to receiving interrupt
signals from various devices. Each IRQ is associated with a specific IO device or set of devices.
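To make the flow concrete, here is a heavily simplified interrupt service routine in C. The register addresses and the cause bit are invented for illustration; real systems register handlers through their own kernel-specific mechanisms.

#include <stdint.h>

/* Hypothetical device registers (addresses invented for illustration). */
#define DEV_IRQ_STATUS ((volatile uint32_t *)0x40000008u)
#define DEV_IRQ_ACK    ((volatile uint32_t *)0x4000000Cu)
#define DEV_DATA_REG   ((volatile uint32_t *)0x40000004u)

volatile uint32_t last_value;   /* data picked up by the handler */

/* Interrupt service routine: runs when the device raises its IRQ line.
   It reads the pending data, acknowledges the interrupt, and returns quickly
   so the CPU can resume the work it had paused. */
void device_isr(void)
{
    uint32_t status = *DEV_IRQ_STATUS;  /* find out why the device interrupted */
    if (status & 0x1u) {                /* assumed "data available" cause bit  */
        last_value = *DEV_DATA_REG;     /* service the request */
    }
    *DEV_IRQ_ACK = status;              /* acknowledge so the IRQ line is cleared */
}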
I/O software is used for interaction with I/O devices such as mice, keyboards, USB devices, and printers. Commands issued to these external devices are passed to the operating system, which acts on each of them in turn.
I/O software is organized in the following ways:
User Level Libraries– Provide a simple interface to the user program for performing input-output functions.
Kernel Level Modules– Provide device drivers to interact with the device controllers and the device-independent I/O modules.
Hardware– A layer consisting of the actual hardware and the hardware controllers that interact with the device drivers.
Goals of I/O Software
In this section, we will look at the goals of I/O software one after another, as listed below:
1. Uniform naming: The name of a file or device should not depend on the underlying hardware. For example, file systems in operating systems are named in such a way that the user does not have to be aware of the underlying hardware name.
2. Synchronous versus Asynchronous: Most physical I/O is asynchronous: the CPU starts a transfer and goes off to do other work until the interrupt arrives. However, I/O operations that block are much easier to program, so it is the operating system's responsibility to make interrupt-driven operations look like blocking operations to the user program.
3. Device Independence: The most important goal of I/O software is device independence. It should be possible to write a program that can access any I/O device without having to specify the device in advance. For example, it is not necessary to rewrite an input-reading program again and again for every file and device; one program should work for all of them, which saves both programming effort and the space needed to store different programs.
4. Buffering: Data arriving from a device usually cannot be stored directly at its final destination. For example, data often arrives in blocks that must first be placed in a buffer to be assembled and examined. Buffering has a major impact on I/O software because it is what ultimately lets data be staged and copied: some data is placed in the buffer in advance so that the rate at which the buffer fills and the rate at which it empties stay balanced, despite the constraints of the devices involved. (A minimal circular-buffer sketch appears after this list.)
5. Error handling: Errors are mostly generated by the controller, and they should mostly be handled by the controller itself, as close to the hardware as possible. If a lower level solves the problem, it never reaches the upper levels.
6. Shareable and Non-Shareable Devices: Devices like hard disks can be shared among multiple processes, while devices like printers cannot. The goal of I/O software is to handle both types of devices.
7. Caching: Caching keeps a copy of the most frequently accessed data in a separate, faster memory (known as cache memory). The reason for caching is simply to increase the speed of access, since reading the cached copy of the data is more efficient than reading the original data.
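Returning to the buffering goal above, the following sketch shows one common way to decouple a producer (for example, an interrupt handler depositing bytes from a device) from a slower or faster consumer using a fixed-size circular buffer. It is a generic illustration, not code from any particular operating system.

#include <stdbool.h>

#define BUF_SIZE 64

/* A simple circular buffer decoupling the producer's and consumer's rates. */
static unsigned char buf[BUF_SIZE];
static int head = 0;   /* next slot to write */
static int tail = 0;   /* next slot to read  */

/* Producer side: returns false if the buffer is full and the byte is dropped. */
bool buf_put(unsigned char c)
{
    int next = (head + 1) % BUF_SIZE;
    if (next == tail)
        return false;          /* full: producer is ahead of the consumer */
    buf[head] = c;
    head = next;
    return true;
}

/* Consumer side: returns false if there is nothing to read yet. */
bool buf_get(unsigned char *c)
{
    if (tail == head)
        return false;          /* empty: consumer caught up with the producer */
    *c = buf[tail];
    tail = (tail + 1) % BUF_SIZE;
    return true;
}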
Handling IO involves various techniques and software layers to facilitate efficient and reliable data
transfer between IO devices and the computer system. Let's explore the different approaches and
software layers involved:
1. Programmed IO: Programmed IO is a basic method where the CPU executes instructions to
communicate directly with IO devices. In this approach, the CPU controls the entire IO operation
by reading or writing data to IO device registers. The CPU actively polls the device to check its
status and initiates data transfers. Programmed IO is simple to implement but can be inefficient
as it ties up the CPU, causing delays and wasting processing power.
2. Interrupt-Driven IO: Interrupt-driven IO utilizes interrupts to handle IO operations. IO devices
generate interrupts to request attention or indicate completion of a task. When an interrupt
occurs, the CPU suspends its current execution and transfers control to an interrupt handler
routine. The interrupt handler services the IO request or performs necessary operations, such as
reading or writing data to/from the IO device. Interrupt-driven IO minimizes CPU involvement
and allows the CPU to perform other tasks while IO operations are in progress.
3. IO using DMA (Direct Memory Access): DMA (Direct Memory Access) enables high-speed data
transfer between IO devices and memory without CPU intervention. With DMA, the IO device
and a DMA controller directly exchange data, bypassing the CPU. The DMA controller takes
control of the memory bus, transfers data between the IO device and memory, and notifies the
CPU when the operation is complete. DMA reduces CPU overhead and enhances data transfer
efficiency, making it ideal for large data transfers.
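As a very rough sketch of the DMA programming model, the fragment below writes a source, a destination buffer, and a length into the registers of a hypothetical DMA controller, starts the transfer, and waits for a completion bit. All register names and addresses are invented for illustration; a real driver would take a completion interrupt instead of spinning.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical DMA controller registers (addresses invented for illustration). */
#define DMA_SRC    ((volatile uint32_t *)0x40001000u)  /* device/source address   */
#define DMA_DST    ((volatile uint32_t *)0x40001004u)  /* destination in memory   */
#define DMA_LEN    ((volatile uint32_t *)0x40001008u)  /* number of bytes to copy */
#define DMA_CTRL   ((volatile uint32_t *)0x4000100Cu)  /* control: bit 0 = start  */
#define DMA_STATUS ((volatile uint32_t *)0x40001010u)  /* status: bit 0 = done    */

/* Ask the DMA controller to copy len bytes from a device into buf.
   The CPU only sets up the transfer; the controller moves the data itself. */
void dma_read(uint32_t device_addr, void *buf, size_t len)
{
    *DMA_SRC  = device_addr;
    *DMA_DST  = (uint32_t)(uintptr_t)buf;
    *DMA_LEN  = (uint32_t)len;
    *DMA_CTRL = 0x1u;                     /* start the transfer */

    while ((*DMA_STATUS & 0x1u) == 0) {
        /* in a real driver the CPU would do other work and take a
           completion interrupt instead of spinning here */
    }
}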
IO Software Layers: IO Software Layers provide abstraction and manage the complexities of IO
operations. These layers include:
i. Interrupt Handlers: Interrupt handlers are software routines that handle
interrupts generated by IO devices. They are responsible for servicing the
interrupt, acknowledging the device, and performing the required actions,
such as reading or writing data.
ii. Device Drivers: Device drivers are software components that provide a
standardized interface between the operating system and IO devices. They
abstract the low-level details of IO devices and provide a uniform API for the
operating system and applications to interact with the devices. Device
drivers handle tasks like device initialization, data transfer, error handling,
and managing device-specific features.
iii. IO Libraries: IO libraries are higher-level software libraries that provide
convenient APIs and functions for IO operations. They abstract the
complexities of low-level IO programming and offer a simplified interface
for application developers. IO libraries often provide functions for tasks like
opening/closing devices, reading/writing data, and handling IO errors.
iv. IO Subsystems: IO subsystems are components of the operating system
responsible for managing and coordinating IO operations. They provide
services like device discovery, device configuration, IO scheduling, and IO
request management. IO subsystems ensure efficient and secure utilization
of IO devices across the system.
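To illustrate the kind of uniform interface a device driver (layer ii above) presents to the rest of the system, here is a hedged, generic sketch in C. The structure of function pointers is loosely modeled on how many operating systems expose driver entry points, but every name here is invented for illustration.

#include <stddef.h>

/* A generic driver interface: the kernel calls through these function
   pointers without knowing which physical device is behind them. */
struct device_ops {
    int  (*open)(void);                            /* prepare the device        */
    long (*read)(char *buf, size_t len);           /* copy data from the device */
    long (*write)(const char *buf, size_t len);    /* copy data to the device   */
    void (*close)(void);                           /* release the device        */
};

/* A trivial "null" driver implementing the interface (for illustration only). */
static int  null_open(void)                         { return 0; }
static long null_read(char *buf, size_t len)        { (void)buf; (void)len; return 0; }
static long null_write(const char *buf, size_t len) { (void)buf; return (long)len; }
static void null_close(void)                        { }

static const struct device_ops null_driver = {
    .open = null_open, .read = null_read, .write = null_write, .close = null_close,
};

/* Device-independent code can use any driver the same way. */
long copy_out(const struct device_ops *dev, const char *data, size_t len)
{
    long written;
    dev->open();
    written = dev->write(data, len);
    dev->close();
    return written;
}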
Disk Management:
Disk management refers to the process of organizing and controlling the storage devices (disks) in a computer
system. It involves various operations and techniques to effectively utilize and maintain disk resources. Here are some
key aspects of disk management:
1. Partitioning: Partitioning involves dividing a physical disk into multiple logical sections called partitions.
Each partition acts as a separate unit with its own file system and can be treated as an individual disk.
Partitioning allows for better organization, separate operating system installations, data isolation, and
improved disk performance.
2. Formatting: Formatting is the process of preparing a partition or disk for data storage. It involves creating a
file system on the partition to enable data storage, retrieval, and organization.
3. File System Management: File system management deals with maintaining and optimizing the file systems
on the disk. It includes tasks such as creating, deleting, and renaming files and directories, managing
permissions and access control, handling file system metadata, and ensuring file system integrity.
4. Disk Backup and Recovery: Disk backup involves creating copies of data and system configurations to
safeguard against data loss or disk failures. Recovery involves restoring data from backups in case of
accidental deletion, disk failures, or system crashes.
5. Disk Health Monitoring: Monitoring disk health involves tracking the performance, reliability, and overall
condition of disks. By monitoring disk health, potential issues can be identified early, allowing for preventive
actions like disk replacement or repairs.
Disk Structure: The disk structure refers to the organization and layout of data on a storage device, typically a hard disk drive (HDD) or solid-state drive (SSD). On a traditional HDD, data is laid out in concentric tracks divided into sectors, and the read/write arm must move (seek) to the right track before a sector can be accessed. Understanding the disk structure helps in optimizing disk usage, implementing efficient file systems, and ensuring data integrity and reliability; it allows for effective data organization, retrieval, and management on storage devices.
Disk scheduling: Disk scheduling is done by operating systems to schedule the I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling. Disk scheduling is important because:
Multiple I/O requests may arrive from different processes, and the disk controller can serve only one I/O request at a time. The other I/O requests therefore have to wait in a queue and need to be scheduled.
Two or more requests may be far apart on the disk, which results in greater disk arm movement.
Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an efficient manner.
The main goal of a disk scheduling algorithm is to minimize seek time, the time taken by the arm to reach the desired track; the algorithm that gives the minimum average seek time is better.
There are many disk scheduling algorithms:
1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue.
Advantages:
Every request gets a fair chance
No indefinite postponement
Disadvantages:
Does not try to optimize seek time
May not provide the best possible service
Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190),
and the current position of the read/write head is 50.
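A small self-contained C sketch of the FCFS calculation for this example, assuming the seek distance is simply the absolute difference between cylinder numbers:

#include <stdio.h>
#include <stdlib.h>

/* FCFS: service requests in arrival order and add up the head movement. */
int main(void)
{
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof(requests) / sizeof(requests[0]);
    int head = 50;                 /* current position of the read/write head */
    int total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(requests[i] - head);   /* distance moved for this request */
        head = requests[i];
    }
    printf("FCFS total head movement: %d cylinders\n", total);
    return 0;
}

With the head starting at 50, servicing the requests in arrival order gives a total movement of 642 cylinders.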
2. SSTF: In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed
first. The seek time of every request in the queue is calculated in advance, and the requests are then
scheduled according to their calculated seek time. As a result, the request nearest to the disk arm
is executed first. SSTF is certainly an improvement over FCFS, as it decreases the
average response time and increases the throughput of the system.
Advantages:
Average Response Time decreases
Throughput increases
Disadvantages:
Overhead to calculate seek time in advance
Can cause Starvation for a request if it has a higher seek time as compared to incoming requests
High variance of response time as SSTF favors only some requests
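A sketch of SSTF on the same request queue (head at 50); at each step it services the pending request closest to the current head position:

#include <stdio.h>
#include <stdlib.h>

/* SSTF: repeatedly pick the unserviced request nearest to the head. */
int main(void)
{
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof(requests) / sizeof(requests[0]);
    int served[7] = {0};           /* one flag per request */
    int head = 50, total = 0;

    for (int done = 0; done < n; done++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {          /* find the nearest pending request */
            int dist = abs(requests[i] - head);
            if (!served[i] && (best < 0 || dist < best_dist)) {
                best = i;
                best_dist = dist;
            }
        }
        served[best] = 1;
        total += best_dist;
        head = requests[best];
    }
    printf("SSTF total head movement: %d cylinders\n", total);
    return 0;
}

For this example the service order becomes 43, 24, 16, 82, 140, 170, 190, for a total movement of 208 cylinders.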
3. SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the
requests coming in its path, and after reaching the end of the disk, it reverses its direction and
again services the requests arriving in its path. This algorithm works like an elevator and is
hence also known as the elevator algorithm. As a result, the requests in the midrange are
serviced more often, and those arriving just behind the disk arm have to wait.
Advantages:
High throughput
Low variance of response time
Good average response time
Disadvantages:
Long waiting time for requests for locations just visited by disk arm
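A sketch of SCAN for the same queue. The direction of the initial sweep (toward higher cylinder numbers) and the disk size (cylinders 0 to 199) are assumptions made for the example:

#include <stdio.h>
#include <stdlib.h>

/* Comparison helper for qsort: ascending cylinder order. */
static int asc(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* SCAN: sweep toward the high end, service requests on the way, hit the end,
   then reverse and service the remaining requests. */
int main(void)
{
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof(requests) / sizeof(requests[0]);
    int head = 50, disk_end = 199, total = 0;

    qsort(requests, n, sizeof(int), asc);       /* sort by cylinder number */

    int pos = head;
    for (int i = 0; i < n; i++)                 /* upward sweep */
        if (requests[i] >= head) { total += requests[i] - pos; pos = requests[i]; }

    total += disk_end - pos;                    /* SCAN goes all the way to the end */
    pos = disk_end;

    for (int i = n - 1; i >= 0; i--)            /* downward sweep */
        if (requests[i] < head) { total += pos - requests[i]; pos = requests[i]; }

    printf("SCAN total head movement: %d cylinders\n", total);
    return 0;
}

Here the head services 82, 140, 170, 190, travels on to cylinder 199, then reverses and services 43, 24, 16, for a total of 332 cylinders.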
4. CSCAN: In the SCAN algorithm, the disk arm re-scans the path it has just scanned after
reversing its direction. It may therefore happen that too many requests are waiting at the other
end, while zero or few requests are pending in the area just scanned.
These situations are avoided in the CSCAN algorithm, in which the disk arm, instead of reversing its
direction, jumps to the other end of the disk and starts servicing the requests from there. The disk arm
thus moves in a circular fashion; because the algorithm is otherwise similar to SCAN, it is known
as C-SCAN (Circular SCAN).
Advantage:
Provides a more uniform wait time compared to SCAN
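C-SCAN on the same example (head at 50, sweeping toward higher cylinders, disk cylinders 0 to 199). The jump from the last cylinder back to cylinder 0 is counted as head movement here, which is one common convention:

#include <stdio.h>
#include <stdlib.h>

static int asc(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* C-SCAN: sweep toward the high end, jump back to cylinder 0, and sweep up
   again, so every request sees the head approaching from the same direction. */
int main(void)
{
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof(requests) / sizeof(requests[0]);
    int head = 50, disk_end = 199, total = 0;

    qsort(requests, n, sizeof(int), asc);

    int pos = head;
    for (int i = 0; i < n; i++)                 /* first upward sweep */
        if (requests[i] >= head) { total += requests[i] - pos; pos = requests[i]; }

    total += disk_end - pos;                    /* continue to the last cylinder */
    total += disk_end;                          /* jump back to cylinder 0       */
    pos = 0;

    for (int i = 0; i < n; i++)                 /* second upward sweep */
        if (requests[i] < head) { total += requests[i] - pos; pos = requests[i]; }

    printf("C-SCAN total head movement: %d cylinders\n", total);
    return 0;
}

This gives 82, 140, 170, 190, the end of the disk, a jump back to cylinder 0, then 16, 24, 43, for a total of 391 cylinders.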
5. LOOK: It is similar to the SCAN disk scheduling algorithm, except that the
disk arm, instead of going to the end of the disk, goes only as far as the last request to be serviced in
front of the head and then reverses its direction from there. This prevents the extra
delay caused by unnecessary traversal to the end of the disk.
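LOOK on the same example (head at 50, moving toward higher cylinders first); the arm only travels as far as the largest pending request before reversing:

#include <stdio.h>
#include <stdlib.h>

static int asc(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* LOOK: like SCAN, but the arm reverses at the last pending request
   instead of travelling all the way to the end of the disk. */
int main(void)
{
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof(requests) / sizeof(requests[0]);
    int head = 50, total = 0;

    qsort(requests, n, sizeof(int), asc);

    int pos = head;
    for (int i = 0; i < n; i++)                 /* upward sweep, no trip to the end */
        if (requests[i] >= head) { total += requests[i] - pos; pos = requests[i]; }

    for (int i = n - 1; i >= 0; i--)            /* reverse at the last request */
        if (requests[i] < head) { total += pos - requests[i]; pos = requests[i]; }

    printf("LOOK total head movement: %d cylinders\n", total);
    return 0;
}

The service order is the same as SCAN's, but the arm stops at 190 instead of travelling on to 199, giving a total of 314 cylinders.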