
Unit III Part 2 – File Management

Files
Concepts:
 A file is a named collection of related information that is recorded on secondary
storage such as magnetic disks, magnetic tapes and optical disks.
 In general, a file is a sequence of bits, bytes, lines or records whose meaning is
defined by the file's creator and user.
Attributes of a File
Following are some of the attributes of a file:
 Name: It is the only information which is in human-readable form.
 Identifier: The file is identified by a unique tag (number) within the file system.
 Type: It is needed for systems that support different types of files.
 Location: Pointer to file location on device.
 Size: The current size of the file.
 Protection: This controls who may read, write, or execute the file.
 Time, date, and user identification: This data is kept for protection, security,
and usage monitoring.
File Operations
The operating system must provide system calls to perform the basic file operations given below.
 Creating a file: First, space in the file system must be found for the file. Second,
an entry for the new file must be made in the directory.
 Writing a file: To write a file, we make a system call specifying both the name
of the file and the information to be written to the file. Given the name of the
file, the system searches the directory to find the file's location. The system must
keep a write pointer to the location in the file where the next write is to take
place. The write pointer must be updated whenever a write occurs.
 Reading a file: To read from a file, we use a system call that specifies the name
of the file and where (in memory) the next block of the file should be put. Again,
the directory is searched for the associated entry, and the system needs to keep a
read pointer to the location in the file where the next read is to take place. Once
the read has taken place, the read pointer is updated.
 Repositioning within a file: The directory is searched for the appropriate entry,
and the current-file-position pointer is repositioned to a given value.
Repositioning within a file need not involve any actual I/O. This file operation
is also known as a file seek.
 Deleting a file: To delete a file, we search the directory for the named file.
Having found the associated directory entry, we release all file space, so that it
can be reused by other files, and erase the directory entry.

 Protection: Access-control information determines who can do reading, writing,
executing, and so on.
 Truncating a file: The user may want to erase the contents of a file but keep its
attributes. Rather than forcing the user to delete the file and then recreate it, this
function allows all attributes except the file length to remain unchanged, and
lets the file be reset to length zero with its file space released.
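
A minimal C sketch of how these operations map onto POSIX system calls (open, write, lseek, read, ftruncate, close, unlink); the file name example.txt and the data written are purely illustrative:

#include <fcntl.h>     /* open, O_CREAT, O_RDWR */
#include <unistd.h>    /* read, write, lseek, ftruncate, close, unlink */
#include <stdio.h>

int main(void) {
    char buf[16];

    /* Creating a file: the OS finds space and adds a directory entry. */
    int fd = open("example.txt", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Writing: the write pointer advances after each write. */
    write(fd, "hello world", 11);

    /* Repositioning (file seek): move the current-file-position pointer. */
    lseek(fd, 0, SEEK_SET);

    /* Reading: the read pointer advances after each read. */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n >= 0) buf[n] = '\0';
    printf("read back: %s\n", buf);

    /* Truncating: keep the attributes but reset the length to zero. */
    ftruncate(fd, 0);
    close(fd);

    /* Deleting: release the file space and erase the directory entry. */
    unlink("example.txt");
    return 0;
}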
In brief

File Types

File System Structure
A file structure is a format, understood by the operating system, in which the contents
of a file are organized.
 A file has a certain defined structure according to its type.
 A text file is a sequence of characters organized into lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks that are
understandable by the machine.
 When an operating system defines different file structures, it must also contain the
code to support those file structures. Unix and MS-DOS support a minimal number
of file structures.
Files can be structured in several ways; three common structures are described
below, one by one.
File Structure 1
 Here, as you can see from figure 1, the file is an unstructured sequence of
bytes.
 Therefore, the OS doesn't care about what is in the file, as all it sees are bytes.
File Structure 2
 Figure 2 shows the second structure of a file: the file is a sequence of
fixed-length records, each with some internal
structure.
 Central to the idea of a file being a sequence of records is that a read
operation returns one record and a write operation appends one record.
File Structure 3
 In the last structure, shown in figure 3, a file basically
consists of a tree of records, not necessarily all the same length, each containing
a key field in a fixed position in the record. The tree is sorted on the key field, to
allow rapid searching for a specific key.

Fig. 1: byte sequence; Fig. 2: record sequence; Fig. 3: tree of records

6.2 File Access Methods
File access mechanism refers to the manner in which the records of a file may be
accessed. There are several ways to access files −
 Sequential access
 Direct/Random access
 Indexed sequential access
1. Sequential Access
 Sequential access is access in which the records are read in some sequence,
i.e., the information in the file is processed in order, one record after the other.
This access method is the most primitive one.
 The idea of sequential access is based on the tape model, a tape being a
sequential-access device.
 The sequential access method is best when most of the records in a file are to
be processed, for example transaction files.
 Example: Compilers usually access files in this fashion.
In Brief:
 Data is accessed one record right after another, in order.
 A read command causes the pointer to be moved ahead by one record.
 A write command allocates space for the record and moves the pointer to the new
end of file.
 Such a method is reasonable for tape.
Advantages of sequential access
 It is simple to program and easy to design.
 A sequential file makes the best use of storage space.
Disadvantages of sequential access
 Sequential access is a time-consuming process.
 It has high data redundancy.
 Random searching is not possible.
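
A minimal C sketch of sequential access: each read() returns the next record and advances the file pointer. The file name records.dat and the fixed record size are assumptions made only for this example:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

#define RECORD_SIZE 64   /* assumed fixed record length for illustration */

/* Read records one after another, in order: every read is a "read next". */
int main(void) {
    int fd = open("records.dat", O_RDONLY);   /* hypothetical file name */
    if (fd < 0) { perror("open"); return 1; }

    char record[RECORD_SIZE];
    while (read(fd, record, RECORD_SIZE) == RECORD_SIZE) {
        /* process the record here */
    }
    close(fd);
    return 0;
}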

2. Direct Access
 Sometimes it is not necessary to process every record in a file.
 It is not always necessary to process the records in the order in which they are
stored. In all such cases, direct access is used.
 The disk is a direct-access device, which gives us the ability to access any
file block at random.
 The file is seen as a collection of physical blocks and the records stored in those blocks.
 Example: Databases are often of this type since they allow query processing that
involves immediate access to large amounts of information. All reservation systems fall
into this category.
In brief:
 This method is useful for disks.
 The file is viewed as a numbered sequence of blocks or records.

 There are no restrictions on which blocks are read or written; it can be done in any
order.
 The user now says "read n" rather than "read next".
 "n" is a block number relative to the beginning of the file, not an absolute
physical disk location.
Advantages:
 Direct access files help in online transaction processing systems (OLTP), such as
an online railway reservation system.
 In a direct access file, sorting of the records is not required.
 It accesses the desired records immediately.
 It updates several files quickly.
 It has better control over record allocation.
Disadvantages:
 A direct access file does not provide a backup facility.
 It is expensive.
 It uses storage space less efficiently than a sequential file.
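
A minimal C sketch of direct access under the same assumptions (hypothetical file records.dat, assumed block size): lseek() jumps straight to block n relative to the beginning of the file, implementing "read n" instead of "read next":

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

#define BLOCK_SIZE 512   /* assumed logical block size */

/* "read n": fetch block n directly, with no need to read the blocks before it. */
ssize_t read_block(int fd, long n, char *buf) {
    if (lseek(fd, (off_t)n * BLOCK_SIZE, SEEK_SET) < 0)
        return -1;
    return read(fd, buf, BLOCK_SIZE);
}

int main(void) {
    int fd = open("records.dat", O_RDONLY);   /* hypothetical file name */
    if (fd < 0) { perror("open"); return 1; }

    char buf[BLOCK_SIZE];
    if (read_block(fd, 7, buf) < 0)           /* jump straight to block 7 */
        perror("read_block");
    close(fd);
    return 0;
}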

3. Indexed Sequential Access


 The indexed sequential access method is a modification of the direct access
method.
 Basically, it is a combination of both sequential access and direct
access.
 The main idea of this method is to access the index directly first and then access
the file sequentially from that point.
 In this access method, it is necessary to maintain an index.
 The index is essentially a pointer to a block.
 A direct access of the index is made to locate a record in the file.
 The information obtained from this access is used to access the file.
Sometimes the index is very large.
 In that case, a hierarchy of indexes is built, in which a direct access of one index
leads to another, lower-level index.
 It is built on top of sequential access.
 It uses an index to control the pointer while accessing files.
Advantages:
 In an indexed sequential access file, both sequential and random access are
possible.
 It accesses the records very fast if the index table is properly organized.
 The records can be inserted in the middle of the file.
 It provides quick access for sequential and direct processing.
 It reduces the degree of the sequential search.
Disadvantages:
 An indexed sequential access file requires unique keys and periodic reorganization.
 An indexed sequential access file takes a longer time to search the index for data
access or retrieval.

 It requires more storage space.
 It is expensive because it requires special software.
 It is less efficient in the use of storage space as compared to other file
organizations.
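
A rough C sketch of the idea behind indexed sequential access, using an invented in-memory index that maps the highest key in each block to that block's number; a real indexed sequential file keeps its index on disk and may make it multi-level:

#define BLOCK_SIZE 512

/* Simplified index: each entry maps the largest key stored in a block to that
 * block's number (all names here are illustrative). */
struct index_entry {
    int  high_key;     /* largest key found in the block */
    long block_no;     /* block that holds keys up to high_key */
};

/* Step 1: search the index directly to find the right block.
 * Step 2: the caller then reads that block and scans it sequentially. */
long find_block(const struct index_entry *index, int n_entries, int key) {
    for (int i = 0; i < n_entries; i++)
        if (key <= index[i].high_key)
            return index[i].block_no;
    return -1;   /* key larger than anything indexed */
}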

Swapping:
 Swapping is a mechanism in which a process can be swapped temporarily out of
main memory (moved) to secondary storage (disk), making that memory
available to other processes.
 At some later time, the system swaps the process back from secondary
storage to main memory.
 Though performance is usually affected by the swapping process, it helps in
running multiple large processes in parallel, and for that reason
swapping is also known as a technique for memory compaction.
 Swap space is space on the hard disk that acts as a substitute for physical memory.
 It is used as virtual memory and contains the process memory image.
 Whenever the computer runs short of physical memory, it uses this virtual memory
and stores information on disk.

File Space Allocation:


Files are allocated disk space by the operating system. Operating systems deploy the
following three main ways to allocate disk space to files.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
1. Contiguous Allocation
 In this scheme, each file occupies a contiguous set of blocks on the disk. For
example, if a file requires n blocks and is given a block b as the starting location,
then the blocks assigned to the file will be: b, b+1, b+2,……b+n-1.

 This means that given the starting block address and the length of the file (in
terms of blocks required), we can determine the blocks occupied by the file.
 The directory entry for a file with contiguous allocation contains
1. Address of starting block
2. Length of the allocated portion.
 The file ‘mail’ in the following figure starts from block 19 with length = 6
blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23 and 24.

 Each file occupies a contiguous address space on disk.


 Assigned disk address is in linear order.
 Easy to implement.
 External fragmentation is a major issue with this type of allocation technique.
Advantages:
 Both the Sequential and Direct Accesses are supported by this. For direct access,
the address of the kth block of the file which starts at block b can easily be
obtained as (b+k).
 This is extremely fast, since the number of seeks is minimal because of the
contiguous allocation of file blocks.
Disadvantages:
 This method suffers from both internal and external fragmentation. This makes
it inefficient in terms of memory utilization.
 Increasing file size is difficult because it depends on the availability of
contiguous memory at a particular instance.
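
The direct-access arithmetic for contiguous allocation can be sketched in C as follows; the structure and field names are illustrative, not an actual on-disk format:

/* Directory entry for contiguous allocation: only the starting block and the
 * length are needed. */
struct contig_entry {
    long start;    /* address of starting block (b)   */
    long length;   /* number of blocks allocated      */
};

/* Direct access: the k-th block of the file is simply b + k. */
long contig_block(const struct contig_entry *e, long k) {
    if (k < 0 || k >= e->length)
        return -1;            /* out of range */
    return e->start + k;
}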
2. Linked Allocation
 In this scheme, each file is a linked list of disk blocks which need not be
contiguous.
 The disk blocks can be scattered anywhere on the disk.
 The directory entry contains a pointer to the starting and the ending file block.
 Each block contains a pointer to the next block occupied by the file.
 The file ‘jeep’ in the following image shows how the blocks are randomly
distributed. The last block (25) contains -1, indicating a null pointer, and does not
point to any other block.

 Each file carries a list of links to disk blocks.
 Directory contains link / pointer to first block of a file.
 No external fragmentation
 Effectively used in sequential access file.
 Inefficient in case of direct access file.
Advantages:
1. File size does not have to be specified.
2. No external fragmentation.
Disadvantages:
1. It supports sequential access efficiently but is not suited for direct access.
2. Each block contains a pointer, which wastes space.
3. Blocks are scattered all over the disk, so a large number of disk seeks may be necessary.
4. Reliability: what if a pointer is lost or damaged?
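
A small C sketch of why linked allocation is poor for direct access: reaching the k-th block means following k pointers from the first block. The next_pointer array below merely simulates the pointers stored inside the disk blocks:

#define N_BLOCKS    32
#define END_OF_FILE (-1L)    /* marker stored in the last block, as in the figure */

/* Simulated "next block" pointers: next_pointer[b] stands in for the pointer
 * that disk block b would hold on a real disk. */
static long next_pointer[N_BLOCKS];

/* Follow k pointers starting from the file's first block. */
long linked_block(long first_block, long k) {
    long b = first_block;
    while (k-- > 0 && b != END_OF_FILE)
        b = next_pointer[b];
    return b;
}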
3. Indexed Allocation
 In this scheme, a special block known as the Index block contains the pointers
to all the blocks occupied by a file. Each file has its own index block.
 The ith entry in the index block contains the disk address of the ith file block.
 The directory entry contains the address of the index block as shown in the
image:

 Provides solutions to problems of contiguous and linked allocation.


 An index block is created that holds all the pointers to the blocks of a file.

 Each file has its own index block which stores the addresses of disk space
occupied by the file.
 Directory contains the addresses of index blocks of files.
Advantages:
 This supports direct access to the blocks occupied by the file and therefore
provides fast access to the file blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater than linked allocation.
 For very small files, say files that span only 2-3 blocks, indexed allocation
would keep one entire block (the index block) for the pointers, which is inefficient in
terms of memory utilization. In linked allocation, by contrast, we lose the space of
only one pointer per block.
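
A C sketch of the index-block lookup; the index-block layout and size below are assumed purely for illustration:

#define POINTERS_PER_INDEX_BLOCK 128   /* assumed: block size / pointer size */

/* The index block is just an array of block pointers; the i-th entry holds
 * the disk address of the i-th file block. */
struct index_block {
    long pointers[POINTERS_PER_INDEX_BLOCK];
};

/* Direct access: one lookup in the index block gives the i-th data block. */
long indexed_block(const struct index_block *ib, int i) {
    if (i < 0 || i >= POINTERS_PER_INDEX_BLOCK)
        return -1;
    return ib->pointers[i];
}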

6.3 Directory Structure


 A directory is a container that is used to hold folders and files.
 It organizes files and folders in a hierarchical manner.

1. Single-level directory –
 A single-level directory is the simplest directory structure.
 In it, all files are contained in the same directory, which makes it easy to support and
understand.
 A single-level directory has a significant limitation, however, when the number
of files increases or when the system has more than one user.
 Since all the files are in the same directory, they must have unique names. If
two users call their data file test, the unique-name rule is violated.

Advantages:
 Since it is a single directory, its implementation is very easy.
 If the files are smaller in size, searching will be faster.

 The operations like file creation, searching, deletion, updating are very easy in
such a directory structure.
Disadvantages:
 There may be a chance of name collision, because two files cannot have the same
name.
 Searching will become time-consuming if the directory grows large.
 Files of the same type cannot be grouped together.

2. Two-level directory –
 A single-level directory often leads to confusion of file names among
different users; hence the solution to this problem is to create a separate directory
for each user.
 In the two-level directory structure, each user has their own user file directory
(UFD).
 The UFDs have similar structures, but each lists only the files of a single user. The
system's master file directory (MFD) is searched whenever a new user
logs in.
 The MFD is indexed by user name or account number, and each entry points to
the UFD for that user.

Advantages:
 We can give a full path like /User-name/directory-name/.
 Different users can have the same directory name as well as the same file name.
 Searching for files becomes easier due to path names and user grouping.
Disadvantages:
 A user is not allowed to share files with other users.
 It is still not very scalable; two files of the same type cannot be grouped together
under the same user.
3. Tree-structured directory –
 Once we have seen a two-level directory as a tree of height 2, the natural
generalization is to extend the directory structure to a tree of arbitrary height.

 This generalization allows users to create their own subdirectories and to
organize their files accordingly.
 A tree structure is the most common directory structure. The tree has a root
directory, and every file in the system has a unique path.

Advantages:
 Very general, since a full path name can be given.
 Very scalable; the probability of name collision is low.
 Searching becomes very easy; we can use both absolute and relative paths.
Disadvantages:
 Not every file fits into the hierarchical model; some files may need to be saved in
multiple directories.
 We cannot share files.
 It can be inefficient, because accessing a file may require going through multiple
directories.
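
As a rough illustration, a tree-structured directory can be pictured with one node per entry, where subdirectories hold children and path resolution walks one component at a time; all names and limits in this C sketch are invented:

#include <string.h>

#define NAME_MAX_LEN 32

/* Every entry is either a file or a subdirectory; subdirectories hold a list
 * of children, and siblings are chained together. */
struct dir_entry {
    char              name[NAME_MAX_LEN];
    int               is_directory;      /* 1 = subdirectory, 0 = file     */
    struct dir_entry *children;          /* first child, if a directory    */
    struct dir_entry *next;              /* next sibling in this directory */
};

/* Resolve one path component: search the current directory's children.
 * Walking an absolute path repeats this from the root, one component at a time. */
struct dir_entry *lookup(struct dir_entry *dir, const char *name) {
    for (struct dir_entry *e = dir->children; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    return NULL;
}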

Device Management
Device management in an operating system is the management of
I/O devices such as keyboards, magnetic tapes, disks, printers, microphones,
USB ports and scanners, as well as supporting units like control
channels.

The operating system handles communication with a device via its driver.
The OS components give a uniform interface for accessing devices with various
physical features. There are various functions of device management in the
operating system. Some of them are as follows:
 It keeps track of the status, location and use of each device; the term "file system"
is used to define this group of facilities.
 It enforces the pre-determined policies and decides which process receives
the device, when, and for how long.
 It improves the performance of specific devices.
 It monitors the status of every device, including printers, storage drives and
other devices.

Device Controllers
In computer systems, I/O devices do not usually communicate directly with the
operating system. The operating system manages their work with the help of
an intermediate electronic device called a device controller.

The device controller knows how to communicate with the operating system
as well as how to communicate with I/O devices, so the device controller is an
interface between the computer system (operating system) and the I/O devices.
The device controller communicates with the system using the system bus. How
the device controllers, I/O devices, and the system bus are connected is
shown below in the diagram.

In the above diagram, some I/O devices have DMA (Direct Memory Access) via
their device controllers and some do not. Devices that have a DMA path to
memory are much faster than devices that have only a non-DMA path. A device
on a non-DMA path must go through the processor via its device controller: its
request is scheduled by the scheduler, and only when it gets the CPU can it
execute the instructions that access memory, so it is slower than a device
that has DMA.

A device controller can generally control more than one I/O device, but it is most
common for it to control only a single device. Device controllers are implemented
on a chip that is attached to the system bus, and there is a connection cable
from the controller to each device it controls. Generally, one
controller controls one device. The operating system communicates with the
device controllers, and the device controllers communicate with the devices, so
indirectly the operating system communicates with the I/O devices.

Device Driver
Device Drivers are a set of programs that act as an intermediary between the
operating system of the computer and the hardware components.
A device driver, in computing, refers to a special kind of software program or
application that controls a specific hardware device and
enables that hardware device to communicate with the computer's
operating system. A device driver communicates with the computer hardware
through the computer subsystem or bus to which the hardware is connected.
Device drivers are essential for a computer system to work properly because,
without its driver, a particular piece of hardware fails to work as intended, which
means it fails to do the function it was created to do. Most people use the
term Driver, but some may say Hardware Driver, which also refers to
the Device Driver.

Working of Device Driver

Device drivers depend upon the operating system's instructions to access the
device and perform any particular action. After the action, they also report
back by delivering output or a status/message from the hardware
device to the operating system. For example, a printer driver tells the printer
in which format to print after getting an instruction from the OS; similarly, a sound
card driver converts the 1's and 0's of an MP3 file into audio signals so that you
can enjoy the music. Card readers, controllers, modems, network cards, sound
cards, printers, video cards, USB devices, RAM, speakers, etc. need device
drivers to operate.

The following figure illustrates the interaction between the user, OS,
device driver, and the devices:

Types of Device Driver
1. Kernel-mode Device Driver
A kernel-mode device driver covers generic hardware that loads
with the operating system as part of the OS, such as the BIOS, motherboard,
processor, and other hardware that is part of the kernel software. These
include the minimum-system-requirement device drivers for each operating
system.
2. User-mode Device Driver
Other than the devices brought up by the kernel for the working of the
system, the user also attaches devices while using the system; those
devices need device drivers to function, and those drivers fall under user-mode
device drivers. For example, any plug-and-play device that the user attaches
comes under this category.

Interrupt-driven input/output in operating systems

Interrupt-driven I/O is a technique where the CPU is notified by an I/O device
when a data transfer is complete, rather than constantly checking the device's
status. This allows the CPU to perform other tasks while waiting for the I/O
operation to finish.

How it Works:
1. I/O Device Ready: When an I/O device, like a keyboard or a disk drive, has data
ready for transfer, it generates an interrupt signal to the CPU.
2. Interrupt Handler: The CPU, upon receiving the interrupt, temporarily suspends
its current operation and branches to an interrupt service routine (ISR).
3. Data Transfer: The ISR handles the data transfer between the I/O device and
memory, either directly or using DMA (Direct Memory Access).
4. Return to Main Program: After the data transfer is complete, the ISR returns
control to the CPU, which resumes its previous operation.
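
The steps above can be sketched schematically in C; the device-register access and interrupt acknowledgement are stubbed out, since the real code is hardware-specific and the names here are invented:

#include <stdint.h>

#define BUF_SIZE 256

static volatile uint8_t      buffer[BUF_SIZE];
static volatile unsigned int head = 0;     /* next free slot, written by the ISR */

/* Stand-ins for real hardware access (purely illustrative stubs). */
static uint8_t read_device_data_register(void) { return 0; }
static void    acknowledge_interrupt(void)     { }

/* Steps 2-4: the CPU branches here when the device raises an interrupt,
 * the data is transferred, and control then returns to the interrupted work. */
void device_isr(void) {
    buffer[head % BUF_SIZE] = read_device_data_register();
    head++;
    acknowledge_interrupt();
}

int main(void) {
    /* Step 1 happens in the device: while waiting, the CPU keeps doing other
     * work here and is diverted to device_isr() only when an interrupt arrives. */
    return 0;
}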

Advantages of Interrupt-Driven I/O:


 Efficiency: The CPU is not tied up waiting for I/O operations to complete. It can
perform other tasks while waiting, leading to better resource utilization.
 Responsiveness: The CPU can respond quickly to I/O requests as they become
available, reducing latency.
 Flexibility: The system can handle multiple I/O devices concurrently without the CPU
needing to poll each one.

Disadvantages of Interrupt-Driven I/O:

 Complexity: Implementing and debugging interrupt-driven I/O can be more complex
than polling, especially in systems with multiple interrupt sources.
 Latency: Interrupt processing can introduce a short delay, known as interrupt latency,
which can affect real-time applications.
 Interrupt Management: Managing interrupt priorities and handling potential interrupt
conflicts can be challenging.

Memory-Mapped I/O


The CPU needs to communicate with various memory and input-output devices
(I/O). Data between the processor and these devices flows with the help of the
system bus. There are three ways in which the system bus can be allotted to them:
 A separate set of address, control and data buses for I/O and memory.
 A common bus (data and address) for I/O and memory but separate
control lines.
 A common bus (data, address, and control) for I/O and memory.

The first case is simple because memory and I/O have separate address spaces and
instructions, but it requires more buses.

Isolated I/O
In isolated I/O, the CPU uses the same buses (wires) to talk to both memory and
I/O devices, but it has separate control signals to tell whether it is dealing with
memory or an I/O device.
 I/O devices have special addresses called ports.
 When the CPU wants to communicate with an I/O device:
o It puts the port address on the address bus.
o It uses special control lines like I/O Read or I/O Write.
o Then data is sent or received over the data bus.
As memory and I/O have separate address spaces, this is called isolated I/O.
The CPU also uses different instructions for memory and I/O
(like IN and OUT for I/O).

Advantages of Isolated I/O
 Large I/O Address Space: Isolated I/O allows for a larger I/O address space
because I/O devices have their own separate address space, independent of
the system memory.
 Greater Flexibility: It offers greater flexibility, as I/O devices can be added or
removed without affecting the memory address space.
 Improved Reliability: Since I/O devices do not share the same address space
as memory, failures in I/O devices are less likely to affect the memory or
other devices, improving system reliability.
Disadvantages of Isolated I/O
 Slower I/O Operations: I/O operations may be slower because isolated I/O
requires special instructions, which add extra processing steps.
 More Complex Programming: Programming becomes more complex due to
the need for dedicated I/O instructions, such as IN and OUT, which are
separate from standard memory instructions.
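
A hedged sketch of isolated (port-mapped) I/O on x86 Linux, where glibc's <sys/io.h> exposes the IN/OUT instructions as inb()/outb(). The port number below is the legacy parallel-port data register and is chosen purely as an example; running this requires root privileges and real hardware:

#include <stdio.h>
#include <sys/io.h>

#define PORT 0x378   /* legacy parallel-port data register, illustrative only */

int main(void) {
    /* Ask the kernel for permission to access this I/O port range. */
    if (ioperm(PORT, 1, 1) != 0) {
        perror("ioperm");
        return 1;
    }
    outb(0xFF, PORT);                  /* OUT: write a byte to the port */
    unsigned char v = inb(PORT);       /* IN : read a byte back from it */
    printf("port 0x%X reads 0x%02X\n", PORT, v);
    return 0;
}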

Memory Mapped I/O


In a memory-mapped I/O system, there are no special input or output
instructions. Instead, the CPU uses the same instructions it uses for memory
(like LOAD and STORE) to access I/O devices.
 Each I/O device is assigned a specific address in the regular memory
address space.
 Devices are connected through interface registers, which act like
memory locations.
 When the CPU wants to read from or write to an I/O device, it accesses
the corresponding address, just like it would access a memory word.
 These interface registers respond to normal read/write operations as if
they were memory cells.
This design allows I/O and memory to be treated uniformly, simplifying
programming and hardware design.
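
A minimal C sketch of the memory-mapped style: device registers are declared as volatile memory at a fixed address and accessed with ordinary loads and stores. The base address and register layout below are invented for illustration; real values come from the hardware documentation.

#include <stdint.h>

#define UART_BASE 0x10000000u              /* hypothetical device address */

typedef struct {
    volatile uint32_t data;                /* write: transmit a character */
    volatile uint32_t status;              /* read : bit 0 = ready flag   */
} uart_regs_t;

#define UART ((uart_regs_t *)UART_BASE)

void uart_putc(char c) {
    while ((UART->status & 1u) == 0)       /* ordinary read of a register   */
        ;                                  /* wait until the device is ready */
    UART->data = (uint32_t)c;              /* ordinary write performs output */
}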

Applications of Memory-Mapped I/O
 Graphics Processing: Memory-mapped I/O is widely used in graphics cards
to provide fast access to frame buffers and control registers. Graphics data
is mapped directly to memory, allowing the CPU to interact with the
graphics hardware as if it were accessing normal memory. This enables
efficient rendering and display operations.

 Network Communication: Network Interface Cards (NICs) often use
memory-mapped I/O to manage data transfer between the system
memory and the network. The NIC's control and status registers are
mapped to specific memory addresses, allowing the CPU to efficiently
control and monitor network operations.

 Direct Memory Access (DMA): DMA controllers use memory-mapped I/O
to enable high-speed data transfers between I/O devices and system
memory without involving the CPU. By mapping DMA control registers to
memory, devices can transfer data directly, improving system performance
and reducing CPU load.

Advantages of Memory-Mapped I/O


 Faster I/O Operations: Memory-mapped I/O allows the CPU to access
I/O devices using the same mechanism and speed as regular memory
access. This results in faster I/O operations compared to isolated I/O.
 Simplified Programming: Since the same instructions are used for
both memory and I/O operations, programming becomes easier.
Developers do not need to learn or use special I/O instructions,
reducing complexity.
 Efficient Use of Address Space: Memory-mapped I/O enables I/O
devices to share the same address space as memory. This can make
the system more efficient, especially in systems with a unified memory
model.

Disadvantages of Memory-Mapped I/O


 Limited I/O Address Space: Because memory and I/O devices share the
same address space, the number of available addresses for I/O devices is
limited. This can be a problem in systems with many peripherals.

 Potential Performance Issues: If an I/O device responds slowly, it may
delay the CPU when accessing that memory-mapped region. This can affect
overall system performance, especially in time-sensitive tasks.

Differences between memory-mapped I/O and isolated I/O

Aspect            | Isolated I/O                                              | Memory-Mapped I/O
Address Space     | Memory and I/O have separate address spaces               | Memory and I/O share the same address space
Memory Usage      | All addresses can be used for memory                      | Some address space is used for I/O, reducing memory space
Instruction Set   | Separate instructions for I/O and memory read/write       | The same instructions are used for both I/O and memory
I/O Addressing    | I/O addresses are called ports                            | Regular memory addresses are used for both memory and I/O
Efficiency        | More efficient due to separate control lines and buses    | Slightly less efficient due to shared resources
Hardware Size     | Larger hardware due to additional buses and logic         | Smaller hardware as fewer buses are needed
Design Complexity | More complex; requires separate logic for I/O and memory  | Simpler design; I/O is handled like memory

Direct Memory Access in OS


Direct Memory Access (DMA) is a technique used in computers and other
electronic devices to allow peripherals (like hard drives, network cards, and
sound cards) to communicate directly with the main memory (RAM) without
involving the CPU. This process speeds up data transfer and frees up the CPU
to perform other tasks, improving overall system performance.
 The peripheral device sends a request to the DMA controller to initiate a
data transfer.
 The DMA controller takes control of the system’s memory bus and
accesses memory directly, either reading data from it or writing data to it.
 After the transfer is complete, the DMA controller signals the CPU that the
task is finished, and the CPU can continue with other tasks.
 In simpler terms, DMA acts as a traffic controller for data moving in and
out of memory. It efficiently manages these transfers, freeing up the CPU
for more complex tasks. This mechanism significantly boosts overall system
efficiency and speed.

DMA Controller Components


1. Control Logic: The Control Logic is the central component that
manages the overall DMA operation. It processes control signals and
directs data transfers between the peripherals and memory. It receives
commands from other components and determines how and when data
should be moved.
2. DMA Select and DMA Request: DMA Select is used by the DMA
controller to select the appropriate data transfer request. DMA Request
is initiated by a peripheral device when it needs to perform a data
transfer. The request tells the DMA controller that the device is ready to
either read or write data.
3. DMA Acknowledge: The DMA Acknowledge signal is sent back from
the control logic to the peripheral device to confirm that the DMA
operation has been initiated and the device can proceed with the data
transfer.
4. Bus Request and Bus Grant: Bus Request is generated by the DMA
controller when it needs access to the system's bus for data transfer.
The Bus Grant signal is sent from the CPU or the system’s bus
controller to give the DMA controller permission to use the bus for
transferring data.
5. Address Bus and Data Bus: The Address Bus and Data Bus are used
to transfer data and memory addresses between the DMA controller,
memory, and peripherals. The Data Bus Buffer temporarily holds data
being transferred, while the Address Bus Buffer holds memory
addresses.
6. Registers:
 Address Register: This stores the memory address where data will
be written or read from.
 Word Count Register: This keeps track of the number of words or
units of data that need to be transferred.
 Control Register: This contains control information, including the
direction of data transfer (read or write), and any other control
signals necessary to manage the DMA operation.
7. Internal Bus: The Internal Bus connects all the components inside the
DMA controller, allowing them to communicate and pass data
efficiently.
8. Interrupt: The Interrupt signal is used to inform the CPU once the DMA
operation is completed. After the data has been transferred, the DMA
controller sends an interrupt to notify the CPU, so the CPU can resume
processing or handle other tasks.
Working:
 The DMA controller facilitates the transfer of data between memory
and peripherals without involving the CPU in each individual data
operation.
 The DMA Select and DMA Request signals initiate the process when a
peripheral wants to transfer data, which is what allows peripherals
to operate independently of the CPU.
 The Address Bus and Data Bus handle the flow of data and memory
addresses during the transfer, improving system efficiency by
bypassing the CPU.
 The registers (Address Register, Word Count Register, Control
Register) store the information necessary to control the transfer, as the
DMA controller directs the movement of data
between the device and memory.
 The Interrupt is triggered once the transfer is completed, notifying
the CPU that the DMA operation has
finished.
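
Putting the components together, a schematic C sketch of programming such a controller for one transfer might look as follows; the register block and bit definitions are hypothetical, standing in for whatever a real controller's datasheet specifies:

#include <stdint.h>

typedef struct {
    volatile uint32_t address;     /* Address Register: where in memory       */
    volatile uint32_t word_count;  /* Word Count Register: how many words     */
    volatile uint32_t control;     /* Control Register: direction, start bit  */
    volatile uint32_t status;      /* set by the controller when finished     */
} dma_regs_t;

#define DMA_CTRL_READ_FROM_DEVICE  (1u << 0)   /* illustrative bit layout */
#define DMA_CTRL_START             (1u << 1)

/* The CPU only sets up the transfer; the controller then moves the data and
 * signals completion (via interrupt or status bit) when it is done. */
void dma_start_read(dma_regs_t *dma, uint32_t dest_addr, uint32_t n_words) {
    dma->address    = dest_addr;                       /* destination in RAM */
    dma->word_count = n_words;                         /* transfer length    */
    dma->control    = DMA_CTRL_READ_FROM_DEVICE | DMA_CTRL_START;
    /* From here on the CPU is free; completion is reported asynchronously. */
}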
