Solution of Operating Systems (RCA301) 2023-24

The document discusses various topics related to operating systems including interrupts, inter-process communication, dispatcher, busy waiting, mutual exclusion, memory fragmentation, disk cache, producer-consumer problem, demand paging, and direct memory access.

Printed Pages: 02 Sub Code: RCA301

Paper Id: Roll No.

MCA
(SEM III) THEORY EXAMINATION 2023-24
OPERATING SYSTEMS
Time: 3 Hours Total Marks: 70
Note: 1. Attempt all Sections. If any required data is missing, choose it suitably.

SECTION A

1. Attempt all questions in brief. 2 x 7 = 14


a. What do you mean by Interrupt?
Answer: An interrupt is a signal emitted by hardware or software when a
process or an event needs immediate attention. It alerts the processor to a
high-priority process requiring interruption of the current working process.
For I/O devices, one of the bus control lines is dedicated to this purpose and
is called the interrupt request line; the routine the processor executes in
response is the Interrupt Service Routine (ISR).
b. Explain the term “Inter Process Communication”.
Answer: Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions. The
communication between these processes can be seen as a method of co-
operation between them. Processes can communicate with each other through
either Shared Memory or Message passing.
c. What do you understand by dispatcher?
Answer: A dispatcher is a special program which comes into play after the
scheduler. When the scheduler completes its job of selecting a process, it is the
dispatcher which takes that process to the desired state/queue. The dispatcher
is the module that gives control of the CPU to the process selected by the
short-term scheduler.
d. Define the term busy waiting.
Answer: Busy Waiting is defined as a process synchronization technique
where the process waits and continuously keeps on checking for the condition
to be satisfied before going ahead with its execution. Busy Waiting is also
known as busy looping or spinning. The condition being checked is typically
an entry condition, such as the availability of a resource or of a lock in the
computer system.
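For illustration, a minimal busy-waiting spinlock sketch in C using C11 atomics (the lock variable and function names are illustrative, not from any particular OS):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* hypothetical shared lock */

void acquire(void) {
    /* Busy waiting: spin until the flag was previously clear. */
    while (atomic_flag_test_and_set(&lock))
        ;   /* do nothing -- the CPU keeps re-checking the condition */
}

void release(void) {
    atomic_flag_clear(&lock);
}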
e. Explain mutual exclusion.
Answer: Mutual Exclusion is a property of process synchronization that states
that “no two processes can exist in the critical section at any given point of
time”. The term was first coined by Dijkstra.
f. What do you mean by memory fragmentation?
Answer: Fragmentation in memory management refers to the inefficient use
of storage capacity due to the division of memory into non-contiguous blocks.
g. What do you mean by disk cache?
Answer: A disk cache is temporary computer memory that stores frequently
used data. It essentially acts as a bridge between the slower, non-volatile
storage (such as a hard drive) and the computer's main memory (RAM). Such a
cache ensures that commonly accessed information is readily available.

SECTION B
2. Attempt any three of the following: 7 x 3 = 21
a. Explore various services offered by the Operating System.
Answer: An operating system is software that acts as an intermediary between
the user and the computer hardware. It is the program with the help of which we
are able to run various applications, and it is the one program that is running
all the time. Every computer must have an operating system to execute other
programs smoothly. The OS coordinates the use of the hardware and application
programs for various users and provides a platform for other application
programs to work. The operating system is a set of special programs that run on
a computer system and allow it to work properly. It performs basic tasks such as
controlling input-output devices, executing programs, and managing files.
Services of Operating System
1. Program execution.
2. Input Output Operations
3. Communication between Process
4. File Management
5. Memory Management
6. Process Management
7. Security and Privacy
8. Resource Management
9. User Interface
10. Networking
11. Error handling
12. Time Management
b. Discuss various states of a process with suitable diagram.
Answer: A process passes through several states from its creation to its
termination. There are a minimum of five states, and during execution the
process is in exactly one of these states at any instant.

• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
The state diagram connects these states as follows: New → Ready (the process
is admitted), Ready → Running (scheduler dispatch), Running → Waiting (wait
for I/O or an event), Waiting → Ready (I/O or event completion), Running →
Ready (interrupt/preemption), and Running → Terminated (exit).
c. State the Producer-Consumer problem. Discuss a solution of Producer-Consumer
problem using semaphores.
Answer: The Producer-Consumer problem is a classical multi-process synchronization
problem, that is we are trying to achieve synchronization between more than one process.
In the producer-consumer problem there is one Producer producing items and
one Consumer consuming the items produced by the Producer. Both share the
same fixed-size memory buffer. The task of the Producer is to produce an item,
put it into the memory buffer, and start producing again, whereas the task of
the Consumer is to consume an item from the memory buffer.

The constraints of the problem are as follows. The producer should produce
data only when the buffer is not full; if the buffer is full, the producer is
not allowed to store any data in it. The consumer can consume data if and only
if the buffer is not empty; if the buffer is empty, the consumer is not allowed
to take any data from it. Finally, the producer and the consumer must not
access the memory buffer at the same time.
A standard solution uses three semaphores: a binary semaphore mutex
(initialized to 1) that enforces mutually exclusive access to the buffer, a
counting semaphore empty (initialized to the buffer size) that counts free
slots, and a counting semaphore full (initialized to 0) that counts filled
slots.
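A minimal sketch of the producer and consumer loops in C with POSIX semaphores (the buffer size, item type, and helper functions are illustrative assumptions, not part of the original question):

#include <semaphore.h>

#define N 10                     /* assumed buffer capacity */

int buffer[N];
int in = 0, out = 0;

sem_t mutex, empty, full;        /* initialized once at startup, e.g.
                                    sem_init(&mutex, 0, 1);
                                    sem_init(&empty, 0, N);
                                    sem_init(&full,  0, 0); */

int  produce_item(void);         /* assumed helpers, not shown */
void consume_item(int item);

void *producer(void *arg) {
    for (;;) {
        int item = produce_item();
        sem_wait(&empty);        /* block while the buffer is full */
        sem_wait(&mutex);        /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);        /* leave critical section */
        sem_post(&full);         /* signal: one more filled slot */
    }
}

void *consumer(void *arg) {
    for (;;) {
        sem_wait(&full);         /* block while the buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);        /* signal: one more free slot */
        consume_item(item);
    }
}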

d. What do you mean by demand paging? Discuss the steps taken by operating
system to handle page faults.

Answer: Demand paging is a memory management technique used in operating
systems to improve memory usage and system performance: pages are brought
into main memory only when requested or needed by the CPU. In demand paging,
the operating system loads only the necessary pages of a program into memory
at runtime, instead of loading the entire program into memory at the start.
A page fault occurs when the program needs to access a page that is not
currently in memory. The operating system then loads the required page from
the disk into memory and updates the page tables accordingly. This process is
transparent to the running program, which continues to run as if the page had
always been in memory.
The procedure for handling a page fault is straightforward:

1. We check an internal table (usually kept with the process control block)
for this process to determine whether the reference was a valid or an
invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid
but we have not yet brought in that page, we now page it in.
3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a secondary storage operation to read the desired page into
the newly allocated frame.
5. When the storage read is complete, we modify the internal table kept with
the process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process
can now access the page as though it had always been in memory.
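The same sequence, expressed as a hedged C sketch (the types and helper functions are illustrative, not a real kernel API):

#include <stdbool.h>

typedef struct {
    bool present;        /* valid-invalid bit */
    int  frame;          /* frame number once resident */
    int  disk_location;  /* where the page lives on secondary storage */
} pte_t;

/* Assumed helpers; each corresponds to one step in the list above. */
bool is_valid_reference(int pid, int page);          /* step 1 */
void terminate_process(int pid);                     /* step 2, invalid case */
int  allocate_free_frame(void);                      /* step 3 */
void read_page_from_disk(int disk_location, int f);  /* step 4 */
void restart_instruction(int pid);                   /* step 6 */

void handle_page_fault(int pid, pte_t *pte, int page) {
    if (!is_valid_reference(pid, page)) {
        terminate_process(pid);                 /* invalid reference */
        return;
    }
    int frame = allocate_free_frame();          /* from the free-frame list */
    read_page_from_disk(pte->disk_location, frame);
    pte->frame = frame;                         /* step 5: update tables */
    pte->present = true;
    restart_instruction(pid);                   /* re-execute the faulting op */
}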

e. What do you mean by record blocking? Discuss its types with suitable diagrams.
Answer: Record blocking is the grouping of logical records into physical
blocks so that they can be transferred to and from secondary storage in
block-sized units. Three methods of blocking are commonly used:
1. Fixed blocking: fixed-length records are used, and an integral number of
records is stored in each block; unused space at the end of a block causes
internal fragmentation.
2. Variable-length spanned blocking: variable-length records are packed into
blocks with no unused space, so a record may span two blocks, with the
continuation indicated by a pointer to the successor block.
3. Variable-length unspanned blocking: variable-length records are used but
are not allowed to span blocks, so space is wasted in most blocks whenever
the next record is larger than the remaining space.

SECTION C
3. Attempt any one part of the following: 7x1=7
(a) Sketch the diagram and discuss working of DMA.
Answer: Direct Memory Access (DMA) uses dedicated hardware, called a DMA
Controller, to access memory. Its job is to transfer data between
input-output devices and main memory with very little interaction with the
processor.

DMA Controller in Computer Architecture: the DMA Controller is a type of
control unit that works as an interface between the data bus and the I/O
devices. As mentioned, the DMA Controller transfers the data without
continuous intervention by the processor, although the processor initiates
and supervises the transfer. The DMA Controller also contains an address
unit, which generates the address and selects an I/O device for the transfer
of data.

Working of the DMA Controller: the DMA controller has three registers, as
follows.
• Address register – It contains the address to specify the desired location
in memory.
• Word count register – It contains the number of words to be transferred.
• Control register – It specifies the transfer mode.
All registers in the DMA appear to the CPU as I/O interface registers.
Therefore, the CPU can both read and write the DMA registers under program
control via the data bus. The unit communicates with the CPU through the data
bus and control lines. The registers within the DMA are selected by the CPU
through the address bus by enabling the DS (DMA select) and RS (register
select) inputs; RD (read) and WR (write) are bidirectional inputs. When the
BG (bus grant) input is 0, the CPU can communicate with the DMA registers.
When BG (bus grant) is 1, the CPU has relinquished the buses and the DMA can
communicate directly with the memory.

Initializing the DMA Controller: the CPU initializes the DMA by sending the
following information through the data bus.
• The starting address of the memory block where the data is available
(to read) or where data is to be stored (to write).
• It also sends word count which is the number of words in the memory
block to be read or written.
• Control to define the mode of transfer such as read or write.
• A control to begin the DMA transfer.
Modes of Data Transfer in DMA: There are 3 modes of data transfer in DMA
that are described below.
• Burst Mode: in burst mode, the DMA returns the buses to the CPU only after
the whole block of data has been transferred, not before.
• Cycle Stealing Mode: in cycle stealing mode, the DMA returns the buses to
the CPU after the transfer of each byte, so it must continuously re-request
bus control. This mode interferes less with higher-priority CPU tasks.
• Transparent Mode: in transparent mode, the DMA transfers data only during
cycles in which the CPU is not using the buses, i.e., while the CPU is
executing instructions that do not require bus access.
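To make the initialization sequence concrete, here is a hedged C sketch of a driver programming a hypothetical memory-mapped DMA controller (the register layout, base address, and control bits are invented for illustration):

#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers. */
typedef struct {
    volatile uint32_t address;      /* starting memory address */
    volatile uint32_t word_count;   /* number of words to transfer */
    volatile uint32_t control;      /* transfer mode and start bit */
} dma_regs_t;

#define DMA_BASE  ((dma_regs_t *)0x40001000u)   /* invented address */
#define DMA_READ  (1u << 0)                     /* invented control bits */
#define DMA_START (1u << 1)

void dma_start_read(uint32_t mem_addr, uint32_t words) {
    DMA_BASE->address    = mem_addr;              /* where the data goes */
    DMA_BASE->word_count = words;                 /* how many words */
    DMA_BASE->control    = DMA_READ | DMA_START;  /* mode + begin transfer */
    /* The CPU is now free; the controller interrupts when the block is done. */
}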
(b) Write a short note on:
i. Multitasking: Multitasking is the concept of executing more than one
job apparently simultaneously on the CPU of a system. In multitasking,
a specified time duration is assigned to every job; this assigned
duration is known as a time slice. When the CPU has spent the time slice
on a job, it is shifted to another job waiting in the queue. Multitasking
is preemptive: it shares common resources such as the CPU and memory among
the multiple programs running in the system. In this manner, multitasking
allows you to run numerous tasks concurrently. Multitasking is a relatively
complex mechanism of operating systems that works on the basis of time
slices.
ii. Time sharing: A time-shared operating system uses CPU scheduling
and multiprogramming to provide each user with a small portion of a
shared computer at once. Each user has at least one separate program
in memory. A program is loaded into memory and executes for a short
period of time before it either completes or needs to perform I/O.
This short period during which the user gets the attention of the CPU
is known as a time slice, time slot, or quantum; it is typically of
the order of 10 to 100 milliseconds. Time-shared operating systems are
more complex than multiprogrammed operating systems. In both, multiple
jobs must be kept in memory simultaneously, so the system must have
memory management and protection. To achieve a good response time,
jobs may have to be swapped in and out of main memory to the disk,
which then serves as a backing store for main memory. A common method
to achieve this goal is virtual memory, a technique that allows the
execution of a job that may not be completely in memory.
4. Attempt any one part of the following: 7x1=7
(a) Differentiate between Process and Threads.

Answer: The differences between a process and a thread are as follows:

1. A process means a program in execution, whereas a thread means a segment
of a process.
2. A process takes more time to terminate, whereas a thread takes less time
to terminate.
3. A process takes more time for creation, whereas a thread takes less time
for creation.
4. A process takes more time for context switching, whereas a thread takes
less time for context switching.
5. A process is less efficient in terms of communication, whereas a thread
is more efficient in terms of communication.
6. Multiprogramming holds the concept of multiple processes, whereas we do
not need multiple programs in action for multiple threads, because a single
process consists of multiple threads.
7. Processes are isolated, whereas threads share memory.
8. A process is called a heavyweight process, whereas a thread is
lightweight, as each thread in a process shares code, data, and resources.
9. Process switching uses an interface to the operating system, whereas
thread switching does not require a call to the operating system or an
interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other
processes, whereas if a user-level thread is blocked, all other user-level
threads of that process are blocked.
11. A process has its own Process Control Block, stack, and address space,
whereas a thread has the parent's PCB, its own Thread Control Block and
stack, and a common address space.
12. Changes to the parent process do not affect child processes, whereas,
since all threads of the same process share the address space and other
resources, any changes to the main thread may affect the behaviour of the
other threads of the process.
13. A system call is involved in creating a process, whereas no system call
is involved in creating a thread; threads are created using APIs.
14. A process does not share data with other processes, whereas threads
share data with each other.
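Rows 7 and 14 can be demonstrated with a short C example, a sketch contrasting pthread_create() (shared memory) with fork() (copied memory); it assumes a POSIX system:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int counter = 0;   /* shared by threads, but copied across fork() */

void *thread_body(void *arg) {
    counter++;              /* modifies the parent's counter */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after thread: %d\n", counter);   /* prints 1: memory is shared */

    if (fork() == 0) {      /* child process */
        counter++;          /* modifies only the child's private copy */
        _exit(0);
    }
    wait(NULL);
    printf("after fork:   %d\n", counter);   /* still 1: memory is isolated */
    return 0;
}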

(b) What do you mean by context switching? Explain with suitable diagram.
Answer: Context switching refers to the technique/method used by the OS to
switch a process from one state to another so that another process can
execute its function using the CPUs present in the system. When a switch is
performed, the status of the old running process (its registers and program
counter) is saved, and the CPU is assigned to a new process for the execution
of its tasks. While the new process runs, the previous one waits in the ready
queue; the old process later resumes execution at the exact point at which it
was stopped. Context switching is what characterizes a multitasking OS, in
which multiple processes share a single CPU to perform various tasks without
the need for additional processors in the system.
Steps of Context Switching:

Several steps are involved in the context switching of a process. Consider
context switching between two processes, P1 and P2, in case of an interrupt,
an I/O need, or the arrival of a higher-priority process in the ready queue.
Initially, process P1 is running on the CPU executing its task while, at the
same time, another process P2 is in the ready state. If an interrupt or
error occurs, or if P1 needs I/O, P1 switches from the running state to the
waiting state. Before the state of P1 changes, context switching saves the
context of P1 (its registers and program counter) to PCB1; it then loads the
state of P2 (from PCB2) and moves P2 from the ready state to the running
state.
Here are the steps taken to switch from P1 to P2:
1. The context switch saves P1's state, i.e., the program counter and the
registers, into the PCB of the running process (PCB1).
2. PCB1 is updated, and P1 is moved to the appropriate queue, such as the
ready queue, a waiting queue, or an I/O queue.
3. Another process is selected from the ready queue to enter the running
state, for example a process that needs to execute next or one with a
higher priority.
4. The PCB of the selected process P2 is updated, which involves switching
its state from ready (or from any other state, such as blocked or suspended)
to running.
5. If the CPU has executed P2 before, the status P2 had at the moment of its
earlier interruption is restored, so that its execution resumes at exactly
that point.
In a similar manner, the process P2 is later switched off the CPU so that
process P1 can resume its execution. P1 is reloaded from PCB1 into the
running state and resumes its assigned task at the very same point. Without
the saved context this information would be lost, and when the process was
executed again it would have to start from the beginning.
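Conceptually, the steps above reduce to saving one PCB and loading another. A hedged, simplified C sketch (all structure and helper names are illustrative; the real save/restore path is architecture-specific assembly):

enum proc_state { READY, RUNNING, WAITING };

typedef struct {
    unsigned long program_counter;   /* where to resume execution */
    unsigned long stack_pointer;
    unsigned long registers[16];     /* general-purpose register copies */
    enum proc_state state;
} pcb_t;

/* Assumed low-level helpers, normally implemented in assembly. */
void save_cpu_state(pcb_t *pcb);         /* dump PC and registers to the PCB */
void load_cpu_state(const pcb_t *pcb);   /* reload them from the PCB */

void context_switch(pcb_t *old, pcb_t *next) {
    save_cpu_state(old);     /* step 1: save P1's context into PCB1 */
    old->state = READY;      /* step 2: move P1 to its queue */
    next->state = RUNNING;   /* steps 3-4: P2 selected and updated */
    load_cpu_state(next);    /* step 5: resume P2 exactly where it stopped */
}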
5. Attempt any one part of the following: 7x1=7
(a) Consider the four processes with the given arrival and burst time.
Process Arrival Time Burst Time
P1 0.4 1.1
P2 0.0 1.6
P3 0.6 0.8
P4 1.0 0.3
Sketch Gantt chart for these processes for Preemptive and Non-Preemptive SJF
scheduling algorithms. Also find out average turnaround time, average waiting
time and CPU utilization.
Answer: The Gantt chart for the Preemptive SJF (shortest remaining time
first) scheduling algorithm is as follows:

| P2 | P1 | P3 | P4 | P3 | P1 | P2 |
0   0.4  0.6  1.0  1.3  1.7  2.6  3.8

The Gantt chart for the Non-Preemptive SJF scheduling algorithm is as follows:

| P2 | P4 | P3 | P1 |
0   1.6  1.9  2.7  3.8

Turnaround time for Preemptive SJF scheduling algorithm:


TAT(P1)=2.6-0.4=2.2
TAT(P2)=3.8-0.0=3.8
TAT(P3)=1.7-0.6=1.1
TAT(P4)=1.3-1.0=0.3
Average Turnaround time for Preemptive SJF scheduling algorithm=
(2.2+3.8+1.1+0.3)/4=1.85

Waiting time for Preemptive SJF scheduling algorithm:


WT(P1)=2.2-1.1=1.1
WT(P2)=3.8-1.6=2.2
WT(P3)=1.1-0.8=0.3
WT(P4)=0.3-0.3=0.0
Average Waiting time for Preemptive SJF scheduling algorithm=
(1.1+2.2+0.3+0.0)/4=0.9

Turnaround time for Non-Preemptive SJF scheduling algorithm:


TAT(P1)=3.8-0.4=3.4
TAT(P2)=1.6-0.0=1.6
TAT(P3)=2.7-0.6=2.1
TAT(P4)=1.9-1.0=0.9
Average Turnaround time for Non-Preemptive SJF scheduling algorithm=
(3.4+1.6+2.1+0.9)/4=2.0

Waiting time for Non-Preemptive SJF scheduling algorithm:


WT(P1)=3.4-1.1=2.3
WT(P2)=1.6-1.6=0.0
WT(P3)=2.1-0.8=1.3
WT(P4)=0.9-0.3=0.6
Average Waiting time for Non-Preemptive SJF scheduling algorithm=
(2.3+0.0+1.3+0.6)/4=1.05

CPU utilization: in both schedules the CPU is busy continuously from t = 0 to
t = 3.8 (total burst time = 1.1 + 1.6 + 0.8 + 0.3 = 3.8) with no idle time,
so CPU utilization = 3.8/3.8 = 100% in each case.
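These figures can be cross-checked with a small simulation. A sketch of preemptive SJF (shortest remaining time first) in C, with the given arrival and burst times scaled to tenths so integer arithmetic can be used:

#include <stdio.h>

int main(void) {
    /* Times in tenths: P1..P4 arrival and burst from the question. */
    int arrive[4] = {4, 0, 6, 10};
    int burst[4]  = {11, 16, 8, 3};
    int rem[4], finish[4], done = 0;
    for (int i = 0; i < 4; i++) rem[i] = burst[i];

    for (int t = 0; done < 4; t++) {
        int pick = -1;
        for (int i = 0; i < 4; i++)        /* choose shortest remaining time */
            if (arrive[i] <= t && rem[i] > 0 &&
                (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) continue;            /* CPU idle this tick */
        if (--rem[pick] == 0) { finish[pick] = t + 1; done++; }
    }

    double tat = 0, wt = 0;
    for (int i = 0; i < 4; i++) {
        tat += (finish[i] - arrive[i]) / 10.0;
        wt  += (finish[i] - arrive[i] - burst[i]) / 10.0;
    }
    printf("avg TAT = %.2f, avg WT = %.2f\n", tat / 4, wt / 4);
    /* Prints avg TAT = 1.85 and avg WT = 0.90, matching the figures above. */
    return 0;
}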
(b) What do you mean by deadlock? Discuss banker’s algorithm for handling the
deadlock.
Answer: Every process needs some resources to complete its execution, and
resources are granted in a sequential order:
1. The process requests a resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses the resource and releases it on completion.
A deadlock is a situation in which each of a set of processes waits for a
resource that is assigned to another process. In this situation none of the
processes gets executed, since the resource each one needs is held by some
other process that is itself waiting for a resource to be released.
The banker's algorithm is named so because this kind of check is used in the
banking system to decide whether a loan can be sanctioned to a person.
Suppose there are n account holders in a bank and the total sum of their
money is S. Before sanctioning a loan, the bank first checks whether, after
the loan amount is subtracted from the money it holds, it can still satisfy
the needs of all its account holders; only then is the loan sanctioned. This
is done so that if all the account holders come to withdraw their money, the
bank can easily pay them.
It also helps the OS to share the resources successfully among all the
processes. It is called the banker's algorithm because bankers need a similar
policy: they admit loans that collectively exceed the bank's funds and then
release each borrower's loan in instalments. The banker's algorithm uses the
notion of a safe allocation state to ensure that granting a resource request
cannot lead to a deadlock, either immediately or in the future.
In other words, the bank would never allocate its money in such a way that it
can no longer satisfy the needs of all its customers. The bank would try to be
in a safe state always.
The following Data structures are used to implement the Banker’s
Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of
resource types.
Available
• It is a 1-d array of size ‘m’ indicating the number of available
resources of each type.
• Available[ j ] = k means there are ‘k’ instances of resource
type Rj
Max
• It is a 2-d array of size ‘n*m’ that defines the maximum demand
of each process in a system.
• Max[ i, j ] = k means process Pi may request at
most ‘k’ instances of resource type Rj.
Allocation
• It is a 2-d array of size ‘n*m’ that defines the number of
resources of each type currently allocated to each process.
• Allocation[ i, j ] = k means process Pi is currently
allocated ‘k’ instances of resource type Rj
Need
• It is a 2-d array of size ‘n*m’ that indicates the remaining
resource need of each process.
• Need [ i, j ] = k means process Pi currently needs ‘k’ instances
of resource type Rj
• Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
Allocation specifies the resources currently allocated to process Pi and
Needi specifies the additional resources that process Pi may still request to
complete its task.
Banker’s algorithm consists of a Safety algorithm and a Resource request
algorithm.
Banker's Algorithm
1. Active := Running U Blocked;
   for k = 1..r
       New_request[k] := Requested_resources[requesting_process, k];
2. Simulated_allocation := Allocated_resources;
   for k = 1..r   // Compute projected allocation state
       Simulated_allocation[requesting_process, k] :=
           Simulated_allocation[requesting_process, k] + New_request[k];
3. feasible := true;
   for k = 1..r   // Check whether projected allocation state is feasible
       if Total_resources[k] < Simulated_total_alloc[k] then feasible := false;
4. if feasible = true
   then   // Check whether projected allocation state is a safe allocation state
       while set Active contains a process Pl such that
             for all k, Total_resources[k] - Simulated_total_alloc[k] >=
                        Max_need[l, k] - Simulated_allocation[l, k]
           Delete Pl from Active;
           for k = 1..r
               Simulated_total_alloc[k] := Simulated_total_alloc[k] -
                                           Simulated_allocation[l, k];
5. if set Active is empty
   then   // Projected allocation state is a safe allocation state
       for k = 1..r   // Delete the request from pending requests
           Requested_resources[requesting_process, k] := 0;
       for k = 1..r   // Grant the request
           Allocated_resources[requesting_process, k] :=
               Allocated_resources[requesting_process, k] + New_request[k];
           Total_alloc[k] := Total_alloc[k] + New_request[k];
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can
be described as follows:
1) Let Work and Finish be vectors of length 'm' and 'n' respectively.
   Initialize: Work = Available
   Finish[i] = false for i = 1, 2, 3, ... n
2) Find an i such that both:
   a) Finish[i] = false
   b) Needi <= Work
   If no such i exists, go to step (4).
3) Work = Work + Allocationi
   Finish[i] = true
   Go to step (2).
4) If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm: Let Requesti be the request array for process
Pi. Requesti [j] = k means process Pi wants k instances of resource type Rj.
When a request for resources is made by process Pi, the following actions are
taken:
1) If Requesti <= Needi Goto step (2); otherwise, raise an error
condition, since the process has exceeded its maximum claim.
2) If Requesti <= Available Goto step (3); otherwise, Pi must wait, since the
resources are not available.
3) Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi– Requesti
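For illustration, a compact C sketch of the Safety algorithm above (the sizes N and M are illustrative; the matrices would come from the current allocation state):

#include <stdbool.h>

#define N 5   /* illustrative number of processes */
#define M 3   /* illustrative number of resource types */

/* Returns true if the state (available, alloc, need) is safe. */
bool is_safe(int available[M], int alloc[N][M], int need[N][M]) {
    int  work[M];
    bool finish[N] = {false};
    for (int j = 0; j < M; j++) work[j] = available[j];   /* step 1 */

    for (;;) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {                     /* step 2 */
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                                /* step 3 */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
        if (!progressed) break;   /* no runnable process left */
    }
    for (int i = 0; i < N; i++)                           /* step 4 */
        if (!finish[i]) return false;
    return true;
}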

6. Attempt any one part of the following: 7x1=7


(a) Consider the following page reference string:
7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1.
Assuming demand paging with three frames, find out the number of page faults
occurring under each of the following replacement algorithms:
• LRU replacement
• FIFO replacement
• Optimal replacement

Answer: For demand paging with three frames (F = page fault, H = hit), the
frame contents after each reference are as follows.

Reference string: 7 2 3 1 2 5 3 4 6 7 7 1 0 5 4 6 2 3 0 1

• LRU replacement
Frame 1: 7 7 7 1 1 1 3 3 3 7 7 7 7 5 5 5 2 2 2 1
Frame 2: - 2 2 2 2 2 2 4 4 4 4 1 1 1 4 4 4 3 3 3
Frame 3: - - 3 3 3 5 5 5 6 6 6 6 0 0 0 6 6 6 0 0
F/H:     F F F F H F F F F F H F F F F F F F F F
Number of page faults with LRU = 18.

• FIFO replacement
Frame 1: 7 7 7 1 1 1 1 1 6 6 6 6 0 0 0 6 6 6 0 0
Frame 2: - 2 2 2 2 5 5 5 5 7 7 7 7 5 5 5 2 2 2 1
Frame 3: - - 3 3 3 3 3 4 4 4 4 1 1 1 4 4 4 3 3 3
F/H:     F F F F H F H F F F H F F F F F F F F F
Number of page faults with FIFO = 17.

• Optimal replacement
Frame 1: 7 7 7 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Frame 2: - 2 2 2 2 5 5 5 5 5 5 5 5 5 4 6 2 3 3 3
Frame 3: - - 3 3 3 3 3 4 6 7 7 7 0 0 0 0 0 0 0 0
F/H:     F F F F H F H F F F H H F H F F F F H H
Number of page faults with Optimal = 13.
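These counts can be cross-checked with a short simulation; a minimal FIFO page-fault counter in C is sketched below (LRU and Optimal differ only in how the victim frame is chosen):

#include <stdio.h>

int main(void) {
    int refs[] = {7,2,3,1,2,5,3,4,6,7,7,1,0,5,4,6,2,3,0,1};
    int n = sizeof refs / sizeof refs[0];
    int frames[3] = {-1, -1, -1};   /* -1 marks an empty frame */
    int next = 0, faults = 0;       /* next: index of the oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];      /* replace the oldest-loaded page */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);   /* prints 17 */
    return 0;
}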

(b) Write Short note on:


i. Pre-Paging and Demand Paging
Pre-paging: Pre-paging is used to overcome a major drawback of demand paging:
the large number of page faults that may occur as soon as a process starts to
execute. This situation results from the effort to load the initial locality
into memory, and the same situation may arise repeatedly.
Prepaging is an attempt to prevent this high level of initial paging. The strategy is to
bring some or all of the pages that will be needed into memory at one time. The major
advantage of Pre-paging is that it might save time when a process references
consecutive addresses. In this case, it is easy for the operating system to guess and load
the appropriate pages, and, as there is a high probability of the guess being right for
many pages, fewer page faults will occur.
Pre-paging may not always be beneficial. The advantage of Pre-paging is based on the
answer to a simple question: whether the cost of implementing Pre-paging is less than
the cost of servicing the corresponding page faults. It may be the case that a
considerably large number of pages brought back into the memory by Pre-paging are
not used. The disadvantage of the concept is that there is wastage of resources like
time and memory if the pre-loaded pages are unused.

Demand paging: It is a technique used in virtual memory systems where pages
are brought into main memory only when required or demanded by the CPU. The
pager is hence also called a lazy swapper, because the swapping of a page is
done only when the CPU requires it. Virtual memory is commonly implemented by
demand paging. In demand paging, the pager brings only the necessary pages
into memory instead of swapping in a whole process. Thus, demand paging
avoids reading into memory pages that will not be used anyway, decreasing
both the swap time and the amount of physical memory needed.

The demand paging system depends on the page table implementation, because
the page table helps map logical memory to physical memory. A valid-invalid
bit is kept in each page table entry to indicate whether the page is in
memory (valid) or not (invalid). All valid pages exist in primary memory,
and invalid pages exist in secondary memory. When a process accesses its
pages, the following happens:
1. The process attempts to access the current page.
2. If the page is valid, instruction processing continues as normal.
3. If the page is found invalid, a page fault arises.
4. The memory reference is then checked to determine whether a valid copy of
the page exists in auxiliary memory. If it does not, the process is
terminated; otherwise, the needed page is paged in.
5. A disk operation is performed to fetch the required page into primary
memory.

ii. Logical Address Vs Physical Address: The differences between Logical
Address and Physical Address are as follows:

• Basic: a logical address is generated by the CPU, while a physical address
is a location in a memory unit.
• Address Space: the logical address space is the set of all logical
addresses generated by the CPU in reference to a program, while the physical
address space is the set of all physical addresses mapped to the
corresponding logical addresses.
• Visibility: the user can view the logical address of a program, but can
never view the physical address of the program.
• Generation: a logical address is generated by the CPU; a physical address
is computed by the MMU.
• Access: the user can use the logical address to access the physical
address; the physical address can be accessed only indirectly, never
directly.
• Editable: a logical address can change; the physical address will not
change.
• Also called: a logical address is also called a virtual address; a
physical address is also called a real address.
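As a simple illustration of the mapping performed by the MMU, a hedged C sketch of base-and-limit relocation (the register values are invented):

#include <stdio.h>
#include <stdlib.h>

/* Illustrative base/limit relocation: the CPU emits a logical address and
   the MMU adds the base register to it after a limit check. */
unsigned base = 0x40000, limit = 0x10000;   /* invented register values */

unsigned translate(unsigned logical) {
    if (logical >= limit) {
        fprintf(stderr, "trap: address out of bounds\n");
        exit(1);
    }
    return base + logical;   /* the physical address, never seen by the user */
}

int main(void) {
    printf("logical 0x1234 -> physical 0x%x\n", translate(0x1234));
    return 0;
}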

7. Attempt any one part of the following: 7x1=7


(a) Discuss various types of directory structure used by operating systems.
Answer: A directory is a container that is used to contain folders and files. It
organizes files and folders in a hierarchical manner. Various types of directory
structure with their advantages and disadvantages are discussed as follows:

• Single-level directory: The single-level directory is the simplest


directory structure. In it, all files are contained in the same directory
which makes it easy to support and understand.

A single-level directory has a significant limitation, however, when the
number of files increases or when the system has more than one user. Since
all the files are in the same directory, they must have unique names. If two
users name their data file "test", the unique-name rule is violated.
Advantages:
• Since it is a single directory, its implementation is very easy.
• If the files are smaller in size, searching will become faster.
• The operations like file creation, searching, deletion, updating are very
easy in such a directory structure.
• Logical Organization: Directory structures help to logically organize
files and directories in a hierarchical structure. This provides an easy
way to navigate and manage files, making it easier for users to access
the data they need.
• Increased Efficiency: Directory structures can increase the efficiency
of the file system by reducing the time required to search for files. This
is because directory structures are optimized for fast file access,
allowing users to quickly locate the file they need.
• Improved Security: Directory structures can provide better security for
files by allowing access to be restricted at the directory level. This helps
to prevent unauthorized access to sensitive data and ensures that
important files are protected.
• Facilitates Backup and Recovery: Directory structures make it easier to
backup and recover files in the event of a system failure or data loss.
By storing related files in the same directory, it is easier to locate and
backup all the files that need to be protected.
• Scalability: Directory structures are scalable, making it easy to add new
directories and files as needed. This helps to accommodate growth in
the system and makes it easier to manage large amounts of data.
Disadvantages:
• There may be name collisions, since no two files may have the same name.
• Searching becomes time-consuming if the directory is large.
• Files of the same type cannot be grouped together.

Two-level directory: As we have seen, a single-level directory often leads to
confusion of file names among different users. The solution to this problem
is to create a separate directory for each user. In the two-level directory
structure, each user has their own user file directory (UFD). The UFDs have
similar structures, but each lists only the files of a single user. The
system's master file directory (MFD) is searched whenever a new user id is
created.

Advantages:
• The main advantage is that different users can have files with the same
name, which is very helpful when there are multiple users.
• It provides security, since one user cannot access another user's files.
• Searching of the files becomes very easy in this directory structure.
Disadvantages:
• Along with the advantage of security comes a disadvantage: a user cannot
share a file with other users.
• Although users can create their own files, they do not have the ability to
create subdirectories.
• Scalability is limited, because a user cannot group files of the same type
together.
Tree Structure / Hierarchical Structure: The tree directory structure is the
one most commonly used in our personal computers. Users can create files and
subdirectories too, which was a disadvantage in the previous directory
structures. This directory structure resembles a real tree turned upside
down, with the root directory at the top. The root contains the directories
for each user; users can create subdirectories and store files in their own
directory. A user does not have access to the root directory's data and
cannot modify it, and, even in this structure, a user does not have access
to other users' directories. Each user's directory can thus contain both
files and subdirectories.

Advantages:
• This directory structure allows subdirectories inside a directory.
• The searching is easier.
• Sorting of important and unimportant files becomes easier.
• This directory is more scalable than the other two directory structures
explained.
Disadvantages:
• As a user is not allowed to access other users' directories, file sharing
among users is prevented.
• As users can create subdirectories, searching may become complicated if
the number of subdirectories grows large.
• Users cannot modify the root directory data.
• If files do not fit in one directory, they may have to be placed in other
directories.
Acyclic Graph Structure: As we have seen from the above three directory
structures, none of them have the capability to access one file from multiple
directories. The file or the subdirectory could be accessed through the directory
it was present in, but not from the other directory. This problem is solved in
acyclic graph directory structure, where a file in one directory can be accessed
from multiple directories. In this way, the files could be shared between
the users. It is designed in a way that multiple directories point to a
particular directory or file with the help of links.
For example, when a file is shared between multiple users, if any user makes
a change it is reflected for the other users as well.

Advantages:
• Sharing of files and directories is allowed between multiple users.
• Searching becomes very easy.
• Flexibility is increased as file sharing and editing access is there for
multiple users.
Disadvantages:
• Because of the complex structure it has, it is difficult to implement this
directory structure.
• A user must be very cautious when editing or even deleting a file, as the
file may be accessed by multiple users.
• To delete a file permanently, all references (links) to the file need to
be deleted.
(b) Explain various file allocation methods on secondary storage.
Answer: The allocation methods define how the files are stored in the disk
blocks. There are three main disk space or file allocation methods.
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation
The main idea behind these methods is to provide:
• Efficient disk space utilization.
• Fast access to the file blocks.
All the three methods have their own advantages and disadvantages as
discussed below:
Contiguous Allocation: In this scheme, each file occupies a contiguous set
of blocks on the disk. For example, if a file requires n blocks and is given a
block b as the starting location, then the blocks assigned to the file will
be: b, b+1, b+2,……b+n-1. This means that given the starting block address
and the length of the file (in terms of blocks required), we can determine the
blocks occupied by the file.
The directory entry for a file with contiguous allocation contains
• Address of starting block
• Length of the allocated portion.
For example, if the file 'mail' starts from block 19 with length = 6 blocks,
it occupies blocks 19, 20, 21, 22, 23, and 24.

Advantages:
• Both the Sequential and Direct Accesses are supported by this.
For direct access, the address of the kth block of the file which
starts at block b can easily be obtained as (b+k).
• This is extremely fast, since the number of seeks is minimal
because of the contiguous allocation of file blocks.
Disadvantages:
• This method suffers from both internal and external
fragmentation. This makes it inefficient in terms of memory
utilization.
• Increasing file size is difficult because it depends on the
availability of contiguous memory at a particular instance.
Linked List Allocation: In this scheme, each file is a linked list of disk
blocks which need not be contiguous. The disk blocks can be scattered
anywhere on the disk. The directory entry contains a pointer to the starting
and the ending file block.

Each block contains a pointer to the next block occupied by the file. The
file 'jeep' in the figure shows how the blocks may be randomly distributed.
The last block (25) contains -1, indicating a null pointer: it does not
point to any other block.
Advantages:
• This is very flexible in terms of file size. File size can be increased
easily since the system does not have to look for a contiguous
chunk of memory.
• This method does not suffer from external fragmentation. This
makes it relatively better in terms of memory utilization.
Disadvantages:
• Because the file blocks are distributed randomly on the disk, a
large number of seeks are needed to access every block
individually. This makes linked allocation slower.
• It does not support random or direct access. We can not directly
access the blocks of a file. A block k of a file can be accessed by
traversing k blocks sequentially (sequential access ) from the
starting block of the file via block pointers.
• Pointers required in the linked allocation incur some extra
overhead.
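As a hedged illustration of why linked allocation forces sequential access, a small C sketch with a next[] table standing in for the per-block pointers (the block numbers in the chain are illustrative):

#include <stdio.h>

#define NBLOCKS 32
#define END     -1          /* null pointer, like block 25's -1 above */

int next[NBLOCKS];          /* next[b] = block following b in the file */

/* Reaching block k requires following k pointers from the start:
   this is why linked allocation cannot support direct access. */
int block_at(int start, int k) {
    int b = start;
    for (int i = 0; i < k && b != END; i++)
        b = next[b];
    return b;
}

int main(void) {
    /* Illustrative chain: a file occupying blocks 9 -> 16 -> 1 -> 10 -> 25. */
    for (int i = 0; i < NBLOCKS; i++) next[i] = END;
    next[9] = 16; next[16] = 1; next[1] = 10; next[10] = 25; next[25] = END;

    for (int b = 9; b != END; b = next[b])
        printf("block %d\n", b);                /* sequential access */
    printf("block after 4 hops: %d\n", block_at(9, 4));   /* prints 25 */
    return 0;
}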
Indexed Allocation: In this scheme, a special block known as the Index
block contains the pointers to all the blocks occupied by a file. Each file has
its own index block. The ith entry in the index block contains the disk address
of the ith file block. The directory entry contains the address of the index
block as shown in the image:

Advantages:
• This supports direct access to the blocks occupied by the file and
therefore provides fast access to the file blocks.
• It overcomes the problem of external fragmentation.
Disadvantages:
• The pointer overhead for indexed allocation is greater than
linked allocation.
• For very small files, say files that span only 2-3 blocks,
indexed allocation keeps one entire block (the index block) just
for the pointers, which is inefficient in terms of memory
utilization. In linked allocation, by contrast, we lose the space
of only one pointer per block.
