Solution of Operating Systems (RCA301) 2023-24
MCA
(SEM III) THEORY EXAMINATION 2023-24
OPERATING SYSTEMS
Time: 3 Hours Total Marks: 70
Note: 1. Attempt all Sections. If any data is missing, choose it suitably.
SECTION A
SECTION B
2. Attempt any three of the following: 7 x 3 = 21
a. Explore various services offered by the Operating System.
Answer: An operating system is software that acts as an intermediary between the
user and the computer hardware. It is the program that makes it possible to run
other applications, and it is the one program that is running all the time: every
computer must have an operating system to execute other programs smoothly.
The OS coordinates the use of the hardware among application programs and
users, and provides the platform on which application programs run. It controls
input/output devices, the execution of programs, the management of files, and
so on.
Services of Operating System
1. Program execution.
2. Input Output Operations
3. Communication between Process
4. File Management
5. Memory Management
6. Process Management
7. Security and Privacy
8. Resource Management
9. User Interface
10. Networking
11. Error handling
12. Time Management
b. Discuss various states of a process with suitable diagram.
Answer: A process passes through several states from its creation to its
termination. There are a minimum of five states, and at any instant during
execution the process is in exactly one of them:
1. New: the process is being created.
2. Ready: the process is waiting to be assigned to the CPU.
3. Running: instructions of the process are being executed.
4. Waiting (Blocked): the process is waiting for some event, such as I/O completion.
5. Terminated: the process has finished execution.
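The transitions between these states can be sketched as a small state machine. This is an illustrative model only; the state names and transition labels (admit, dispatch, and so on) follow the standard five-state diagram, not code from the syllabus.

```python
# Minimal sketch of the five-state process model. The transition table encodes
# the standard legal moves between states.
TRANSITIONS = {
    ("new", "ready"): "admitted",
    ("ready", "running"): "scheduler dispatch",
    ("running", "ready"): "interrupt / time slice expired",
    ("running", "waiting"): "I/O or event wait",
    ("waiting", "ready"): "I/O or event completion",
    ("running", "terminated"): "exit",
}

def move(state, target):
    """Return the new state if the transition is legal, else raise."""
    if (state, target) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# A process must pass through ready and running before it can wait on I/O.
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
```

Note that a process can never jump, for example, from new directly to running; the scheduler must dispatch it from the ready state.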
put it into the memory buffer, and then continue producing items, while the task
of the consumer is to consume items from the memory buffer.
The following constraints define the producer-consumer problem:
• The producer may produce data only when the buffer is not full; if the buffer
is full, the producer is not allowed to store any more data in it.
• The consumer may consume data only when the buffer is not empty; if the
buffer is empty, the consumer is not allowed to take any data from it.
• The producer and the consumer must not access the memory buffer at the
same time.
Let's see the code for the above problem:
Producer Code
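The original listings are not reproduced above, so here is a minimal sketch of the classic semaphore-based solution in Python; the buffer size, item count, and variable names are chosen for illustration.

```python
import threading

# Bounded-buffer producer/consumer sketch using two counting semaphores and a
# lock; buffer size and item count are illustrative.
BUFFER_SIZE = 5
buffer = []
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots in the buffer
full = threading.Semaphore(0)             # counts filled slots in the buffer
mutex = threading.Lock()                  # mutual exclusion on the buffer
consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()          # wait while the buffer is full
        with mutex:
            buffer.append(item)  # put the item into the memory buffer
        full.release()           # signal that an item is available

def consumer(n):
    for _ in range(n):
        full.acquire()           # wait while the buffer is empty
        with mutex:
            consumed.append(buffer.pop(0))  # take the oldest item
        empty.release()          # signal that a slot is free

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
```

The `empty` and `full` semaphores enforce the two buffer constraints, while `mutex` prevents the producer and consumer from touching the buffer at the same time.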
d. What do you mean by demand paging? Discuss the steps taken by operating
system to handle page faults.
Answer: The steps taken by the operating system to handle a page fault are:
1. We check an internal table (usually kept with the process control block)
for this process to determine whether the reference was a valid or an
invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but
we have not yet brought in that page, we page it in.
3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a secondary storage operation to read the desired page into
the newly allocated frame.
5. When the storage read is complete, we modify the internal table kept with
the process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can
now access the page as though it had always been in memory.
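The steps above can be sketched as a toy page-fault handler; the page numbers, frame list, and table layout here are invented for illustration.

```python
# Toy sketch of page-fault handling: disk is the set of pages that validly
# belong to the process, valid marks pages present in memory.
def access(page, page_table, valid, free_frames, disk):
    # Step 1: check the internal table - is the reference valid at all?
    if page not in disk:
        raise MemoryError(f"invalid reference to page {page}: terminate process")
    # Step 2: valid but not yet in memory -> page fault, so page it in.
    if not valid.get(page, False):
        frame = free_frames.pop()   # Step 3: take a frame from the free list
        page_table[page] = frame    # Step 4: "read" the page into that frame
        valid[page] = True          # Step 5: mark the page as now in memory
        # Step 6: the faulting instruction would now be restarted
    return page_table[page]

disk = {0, 1, 2}                    # pages that belong to the process
page_table, valid = {}, {}
free_frames = [0, 1, 2]
frame = access(1, page_table, valid, free_frames, disk)  # first access faults
```

A second access to the same page finds the valid bit set and returns the same frame without touching the free-frame list.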
e. What do you mean by record blocking? Discuss its types with suitable diagrams.
SECTION C
3. Attempt any one part of the following: 7x1=7
(a) Sketch the diagram and discuss working of DMA.
Answer: Direct Memory Access (DMA) uses dedicated hardware, called a
DMA controller, to access memory. Its job is to transfer data between
input/output devices and main memory with very little involvement from the
processor. The DMA controller is a control unit whose sole task is
transferring data.
DMA Controller Diagram in Computer Architecture: The DMA controller is
a control unit that works as an interface between the data bus and the I/O
devices. As mentioned, the DMA controller transfers the data without the
intervention of the processor; the processor only sets up and initiates the
transfer. The DMA controller also contains an address unit, which generates
addresses and selects an I/O device for the transfer of data. The block
diagram of the DMA controller is shown below.
Working Diagram of DMA Controller: The CPU initializes the DMA by
sending the given information through the data bus.
• The starting address of the memory block where the data is available
(to read) or where data is to be stored (to write).
• It also sends word count which is the number of words in the memory
block to be read or written.
• Control to define the mode of transfer such as read or write.
• A control to begin the DMA transfer.
Modes of Data Transfer in DMA: There are 3 modes of data transfer in DMA,
described below.
• Burst Mode: In burst mode, the DMA controller takes control of the system
buses and hands them back to the CPU only after the whole block of data
has been transferred, not before.
• Cycle Stealing Mode: In cycle stealing mode, the buses are handed back to
the CPU after the transfer of each byte, so the DMA controller must raise
a fresh bus request for every byte. This keeps the CPU more responsive
to higher-priority tasks during the transfer.
• Transparent Mode: In transparent mode, the DMA controller transfers data
only during cycles in which the CPU is not using the system buses, so the
CPU's execution is never slowed down by the transfer.
(b) Write a short note on:
i. Multitasking: Multitasking is the concept of executing more than one
job apparently simultaneously on the CPU of a system. In
multitasking, each job is assigned a specific time duration, known as
a time slice. When the CPU has spent one time slice on a job, it
switches to the next job waiting in the queue. Multitasking is
preemptive: the jobs share common resources such as the CPU and
memory, and the operating system forcibly switches between them.
In this manner, multitasking allows numerous tasks to make progress
at the same time, all on the basis of time slices.
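Time slicing can be illustrated with a minimal round-robin sketch; the job names, burst times, and the 2-unit quantum are invented for this example.

```python
from collections import deque

# Round-robin time slicing sketch: each job gets at most one quantum per turn,
# then goes to the back of the queue if work remains.
def run_round_robin(jobs, quantum):
    """jobs: dict name -> remaining time. Returns the (job, run) segments."""
    queue = deque(jobs.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))   # CPU spends one time slice on this job
        if remaining - run > 0:
            queue.append((name, remaining - run))  # back of the queue
    return timeline

timeline = run_round_robin({"A": 3, "B": 5}, quantum=2)
```

With these numbers the CPU alternates between A and B until A finishes, then B runs out its remaining slices.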
ii. Time sharing: A time-shared operating system uses CPU scheduling
and multiprogramming to provide each user with a small portion of a
shared computer. Each user has at least one separate program in
memory. When a program is loaded into memory and executes, it runs
for only a short period of time before it either completes or needs to
perform I/O. This short period during which a user gets the attention of
the CPU is known as a time slice, time slot, or quantum; it is typically
on the order of 10 to 100 milliseconds. Time-shared operating systems
are more complex than multiprogrammed operating systems. In both,
multiple jobs must be kept in memory simultaneously, so the system
must provide memory management and protection. To achieve a good
response time, jobs may have to be swapped in and out of main
memory to the disk, which then serves as a backing store for main
memory. A common method to achieve this goal is virtual memory, a
technique that allows the execution of a job that is not completely in
memory.
4. Attempt any one part of the following: 7x1=7
(a) Differentiate between Process and Threads.
Answer: The key differences between a process and a thread are:
Process: A process is a program in execution, with its own address space.
Thread: A thread is a lightweight unit of execution within a process; the
threads of a process share its address space.
Process: Process creation, termination, and switching are relatively slow
and resource-heavy.
Thread: Thread creation and switching are faster and cheaper, since
threads share the resources of their process.
Process: Processes communicate through inter-process communication
mechanisms.
Thread: Threads communicate directly through the shared memory of
their process.
Process: If one process is blocked, it does not affect the execution of
other processes.
Thread: If a user-level thread is blocked, all other user-level threads of
the process are blocked.
(b) What do you mean by context switching? Explain with suitable diagram.
Answer: Context switching refers to the technique used by the OS to switch
the CPU from one process to another. When a switch is performed, the status
of the old running process, its registers and program counter, is saved, and the
CPU is assigned to a new process to execute its task. While the new process
runs, the previous one waits in the ready queue. When the old process later
resumes, its execution begins at exactly the point at which it was stopped.
Context switching is a defining feature of a multitasking OS, where multiple
processes share a single CPU to perform various tasks without requiring
additional processors in the system.
Steps of Context Switching:
Several steps are involved in the context switching of a process. The diagram
given below represents context switching between two processes, P1 and P2,
in case of an interrupt, I/O need, or the occurrence of a priority-based process
in the PCB’s ready queue.
As you can see in the illustration above, the process P1 is initially running on
the CPU for the execution of its task. At the very same time, P2, another
process, is in its ready state. If an interruption or error has occurred or if the
process needs I/O, the P1 process would switch the state from running to
waiting.
Before the state of process P1 is changed, the context switch saves P1's
context, its registers and program counter, into PCB1. It then loads the state
of process P2 from PCB2 and moves P2 from the ready state to the running
state.
Here are the steps taken to switch from P1 to P2:
1. The context switch saves P1's state, its program counter and registers,
into its PCB while P1 is still in the running state.
2. The OS updates PCB1 and moves process P1 to the appropriate queue,
such as the ready queue, a waiting queue, or an I/O queue.
3. Another process is selected to enter the running state; the scheduler
may pick a new process from the ready queue, for example one with
a higher priority for executing its task.
4. The OS updates the PCB of the selected process P2. This involves
changing its state to running from whatever state it was in, such as
ready, blocked, or suspended.
5. If the CPU had already executed part of process P2 earlier, the saved
status of P2 is restored so that its execution resumes at exactly the
point at which it was interrupted.
In a similar manner, the process P2 is later switched off the CPU so that
process P1 can resume its execution. Process P1's context is reloaded from
PCB1, and it resumes its assigned task at exactly the point where it left off.
Without this saved context the data would be lost, and the process would
have to start its execution again from the beginning.
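The save/restore sequence above can be sketched with PCBs modeled as dictionaries; the register names and program-counter values are invented for illustration.

```python
# Toy context switch: the "CPU" is a dict of registers, and each PCB stores a
# saved copy of that dict plus a process state.
cpu = {"pc": 0, "r0": 0}

def context_switch(old_pcb, new_pcb):
    old_pcb["saved"] = dict(cpu)   # save the old process's context to its PCB
    old_pcb["state"] = "ready"     # move the old process out of running
    cpu.update(new_pcb["saved"])   # load the selected process's saved context
    new_pcb["state"] = "running"

pcb1 = {"saved": {"pc": 100, "r0": 7}, "state": "running"}
pcb2 = {"saved": {"pc": 200, "r0": 9}, "state": "ready"}
cpu.update(pcb1["saved"])          # P1 is running initially
cpu["pc"] += 5                     # P1 makes some progress (pc = 105)
context_switch(pcb1, pcb2)         # switch P1 -> P2
context_switch(pcb2, pcb1)         # later switch back: P1 resumes where it left
```

After the second switch, P1 resumes with the program counter it had when it was preempted; without the saved copy in PCB1, that point would be lost.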
5. Attempt any one part of the following: 7x1=7
(a) Consider the four processes with the given arrival and burst time.
Process Arrival Time Burst Time
P1 0.4 1.1
P2 0.0 1.6
P3 0.6 0.8
P4 1.0 0.3
Sketch Gantt chart for these processes for Preemptive and Non-Preemptive SJF
scheduling algorithms. Also find out average turnaround time, average waiting
time and CPU utilization.
Answer: The Gantt chart for preemptive SJF (shortest remaining time first) is:
| P2 | P1 | P3 | P4 | P3 | P1 | P2 |
0   0.4  0.6  1.0  1.3  1.7  2.6  3.8
Turnaround times: P1 = 2.6 - 0.4 = 2.2, P2 = 3.8 - 0.0 = 3.8,
P3 = 1.7 - 0.6 = 1.1, P4 = 1.3 - 1.0 = 0.3.
Average turnaround time = 7.4 / 4 = 1.85.
Waiting times (turnaround - burst): P1 = 1.1, P2 = 2.2, P3 = 0.3, P4 = 0.0.
Average waiting time = 3.6 / 4 = 0.9.
The CPU is never idle, so CPU utilization = 3.8 / 3.8 = 100%.
The Gantt chart for non-preemptive SJF is:
| P2 | P4 | P3 | P1 |
0   1.6  1.9  2.7  3.8
Turnaround times: P2 = 1.6, P4 = 0.9, P3 = 2.1, P1 = 3.4.
Average turnaround time = 8.0 / 4 = 2.0.
Waiting times: P2 = 0.0, P4 = 0.6, P3 = 1.3, P1 = 2.3.
Average waiting time = 4.2 / 4 = 1.05.
The CPU is again never idle, so CPU utilization = 100%.
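The preemptive schedule can be cross-checked with a small SRTF (shortest remaining time first) simulation. To avoid floating-point drift, times are expressed in integer tenths, so P1 arrives at 4 and has burst 11, and the results are converted back at the end.

```python
# SRTF (preemptive SJF) simulation in 0.1-unit steps; procs maps each process
# name to (arrival, burst), both in integer tenths of a time unit.
def srtf(procs):
    """Return a dict of completion times (in tenths) for each process."""
    remaining = {p: b for p, (a, b) in procs.items()}
    done, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:
            t += 1                 # CPU idles until the next arrival
            continue
        p = min(ready, key=lambda q: remaining[q])  # shortest remaining time
        remaining[p] -= 1          # run the chosen process for one step
        t += 1
        if remaining[p] == 0:
            done[p] = t
            del remaining[p]
    return done

procs = {"P1": (4, 11), "P2": (0, 16), "P3": (6, 8), "P4": (10, 3)}
done = srtf(procs)
tat = {p: done[p] - procs[p][0] for p in procs}    # turnaround times
wait = {p: tat[p] - procs[p][1] for p in procs}    # waiting times
avg_tat = sum(tat.values()) / len(tat) / 10        # back to original units
avg_wait = sum(wait.values()) / len(wait) / 10
```

The simulation completes P4 at 1.3, P3 at 1.7, P1 at 2.6, and P2 at 3.8, matching the hand-computed averages of 1.85 (turnaround) and 0.9 (waiting).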
If a person applies for a loan, the bank first subtracts the loan amount
from the total money it holds, and only if the remaining amount is at least S
is the loan sanctioned. This ensures that even if all the account holders come
to withdraw their money, the bank can still pay them.
The same idea helps the OS share resources safely among all the processes.
The algorithm is called the banker's algorithm because a banker could use a
similar strategy: the bank may admit loans that collectively exceed its funds
and then release each borrower's loan in instalments, as long as it always
remains able to meet its obligations. The banker's algorithm uses the notion
of a safe allocation state to ensure that granting a resource request cannot
lead to a deadlock, either immediately or in the future.
In other words, the bank would never allocate its money in such a way that it
can no longer satisfy the needs of all its customers; it always tries to remain
in a safe state.
The following Data structures are used to implement the Banker’s
Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of
resource types.
Available
• It is a 1-d array of size ‘m’ indicating the number of available
resources of each type.
• Available[ j ] = k means there are ‘k’ instances of resource
type Rj
Max
• It is a 2-d array of size ‘n*m’ that defines the maximum demand
of each process in a system.
• Max[ i, j ] = k means process Pi may request at
most ‘k’ instances of resource type Rj.
Allocation
• It is a 2-d array of size ‘n*m’ that defines the number of
resources of each type currently allocated to each process.
• Allocation[ i, j ] = k means process Pi is currently
allocated ‘k’ instances of resource type Rj
Need
• It is a 2-d array of size ‘n*m’ that indicates the remaining
resource need of each process.
• Need [ i, j ] = k means process Pi currently needs ‘k’ instances
of resource type Rj
• Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
Allocationi specifies the resources currently allocated to process Pi, and
Needi specifies the additional resources that process Pi may still request to
complete its task.
Banker’s algorithm consists of a Safety algorithm and a Resource request
algorithm.
Banker’s Algorithm
1. Active := Running U Blocked;
   for k = 1 .. r
       New_request[k] := Requested_resources[requesting_process, k];
2. Simulated_allocation := Allocated_resources;
   for k = 1 .. r    // Compute projected allocation state
       Simulated_allocation[requesting_process, k] :=
           Simulated_allocation[requesting_process, k] + New_request[k];
3. feasible := true;
   for k = 1 .. r    // Check whether projected allocation state is feasible
       if Total_resources[k] < Simulated_total_alloc[k] then feasible := false;
4. if feasible = true
   then    // Check whether projected allocation state is a safe allocation state
       while set Active contains a process Pl such that
               for all k, Total_resources[k] - Simulated_total_alloc[k] >=
                          Max_need[l, k] - Simulated_allocation[l, k]
           Delete Pl from Active;
           for k = 1 .. r
               Simulated_total_alloc[k] :=
                   Simulated_total_alloc[k] - Simulated_allocation[l, k];
5. if set Active is empty
   then    // Projected allocation state is a safe allocation state
       for k = 1 .. r    // Delete the request from pending requests
           Requested_resources[requesting_process, k] := 0;
       for k = 1 .. r    // Grant the request
           Allocated_resources[requesting_process, k] :=
               Allocated_resources[requesting_process, k] + New_request[k];
           Total_alloc[k] := Total_alloc[k] + New_request[k];
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can
be described as follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
   Initialize: Work = Available
   Finish[i] = false for i = 1, 2, ..., n
2) Find an i such that both:
   a) Finish[i] = false
   b) Needi <= Work
   If no such i exists, go to step (4).
3) Work = Work + Allocationi
   Finish[i] = true
   Go to step (2).
4) If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm: Let Requesti be the request array for process
Pi. Requesti [j] = k means process Pi wants k instances of resource type Rj.
When a request for resources is made by process Pi, the following actions are
taken:
1) If Requesti <= Needi Goto step (2); otherwise, raise an error
condition, since the process has exceeded its maximum claim.
2) If Requesti <= Available Goto step (3); otherwise, Pi must wait, since the
resources are not available.
3) Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti
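The safety and resource-request algorithms above can be sketched compactly in Python; the example matrices at the end (two processes, two resource types) are invented for illustration.

```python
# Banker's algorithm sketch: is_safe implements the safety algorithm and
# request implements the resource-request algorithm from the text.
def is_safe(available, max_need, allocation):
    need = [[m - a for m, a in zip(mr, ar)]
            for mr, ar in zip(max_need, allocation)]
    work = list(available)
    finish = [False] * len(allocation)
    while True:
        # Step 2: find an unfinished process whose need fits within work.
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Step 3: simulate it finishing and releasing its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return all(finish)   # Step 4: safe iff every process could finish

def request(i, req, available, max_need, allocation):
    need_i = [m - a for m, a in zip(max_need[i], allocation[i])]
    if any(r > n for r, n in zip(req, need_i)):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(req, available)):
        return False             # Pi must wait: resources not available
    # Pretend to allocate, then keep the grant only if the state stays safe.
    new_avail = [a - r for a, r in zip(available, req)]
    new_alloc = [row[:] for row in allocation]
    new_alloc[i] = [a + r for a, r in zip(new_alloc[i], req)]
    return is_safe(new_avail, max_need, new_alloc)

available = [1, 1]
max_need = [[2, 1], [1, 2]]
allocation = [[1, 0], [0, 1]]
granted = request(0, [1, 0], available, max_need, allocation)
```

Here `request` returns True only when the projected state remains safe; a request that would leave some process unable to finish is refused even if the resources are physically available.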
Answer: For demand paging with three frames:
• LRU replacement
• FIFO replacement
• Optimal replacement
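Since the reference string for this question is not reproduced above, here is a sketch of FIFO and LRU replacement with three frames on an assumed reference string; Optimal replacement is omitted for brevity.

```python
from collections import OrderedDict

# Page-fault counter for FIFO and LRU with a fixed number of frames. For FIFO
# the OrderedDict keeps insertion order; for LRU a hit refreshes recency, so
# the first entry is always the eviction victim under either policy.
def count_faults(refs, frames, policy):
    mem = OrderedDict()
    faults = 0
    for page in refs:
        if page in mem:
            if policy == "lru":
                mem.move_to_end(page)   # refresh recency on a hit
            continue
        faults += 1
        if len(mem) == frames:
            mem.popitem(last=False)     # evict oldest (FIFO) / least recent (LRU)
        mem[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # assumed reference string
fifo = count_faults(refs, 3, "fifo")
lru = count_faults(refs, 3, "lru")
```

On this string LRU incurs one fewer fault than FIFO, illustrating why the choice of replacement policy matters even with the same number of frames.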
Demand paging: It is a technique used in virtual memory systems where pages
are brought into main memory only when they are required, that is, demanded, by
the CPU. The pager is therefore also called a lazy swapper, because it swaps a
page in only when the CPU needs it. Virtual memory is commonly implemented
using demand paging.
In demand paging, the pager brings only the necessary pages into memory
instead of swapping in a whole process. Thus, demand paging avoids reading in
pages that will never be used, decreasing both the swap time and the amount of
physical memory needed.
Demand paging depends on the page table, which maps logical memory to
physical memory. A valid-invalid bit is kept in each page-table entry to indicate
whether the page is in memory: all valid pages are in primary memory, while
invalid pages reside in secondary memory. When a process accesses its pages,
the following happens:
1. The process attempts to access a page.
2. If the page is valid (in memory), the instruction proceeds as normal.
3. If the page is invalid, a page fault arises.
4. The memory reference is checked to determine whether a valid copy of the
page exists in auxiliary memory. If it does not, the process is terminated;
otherwise, the needed page is paged in.
5. A disk operation is performed to fetch the required page into primary
memory.
ii. Logical Address Vs Physical Address: The differences between Logical
Address and Physical Address are as follows:
Visibility: The user can view the logical address of a program, but can never
view its physical address.
Access: The user uses the logical address to access the physical address; the
physical address can be accessed only indirectly, never directly.
Advantages:
• More than one file can have the same name (one per user), which is very
helpful when there are multiple users.
• Security is provided, since one user cannot access another user's files.
• Searching for files becomes very easy in this directory structure.
Disadvantages:
• While security is an advantage, the corresponding disadvantage is that a
user cannot share a file with other users.
• Although users can create their own files, they do not have the ability to
create subdirectories.
• Grouping is not possible, because a user cannot group files of the same
type together.
Tree Structure / Hierarchical Structure: The tree directory structure is the
one most commonly used in personal computers. Users can create files and
subdirectories, which was a disadvantage of the previous directory structures.
This structure resembles a real tree turned upside down, with the root directory
at the top. The root contains the directories of each user. Users can create
subdirectories and store files inside their own directory. A user does not have
access to the root directory's data and cannot modify it, and likewise has no
access to other users' directories. The structure of a tree directory, showing
the files and subdirectories in each user's directory, is given below.
Advantages:
• This directory structure allows subdirectories inside a directory.
• Searching is easier.
• Sorting files into important and unimportant ones becomes easier.
• This directory structure is more scalable than the two structures explained
earlier.
Disadvantages:
• Since a user is not allowed to access another user's directory, file sharing
among users is prevented.
• Because users can create subdirectories, searching may become
complicated as the number of subdirectories grows.
• Users cannot modify the root directory data.
• If files do not fit in one directory, they may have to be placed in other
directories.
Acyclic Graph Structure: None of the three directory structures above allows
a single file to be accessed from multiple directories: a file or subdirectory
can be reached only through the directory in which it resides. This problem is
solved in the acyclic graph directory structure, where a file in one directory
can be accessed from multiple directories, so files can be shared between
users. It is designed so that multiple directories can point to the same
directory or file with the help of links.
In the figure below, this can be observed clearly, where a file is shared
between multiple users. If any user makes a change, it is reflected for all the
users sharing the file.
Advantages:
• Sharing of files and directories between multiple users is allowed.
• Searching becomes very easy.
• Flexibility is increased, as multiple users have sharing and editing access
to a file.
Disadvantages:
• Because of its complex structure, this directory structure is difficult to
implement.
• A user must be very cautious when editing or even deleting a file, since it
may be accessed by multiple users.
• To delete a file permanently, all references (links) to it must be deleted.
(b) Explain various file allocation methods on secondary storage.
Answer: The allocation methods define how the files are stored in the disk
blocks. There are three main disk space or file allocation methods.
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation
The main idea behind these methods is to provide:
• Efficient disk space utilization.
• Fast access to the file blocks.
All the three methods have their own advantages and disadvantages as
discussed below:
Contiguous Allocation: In this scheme, each file occupies a contiguous set
of blocks on the disk. For example, if a file requires n blocks and is given a
block b as the starting location, then the blocks assigned to the file will
be: b, b+1, b+2,……b+n-1. This means that given the starting block address
and the length of the file (in terms of blocks required), we can determine the
blocks occupied by the file.
The directory entry for a file with contiguous allocation contains
• Address of starting block
• Length of the allocated portion.
The file ‘mail’ in the following figure starts at block 19 with a length of 6
blocks; therefore, it occupies blocks 19, 20, 21, 22, 23, and 24.
Advantages:
• Both the Sequential and Direct Accesses are supported by this.
For direct access, the address of the kth block of the file which
starts at block b can easily be obtained as (b+k).
• This is extremely fast, since the number of seeks is minimal
because the file's blocks are contiguous.
Disadvantages:
• This method suffers from both internal and external
fragmentation. This makes it inefficient in terms of memory
utilization.
• Increasing file size is difficult because it depends on the
availability of contiguous memory at a particular instance.
Linked List Allocation: In this scheme, each file is a linked list of disk
blocks which need not be contiguous. The disk blocks can be scattered
anywhere on the disk. The directory entry contains a pointer to the starting
and the ending file block.
Each block contains a pointer to the next block occupied by the file. The
file ‘jeep’ in the following image shows how the blocks can be scattered
over the disk. The last block (25) contains -1, a null pointer, and does not
point to any other block.
Advantages:
• This is very flexible in terms of file size. File size can be increased
easily since the system does not have to look for a contiguous
chunk of memory.
• This method does not suffer from external fragmentation. This
makes it relatively better in terms of memory utilization.
Disadvantages:
• Because the file blocks are distributed randomly on the disk, a
large number of seeks are needed to access every block
individually. This makes linked allocation slower.
• It does not support random or direct access. We can not directly
access the blocks of a file. A block k of a file can be accessed by
traversing k blocks sequentially (sequential access ) from the
starting block of the file via block pointers.
• Pointers required in the linked allocation incur some extra
overhead.
Indexed Allocation: In this scheme, a special block known as the Index
block contains the pointers to all the blocks occupied by a file. Each file has
its own index block. The ith entry in the index block contains the disk address
of the ith file block. The directory entry contains the address of the index
block as shown in the image:
Advantages:
• This supports direct access to the blocks occupied by the file and
therefore provides fast access to the file blocks.
• It overcomes the problem of external fragmentation.
Disadvantages:
• The pointer overhead for indexed allocation is greater than
linked allocation.
• For very small files, say files that span only 2-3 blocks, indexed
allocation keeps one entire block (the index block) just for the
pointers, which is inefficient in terms of memory utilization. In
linked allocation, by contrast, we lose the space of only one
pointer per block.
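How the k-th block of a file is located under each of the three schemes can be sketched as follows; the ‘jeep’ pointer chain follows the blocks mentioned in the text (ending at block 25 with a -1 null pointer), and the remaining numbers are illustrative.

```python
# Block lookup under the three allocation schemes (0-indexed k).
def contiguous_block(start, k):
    return start + k                 # direct access: block (b + k)

def linked_block(next_ptr, start, k):
    b = start
    for _ in range(k):               # must traverse k pointers sequentially
        b = next_ptr[b]
    return b

def indexed_block(index_block, k):
    return index_block[k]            # one lookup in the file's index block

# Linked chain for 'jeep': 9 -> 16 -> 1 -> 10 -> 25, with -1 as null pointer.
next_ptr = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}
```

Contiguous and indexed allocation answer the query in constant time, while linked allocation must follow k pointers, which is exactly why it cannot support efficient direct access.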