QUESTION AND ANSWER BANK
CST 206 OPERATING SYSTEM
MODULE II
PART A (3 MARKS)
1. What is a process? What is a child process?
A process is a program in execution. It is the unit of work in a modern operating system. A
process is an active entity with a program counter specifying the next instructions to execute and a set
of associated resources. It also includes the process stack, containing temporary data and a data section
containing global variables.
A process may create several new processes, via a create-process system call, during the course
of execution. The creating process is called a parent process, whereas the new processes are called the
children of that process.
2. What are the various states of process?
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. Each process may be in one of the following states:
a. New: The process is being created.
b. Running: Instructions are being executed.
c. Waiting: The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
d. Ready: The process is waiting to be assigned to a processor.
e. Terminated: The process has finished execution.
3. What is a process control block?
Each process is represented in the operating system by a process control block (PCB) – also called
a task control block. It contains many pieces of information associated with a specific process, such as
the process state, program counter, CPU registers, CPU-scheduling information, memory-management
information, accounting information, and I/O status information.
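For illustration only, the following is a hypothetical C structure sketching the kind of fields a PCB typically holds; the field names are illustrative and not taken from any particular operating system.

/* Hypothetical sketch of a process control block (illustrative fields). */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier              */
    enum proc_state state;            /* new, ready, running, ...        */
    unsigned long   program_counter;  /* address of next instruction     */
    unsigned long   registers[16];    /* saved CPU register contents     */
    int             priority;         /* CPU-scheduling information      */
    void           *page_table;       /* memory-management information   */
    unsigned long   cpu_time_used;    /* accounting information          */
    int             open_files[16];   /* I/O status information          */
    struct pcb     *next;             /* link for ready/device queues    */
};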
4. Explain CPU-bound and I/O-bound processes.
CPU-bound processes are those that spend most of their time doing computation. They have long
CPU bursts.
I/O-bound processes are those that spend most of their time waiting for I/O. They have short
CPU bursts.
5. What is the use of job queues, ready queues & device queues?
As processes enter the system, they are put into a job queue. This queue consists of all processes
in the system.
The processes that are residing in main memory and are ready & waiting to execute are kept on
a list called ready queue.
The list of processes waiting for a particular I/O device is kept in the device queue.
6. What is meant by degree of multiprogramming?
The long-term scheduler controls the degree of multiprogramming – the number of processes
in memory.
7. What is context switch?
Switching the CPU to another process requires saving the state of the old process and loading
the saved state for the new process. This task is known as context switch. The context of a process is
represented in the PCB of a process.
8. What is meant by CPU scheduler?
Whenever the CPU becomes idle, the operating system must select one of the processes in the
ready queue to be executed. The selection process is carried out by the short-term-scheduler (or CPU
scheduler).
9. Define CPU scheduling. What are the two types of scheduling? Explain.
CPU scheduling is the process of switching the CPU among various processes. CPU scheduling is
the basis of multi-programmed operating systems. By switching the CPU among processes, the
operating system can make the computer more productive.
The two types of scheduling are:
a. Preemptive scheduling
b. Non-preemptive scheduling
Preemptive scheduling is scheduling in which a running process can be suspended before it
finishes (for example, when its allotted time interval expires or a higher-priority process becomes
ready), and the CPU is then given to another process.
Non-preemptive scheduling is scheduling in which a process keeps the CPU until it completes
its operation or voluntarily releases the CPU.
10. What is a dispatcher? What is meant by dispatch latency?
The dispatcher is the module that gives control of the CPU to the process selected by the short-
term-scheduler. This function involves:
a. Switching context
b. Switching to user mode
c. Jumping to proper location in the user program to restart that program.
The time taken for the dispatcher to stop one process and start another running is known as
dispatch latency.
11. List the various scheduling criteria.
The scheduling criteria include the following:
a. CPU utilization
b. Throughput
c. Turnaround time
d. Waiting time
e. Response time
12. Define throughput, turnaround time, waiting time, response time.
The number of processes completed per time unit is called throughput.
The interval from the time of submission of a process to the time of completion is the turnaround
time. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.
Waiting time is the sum of the periods spent waiting in the ready queue.
Response time is the time from the submission of a request until the first response is produced.
13. Name some CPU scheduling algorithms.
Some of the CPU scheduling algorithms are as follows:
a. First-Come, First-Served scheduling
b. Shortest-Job-First scheduling
c. Priority scheduling
d. Round-Robin scheduling
e. Multilevel queue scheduling
f. Multilevel feedback queue scheduling
14. Explain First-Come, First-Served scheduling.
In the First-Come, First-Served scheduling, the process that requests the CPU first is allocated
the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a
process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue. The running process is then removed from the queue.
15. How processes are scheduled using Shortest-Job-First scheduling?
Shortest-Job-First scheduling algorithm associates with each process the length of the latter’s
next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU
burst. A more appropriate term would be the shortest next CPU burst, because the scheduling is done
by examining the length of the next CPU burst of a process, rather than its total length.
16. Write short notes on priority scheduling.
A priority is associated with each process, and the CPU is allocated to the process
with the highest priority. Equal-priority processes are scheduled in FCFS order.
17. What is aging?
Aging is a technique of gradually increasing the priority of processes that wait in the system for
a long time.
18. Which scheduling does time-sharing systems use? Explain.
The Round-Robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is
similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time,
called a time quantum (or time slice), is defined. A time quantum is generally from 10 to 100
milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready
queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
19. Explain multilevel queue scheduling.
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some property of
process, such as memory size, process priority, or process type. Each queue has its own scheduling
algorithm. For example, separate queues might be used for foreground and background processes. The
foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by
an FCFS algorithm.
20. How are processes scheduled in multilevel feedback queue scheduling?
Multilevel feedback queue scheduling allows a process to move between queues. The idea is to
separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will
be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the
higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved
to a higher-priority queue. This form of aging prevents starvation.
21. What is meant by thread?
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it
comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads
belonging to the same process its code section, data section, and other operating-system resources,
such as open files and signals.
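As a hedged illustration using POSIX threads (one concrete threading API, not mandated by the syllabus), the sketch below creates two threads that share the process's global data section while each keeps its own stack and registers.

/* Minimal POSIX-threads sketch: two threads of one process share the
   global array `data` (the data section); each has its own stack.
   Compile with:  gcc demo.c -pthread                                   */
#include <pthread.h>
#include <stdio.h>

int data[4] = {1, 2, 3, 4};            /* shared by all threads of the process */

static void *partial_sum(void *arg)
{
    int half = *(int *)arg;            /* 0 = first half, 1 = second half */
    long sum = 0;                      /* local variable on this thread's stack */
    for (int i = half * 2; i < half * 2 + 2; i++)
        sum += data[i];
    return (void *)sum;
}

int main(void)
{
    pthread_t t1, t2;
    int a = 0, b = 1;
    void *s1, *s2;

    pthread_create(&t1, NULL, partial_sum, &a);
    pthread_create(&t2, NULL, partial_sum, &b);
    pthread_join(t1, &s1);
    pthread_join(t2, &s2);
    printf("total = %ld\n", (long)s1 + (long)s2);
    return 0;
}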
22. What are the benefits of multithreaded programming?
The benefits of multithreaded programming can be broken down into four major categories:
• Responsiveness
• Resource sharing
• Economy
• Utilization of multiprocessor architectures
23. List the types of threading implementation.
The three common types of threading implementation are:
a. Many-to-one model: maps many user-level threads to one kernel thread.
b. One-to-one model: maps each user thread to a kernel thread.
c. Many-to-many model: multiplexes many user-level threads to a smaller or equal number of
kernel threads.
24. What is the use of fork and exec system calls?
Fork is a system call by which a new process is created.
Exec is also a system call, which is used after a fork by one of the two processes to replace the
process memory space with a new program.
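A minimal POSIX sketch of the usual fork-then-exec pattern follows; the program run by exec (ls) is only an example.

/* fork() creates a child; the child replaces its memory image with a
   new program using exec, while the parent waits for it to finish.    */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* create a child process        */

    if (pid < 0) {                    /* fork failed                   */
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* child: load a new program     */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("exec");               /* reached only if exec fails    */
        exit(1);
    } else {                          /* parent: wait for the child    */
        wait(NULL);
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}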
25. What are the two models of communication?
The two models of communication are:
a. Message-passing model: information is exchanged through an inter process
communication (IPC) facility provided by the operating system.
b. Shared-memory model: processes use map-memory system calls to gain access to
regions of memory owned by other processes.
26. What is meant by Inter Process Communication?
IPC provides a mechanism to allow processes to communicate and to synchronize their actions
without sharing the same address space. IPC is particularly useful in a distributed environment where
the communicating processes may reside on different computers connected with a network.
27. Write notes on message-passing system.
The function of a message system is to allow processes to communicate with one another
without the need to resort to shared data. In this scheme, services are provided as ordinary user
processes. That is, the services operate outside of the kernel. Communication among the user processes
is accomplished through the passing of messages. An IPC facility provides at least two operations:
send (message) and receive (message).
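As one concrete, illustrative realization of send/receive between related processes, the sketch below uses a POSIX pipe as the IPC facility; no data is shared between the two processes except the messages themselves.

/* Parent "sends" a message by writing to a pipe; the child "receives"
   it by reading from the other end.                                    */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                          /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                 /* child: receive(message)       */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }
    /* parent: send(message) */
    close(fd[0]);
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}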
28. Mention the methods for implementing a link and the send/receive operations.
The methods for logically implementing a link and send/receive operations are as follows:
a. Direct or indirect communication
b. Symmetric or asymmetric communication
c. Automatic or explicit buffering
d. Send by copy or send by reference
e. Fixed-size or variable-sized messages
29. Explain direct communication.
With direct communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication. In this scheme, the send and receive primitives are defined
as follows:
a. send(P, message) – Send a message to process P.
b. receive(Q, message) – Receive a message from process Q.
30. Mention the properties of the communication link in direct communication.
A communication link in direct communication scheme has the following properties:
a. A link is established automatically between every pair of processes that want to communicate.
The processes need to know only each other's identity to communicate.
b. A link is associated with exactly two processes.
c. Exactly one link exists between each pair of processes.
31. Write notes on indirect communication.
With indirect communication, the messages are sent to and received from mailboxes, or ports. A
mailbox can be viewed abstractly as an object into which messages can be placed by processes and from
which messages can be removed. Each mailbox has a unique identification. In this scheme, a process
can communicate with some other process via a number of different mailboxes. Two processes can
communicate only if they share a mailbox. The send and receive primitives are defined as follows:
a. send(A, message) – Send a message to mailbox A.
b. receive(A, message) – Receive a message from mailbox A.
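POSIX message queues are one practical realization of the mailbox idea; the sketch below is illustrative (the queue name and message are made up, and on Linux it is typically compiled with -lrt).

/* The named queue "/demo_mailbox" plays the role of mailbox A:
   send(A, message) becomes mq_send(), receive(A, message) becomes
   mq_receive().  Compile with:  gcc mqdemo.c -lrt                     */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    char buf[64];

    mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(mq, "hello mailbox", 14, 0);                /* send(A, message)    */
    ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL); /* receive(A, message) */
    if (n >= 0)
        printf("got %zd bytes: %s\n", n, buf);

    mq_close(mq);
    mq_unlink("/demo_mailbox");                         /* remove the mailbox  */
    return 0;
}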
32. List out the properties of a communication link in indirect communication scheme.
In indirect communication scheme, a communication link has the following properties:
a. A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
b. A link may be associated with more than two processes.
c. A number of different links may exist between each pair of communicating processes, with
each link corresponding to one mailbox.
33. Name the different options for implementing send and receive primitives.
Communication between processes takes place by calls to send and receive primitives. There
are different design options for implementing each primitive. Message passing may be either blocking
or non-blocking – also known as synchronous and asynchronous.
a. Blocking send: The sending process is blocked until the message is received by the receiving
process or by the mailbox.
b. Non-blocking send: The sending process sends the message and resumes operation.
c. Blocking receive: The receiver blocks until a message is available.
d. Non-blocking receive: The receiver retrieves either a valid message or a null.
34. Write notes on buffering in IPC.
Whether the communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue. Basically, such a queue can be implemented in three ways:
a. Zero capacity: The queue has maximum length 0; thus the link cannot have any messages
waiting in it. In this case, the sender must block until the recipient receives the message.
b. Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If
the queue is not full when a new message is sent, the latter is placed in the queue (either the
message is copied or a pointer to the message is kept), and the sender can continue execution
without waiting. The link has a finite capacity, however. If the link is full, the sender must block
until space is available in the queue.
c. Unbounded capacity: The queue has potentially infinite length; thus, any number of messages
can wait in it. The sender never blocks.
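The bounded-capacity case can be sketched with a small in-process queue protected by a mutex and condition variables: send blocks while the queue already holds n messages, and receive blocks while it is empty. This is a pthread-based illustration, not a kernel implementation.

/* Bounded-capacity message queue of N slots.  msg_send() blocks while
   the queue is full; msg_receive() blocks while it is empty.           */
#include <pthread.h>
#include <stdio.h>

#define N 8                               /* finite capacity n           */

static int  queue[N];
static int  count = 0, head = 0, tail = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

void msg_send(int msg)
{
    pthread_mutex_lock(&lock);
    while (count == N)                    /* full: sender must block     */
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = msg;
    tail = (tail + 1) % N;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

int msg_receive(void)
{
    pthread_mutex_lock(&lock);
    while (count == 0)                    /* empty: receiver blocks      */
        pthread_cond_wait(&not_empty, &lock);
    int msg = queue[head];
    head = (head + 1) % N;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
    return msg;
}

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 16; i++) msg_send(i);
    return NULL;
}

int main(void)                            /* small demo: one sender, one receiver */
{
    pthread_t prod;
    pthread_create(&prod, NULL, producer, NULL);
    for (int i = 0; i < 16; i++)
        printf("received %d\n", msg_receive());
    pthread_join(prod, NULL);
    return 0;
}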
PART B (14 MARKS)
35. Explain various CPU scheduling algorithms
First Come First Serve (FCFS)
In the First-Come, First-Served scheduling, the process that requests the CPU first is allocated
the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a
process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free,
it is allocated to the process at the head of the queue. The running process is then removed from
the queue.
○ Advantages of FCFS
■ Simple
■ Easy to implement
■ First come, first served
○ Disadvantages of FCFS
■ The scheduling method is non-preemptive; the process will run to completion.
■ Due to the non-preemptive nature of the algorithm, the problem of starvation may occur.
■ Although it is easy to implement, it is poor in performance since the average waiting time is
higher as compared to other scheduling algorithms.
Shortest Job First (SJF)
Shortest-Job-First scheduling algorithm associates with each process the length of the latter’s
next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU
burst. A more appropriate term would be the shortest next CPU burst, because the scheduling is done
by examining the length of the next CPU burst of a process, rather than its total length.
○ Advantages of SJF
■ Maximum throughput
■ Minimum average waiting and turnaround time
○ Disadvantages of SJF
■ May suffer from the problem of starvation
■ It is not implementable because the exact burst time for a process can't be known in
advance.
Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the
highest priority. Equal-priority processes are scheduled in FCFS order.
Round Robin Scheduling
In RR scheduling the CPU time is divided as small slots and assigned to each process. If the CPU
time slot is 2ms then this slot is provided to all processes one by one. When one process executes for
2ms it is pre-empted and next process is allotted the CPU and so on.
○ Advantages
1. It is actually implementable in the system because it is not dependent on the burst time.
2. It doesn't suffer from the problem of starvation or convoy effect.
3. All the jobs get a fair allocation of CPU.
○ Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context-switching overhead in the system.
3. Deciding a perfect time quantum is a really difficult task in the system.
36. There are 5 processes with process ID P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at
time 1, P2 at time 2, P3 arrives at time 3 and Process P4 arrives at time 4 in the ready
queue. The processes and their respective Arrival and Burst time are given in the following
table.
Calculate Average waiting time using FCFS scheduling
● The turnaround time and the waiting time are calculated using the following formulas:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
The average waiting Time is determined by summing the respective waiting time of all
the processes and dividing the sum by the total number of processes.
● Solution: (Gantt chart)
● Avg Waiting Time = 31/5 (a C sketch of this calculation follows below)
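Since the question's table is not reproduced here, the following sketch applies the formulas above under FCFS to hypothetical arrival and burst values; replacing them with the actual table entries would reproduce the 31/5 result.

/* FCFS: processes are served strictly in arrival order.
   Turnaround = Completion - Arrival, Waiting = Turnaround - Burst.
   The sample values below are hypothetical, not the question's table.  */
#include <stdio.h>

int main(void)
{
    int at[] = {0, 1, 2, 3, 4};        /* arrival times (hypothetical)   */
    int bt[] = {4, 3, 1, 2, 5};        /* burst times   (hypothetical)   */
    int n = 5, time = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {      /* arrays are already in arrival order */
        if (time < at[i])
            time = at[i];              /* CPU idles until the process arrives */
        time += bt[i];                 /* completion time of process i        */
        int tat = time - at[i];        /* turnaround time                     */
        int wt  = tat - bt[i];         /* waiting time                        */
        total_wt += wt;
        printf("P%d: CT=%d TAT=%d WT=%d\n", i, time, tat, wt);
    }
    printf("Average waiting time = %d/%d\n", total_wt, n);
    return 0;
}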
37. There are five jobs named P1, P2, P3, P4 and P5. Their arrival time and burst time are given
in the table below.
Calculate the average waiting time using SJF scheduling
■ Since no process arrives at time 0, there will be an empty slot in the Gantt chart from
time 0 to 1 (the time at which the first process arrives).
■ According to the algorithm, the OS schedules the process having the lowest burst time
among the available processes in the ready queue.
■ At that point, there is only one process in the ready queue, so the scheduler schedules it
on the processor regardless of its burst time.
■ This process executes till 8 units of time.
■ By then, three more processes have arrived in the ready queue, so the scheduler chooses
the process with the lowest burst time among them.
■ Among the processes given in the table, P3 will be executed next since it has the lowest
burst time among all the available processes.
■ The procedure continues in the same way in the shortest job first (SJF) scheduling
algorithm.
■ Avg Waiting Time = 27/5 (a C sketch is given below)
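The following C sketch implements non-preemptive SJF. The arrival and burst values used here are a reconstruction consistent with the walkthrough above and with the stated average of 27/5 (arrivals 1, 3, 6, 7, 9 and bursts 7, 3, 2, 10, 8); they are not a copy of the original table.

/* Non-preemptive SJF: whenever the CPU is free, pick the arrived,
   unfinished process with the smallest burst time and run it to
   completion.  The data is reconstructed, as noted above.              */
#include <stdio.h>

int main(void)
{
    int at[] = {1, 3, 6, 7, 9};        /* arrival times (reconstructed)  */
    int bt[] = {7, 3, 2, 10, 8};       /* burst times   (reconstructed)  */
    int n = 5, done[5] = {0};
    int time = 0, finished = 0, total_wt = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)    /* shortest available job         */
            if (!done[i] && at[i] <= time &&
                (pick == -1 || bt[i] < bt[pick]))
                pick = i;

        if (pick == -1) { time++; continue; }   /* CPU idle: nothing arrived */

        time += bt[pick];              /* run the chosen job to completion  */
        int tat = time - at[pick];
        int wt  = tat - bt[pick];
        total_wt += wt;
        done[pick] = 1;
        finished++;
        printf("P%d: CT=%d TAT=%d WT=%d\n", pick + 1, time, tat, wt);
    }
    printf("Average waiting time = %d/%d\n", total_wt, n);
    return 0;
}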
38. There are SIX jobs P1, P2, P3, P4, P5 and P6. Their arrival time and burst time are given
below in the table. Calculate average waiting time using shortest remaining time
scheduling
■ Avg Waiting Time = 24/6
■ The Gantt chart is prepared according to the arrival and burst time given in the table.
1. At time 0, the only available process is P1, with a CPU burst time of 8. Since it is the
only available process in the list, it is scheduled.
2. The next process arrives at time unit 1. Since the algorithm we are using is SRTF,
which is a preemptive one, the current execution is stopped and the scheduler checks
for the process with the least burst time.
Till now, there are two processes available in the ready queue.
The OS has executed P1 for one unit of time till now; the
remaining burst time of P1 is 7 units. The burst time of Process
P2 is 4 units. Hence Process P2 is scheduled on the CPU
according to the algorithm.
3. The next process P3 arrives at time unit 2. At this time, the execution of process P2 is
stopped and the process with the least remaining burst time is searched for. Since
process P3 has 2 units of burst time, it will be given priority over the others.
4. The next process P4 arrives at time unit 3. At this arrival, the scheduler will stop the
execution of P3 and check which process has the least burst time among the available
processes (P1, P2, P3 and P4). P1 and P2 have remaining burst times of 7 units and
3 units respectively.
5. P3 and P4 have a remaining burst time of 1 unit each. Since both are equal, scheduling
is done according to their arrival time: P3 arrived earlier than P4 and therefore it is
scheduled again. The next process P5 arrives at time unit 4. By this time, process P3
has completed its execution and is no longer in the list. The scheduler compares the
remaining burst times of all the available processes. Since the remaining burst time of
process P4 is 1, which is the least among all, it is scheduled next.
6. The next process P6 arrives at time unit 5; by this time, process P4 has completed its
execution. Four processes are now available: P1 (7), P2 (3), P5 (3) and P6 (2). The
burst time of P6 is the least among all, hence P6 is scheduled. Since all the processes
are now available, the algorithm works the same as SJF: P6 is executed till its
completion and then the process with the least remaining time is scheduled.
7. Once all the processes have arrived, no preemption is done and the algorithm works
as SJF (see the sketch below).
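The C sketch below simulates SRTF one time unit at a time. The arrival and burst values are those implied by the walkthrough above (P1..P6 arriving at t = 0..5 with bursts 8, 4, 2, 1, 3 and 2), which reproduce the stated average of 24/6.

/* SRTF (preemptive SJF): at every time unit the arrived process with
   the least remaining burst time runs; ties are broken here by the
   lower process index.  Data is taken from the walkthrough above.      */
#include <stdio.h>

int main(void)
{
    int at[] = {0, 1, 2, 3, 4, 5};     /* arrival times (from walkthrough) */
    int bt[] = {8, 4, 2, 1, 3, 2};     /* burst times   (from walkthrough) */
    int n = 6, rem[6], finished = 0, total_wt = 0;

    for (int i = 0; i < n; i++) rem[i] = bt[i];

    for (int t = 0; finished < n; t++) {
        int pick = -1;
        for (int i = 0; i < n; i++)    /* least remaining time among arrived */
            if (at[i] <= t && rem[i] > 0 &&
                (pick == -1 || rem[i] < rem[pick]))
                pick = i;
        if (pick == -1) continue;      /* CPU idle this time unit            */

        rem[pick]--;                   /* run the chosen process for 1 unit  */
        if (rem[pick] == 0) {          /* process completes at time t + 1    */
            int tat = (t + 1) - at[pick];
            int wt  = tat - bt[pick];
            total_wt += wt;
            finished++;
            printf("P%d: CT=%d TAT=%d WT=%d\n", pick + 1, t + 1, tat, wt);
        }
    }
    printf("Average waiting time = %d/%d\n", total_wt, n);
    return 0;
}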
39. There are six processes named P1, P2, P3, P4, P5 and P6. Their arrival time and
burst time are given below in the table. The time quantum of the system is 4 units.
Calculate average waiting time using RR scheduling.
■ Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 units (see the sketch below)
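A C sketch of round robin with a 4-unit quantum follows. The arrival and burst values are a reconstruction consistent with the per-process waiting times listed above (arrivals 0, 1, 2, 3, 4, 6 and bursts 5, 6, 3, 1, 5, 4), not a copy of the original table.

/* Round robin with time quantum 4.  The ready queue is circular: newly
   arrived processes are enqueued before a preempted process is put back
   at the tail.  Data is reconstructed, as noted above.                  */
#include <stdio.h>

#define N 6

int at[N] = {0, 1, 2, 3, 4, 6};        /* arrival times (reconstructed)  */
int bt[N] = {5, 6, 3, 1, 5, 4};        /* burst times   (reconstructed)  */

int main(void)
{
    int quantum = 4, rem[N], queue[64], head = 0, tail = 0;
    int time = 0, next = 0, finished = 0, total_wt = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];

    while (finished < N) {
        if (head == tail) {            /* ready queue empty: CPU idles    */
            time = at[next] > time ? at[next] : time;
            queue[tail++] = next++;
            continue;
        }
        int p = queue[head++];         /* dequeue the process at the head */
        int run = rem[p] < quantum ? rem[p] : quantum;
        time += run;
        rem[p] -= run;

        while (next < N && at[next] <= time)   /* enqueue new arrivals    */
            queue[tail++] = next++;

        if (rem[p] > 0) {
            queue[tail++] = p;         /* quantum expired: re-enqueue     */
        } else {
            int wt = time - at[p] - bt[p];     /* waiting = TAT - burst   */
            total_wt += wt;
            finished++;
            printf("P%d: CT=%d WT=%d\n", p + 1, time, wt);
        }
    }
    printf("Average waiting time = %d/%d\n", total_wt, N);
    return 0;
}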
40. There are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their priorities, arrival times
and burst times are given in the table. Calculate average waiting time using
non-preemptive priority scheduling.
● Avg Waiting Time = (0+11+2+7+12+2+18)/7 = 52/7 units
41. There are 7 processes P1, P2, P3, P4, P5, P6 and P7 given. Their respective
priorities, arrival times and burst times are given in the table below.
Calculate average waiting time using preemptive priority scheduling
● Avg Waiting Time = (0+14+0+7+1+25+16)/7 = 63/7 = 9
units