
An operating system acts as an interface between the user and the computer
system. In other words, an OS acts as an intermediary between the
user and the computer hardware, managing resources such as memory,
processing power, and input/output operations. Examples of popular
operating systems include Windows, macOS, Linux, and Android.

Basics OS Interview Questions

1. What is a process and process table?

A process is an instance of a program in execution. For example, a
web browser is a process, and a shell (or command prompt) is a
process. The operating system is responsible for managing all the
processes that are running on a computer and allocates each process a
certain amount of time to use the processor.

In addition, the operating system also allocates various other
resources that processes will need, such as computer memory or disks.
To keep track of the state of all the processes, the operating system
maintains a table known as the process table. Inside this table,
every process is listed along with the resources the process is using
and the current state of the process.
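The process table described above can be modeled as a small sketch. The `PCB` fields and the `create_process` helper are illustrative inventions for the demo, not a real kernel data structure:

```python
# Toy model of a process table: PID -> a PCB-like record (illustrative fields).
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "ready"                      # running / ready / waiting
    open_files: list = field(default_factory=list)

process_table = {}

def create_process(pid):
    """Register a new process in the table and return its PCB."""
    process_table[pid] = PCB(pid)
    return process_table[pid]

create_process(1).state = "running"
create_process(2)
print(sorted(process_table))        # [1, 2]
print(process_table[1].state)       # running
```

A real process table also records registers, scheduling data, and memory maps; this sketch only shows the lookup structure.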

2. What are the different states of the process?

Processes can be in one of three states: running, ready, or waiting.
The running state means that the process has all the resources it
needs for execution and has been given permission by the operating
system to use the processor. Only one process can be in the running
state at any given time.

The remaining processes are either in a waiting state (i.e., waiting
for some external event to occur, such as user input or disk access)
or a ready state (i.e., waiting for permission to use the processor).
In a real operating system, the waiting and ready states are
implemented as queues that hold the processes in these states.
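The queue-based implementation of the waiting and ready states mentioned above can be sketched as follows; the process names and the single blocking event are hypothetical:

```python
from collections import deque

# Sketch: the ready and waiting states modeled as FIFO queues.
ready = deque(["P2", "P3"])     # processes waiting for the CPU
waiting = deque(["P4"])         # processes waiting for I/O or another event
running = "P1"                  # only one process runs at a time

# The running process blocks on I/O: it moves to the waiting queue,
# and the scheduler dispatches the next process from the ready queue.
waiting.append(running)
running = ready.popleft()

print(running)        # P2
print(list(waiting))  # ['P4', 'P1']
```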

3. What is a Thread?

A thread is a single sequence stream within a process. Because
threads have some of the properties of processes, they are sometimes
called lightweight processes. Threads are a popular way to improve an
application through parallelism.

For example, in a browser, multiple tabs can be different threads. MS
Word uses multiple threads: one thread to format the text, another
thread to process inputs, and so on.
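A minimal illustration of threads sharing one process's memory, in the spirit of the examples above. The counter and worker function are invented for the demo; the lock guards the shared update:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # threads share the process's memory, so guard updates
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: all four threads updated the same shared variable
```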
4. What are the differences between process and thread?

A process is a program under execution, while a thread is the
smallest segment of instructions (a segment of a process) that can be
handled independently by a scheduler.

Threads are lightweight processes that share the same address space,
including the code section, the data section, and operating system
resources such as open files and signals. However, each thread has
its own program counter (PC), register set, and stack space, allowing
threads to execute independently within the same process context.

Unlike processes, threads are not fully independent entities; they
can communicate and synchronize more efficiently, which makes them
suitable for concurrent and parallel execution in a multi-threaded
environment.

5. What are the benefits of multithreaded programming?

Multithreaded programming makes the system more responsive and
enables resource sharing. It also takes advantage of multiprocessor
architectures and is more economical, which is why it is preferred.

6. What is Thrashing?

Thrashing is a situation in which the performance of a computer
degrades or collapses. Thrashing occurs when a system spends more
time processing page faults than executing transactions. While
processing page faults is necessary to realize the benefits of
virtual memory, thrashing has a negative effect on the system.

As the page fault rate increases, more transactions need processing
from the paging device. The queue at the paging device grows,
resulting in increased service time for a page fault.

7. What is a Buffer?

A buffer is a memory area that stores data being transferred between
two devices or between a device and an application.

8. What is virtual memory?

Virtual memory creates the illusion that each user has one or more
contiguous address spaces, each beginning at address zero. The sizes
of such virtual address spaces are generally very large. The idea of
virtual memory is to use disk space to extend the RAM: running
processes don't need to care whether the memory is backed by RAM or
disk. The illusion of such a large amount of memory is created by
subdividing the virtual memory into smaller pieces, which can be
loaded into physical memory whenever they are needed by a process.

9. Explain the main purpose of an operating system.

An operating system acts as an intermediary between the user of a
computer and the computer hardware. The purpose of an operating
system is to provide an environment in which a user can execute
programs conveniently and efficiently.

An operating system is software that manages computer hardware. The
hardware must provide appropriate mechanisms to ensure the correct
operation of the computer system and to prevent user programs from
interfering with the proper operation of the system.

10. What is demand paging?

The process of loading a page into memory on demand (whenever a page
fault occurs) is known as demand paging.

11. What is a kernel?

A kernel is the central component of an operating system; it manages
the operations of the computer and its hardware, primarily memory and
CPU time. The kernel acts as a bridge between applications and the
data processing performed at the hardware level, using inter-process
communication and system calls.

12. What are the different scheduling algorithms?

1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-Next (SJN) Scheduling
3. Priority Scheduling
4. Shortest Remaining Time
5. Round Robin (RR) Scheduling
6. Multiple-Level Queues Scheduling

13. Describe the objective of multi-programming.

Multi-programming increases CPU utilization by organizing jobs (code
and data) so that the CPU always has one to execute. The main
objective of multi-programming is to keep multiple jobs in main
memory. If one job gets occupied with I/O, the CPU can be assigned to
other jobs.
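Demand paging, as described in question 10, can be sketched as a toy simulation. `PAGE_SIZE`, `NUM_FRAMES`, and the arbitrary-eviction policy are illustrative assumptions, not how a real MMU or page-replacement algorithm works:

```python
# Hypothetical sketch of demand paging: pages are loaded into a limited
# set of frames only when first referenced (a "page fault").
PAGE_SIZE = 4096
NUM_FRAMES = 3

page_table = {}        # page number -> frame number (resident pages only)
free_frames = list(range(NUM_FRAMES))
faults = 0

def evict():
    """Evict an arbitrary resident page (a real OS would use LRU, clock, etc.)."""
    victim, frame = page_table.popitem()
    return frame

def access(virtual_address):
    """Translate a virtual address, loading its page on demand."""
    global faults
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:           # page fault: page not in memory
        faults += 1
        frame = free_frames.pop() if free_frames else evict()
        page_table[page] = frame         # "load" the page from disk
    return page_table[page] * PAGE_SIZE + offset

for addr in [0, 100, 5000, 9000, 50, 13000]:
    access(addr)
print(faults)  # 4: first touches of pages 0, 1, 2, and 3 fault
```

Note that the second access to page 0 (address 50) is a hit, while page 3 forces an eviction because only three frames exist.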

14. What is the time-sharing system?

Time-sharing is a logical extension of multiprogramming. The CPU
performs many tasks, switching between them so frequently that the
user can interact with each program while it is running. A
time-shared operating system allows multiple users to share a
computer simultaneously.

15. What problems do we face in a computer system without an OS?

 Poor resource management
 Lack of a user interface
 No file system
 No networking
 No error handling

16. Give some benefits of multithreaded programming.

A thread is also known as a lightweight process. The idea is to
achieve parallelism by dividing a process into multiple threads.
Threads within the same process run in a shared memory space.

17. Briefly explain FCFS.

FCFS stands for First Come, First Served. In the FCFS scheduling
algorithm, the job that arrived first in the ready queue is allocated
to the CPU, then the job that came second, and so on. FCFS is a
non-preemptive scheduling algorithm: a process holds the CPU until it
either terminates or performs I/O. Thus, if a longer job has been
assigned to the CPU, the many shorter jobs after it will have to
wait.

18. What is the RR scheduling algorithm?

The round-robin scheduling algorithm schedules each job fairly in a
time slot or quantum: a job that has not completed by the end of its
quantum is interrupted, and the next job in the queue runs. This
cycling through jobs in quantum-sized slices makes the scheduling
fair.

 Round-robin is cyclic in nature, so starvation doesn't occur
 Round-robin is a variant of first-come, first-served scheduling
 No priority or special importance is given to any process or task
 RR scheduling is also known as time-slicing scheduling

19. Enumerate the different RAID levels.

A redundant array of independent disks (RAID) is a set of several
physical disk drives that the operating system sees as a single
logical unit. It played a significant role in narrowing the gap
between increasingly fast processors and slow disk drives. RAID has
different levels:

 Level-0
 Level-1
 Level-2
 Level-3
 Level-4
 Level-5
 Level-6

20. What is Banker's algorithm?

The Banker's algorithm is a resource allocation and deadlock
avoidance algorithm that tests for safety by simulating the
allocation of the predetermined maximum possible amounts of all
resources, then performs a safe-state check to test for possible
activities before deciding whether the allocation should be allowed
to continue.

21. State the main difference between logical and physical address
space.

Basic: the logical address is generated by the CPU; the physical
address is located in a memory unit.

Address space: the logical address space is the set of all logical
addresses generated by the CPU in reference to a program; the
physical address space is the set of all physical addresses mapped to
the corresponding logical addresses.

Visibility: users can view the logical address of a program; users
can never view the physical address of the program.

Generation: the logical address is generated by the CPU; the physical
address is computed by the MMU.

Access: the user can use the logical address to access the physical
address; the user can reach physical addresses only indirectly, never
directly.
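The round-robin policy from question 18 can be sketched as a small simulation. The process names and burst times are made up for illustration, and real schedulers also account for arrival times and I/O:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling; returns completion time per process.

    burst_times: {pid: cpu_time_needed} -- illustrative inputs, not a real OS API.
    """
    remaining = dict(burst_times)
    queue = deque(burst_times)               # ready queue, FIFO order
    clock = 0
    completion = {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])   # run for at most one quantum
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = clock          # process finished
        else:
            queue.append(pid)                # preempted: back of the queue
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}: the short job P3 finishes early
# instead of waiting behind P1, unlike under FCFS.
```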
22. How does dynamic loading aid in better memory space utilization?

With dynamic loading, a routine is not loaded until it is called.
This method is especially useful when large amounts of code are
needed to handle infrequently occurring cases, such as error
routines.

23. What are overlays?

The concept of overlays is that a running process does not use the
complete program at the same time; it uses only some part of it. The
overlay concept says that whatever part you require, you load it, and
once that part is done, you unload it, which means you pull it back,
load the new part you require, and run it.

Formally: "the process of transferring a block of program code or
other data into internal memory, replacing what is already stored".

24. What is fragmentation?

As processes are stored in and removed from memory, the free memory
space is broken into pieces that are too small to be used by other
processes. Fragmentation is the situation in which memory blocks
cannot be allocated to processes because of their small size, so the
blocks remain unused. It occurs in a dynamic memory allocation scheme
when the free blocks are so small that they cannot satisfy any
request.

25. What is the basic function of paging?

Paging is a technique used for non-contiguous memory allocation. It
is a fixed-size partitioning scheme: both main memory and secondary
memory are divided into equal fixed-size partitions. The partitions
of secondary memory are called pages, and the partitions of main
memory are called frames.

Paging is a memory management method used to fetch processes from
secondary memory into main memory in the form of pages. In paging,
each process is split into parts, where the size of each part is the
same as the page size; the size of the last part may be less than the
page size. The pages of a process are held in the frames of main
memory depending on their availability.

26. How does swapping result in better memory management?

Swapping is a simple memory/process management technique used by the
operating system (OS) to increase the utilization of the processor:
some blocked processes are moved from main memory to secondary
memory, forming a queue of temporarily suspended processes, and
execution continues with a newly arrived process. At regular
intervals set by the operating system, processes can be copied from
main memory to a backing store and then copied back later. Swapping
allows more processes to be run than can fit into memory at one time.

27. Name some classic synchronization problems.

 Bounded-buffer
 Readers-writers
 Dining philosophers
 Sleeping barber

28. What is the Direct Access Method?

The direct access method is based on a disk model of a file: the file
is viewed as a numbered sequence of blocks or records, and arbitrary
blocks can be read or written. Direct access is advantageous when
accessing large amounts of information. Relatedly, direct memory
access (DMA) is a method that allows an input/output (I/O) device to
send or receive data directly to or from main memory, bypassing the
CPU to speed up memory operations; the process is managed by a chip
known as a DMA controller (DMAC).

29. When does thrashing occur?

Thrashing occurs when processes on the system frequently access pages
that are not available in memory.

30. What is the best page size when designing an operating system?

The best page size varies from system to system, so there is no
single best page size. There are different factors to consider in
order to come up with a suitable page size, such as the page table,
paging time, and the effect on the overall efficiency of the
operating system.

31. What is multitasking?

Multitasking is a logical extension of multiprogramming in which
multiple programs run concurrently: more than one task is executed at
the same time, and the multiple tasks, also known as processes, share
common processing resources such as the CPU.

32. What is caching?

A cache is a smaller, faster memory that stores copies of data from
frequently used main memory locations. There are various independent
caches in a CPU, which store instructions and data. Cache memory is
used to reduce the average time to access data from main memory.

33. What is spooling?

Spooling stands for simultaneous peripheral operations online. It
refers to putting jobs in a buffer, a special area in memory or on a
disk, where a device can access them when it is ready. Spooling is
useful because devices access data at different rates.

34. What is the functionality of an Assembler?

The assembler is used to translate a program written in assembly
language into machine code. The source program, containing assembly
language instructions, is the assembler's input; the output generated
by the assembler is the object code or machine code understandable by
the computer.

35. What are interrupts?

Interrupts are signals emitted by hardware or software when a process
or an event needs immediate attention. An interrupt alerts the
processor to a high-priority process requiring interruption of the
currently running process. In I/O devices, one of the bus control
lines is dedicated to this purpose, and the handler invoked in
response is called the Interrupt Service Routine (ISR).

36. What is GUI?

GUI is short for Graphical User Interface. It provides users with an
interface in which actions can be performed by interacting with icons
and graphical symbols.

37. What is preemptive multitasking?

Preemptive multitasking is a type of multitasking that allows
computer programs to share the operating system (OS) and underlying
hardware resources. It divides the overall operating and computing
time between processes, and the switching of resources between
different processes occurs through predefined criteria.

38. What is a pipe and when is it used?

A pipe is a technique used for inter-process communication: a
mechanism by which the output of one process is directed into the
input of another process. It thus provides a one-way flow of data
between two related processes.

39. What are the advantages of semaphores?

 They are machine-independent.
 They are easy to implement.
 Correctness is easy to determine.
 They allow many different critical sections with different
semaphores.
 Semaphores can acquire many resources simultaneously.
 There is no waste of resources due to busy waiting.

40. What is a bootstrap program in the OS?

Bootstrapping is the process of loading a set of instructions when a
computer is first turned on or booted. During the startup process,
diagnostic tests are performed, such as the power-on self-test
(POST), which sets or checks configurations for devices and
implements routine testing for the connection of peripherals,
hardware, and external memory devices. The bootloader or bootstrap
program is then loaded to initialize the OS.
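The one-way pipe from question 38 can be demonstrated with Python's `os.pipe`. For portability this sketch keeps both ends in one process; with `os.fork` the two ends would normally live in parent and child:

```python
import os

# Minimal sketch of a pipe: a one-way channel between related processes.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello from the writer")
os.close(write_fd)               # closing the write end signals EOF to the reader

data = os.read(read_fd, 1024)    # the reader drains the pipe
os.close(read_fd)
print(data.decode())  # hello from the writer
```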
41. What is IPC?

Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their
actions. The communication between these processes can be seen as a
method of cooperation between them.

42. What are the different IPC mechanisms?

These are the methods of IPC:

 Pipes (same process origin): allow a flow of data in one direction
only, analogous to simplex systems (such as a keyboard). Data from
the output side is usually buffered until the input process, which
must share a common origin, receives it.

 Named pipes (different processes): a pipe with a specific name,
usable by processes that don't share a common origin, e.g., a FIFO,
where the data written to the pipe is accessed by name.

 Message queuing: allows messages to be passed between processes
using either a single queue or several message queues. This is
managed by the system kernel, and the messages are coordinated using
an API.

 Semaphores: used to solve problems associated with synchronization
and to avoid race conditions. These are integer values that are
greater than or equal to 0.

 Shared memory: allows the interchange of data through a defined
area of memory. Semaphore values have to be obtained before data in
shared memory can be accessed.

 Sockets: mostly used to communicate over a network between a client
and a server. They allow a standard connection that is computer- and
OS-independent.

43. What is the difference between preemptive and non-preemptive
scheduling?

Preemptive scheduling allows the operating system to interrupt a
running process and allocate the CPU to another process, typically
based on priority or a time quantum, ensuring that high-priority
tasks or tasks requiring frequent attention get timely access to the
processor. In contrast, non-preemptive scheduling dictates that once
a process starts executing, it continues to hold the CPU until it
completes its task or voluntarily yields the CPU (e.g., by entering a
waiting state for I/O), meaning it cannot be interrupted by other
processes, even if they have higher priority.

44. What is the zombie process?

A process that has finished execution but still has an entry in the
process table, kept so it can report to its parent process, is known
as a zombie process. A child process always first becomes a zombie
before being removed from the process table. The parent process reads
the exit status of the child process, which reaps the child process
entry from the process table.

45. What are orphan processes?

A process whose parent process no longer exists, i.e., the parent
either finished or terminated without waiting for its child process
to terminate, is called an orphan process.

46. What are starvation and aging in OS?

Starvation: a resource management problem in which a process does not
get the resources it needs for a long time because the resources are
being allocated to other processes.

Aging: a technique to avoid starvation in a scheduling system. It
works by adding an aging factor to the priority of each request. The
aging factor must increase the priority of the request as time passes
and must ensure that a request will eventually be the
highest-priority request.

47. Write about the monolithic kernel.

A monolithic kernel is another classification of kernel. Like the
microkernel, it manages system resources between applications and
hardware, but user services and kernel services are implemented under
the same address space. This increases the size of the kernel, and
thus the size of the operating system as well. A monolithic kernel
provides CPU scheduling, memory management, file management, and
other operating system functions through system calls. Because both
kinds of services are implemented under the same address space,
operating system execution is faster.

48. What is Context Switching?

Switching the CPU to another process means saving the state of the
old process and loading the saved state of the new process. In
context switching, the state of the old process is stored in its
Process Control Block so that the old process can later be resumed
from the same point where it was left.

49. What is the difference between the operating system and the
kernel?

Operating system: system software. It provides an interface between
the user and the computer hardware. Its main purposes are memory
management, disk management, process management, and task management.
Types of operating systems include single-user and multi-user OS,
multiprocessor OS, real-time OS, and distributed OS.

Kernel: a core component of an operating system that serves as the
main interface between the computer's physical hardware and the
processes running on it. It provides an interface between
applications and hardware and manages the system resources, including
the processor, memory, and device drivers. Types of kernels include
the monolithic kernel and the microkernel.

50. What is the difference between process and thread?

 A process means any program in execution; a thread is a segment of
a process.
 A process is less efficient in terms of communication; a thread is
more efficient in terms of communication.
 Processes are isolated; threads share memory.
 A process is called a heavyweight process; a thread is called a
lightweight process.
 Process switching requires interaction with the operating system;
thread switching does not require a call into the operating system or
an interrupt to the kernel.
 If one process is blocked, it does not affect the execution of
other processes; if one user-level thread is blocked, the other
threads in the same task cannot run until it is unblocked.
 A process has its own Process Control Block, stack, and address
space; a thread has its parent's PCB, its own Thread Control Block
and stack, and a common address space.

51. What is PCB?

The process control block (PCB) is a block used to track a process's
execution status. A PCB contains information about the process, e.g.,
registers, quantum, priority, etc. The process table is an array of
PCBs; logically, it contains a PCB for every current process in the
system.

52. When is a system in a safe state?

The set of dispatchable processes is in a safe state if there exists
at least one temporal order in which all processes can be run to
completion without resulting in a deadlock.
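The safe-state test from question 52, which is the heart of the Banker's algorithm from question 20, can be sketched as follows. The matrices are the classic textbook example, used here purely for illustration:

```python
def is_safe(available, allocation, need):
    """Banker's-style safety check (a sketch of the standard algorithm).

    available: free units of each resource type
    allocation[p]: units currently held by process p
    need[p]: further units p may still request
    Returns True if some completion order lets every process finish.
    """
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for p in range(len(allocation)):
            # p can run to completion if its remaining need fits in work
            if not finished[p] and all(n <= w for n, w in zip(need[p], work)):
                # p finishes and releases everything it holds
                work = [w + a for w, a in zip(work, allocation[p])]
                finished[p] = True
                progressed = True
    return all(finished)

# Classic 5-process, 3-resource example: safe (e.g., P1, P3, P4, P0, P2).
available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True
```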
53. What is Cycle Stealing?

Cycle stealing is a method of accessing computer memory (RAM) or the
bus without interfering with the CPU. It is similar to direct memory
access (DMA) in allowing I/O controllers to read or write RAM without
CPU intervention.

54. What are a Trap and a Trapdoor?

A trap is a software interrupt, usually the result of an error
condition; it is a non-maskable interrupt and has the highest
priority. A trapdoor is a secret, undocumented entry point into a
program, used to grant access without the normal methods of access
authentication.

55. Write the difference between a program and a process.

 A program contains a set of instructions designed to complete a
specific task; a process is an instance of an executing program.
 A program is a passive entity that resides in secondary memory; a
process is an active entity that is created during execution and
loaded into main memory.
 A program exists in a single place and continues to exist until it
is deleted; a process exists for a limited span of time, as it is
terminated after the completion of its task.
 A program is a static entity; a process is a dynamic entity.
 A program has no resource requirements; it only requires memory
space for storing its instructions. A process has high resource
requirements; it needs resources such as CPU, memory, and I/O during
its lifetime.
 A program does not have a control block; a process has its own
control block, called the Process Control Block.

56. What is a dispatcher?

The dispatcher is the module that gives a process control of the CPU
after it has been selected by the short-term scheduler. This function
involves the following:

 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that
program

57. Define the term dispatch latency.

Dispatch latency can be described as the amount of time it takes for
a system to respond to a request for a process to begin operation.
With a scheduler written specifically to honor application
priorities, real-time applications can be developed with a bounded
dispatch latency.

58. What are the goals of CPU scheduling?

 Max CPU utilization (keep the CPU as busy as possible) and fair
allocation of the CPU.
 Max throughput (the number of processes that complete their
execution per time unit).
 Min turnaround time (the time taken by a process to finish
execution).
 Min waiting time (the time a process waits in the ready queue).
 Min response time (the time at which a process produces its first
response).

59. What is a critical section?

When more than one process accesses the same code segment, that
segment is known as the critical section. The critical section
contains shared variables or resources that need to be synchronized
to maintain the consistency of data variables. In simple terms, a
critical section is a group of instructions/statements or a region of
code that needs to be executed atomically, such as accessing a
resource (a file, an input or output port, global data, etc.).

60. Name some synchronization techniques.

 Mutexes
 Condition variables
 Semaphores
 File locks

Intermediate OS Interview Questions

61. Write the difference between a user-level thread and a
kernel-level thread.

 User threads are implemented by users; kernel threads are
implemented by the OS.
 The OS doesn't recognize user-level threads; kernel threads are
recognized by the OS.
 Implementation of user threads is easy; implementation of kernel
threads is complicated.
 Context switch time is less for user-level threads; context switch
time is more for kernel-level threads.
 Context switches between user-level threads require no hardware
support; kernel-level thread switches need hardware support.
 If one user-level thread performs a blocking operation, the entire
process is blocked; if one kernel thread performs a blocking
operation, another thread can continue execution.
 User-level threads are designed as dependent threads; kernel-level
threads are designed as independent threads.

62. Write down the advantages of multithreading.

Some of the most important benefits of multithreading are:

 Improved throughput: many concurrent compute operations and I/O
requests within a single process.
 Simultaneous and fully symmetric use of multiple processors for
computation and I/O.
 Superior application responsiveness: if a request can be launched
on its own thread, applications do not freeze or show the
"hourglass"; an entire application will not block or otherwise wait,
pending the completion of another request.
 Improved server responsiveness: large or complex requests and slow
clients don't block other requests for service, so the overall
throughput of the server is much greater.
 Minimized system resource usage: threads impose minimal impact on
system resources and require less overhead to create, maintain, and
manage than a traditional process.

63. What is the difference between multithreading and multitasking?

 In multithreading, multiple threads execute at the same time in the
same or different parts of a program; in multitasking, several
programs are executed concurrently.
 In multithreading, the CPU switches between multiple threads; in
multitasking, the CPU switches between multiple tasks and processes.
 Multithreading is lightweight; multitasking is heavyweight.
 Multithreading is a feature of a process; multitasking is a feature
of the OS.
 Multithreading shares computing resources among the threads of a
single process; multitasking shares computing resources (CPU, memory,
devices, etc.) among processes.
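A counting semaphore (questions 39 and 60) can be sketched with Python's `threading.Semaphore`. The `inside` and `max_seen` counters exist only to observe that no more than two threads are ever admitted at once:

```python
import threading

# Sketch: a counting semaphore limiting how many threads hold a "resource".
sem = threading.Semaphore(2)    # at most 2 threads inside at once
max_seen = 0
inside = 0
lock = threading.Lock()

def worker():
    global inside, max_seen
    with sem:                   # acquire: the wait() / P operation
        with lock:
            inside += 1
            max_seen = max(max_seen, inside)
        # ... critical work on the shared resource would happen here ...
        with lock:
            inside -= 1
                                # leaving the with-block releases: signal() / V

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_seen <= 2)  # True: the semaphore never admitted more than 2 threads
```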
70. Write a drawback of concurrency?

 It is required to protect multiple applications from one


another.
64. What are the drawbacks of semaphores?
 It is required to coordinate multiple applications through
 Priority Inversion is a big limitation of semaphores.
additional mechanisms.
 Their use is not enforced but is by convention only.
 Additional performance overheads and complexities in operating
 The programmer has to keep track of all calls to wait and signal systems are required for switching among applications.
the semaphore.
 Sometimes running too many applications concurrently leads to
With improper use, a process may block indefinitely. Such a
situation is called Deadlock.

65. What is Peterson's approach?

It is a concurrent programming algorithm. It is used to synchronize
two processes that maintain mutual exclusion for a shared
resource. It uses two variables, a bool array flag of size 2 and an
int variable turn, to accomplish it.

66. Define the term Bounded waiting?

A system is said to satisfy bounded waiting if a process that wants
to enter its critical section will enter it in some finite time.

67. What are the solutions to the critical section problem?

There are three solutions to the critical section problem:

 Software solutions

 Hardware solutions

 Semaphores

68. What is a Banker's algorithm?

The banker's algorithm is a resource allocation and deadlock
avoidance algorithm that tests for safety by simulating the
allocation of the predetermined maximum possible amounts of all
resources.

71. What are the necessary conditions which can lead to a deadlock in
a system?

Mutual Exclusion: There is a resource that cannot be shared.

Hold and Wait: A process is holding at least one resource and waiting
for another resource, which is with some other process.

No Preemption: The operating system is not allowed to take a resource
back from a process until the process gives it back.

Circular Wait: A set of processes are waiting for each other in
circular form.

72. What are the issues related to concurrency?

 Non-atomic: Operations that are non-atomic but interruptible by
multiple processes can cause problems.

 Race conditions: A race condition occurs if the outcome depends
on which of several processes gets to a point first.

 Blocking: Processes can block waiting for resources. A process
could be blocked for a long period of time waiting for input
from a terminal. If the process is required to periodically
update some data, this would be very undesirable.

 Starvation: It occurs when a process does not obtain the service
it needs to progress.

 Deadlock: It occurs when two processes are blocked and hence
neither can proceed to execute.
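The race conditions described in question 72 are exactly what Peterson's algorithm from question 65 prevents. Below is a minimal Python sketch (illustrative only: the flag and turn names follow the answer above, the iteration count is arbitrary, and real implementations rely on atomic hardware instructions rather than a Python busy-wait):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-wait hands over quickly

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # whose turn it is to wait
counter = 0            # shared resource protected by the lock

def worker(i: int, iterations: int) -> None:
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        # entry section
        flag[i] = True
        turn = other                          # politely let the other process go first
        while flag[other] and turn == other:
            pass                              # busy-wait until it is safe to enter
        # critical section
        counter += 1
        # exit section
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i, 2000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: mutual exclusion preserved every increment
```

Without the entry/exit sections, the two threads could interleave their read-modify-write of counter and lose updates, which is the race condition from question 72.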

73. Why do we use precedence graphs?

A precedence graph is a directed acyclic graph that is used to show
the execution order of several processes in the operating system. It
has the following properties:

 Nodes of the graph correspond to individual statements of program
code.

 An edge between two nodes represents the execution order.

 A directed edge from node A to node B shows that statement A
executes first and then statement B executes.

74. Explain the resource allocation graph?

The resource allocation graph shows us the state of the system in
terms of processes and resources. One of the advantages of having a
diagram is that sometimes it is possible to see a deadlock directly
by using the RAG.

75. What is a deadlock?

Deadlock is a situation when two or more processes wait for each
other to finish and none of them ever finishes. Consider an example:
when two trains are coming toward each other on the same track and
there is only one track, neither of the trains can move once they are
in front of each other.

A similar situation occurs in operating systems when there are two or
more processes that hold some resources and wait for resources held
by the other(s).

76. What is the goal and functionality of memory management?

The goals and functionality of memory management are as follows:

 Relocation

 Protection

 Sharing

 Logical organization

 Physical organization

77. Write a difference between physical address and logical address?

 Basic: The logical address is the virtual address generated by the
CPU. The physical address is a location in a memory unit.

 Address Space: The set of all logical addresses generated by the
CPU in reference to a program is referred to as the Logical Address
Space. The set of all physical addresses mapped to the corresponding
logical addresses is referred to as the Physical Address Space.

 Visibility: The user can view the logical address of a program.
The user can never view the physical address of the program.

 Access: The user uses the logical address to access the physical
address. The user can not directly access the physical address.

 Generation: The Logical Address is generated by the CPU. The
Physical Address is computed by the MMU.

78. Explain address binding?

The association of program instructions and data to the actual
physical memory locations is called Address Binding.

79. Write different types of address binding?

Address Binding is divided into three types as follows:

 Compile-time Address Binding

 Load time Address Binding

 Execution time Address Binding

80. Write an advantage of dynamic allocation algorithms?
 When we do not know how much memory will be needed for the
program beforehand.

 When we want data structures without any upper limit of memory
space.

 When you want to use your memory space more efficiently.

 In dynamically created lists, insertions and deletions can be done
very easily just by the manipulation of addresses, whereas in the
case of statically allocated memory, insertions and deletions lead
to more movement and wastage of memory.

 When you want to use the concept of structures and linked lists
in programming, dynamic memory allocation is a must.

81. Write a difference between internal fragmentation and external
fragmentation?

 In internal fragmentation, fixed-sized memory blocks are assigned
to a process. In external fragmentation, variable-sized memory
blocks are assigned to a process.

 Internal fragmentation happens when the process is larger than the
memory block. External fragmentation happens when a process is
removed.

 The solution to internal fragmentation is the best-fit block.
Solutions for external fragmentation are compaction, paging and
segmentation.

 Internal fragmentation occurs when memory is divided into
fixed-sized partitions. External fragmentation occurs when memory is
divided into variable-size partitions based on the size of processes.

 The difference between the memory allocated and the required space
or memory is called internal fragmentation. The unused spaces formed
between non-contiguous memory fragments that are too small to serve
a new process are called external fragmentation.

82. Define the Compaction?

Compaction is the process of collecting fragments of available
memory space into contiguous blocks by moving programs and data in a
computer's memory or disk.

83. Write about the advantages and disadvantages of a hashed-page
table?

Advantages

 The main advantage is synchronization.

 In many situations, hash tables turn out to be more efficient
than search trees or any other table lookup structure. For this
reason, they are widely used in many kinds of computer software,
particularly for associative arrays, database indexing, caches,
and sets.

Disadvantages

 Hash collisions are practically unavoidable when hashing a
random subset of a large set of possible keys.

 Hash tables become quite inefficient when there are many
collisions.

 A hash table does not allow null values.
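The hashed page table from question 83 can be sketched as a bucket array with chaining, which also shows how collisions are handled; the page numbers, frame numbers, and bucket count below are invented for illustration:

```python
class HashedPageTable:
    """Toy hashed page table: maps virtual page numbers (VPNs) to frame
    numbers, using chaining to resolve hash collisions."""

    def __init__(self, num_buckets: int = 16):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, vpn: int) -> list:
        # Hash the VPN down to one of the buckets.
        return self.buckets[hash(vpn) % len(self.buckets)]

    def map(self, vpn: int, frame: int) -> None:
        chain = self._bucket(vpn)
        for entry in chain:
            if entry[0] == vpn:      # remap an already-present page
                entry[1] = frame
                return
        chain.append([vpn, frame])   # colliding pages share one chain

    def translate(self, vpn: int) -> int:
        for entry in self._bucket(vpn):
            if entry[0] == vpn:
                return entry[1]
        raise KeyError(f"page fault: vpn {vpn} not mapped")

pt = HashedPageTable(num_buckets=4)
pt.map(0, 7)
pt.map(4, 9)            # lands in the same bucket as vpn 0 when num_buckets=4
print(pt.translate(4))  # 9: found by walking the collision chain
```

With many collisions every lookup degenerates into a chain walk, which is the inefficiency noted in the disadvantages above.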

84. Write a difference between paging and segmentation?

 In paging, a program is divided into fixed-size pages. In
segmentation, the program is divided into variable-size sections.

 For paging, the operating system is accountable. For segmentation,
the compiler is accountable.

 Page size is determined by the hardware. Section size is given by
the user.

 Paging is faster in comparison to segmentation. Segmentation is
slow.

 Paging could result in internal fragmentation. Segmentation could
result in external fragmentation.

 In paging, the logical address is split into page number and page
offset. In segmentation, the logical address is split into section
number and section offset.

 Paging comprises a page table which encloses the base address of
every page. Segmentation comprises a segment table which encloses
the segment number and segment offset.

 A page table is employed to keep up the page data. A section table
maintains the section data.

 In paging, the operating system must maintain a free frame list.
In segmentation, the operating system maintains a list of holes in
the main memory.

 Paging is invisible to the user. Segmentation is visible to the
user.

 In paging, the processor needs the page number and offset to
calculate the absolute address. In segmentation, the processor uses
the segment number and offset to calculate the full address.

85. Write a definition of Associative Memory and Cache Memory?

 A memory unit accessed by content is called associative memory.
Fast and small memory is called cache memory.

 Associative memory reduces the time required to find an item
stored in memory. Cache memory reduces the average memory access
time.

 In associative memory, data is accessed by its content. In cache
memory, data is accessed by its address.

 Associative memory is used where search time must be very short.
Cache memory is used when a particular group of data is accessed
repeatedly.

 The basic characteristic of associative memory is its logic
circuit for matching its content. The basic characteristic of cache
memory is its fast access.

86. What is "Locality of reference"?

The locality of reference refers to a phenomenon in which a computer
program tends to access the same set of memory locations for a
particular time period. In other words, locality of reference refers
to the tendency of a computer program to access instructions whose
addresses are near one another.

87. Write down the advantages of virtual memory?

 A higher degree of multiprogramming.

 Allocating memory is easy and cheap.

 Eliminates external fragmentation.
 Data (page frames) can be scattered all over the physical memory.

 Pages are mapped appropriately anyway.

 Large programs can be written, as the virtual space available is
huge compared to physical memory.

 Less I/O required leads to faster and easier swapping of
processes.

 More physical memory is available, as programs are stored on
virtual memory, so they occupy very little space in actual physical
memory.

 More efficient swapping.

88. How to calculate performance in virtual memory?

The performance of a virtual memory management system depends on the
total number of page faults, which depends on "paging policies" and
"frame allocation".

Effective access time = (1-p) x Memory access time + p x Page fault
time

89. Write down the basic concept of the file system?

A file is a collection of related information that is recorded on
secondary storage, or a file is a collection of logically related
entities. From the user's perspective, a file is the smallest
allotment of logical secondary storage.

90. Write the names of different operations on file?

Operations on a file:

 Create

 Open

 Read

 Write

 Rename

 Delete

 Append

 Truncate

 Close

91. Define the term Bit-Vector?

A Bitmap or Bit Vector is a series or collection of bits where each
bit corresponds to a disk block. The bit can take two values, 0 and
1: 0 indicates that the block is allocated and 1 indicates a free
block.

92. What is a File allocation table?

FAT stands for File Allocation Table, and it is called so because it
allocates different files and folders using tables. It was
originally designed to handle small file systems and disks. A file
allocation table (FAT) is a table that an operating system maintains
on a hard disk that provides a map of the clusters (the basic units
of logical storage on a hard disk) that a file has been stored in.

93. What is rotational latency?

Rotational latency is the time taken by the desired sector of the
disk to rotate into a position where it can be accessed by the
read/write heads. So the disk scheduling algorithm that gives
minimum rotational latency is better.

94. What is seek time?

Seek time is the time taken to locate the disk arm to a specified
track where the data is to be read or written. So the disk
scheduling algorithm that gives a minimum average seek time is
better.

Advanced OS Interview Questions

95. What is Belady's Anomaly?

Bélády's anomaly is an anomaly with some page replacement policies
in which increasing the number of page frames results in an increase
in the number of page faults. It occurs with First In First Out page
replacement.

96. What happens if a non-recursive mutex is locked more than once?

Deadlock. If a thread that has already locked a mutex tries to lock
the mutex again, it will enter the waiting list of that mutex, which
results in a deadlock, because no other thread can unlock the mutex.
An operating system implementer can take care to identify the owner
of the mutex and return immediately if it is already locked by the
same thread, to prevent deadlocks.

97. What are the advantages of a multiprocessor system?

There are some main advantages of a multiprocessor system:

 Enhanced performance.

 Multiple applications.

 Multi-tasking inside an application.

 High throughput and responsiveness.

 Hardware sharing among CPUs.
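Returning to question 95, Belady's anomaly can be demonstrated with a short FIFO simulation; the reference string below is the classic example in which four frames produce more page faults than three:

```python
from collections import deque

def fifo_page_faults(refs, num_frames):
    """Count page faults under FIFO replacement with num_frames frames."""
    frames = deque()               # oldest resident page sits at the left
    faults = 0
    for page in refs:
        if page not in frames:     # page fault: page is not resident
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the page that arrived first
            frames.append(page)
    return faults

# Classic reference string that exhibits Belady's anomaly under FIFO.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults
print(fifo_page_faults(refs, 4))  # 10 faults: more frames, more faults
```

Stack-based policies such as LRU do not show this behavior; the anomaly is specific to FIFO-like replacement, as the answer to question 95 notes.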

98. What are real-time systems?

A real-time system is one that is subject to real-time constraints,
i.e., the response must be guaranteed within a specified timing
constraint, or the system must meet the specified deadline.

99. How to recover from a deadlock?

We can recover from a deadlock by the following methods:

 Process termination

o Abort all the deadlocked processes

o Abort one process at a time until the deadlock is eliminated

 Resource preemption

o Rollback

o Selecting a victim

100. What factors determine whether a detection algorithm must be
utilized in a deadlock avoidance system?

One factor is how often a deadlock is likely to occur under the
implementation of this algorithm. The other is how many processes
will be affected by deadlock when this algorithm is applied.
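The avoidance side of this trade-off is typically implemented with a safety check such as the one in the Banker's algorithm from question 68. A minimal sketch, with illustrative available, allocation, and need matrices (the numbers are made up for the example):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: return True if some ordering lets every
    process run to completion with the currently available resources."""
    work = available[:]                 # resources free right now
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish, then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)                # safe iff every process can finish

# Example state with 5 processes and 3 resource types (illustrative only).
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: a safe sequence exists
```

A request is granted only if the state after the (simulated) allocation still passes this check; otherwise the requesting process waits.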

101. Explain the resource allocation graph?

The resource allocation graph shows us the state of the system in
terms of processes and resources. One of the advantages of having a
diagram is that sometimes it is possible to see a deadlock directly
by using the RAG.
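Seeing a deadlock in a RAG amounts to finding a cycle in the graph, which can be done mechanically with a depth-first search. A sketch, with invented process and resource names:

```python
def has_cycle(graph):
    """DFS cycle detection over a directed graph given as adjacency lists."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on current path / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: circular wait.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))  # True: the RAG contains a deadlock cycle
```

With single-instance resources a cycle in the RAG is both necessary and sufficient for deadlock; with multiple instances per resource a cycle is only necessary, so a full detection algorithm must do more work.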
