Assignment 1 OperatingSystemConcepts CMPLT

The document outlines various concepts related to operating systems, including objectives, system calls, and memory management techniques. It discusses topics such as process management, multithreading, scheduling, deadlocks, and synchronization problems like the Dining Philosopher's Problem. Additionally, it covers memory management strategies like demand paging, fragmentation, and page replacement algorithms.


University of Petroleum & Energy Studies, Dehradun

Centre for Continuing Education


Assignment #1

• What are the objectives of an operating system?


Ans- The following are the main objectives of an operating system:
1. Efficiency
2. Hardware abstraction
3. Convenience
4. System resource management

• What is the purpose of system programs/system calls?

Ans - System calls act as an interface between user-level programs and the operating
system, allowing user programs to request services from the operating system. These
services include file operations, process management, network communication, device
management, and communication between processes.

• Why do APIs need to be used rather than system calls?

Ans:- An API is a set of protocols, routines, and functions that allows data to be exchanged
among various applications and devices, while a system call allows a program to request
services directly from the kernel. Programs use APIs rather than raw system calls because
the API hides kernel-level detail behind stable, portable functions: the same API call can be
used on different operating systems, whereas system calls are specific to a given kernel.
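As an illustration (a Python sketch; the file name is arbitrary), the same file can be written through the portable high-level API and then read back through the thin wrappers around the kernel's system calls:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")  # illustrative temp file

# High-level API: portable, buffered, returns a file object.
with open(path, "w") as f:
    f.write("hello")

# Low-level system-call wrappers: file descriptor, raw bytes,
# mapping almost directly onto the kernel's open/read/close calls.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)

print(data)  # b'hello'
```

The program gets the same bytes either way; the API simply hides the descriptor bookkeeping that the system-call level exposes.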

• Compare and contrast DMA and cache memory.

Ans:- DMA (direct memory access) is a hardware mechanism that lets devices transfer data
to or from main memory without using CPU instructions, so memory-specific I/O operations
can be carried out with minimal processor intervention. Cache memory, by contrast, keeps
copies of recently used data close to the CPU so that it can be accessed without the delay of
a full memory access. Both reduce the load on the CPU, but DMA speeds up device-memory
transfers while cache speeds up the CPU's own memory accesses.

• Distinguish between batch systems and time-sharing systems.

Batch operating systems execute jobs in groups or batches, while time-sharing systems
allow multiple users to share system resources simultaneously (and real-time systems
respond to input immediately).

The main difference between multiprogrammed batch systems and time-sharing systems is
that in multiprogrammed batch systems the objective is to maximize processor use, whereas
in time-sharing systems the objective is to minimize response time.

• What is meant by privileged instructions?

Privileged instructions are instructions that can only be executed in kernel mode by the
operating system kernel or a privileged process, such as a device driver. They are used to
perform operations that require direct access to hardware or other privileged resources,
such as accessing I/O devices or setting up memory mappings.

• Discuss the difference between symmetric and asymmetric multiprocessing

Some main differences between symmetric and asymmetric multiprocessing in the
operating system are as follows:

1. In symmetric multiprocessing, many processors work together to process programs using
the same OS and memory. In asymmetric multiprocessing, the processors work in a
master-slave relationship.

2. Asymmetric multiprocessing is simpler, since only the master processor accesses the
system data structures. Symmetric multiprocessing is harder, since all processors must
function in synchronization.

3. In symmetric multiprocessing all processors have the same architecture. In an
asymmetric multiprocessor, the processor structures may differ.

4. Asymmetric multiprocessing systems are also less expensive than symmetric
multiprocessing systems.

• Identify what the virtual machine is and what are the advantages of virtualization.

A virtual machine (VM) is a technology that allows you to run multiple operating systems
simultaneously on a single piece of hardware. A VM creates a virtual environment on your
computer in which you can install new software and programs without affecting the
existing system.

Here are some of the top benefits and features of a virtual machine in OS:
Cost Efficiency: One of the biggest advantages of virtual machines over physical servers is
that they require fewer hardware resources. This translates into cost savings as you don't
need additional hard drives or server space because a single machine can host multiple VMs
and run multiple applications.

Security: Since each virtual machine runs independently, it's isolated from other machines.
This helps provide a greater level of security for each machine, as any malware or malicious
attacks will not spread across multiple machines.

Flexibility: This is one of the advantages of virtual machines in cloud computing. Since
virtual machines are software-based, it is easy to spin up new ones if needed. This makes it
incredibly convenient to scale your computing resources per your needs without purchasing
additional hardware.

Ease of Use: Virtual machines are incredibly easy to manage as you don't have to worry
about each machine's hardware or software configurations. You can install the VM software
and create new virtual environments with a few clicks.

• What are the five major activities of an operating system with regard
to process management?

The five main activities of an operating system with regard to process management are:
1. Creating and removing processes: for user and system processes
2. Starting and stopping processes: for suspension and resumption
3. Providing synchronization mechanisms: for cooperating processes
4. Providing communication mechanisms: for processes
5. Handling deadlock: for processes

• List the various services provided by operating systems.

Operating System Services are:

1. Program Execution.
2. Control Input/Output Devices.
3. Program Creation.
4. Error Detection and Response.
5. Accounting.
6. Security and Protection.
7. File Management.
8. Communication.

• What is meant by the term busy waiting?

Busy waiting, also known as spinning or busy looping, is a process synchronization
technique in which a process/task waits and constantly checks for a condition to be
satisfied before proceeding with its execution.
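A minimal Python sketch of busy waiting (the flag and thread names are illustrative): the main thread spins on a shared flag instead of blocking, burning CPU until the flag flips.

```python
import threading
import time

ready = False  # shared condition flag (illustrative example)

def producer():
    global ready
    time.sleep(0.05)  # simulate some work before the condition holds
    ready = True

t = threading.Thread(target=producer)
t.start()

# Busy waiting: repeatedly check the flag without blocking or yielding.
spins = 0
while not ready:
    spins += 1  # every iteration consumes CPU -- this is the "spin"

t.join()
print("condition satisfied after", spins, "checks")
```

A blocking primitive such as `threading.Event` would let the waiter sleep instead; busy waiting only pays off when the expected wait is shorter than a context switch.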

• Discuss the benefits of multithreaded programming?

1. Responsiveness
2. Resource Sharing
3. Scalability
4. Utilization of multiprocessor architecture
5. Minimized system resource usage

• Distinguish between user-level threads and kernel-level threads. Under what
circumstances is one type better than the other?

The main differences between user-level and kernel-level threads are as follows:

1. User-level threads are implemented by user-space libraries. Kernel-level threads are
implemented by the OS.

2. User-level threads can be created and managed much faster. In contrast, kernel-level
threads take longer to create and maintain. This makes user-level threads better when
thread operations must be cheap and frequent.

3. If a single user-level thread performs a blocking operation, the entire process is halted.
If a kernel-level thread blocks, another thread may continue to run, so kernel-level threads
are better when threads frequently block on I/O.

4. User-level threads do not invoke system calls for scheduling. System calls are used to
create and manage kernel-level threads.

• List down the conditions under which a deadlock situation may arise.

The four necessary conditions for a deadlock to arise are as follows.

Mutual Exclusion: Only one process can use a resource at any given time, i.e. the resources
are non-sharable.

Hold and wait: A process is holding at least one resource and is waiting to acquire other
resources held by some other process.

No preemption: A resource cannot be forcibly taken from a process; it can only be released
voluntarily, i.e. after the process has finished with it.

Circular Wait: A set of processes are waiting for each other in a circular fashion.
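A deadlock needs all four conditions at once, so breaking any one of them suffices. A minimal Python sketch (lock and thread names are illustrative) of breaking circular wait by imposing a global lock-acquisition order:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(first, second, results, name):
    # Both threads acquire locks in the same global order (A before B),
    # so a cycle of waiting processes -- circular wait -- cannot form.
    with first:
        with second:
            results.append(name)

results = []
# Without an ordering, t1 could take A then B while t2 takes B then A,
# satisfying all four deadlock conditions. Here both take A before B.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, results, "t1"))
t2 = threading.Thread(target=worker, args=(lock_a, lock_b, results, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2'] -- both finish, no deadlock
```

Lock ordering is the cheapest of the standard remedies; the others (preemption, single-shot resource requests) trade throughput for safety in the same spirit.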

• Can a multithreaded solution using multiple user-level threads achieve better
performance on a multiprocessor system than on a single-processor system?

Ans:- No. A multithreaded solution using multiple user-level threads cannot achieve
better performance on a multiprocessor system than on a single-processor system. This is
because the operating system sees only a single process and will not schedule the
different threads of that process on separate processors. The kernel is not aware of the
user-level threads, so it cannot run them on different processors.

• Explain the difference between preemptive and nonpreemptive scheduling.

Key Differences Between Preemptive and Non-Preemptive Scheduling

1. In preemptive scheduling, the CPU is allocated to the processes for a limited time
whereas, in Non-preemptive scheduling, the CPU is allocated to the process till it terminates
or switches to the waiting state.

2. The executing process in preemptive scheduling is interrupted in the middle of execution
when a higher-priority process arrives, whereas the executing process in non-preemptive
scheduling is not interrupted and runs until its execution completes.

3. In preemptive scheduling there is the overhead of switching processes between the ready
and running states and of maintaining the ready queue, whereas non-preemptive scheduling
has no such switching overhead.

4. In preemptive scheduling, if high-priority processes frequently arrive in the ready queue,
a low-priority process has to wait a long time and may starve. In non-preemptive scheduling,
if the CPU is allocated to a process with a large burst time, then processes with small burst
times may have to starve.
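The contrast can be made concrete with a small scheduling simulation (a sketch; function names are illustrative, and the burst times are the classic textbook example of three processes arriving together):

```python
from collections import deque

def fcfs_wait(bursts):
    # Non-preemptive FCFS: each process waits for all earlier ones to finish.
    wait, elapsed = [], 0
    for b in bursts:
        wait.append(elapsed)
        elapsed += b
    return sum(wait) / len(wait)

def rr_wait(bursts, quantum):
    # Preemptive round robin: the CPU is taken back after each time quantum.
    remaining = list(bursts)
    queue = deque(range(len(bursts)))  # all processes arrive at time 0
    clock, finish = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)  # preempted: back to the end of the ready queue
        else:
            finish[i] = clock
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(waits)

bursts = [24, 3, 3]
print(fcfs_wait(bursts))   # 17.0
print(rr_wait(bursts, 4))  # ~5.67 -- short jobs no longer wait behind the long one
```

The long job penalizes everyone under FCFS; preemption lets the short jobs finish early at the cost of extra context switches.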

• Elaborate the actions taken by the kernel to context-switch between processes.

Actions taken by a kernel to context-switch between processes are:

1. In response to a clock interrupt, the OS saves the PC and user stack pointer of the
currently executing process and transfers control to the kernel clock interrupt handler.

2. The clock interrupt handler saves the rest of the registers, as well as other machine
state such as the state of the floating-point registers, in the process PCB.

3. The OS invokes the scheduler to determine the next process to execute.

4. The OS then retrieves the state of the next process from its PCB and restores the
registers. The restore operation takes the processor back to the state in which that
process was previously interrupted, executing in user code with user-mode privileges.

• Explain dining philosopher’s problem.

The Dining Philosophers Problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers. A philosopher may eat only if
he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by
either of its adjacent philosophers, but not by both at once.

The Dining Philosophers Problem is a classic synchronization problem in computer science
that involves multiple processes (philosophers) sharing a limited set of resources
(chopsticks) in order to perform a task (eating). In order to avoid deadlock or starvation,
a solution must be implemented that ensures that each philosopher can access the
resources they need to perform their task without interference from other philosophers.

One common solution to the Dining Philosophers Problem uses semaphores, a
synchronization mechanism that can be used to control access to shared resources. In this
solution, each chopstick is represented by a semaphore, and a philosopher must acquire
both the semaphore for the chopstick to their left and the semaphore for the chopstick to
their right before they can begin eating. If a philosopher cannot acquire both semaphores,
they must wait until they become available.

Solution of the Dining Philosophers Problem

A solution of the Dining Philosophers Problem is to use a semaphore to represent a
chopstick. A chopstick can be picked up by executing a wait operation on the semaphore
and released by executing a signal operation.

The structure of an arbitrary philosopher i is given as follows −

do {
    wait( chopstick[i] );
    wait( chopstick[ (i+1) % 5] );
    ...
    EATING THE RICE
    signal( chopstick[i] );
    signal( chopstick[ (i+1) % 5] );
    THINKING
} while(1);

In the above structure, wait operations are first performed on chopstick[i] and
chopstick[ (i+1) % 5]. This means that philosopher i has picked up the chopsticks on his
sides. Then the eating function is performed.

After that, signal operations are performed on chopstick[i] and chopstick[ (i+1) % 5]. This
means that philosopher i has eaten and put down the chopsticks on his sides. Then the
philosopher goes back to thinking.
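Note that the pseudocode above can deadlock if every philosopher picks up his left chopstick at the same moment. A runnable sketch of one standard fix, using Python threading semaphores and an asymmetric pick-up order (the asymmetry is an addition, not part of the pseudocode above):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N  # how many times each philosopher has eaten

def philosopher(i, rounds):
    # Asymmetry breaks circular wait: the last philosopher picks up his
    # right chopstick first, everyone else picks up the left one first,
    # so all threads acquire semaphores in a single global order.
    first, second = (i, (i + 1) % N) if i < N - 1 else ((i + 1) % N, i)
    for _ in range(rounds):
        chopstick[first].acquire()   # wait( chopstick )
        chopstick[second].acquire()
        meals[i] += 1                # eating
        chopstick[second].release()  # signal( chopstick )
        chopstick[first].release()
        # thinking

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [10, 10, 10, 10, 10]
```

Other classic fixes include admitting at most N-1 philosophers to the table at once (a counting semaphore) or acquiring both chopsticks atomically.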

• Consider the following page reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1,
2, 0, 1, 7, 0, 1. How many page faults would occur for the FIFO, LRU, and optimal
replacement algorithms, assuming three frames that are all initially empty? (4 Marks)

Ans:- With three initially empty frames, FIFO replacement produces 15 page faults, LRU
replacement produces 12 page faults, and optimal replacement produces 9 page faults for
this reference string.
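These counts can be checked with a small simulator (a sketch; the function names are illustrative, and the eviction rules follow the standard definitions of FIFO, LRU, and optimal replacement):

```python
def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)  # evict the page loaded earliest
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = [], 0  # list kept in recency order, oldest first
    for p in refs:
        if p in mem:
            mem.remove(p)  # re-append below: now most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)  # evict the least recently used page
        mem.append(p)
    return faults

def optimal_faults(refs, frames):
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                # Evict the page not needed for the longest time ahead.
                future = refs[i + 1:]
                victim = max(mem, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                mem.remove(victim)
            mem.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3), lru_faults(refs, 3), optimal_faults(refs, 3))  # 15 12 9
```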

• What is meant by thrashing?

In computer science, thrashing is the poor performance of a virtual memory (or paging)
system that occurs when the same pages are loaded repeatedly because there is not enough
main memory to keep them resident. Depending on the configuration and algorithm, the
actual throughput of a system can degrade by multiple orders of magnitude.

• Write short notes on the following:


• Demand Paging
Demand paging is a memory management technique used in virtual memory systems to
improve memory usage and system performance: pages are brought into main memory
only when they are requested or needed by the CPU. In demand paging, the operating
system loads only the necessary pages of a program into memory at runtime, instead of
loading the entire program into memory at the start.

• Internal Fragmentation

Fragmentation is an unwanted problem in the operating system in which, as processes are
loaded and unloaded from memory, free memory space becomes broken into small pieces
that cannot be assigned to incoming processes, resulting in inefficient memory use. The
conditions of fragmentation depend on the memory allocation system.

Internal fragmentation in particular occurs when memory is allocated in fixed-size blocks:
if a process needs less memory than the block it is given, the unused space inside the
allocated block is wasted and cannot be used by any other process.

• Free Space Management on I/O Buffering

Free Space Management

A file system is responsible for allocating free blocks to files, so it has to keep track of
all the free blocks present on the disk. There are mainly two approaches by which the free
blocks on the disk are managed.

1. Bit Vector
In this approach, the free space list is implemented as a bitmap vector containing one bit
for each block. If the block is free the bit is 1; otherwise it is 0. Initially all blocks
are free, so each bit in the bitmap vector is 1. As space allocation proceeds, the file
system starts allocating blocks to files and setting the respective bits to 0.

2. Linked List
It is another approach for free space management. This approach suggests linking together
all the free blocks and keeping a pointer in the cache which points to the first free block.
Therefore, all the free blocks on the disks will be linked together with a pointer. Whenever a
block gets allocated, its previous free block will be linked to its next free block.
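The bit-vector approach described above can be sketched in a few lines; the class name and its methods are illustrative, using the convention from these notes (1 = free, 0 = allocated):

```python
class BitmapFreeSpace:
    # Bit-vector free-space manager: bit 1 = free block, bit 0 = allocated.
    def __init__(self, n_blocks):
        self.bits = [1] * n_blocks  # initially every block is free

    def allocate(self):
        for i, free in enumerate(self.bits):
            if free:
                self.bits[i] = 0  # mark the block as allocated
                return i
        return -1                 # no free block available

    def free(self, i):
        self.bits[i] = 1          # mark the block as free again

disk = BitmapFreeSpace(4)
a, b = disk.allocate(), disk.allocate()
disk.free(a)
print(disk.bits)        # [1, 0, 1, 1]
print(disk.allocate())  # 0 -- the first free block is found and reused
```

The linear scan is the cost of the simplicity; real file systems keep the bitmap in cache and scan it a word at a time.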

• LRU Replacement

In operating systems that use paging for memory management, page replacement
algorithms are needed to decide which page to replace when a new page comes in.
Whenever a new page is referenced and is not present in memory, a page fault occurs and
the operating system replaces one of the existing pages with the newly needed page.
Different page replacement algorithms suggest different ways to decide which page to
replace. The target for all algorithms is to reduce the number of page faults.
The Least Recently Used (LRU) algorithm is a greedy algorithm in which the page to be
replaced is the one least recently used. The idea is based on locality of reference: the
least recently used page is not likely to be used again in the near future.
Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 initially
empty page slots.
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is the least recently used —> 1 page fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
The rest of the reference string causes 0 page faults because those pages are already
in memory. Total: 6 page faults.
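The walkthrough above (6 faults with 4 frames) can be reproduced with a compact LRU sketch built on Python's OrderedDict; the function name is illustrative.

```python
from collections import OrderedDict

def lru_page_faults(refs, frames):
    mem = OrderedDict()  # insertion order doubles as recency order
    faults = 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)  # page was just used: most recent now
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_page_faults(refs, 4))  # 6, matching the walkthrough above
```

An OrderedDict gives O(1) lookup and eviction, which is why the same structure underlies many real LRU caches.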

• Optimal Page Replacement

In the optimal page replacement algorithm, the page replaced is the one that will not be
used for the longest period of time in the future. This gives the lowest possible
page-fault rate, but it requires knowledge of the future reference string, so it cannot be
implemented in practice and is mainly used as a benchmark for comparing other algorithms.

• Segmentation

In operating systems, segmentation is a memory management technique in which memory
is divided into variable-size parts. Each part is known as a segment, which can be
allocated to a process.
The details about each segment are stored in a table called a segment table. The segment
table is itself stored in one (or more) of the segments.

The segment table contains mainly two pieces of information about each segment:

Base: the base address of the segment.
Limit: the length of the segment.
• Global Page Replacement Algorithm

A global page replacement algorithm is able to select any page in memory, whereas local
page replacement algorithms assume some form of memory partitioning. The goal of page
replacement algorithms is to reduce the number of page faults, and they help the operating
system (OS) decide which page to move out to make space for the one that is currently
needed.

Global replacement algorithms have the following advantages:

1. They don't hinder the performance of processes.
2. They result in greater system throughput.

However, they also have the following disadvantages:

1. The process itself cannot solely control its page fault ratio.
2. The pages in memory for a process depend on the paging behavior of other processes as
well.

Local page replacement algorithms select a page that belongs to the same process that
incurred the page fault. The most popular forms of partitioning are fixed partitioning and
balanced set algorithms based on the working set model. The advantage of local page
replacement is its scalability.
Here are some types of page replacement algorithms: Optimal Page Replacement
Algorithm, First In First Out Page Replacement Algorithm, and Least Recently Used (LRU)
Page Replacement Algorithm
