R074 OS Assignment-2

SVKM’s NMIMS

Mukesh Patel School of Technology Management & Engineering, Artificial Intelligence Department

Program: B.Tech/MBA Tech. Semester: III

Course: Operating System Assignment 1

Faculty: Prof. Jish Joy Submission: 19th August 2025

Name: Mayank Deore

Roll no: R074

Question 1: What is an Operating System? Explain various functions and objectives.

Answer:

An Operating System (OS) is a collection of system programs that manages computer hardware resources and provides a platform for other software applications to run. It acts as an intermediary between users and computer hardware, making the computer system convenient to use.

Functions of Operating System:

1. Process Management

o Creating, scheduling, and terminating processes

o Managing process synchronization and communication

o Handling deadlocks and resource allocation

2. Memory Management

o Allocating and deallocating memory space

o Managing virtual memory and paging

o Implementing memory protection mechanisms

3. File System Management

o Creating, deleting, and organizing files and directories

o Managing file access permissions and security

o Handling file storage and retrieval operations

4. Device Management

o Managing input/output devices through device drivers


o Controlling device allocation and deallocation

o Handling device communication and buffering

5. Security and Protection

o User authentication and authorization

o Access control mechanisms

o Protection against malicious software and unauthorized access

Objectives of Operating System:

1. Convenience: Making the computer system easy to use

2. Efficiency: Optimal utilization of computer resources

3. Ability to Evolve: Supporting new functions and hardware without interfering with
existing services

4. Throughput: Maximizing the number of tasks completed per unit time

5. Response Time: Minimizing the time between request and response

6. Reliability: Ensuring system stability and fault tolerance

Question 2: Explain any 5 services provided by OS.

Answer:

1. Program Execution

o Loading programs into memory

o Executing programs with proper resource allocation

o Handling program termination and cleanup

2. I/O Operations

o Managing input/output operations for programs

o Providing device-independent I/O interfaces

o Handling device drivers and hardware communication

3. File System Manipulation

o Creating, reading, writing, and deleting files

o Managing directories and file organization


o Implementing file access control and permissions

4. Communications

o Inter-process communication (IPC) mechanisms

o Network communication support

o Message passing and shared memory facilities

5. Error Detection and Handling

o Detecting hardware and software errors

o Taking appropriate corrective actions

o Logging errors for system maintenance

Question 3: Describe kernel. Explain difference between Monolithic and Microkernel.

Answer:

Kernel:

The kernel is the core component of an operating system that manages system resources and
provides essential services. It runs in privileged mode and has direct access to hardware
resources. The kernel acts as a bridge between applications and hardware.

Monolithic Kernel vs Microkernel:

• Structure: Monolithic - single large executable with all OS services; Microkernel - small kernel with services in user space
• Size: Monolithic - larger; Microkernel - smaller
• Performance: Monolithic - faster due to direct function calls; Microkernel - slower due to message-passing overhead
• Reliability: Monolithic - less reliable, one component failure can crash the entire system; Microkernel - more reliable, services are isolated
• Maintenance: Monolithic - difficult to maintain and modify; Microkernel - easier to maintain and extend
• Security: Monolithic - less secure, all services run in kernel mode; Microkernel - more secure, services isolated in user space
• Examples: Monolithic - Linux, Unix, early Windows versions; Microkernel - QNX, Minix, L4
• Communication: Monolithic - direct function calls; Microkernel - inter-process communication (IPC)
• Modularity: Monolithic - less modular; Microkernel - highly modular

Advantages of Monolithic Kernel:

• High performance due to minimal overhead


• Simple design and implementation

• Direct access to hardware

Advantages of Microkernel:

• Better fault isolation and system stability

• Easier to port to different architectures

• Enhanced security through privilege separation

• Easier debugging and testing

Question 4: Explain System Calls in brief.

Answer:

System Calls are the interface between user programs and the operating system kernel. They
provide a way for applications to request services from the operating system.

Characteristics of System Calls:

1. Privileged Operations: Allow user programs to perform privileged operations safely

2. Controlled Access: Provide controlled access to system resources

3. Standard Interface: Offer a standardized way to interact with the OS

Types of System Calls:

1. Process Control

o fork() - Create a new process

o exec() - Execute a program

o exit() - Terminate a process

o wait() - Wait for process completion

2. File Management

o open() - Open a file

o read() - Read from a file

o write() - Write to a file

o close() - Close a file

3. Device Management
o ioctl() - Device-specific operations

o Device read/write operations

4. Information Maintenance

o getpid() - Get process ID

o time() - Get system time

o getppid() - Get parent process ID

5. Communication

o pipe() - Create communication pipe

o shmget() - Shared memory operations

o Socket operations for network communication

System Call Execution Process:

1. User program makes system call

2. Mode switches from user to kernel mode

3. Kernel validates parameters

4. Kernel performs requested operation

5. Results returned to user program

6. Mode switches back to user mode
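The call types above map directly onto the POSIX interface exposed through Python's os module; a minimal sketch combining process control (fork, wait), communication (pipe), and information maintenance (getpid), assuming a Unix-like system:

```python
import os

# Information maintenance: getpid() returns the caller's process ID.
print("parent pid:", os.getpid())

# Communication: pipe() creates a one-way channel (read end, write end).
r, w = os.pipe()

# Process control: fork() duplicates the process; the child sees 0.
pid = os.fork()
if pid == 0:
    # Child: write into the pipe, then terminate with _exit().
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:
    # Parent: read from the pipe, then wait for the child to finish.
    os.close(w)
    msg = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    print(msg.decode())  # hello from child
```

Each of these calls triggers the user-to-kernel mode switch described in the execution steps above.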

Question 5: Explain the various fields of Process Control Block.

Answer:

The Process Control Block (PCB) is a data structure that contains all information about a
process. It serves as the repository for process-related information.

Fields of Process Control Block:

1. Process Identification

o Process ID (PID): Unique identifier for the process

o Parent Process ID (PPID): ID of the parent process

o User ID: Identity of the user who created the process

2. Process State Information


o Process State: Current state (ready, running, waiting, terminated)

o Program Counter: Address of next instruction to execute

o CPU Registers: Contents of processor registers

o Stack Pointer: Points to the top of process stack

3. Process Control Information

o Priority: Scheduling priority of the process

o Scheduling Information: Algorithm-specific data

o Memory Management Information: Page tables, segment tables

o Accounting Information: CPU usage, time limits, account numbers

4. I/O Status Information

o I/O Devices: List of allocated I/O devices

o Open File List: Files opened by the process

o Pending I/O Operations: Outstanding I/O requests

5. Memory Management Information

o Base and Limit Registers: Memory boundaries

o Page Tables: Virtual to physical memory mapping

o Segment Tables: Memory segmentation information

6. Accounting Information

o CPU Time Used: Total CPU time consumed

o Time Limits: Maximum allowed execution time

o Process Start Time: When the process was created

o Resource Usage: Memory, I/O usage statistics
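The fields above can be pictured as one record per process; a toy sketch of a PCB as a Python dataclass (the field names are illustrative, not taken from any real kernel):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Process identification
    pid: int
    ppid: int
    uid: int
    # Process state information
    state: str = "new"             # new / ready / running / waiting / terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    # Process control information
    priority: int = 0
    # I/O status information
    open_files: list = field(default_factory=list)
    # Accounting information
    cpu_time_used: float = 0.0

pcb = PCB(pid=42, ppid=1, uid=1000, priority=5)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # 42 ready
```

A real kernel keeps one such structure per process (e.g. Linux's task_struct) and updates it on every scheduling decision.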

Question 6: Explain hierarchical architecture of an Operating System.

Answer:

The Hierarchical Architecture (also called Layered Architecture) organizes the operating
system into layers, where each layer uses services from layers below it and provides services
to layers above it.

Structure of Layered OS:


Layer N (Highest) - User Applications

Layer N-1 - User Interface

Layer N-2 - I/O Management

Layer N-3 - File System

Layer N-4 - Device Drivers

Layer N-5 - Memory Management

Layer N-6 - Process Scheduling

Layer 1 - Basic I/O Functions

Layer 0 (Lowest) - Hardware

Characteristics:

1. Modularity: Each layer has specific functions and responsibilities

2. Abstraction: Upper layers are isolated from lower-level details

3. Hierarchy: Strict layering with defined interfaces

4. Encapsulation: Each layer encapsulates its functionality

Advantages:

1. Simplicity: Easy to understand and design

2. Debugging: Problems can be isolated to specific layers

3. Modularity: Changes to one layer don't affect others

4. Reusability: Layers can be reused in different systems

5. Verification: Each layer can be tested independently

Disadvantages:

1. Performance Overhead: Multiple layer transitions slow down operations

2. Rigid Structure: Difficult to implement cross-layer optimizations

3. Design Complexity: Careful layer definition required

4. Efficiency Issues: May not be the most efficient approach

Example - THE Operating System:

• Layer 5: User Programs

• Layer 4: Buffering for I/O devices


• Layer 3: Operator console device driver

• Layer 2: Memory management

• Layer 1: CPU scheduling

• Layer 0: Hardware

Question 7: Define Process. Explain various Process states with help of diagram.

Answer:

Definition of Process:

A Process is a program in execution. It includes the program code, current activity (program
counter), stack, data section, and heap. A process is an active entity that requires resources
like CPU time, memory, and I/O devices.

Process States:

            admitted            dispatch               exit
   NEW ───────────> READY ───────────────> RUNNING ─────────> TERMINATED
                     ↑  ↑                   │    │
                     │  └── time slice ─────┘    │
                     │        expired            │ I/O or event wait
                     │                           ▼
                     └──── I/O or event ───── WAITING
                            completion

State Descriptions:

1. NEW (Created)

o Process is being created


o Resources are being allocated

o PCB is being initialized

2. READY

o Process is ready to execute

o Waiting for CPU allocation

o All resources except CPU are available

3. RUNNING

o Process is currently executing on CPU

o Instructions are being executed

o Only one process can be in running state per CPU

4. WAITING (Blocked)

o Process is waiting for some event to occur

o Examples: I/O completion, signal from another process

o Cannot execute even if CPU is available

5. TERMINATED (Exit)

o Process has finished execution

o Resources are being deallocated

o PCB is being removed

State Transitions:

• NEW → READY: Admission (long-term scheduler)

• READY → RUNNING: Dispatch (short-term scheduler)

• RUNNING → READY: Interrupt/Time slice expired

• RUNNING → WAITING: I/O request or event wait

• WAITING → READY: I/O completion or event occurs

• RUNNING → TERMINATED: Process completes execution
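The legal transitions listed above can be encoded as a small table and enforced; a sketch using the five-state model (state names follow the diagram above):

```python
# Map each state to the set of states it may legally move to.
TRANSITIONS = {
    "new":        {"ready"},                           # admission
    "ready":      {"running"},                         # dispatch
    "running":    {"ready", "waiting", "terminated"},  # preempt / block / exit
    "waiting":    {"ready"},                           # I/O completion
    "terminated": set(),
}

def move(current, nxt):
    """Return the new state, or raise if the transition is illegal."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

# Walk one process through a typical lifetime.
state = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = move(state, nxt)
print(state)  # terminated
```

Note that READY → WAITING is absent from the table: a process can only block while it is actually running.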

Question 8: Compare Preemptive and Non-Preemptive Scheduling.

Answer:
• Definition: Preemptive - the OS can forcibly remove a process from the CPU; Non-preemptive - a process runs until completion or voluntary release
• CPU control: Preemptive - the OS has full control over CPU allocation; Non-preemptive - the running process controls the CPU
• Interruption: Preemptive - a process can be interrupted mid-execution; Non-preemptive - a process cannot be interrupted
• Response time: Preemptive - better for interactive systems; Non-preemptive - poorer
• Overhead: Preemptive - higher, due to context switching; Non-preemptive - lower
• Complexity: Preemptive - more complex to implement; Non-preemptive - simpler
• Starvation: Preemptive - less likely to cause starvation; Non-preemptive - may cause starvation
• Fairness: Preemptive - fairer resource allocation; Non-preemptive - less fair
• Real-time suitability: Preemptive - suitable for real-time systems; Non-preemptive - not suitable
• Examples: Preemptive - Round Robin, Priority with preemption; Non-preemptive - FCFS, SJF without preemption

Advantages of Preemptive Scheduling:

1. Responsive: Better for interactive and real-time systems

2. Fair: Prevents monopolization of CPU by single process

3. Flexible: Can adapt to changing system conditions

4. Multi-user Support: Better support for multi-user environments

Advantages of Non-Preemptive Scheduling:

1. Simple: Easier to implement and understand

2. Lower Overhead: No context switching overhead

3. Predictable: Process execution is predictable

4. Less Resource Usage: Requires fewer system resources

When to Use:

• Preemptive: Interactive systems, real-time systems, multi-user environments

• Non-Preemptive: Batch processing systems, embedded systems with predictable workloads

Question 9: Mention the different scheduling algorithms and compare them.

Answer:
Scheduling Algorithms:

1. First-Come, First-Served (FCFS)

2. Shortest Job First (SJF)

3. Priority Scheduling

4. Round Robin (RR)

5. Shortest Remaining Time First (SRTF)

6. Multilevel Queue Scheduling

7. Multilevel Feedback Queue Scheduling

Comparison Table:

• FCFS (non-preemptive): Advantages - simple, fair ordering; Disadvantages - high average waiting time, convoy effect; Use case - batch systems
• SJF (non-preemptive): Advantages - minimum average waiting time; Disadvantages - starvation of long processes, requires execution-time estimates; Use case - systems with known execution times
• SRTF (preemptive): Advantages - optimal average waiting time; Disadvantages - high overhead, starvation; Use case - time-sharing systems
• Priority (both): Advantages - important processes get priority; Disadvantages - starvation, priority inversion; Use case - real-time systems
• Round Robin (preemptive): Advantages - fair, good response time; Disadvantages - higher overhead, performance depends on quantum; Use case - time-sharing, interactive systems
• Multilevel Queue (both): Advantages - different algorithms for different process types; Disadvantages - inflexible, no process movement between queues; Use case - mixed workloads
• MLFQ (preemptive): Advantages - adaptive, fair, prevents starvation; Disadvantages - complex implementation; Use case - general-purpose OS

Performance Metrics:

1. CPU Utilization: Percentage of time CPU is busy

2. Throughput: Number of processes completed per unit time

3. Turnaround Time: Total time from submission to completion

4. Waiting Time: Time spent waiting in ready queue

5. Response Time: Time from request to first response


Algorithm Selection Criteria:

• Interactive Systems: Round Robin, MLFQ

• Real-time Systems: Priority-based scheduling

• Batch Systems: FCFS, SJF

• General Purpose: Multilevel Feedback Queue
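The metrics above can be computed by hand for a small example; a sketch comparing FCFS order with SJF order for three processes that all arrive at t=0 (the burst times are made up, chosen to show the convoy effect):

```python
def run_in_order(bursts):
    """Waiting and turnaround times when jobs run in the given order."""
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)     # time spent in the ready queue
        clock += b
        turnaround.append(clock)  # completion time minus arrival (0)
    return waiting, turnaround

# FCFS with a long job first: the short jobs wait behind it (convoy effect).
w, t = run_in_order([24, 3, 3])
print(w, sum(w) / len(w))   # [0, 24, 27] 17.0
print(t, sum(t) / len(t))   # [24, 27, 30] 27.0

# SJF order (shortest bursts first) cuts the average waiting time sharply.
w2, _ = run_in_order([3, 3, 24])
print(sum(w2) / len(w2))    # 3.0
```

The same helper can score any non-preemptive ordering, which makes the trade-offs in the comparison list above concrete.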

Question 10: What do you understand by Context Switching.

Answer:

Context Switching is the process of storing the state of a currently running process and
restoring the state of another process to be executed. It allows the CPU to switch between
different processes.

Process of Context Switching:

1. Save Current Process State

o Save CPU registers to PCB

o Save program counter

o Save stack pointer

o Update process state to appropriate value

2. Select Next Process

o Scheduler selects next process to run

o Load the selected process's PCB

3. Restore New Process State

o Load CPU registers from new process's PCB

o Load program counter

o Load stack pointer

o Set process state to running

When Context Switching Occurs:

1. Time Slice Expiry: In preemptive scheduling

2. I/O Operations: Process blocks for I/O

3. System Calls: Process makes system call


4. Interrupts: Hardware or software interrupts

5. Process Termination: Current process finishes

6. Higher Priority Process: Preemption by priority

Information Saved During Context Switch:

• CPU Registers: General-purpose and special registers

• Program Counter: Next instruction address

• Stack Pointer: Current stack position

• Memory Management Information: Page tables, segments

• I/O Status: Open files, I/O operations

• Process State: Current execution state

Overhead of Context Switching:

1. Direct Costs:

o Time to save current process state

o Time to restore new process state

o Scheduler execution time

2. Indirect Costs:

o Cache misses and memory access patterns

o TLB (Translation Lookaside Buffer) flushes

o Pipeline stalls in modern processors

Factors Affecting Context Switch Time:

• Number of Registers: More registers = more time

• Memory Management Complexity: Virtual memory overhead

• Hardware Support: Special instructions for context switching

• Cache Architecture: Impact on cache performance

Optimization Techniques:

1. Hardware Support: Specialized registers for fast switching

2. Lazy Switching: Only save/restore necessary state

3. Thread Switching: Lighter weight than process switching


4. Register Windows: Hardware support for multiple register sets
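The save/restore steps above can be mimicked with toy PCB-style records; a sketch in which the "CPU" and the register names are invented for illustration:

```python
def context_switch(current, nxt, cpu):
    """Save the CPU state into `current`'s PCB, then load `nxt`'s saved state."""
    # 1. Save current process state into its PCB.
    current["registers"] = dict(cpu["registers"])
    current["pc"] = cpu["pc"]
    current["state"] = "ready"
    # 2-3. Restore the next process's saved state onto the CPU.
    cpu["registers"] = dict(nxt["registers"])
    cpu["pc"] = nxt["pc"]
    nxt["state"] = "running"

cpu = {"registers": {"r0": 7}, "pc": 100}
p1 = {"registers": {}, "pc": 0, "state": "running"}
p2 = {"registers": {"r0": 1}, "pc": 500, "state": "ready"}

context_switch(p1, p2, cpu)
print(cpu["pc"], p1["state"], p2["state"])  # 500 ready running
```

The dictionary copies stand in for the register save/restore that real hardware does; everything spent copying is pure overhead, which is why context-switch cost matters.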

Question 11: Explain Multilevel Queue Scheduling.

Answer:

Multilevel Queue Scheduling is a scheduling algorithm that partitions processes into several
separate queues based on process characteristics. Each queue has its own scheduling
algorithm and priority level.

Structure:

High Priority   ┌───────────────────────────┐
                │ System Processes Queue    │ ← FCFS
                └───────────────────────────┘
                ┌───────────────────────────┐
                │ Interactive Process Queue │ ← Round Robin
                └───────────────────────────┘
                ┌───────────────────────────┐
                │ Interactive Editing Queue │ ← Round Robin
                └───────────────────────────┘
                ┌───────────────────────────┐
Low Priority    │ Batch Process Queue       │ ← FCFS
                └───────────────────────────┘

Characteristics:

1. Fixed Assignment: Processes are permanently assigned to queues

2. Priority Levels: Each queue has different priority

3. Different Algorithms: Each queue can use different scheduling algorithm

4. No Migration: Processes cannot move between queues

Queue Categories (Example):


1. System Processes (Highest Priority)

o OS kernel processes

o Critical system tasks

o Algorithm: FCFS

2. Interactive Processes

o User interactive applications

o GUI applications

o Algorithm: Round Robin (small time quantum)

3. Interactive Editing

o Text editors, IDEs

o Algorithm: Round Robin

4. Batch Processes (Lowest Priority)

o Background computational tasks

o Algorithm: FCFS

Scheduling Between Queues:

1. Fixed Priority Preemptive Scheduling

o Higher priority queues are served first

o Lower priority queues only execute when higher ones are empty

o Risk of starvation for lower priority queues

2. Time Slicing Between Queues

o Each queue gets fixed percentage of CPU time

o Example: 80% to foreground, 20% to background

Advantages:

1. Separation of Concerns: Different types of processes handled appropriately

2. Predictable Performance: System processes get guaranteed high priority

3. Customizable: Each queue can have optimal scheduling algorithm

4. Simple Implementation: Straightforward to implement

5. Good for Mixed Workloads: Handles different process types effectively


Disadvantages:

1. Inflexible: Processes cannot change priority levels

2. Starvation: Lower priority queues may never execute

3. Static Classification: Process classification is permanent

4. Overhead: Multiple queue management overhead

5. Unfair: Lower priority processes get unfair treatment

Real-World Examples:

1. Foreground vs Background Processes

o Interactive vs batch processing

o Different response time requirements

2. User vs System Processes

o System processes get higher priority

o User processes share remaining CPU time

3. Time-Critical vs Regular Processes

o Real-time processes in high-priority queue

o Regular processes in lower-priority queues
