
1. What are the three main purposes of an operating system?

An operating system has three main purposes: managing hardware and software
resources (like CPU, memory, and storage), providing a user interface (CLI or GUI) for
interaction, and supporting the execution of application software by offering essential
services and security.

2. What is the purpose of system programs?

File Management: They help in creating, deleting, copying, and managing files and
directories on storage devices.

Device Management: They provide utilities for managing peripheral devices like printers,
disk drives, and network interfaces, ensuring proper communication between hardware
and software.

System Utilities: They offer tools for system maintenance and performance optimization,
such as disk cleanup, antivirus, and backup utilities.

Development Tools: They include compilers, interpreters, and debuggers to assist in
software development.

3. Distinguish between the client–server and peer-to-peer models of distributed systems?

Client-Server Model

1. Structure: Comprises distinct client and server roles. Clients request services; servers
fulfill these requests.

2. Control: Centralized control, with servers managing resources and data, leading to easier
administration and enhanced security.

3. Scalability: Servers can become bottlenecks under heavy load, potentially limiting
scalability.

4. Examples: Common in web services, email services, and databases.

Peer-to-Peer (P2P) Model


1. Structure: All nodes are peers, meaning each node can act as both client and server,
sharing resources directly.

2. Control: Decentralized control without a central authority, making the system more
resilient and fault-tolerant.

3. Scalability: Highly scalable, as each peer adds resources to the network, reducing the
chance of bottlenecks.

4. Examples: Used in file-sharing systems like BitTorrent and decentralized platforms like
blockchain.

Key Differences

- Centralization: Client-server is centralized; P2P is decentralized.

- Resource Management: Client-server relies on a central server; P2P distributes resources
across all nodes.

- Scalability and Resilience: Client-server can struggle with scalability and single points of
failure; P2P scales well and is more fault-tolerant but harder to manage and secure.
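
To make the client-server structure concrete, here is a minimal Java sketch (the class name and port 5050 are our own choices): one thread plays the centralized server role, fulfilling a request sent by the client over a local TCP socket. In a P2P system, every node would run both roles.

import java.io.*;
import java.net.*;

public class ClientServerDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5050)) {
            // Server role: waits for a request and fulfills it.
            Thread serverThread = new Thread(() -> {
                try (Socket conn = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(conn.getInputStream()));
                     PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                    out.println("response to: " + in.readLine());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            serverThread.start();

            // Client role: connects, sends a request, waits for the reply.
            try (Socket client = new Socket("localhost", 5050);
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println("hello server");
                System.out.println(in.readLine()); // prints "response to: hello server"
            }
            serverThread.join();
        }
    }
}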

4. Dual-mode operation allows the OS to protect itself and other system components. Explain
this statement.

Dual-mode operation in an operating system enhances security and stability by separating
tasks into user mode and kernel mode. In user mode, applications run with restricted
access to system resources, preventing them from directly performing harmful operations.
In kernel mode, the OS has unrestricted access to hardware and critical functions. This
separation ensures that user processes cannot interfere with the core operations of the
OS, providing controlled access to resources and protecting the system from crashes and
security breaches. This mechanism helps maintain a stable and secure computing
environment by isolating user processes from critical system functions.
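
For illustration, here is a small Java sketch (assuming an ordinary unprivileged user on Linux; /dev/sda is just an example device path): a user-mode program that asks to read the raw disk is refused by the kernel rather than being allowed to bypass the OS.

import java.io.FileInputStream;
import java.io.IOException;

public class DualModeDemo {
    public static void main(String[] args) {
        // Reading a raw disk device is a privileged operation; the request
        // is handled by the kernel, which checks permissions in kernel mode.
        try (FileInputStream disk = new FileInputStream("/dev/sda")) {
            System.out.println("first byte of the disk: " + disk.read());
        } catch (IOException e) {
            // The kernel refused the open request: protection at work.
            System.out.println("OS denied the request: " + e.getMessage());
        }
    }
}
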
5. What is the main advantage of the microkernel approach to system design? How do user
programs and system services interact in a microkernel architecture? What are the disadvantages
of using the microkernel approach?

Main Advantage of the Microkernel Approach


The main advantage of the microkernel approach is that it makes the system more secure
and easier to manage. By keeping the kernel small and running most services (like device
drivers and file systems) in user space, the system becomes more modular. This means
that if something goes wrong in one service, it doesn't crash the whole system, making it
more stable and secure (Encyclopedia Britannica; Ajamias Github Pages).

Interaction Between User Programs and System Services

In a microkernel system:
User Programs run in user mode and interact with the kernel through system calls or inter-
process communication (IPC).

System Services, like drivers and file systems, also run in user mode and communicate
with user programs via IPC.

When a user program needs something, it sends a message to the service, and the
microkernel routes these messages to ensure they reach the right place. This setup keeps
the kernel small, handling only basic functions such as communication and low-level
memory management (Ajamias Github Pages; Front).
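
This message-passing flow can be modeled in ordinary Java (a conceptual sketch only; the class, mailboxes, and the "fs" service name are illustrative, not a real kernel API): threads stand in for the user program, the microkernel, and a file-system service, and blocking queues stand in for IPC mailboxes.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MicrokernelIpcSketch {
    // A message from a user program to a service, with a queue for the reply.
    static class Message {
        final String to, payload;
        final BlockingQueue<String> replyTo;
        Message(String to, String payload, BlockingQueue<String> replyTo) {
            this.to = to; this.payload = payload; this.replyTo = replyTo;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> kernelMailbox = new LinkedBlockingQueue<>();
        BlockingQueue<Message> fsMailbox = new LinkedBlockingQueue<>();

        // "Kernel": does nothing but deliver messages to the right service.
        Thread kernel = new Thread(() -> {
            try {
                while (true) {
                    Message m = kernelMailbox.take();
                    if (m.to.equals("fs")) fsMailbox.put(m);
                }
            } catch (InterruptedException ignored) { }
        });
        kernel.setDaemon(true);
        kernel.start();

        // File-system "service": runs outside the kernel, answers via IPC.
        Thread fsService = new Thread(() -> {
            try {
                Message request = fsMailbox.take();
                request.replyTo.put("contents of " + request.payload);
            } catch (InterruptedException ignored) { }
        });
        fsService.start();

        // User program: asks the kernel to deliver a read request and waits.
        BlockingQueue<String> reply = new LinkedBlockingQueue<>();
        kernelMailbox.put(new Message("fs", "readme.txt", reply));
        System.out.println(reply.take()); // prints "contents of readme.txt"
        fsService.join();
    }
}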

Disadvantages of the Microkernel Approach


Performance Issues: Since the system relies heavily on IPC and context switches between
user programs and services, it can be slower than a monolithic kernel, where everything
runs in kernel mode.

Development Complexity: Writing a microkernel can be tricky because the OS must be split
into many small services that communicate efficiently.

Potential Latency: The extra steps involved in communication can make system calls
slower, which may be a problem for performance-critical applications (Encyclopedia
Britannica; Front).

Summary
The microkernel approach offers a modular and secure system design by running minimal
core functions in kernel mode and moving other services to user space. However, it can
suffer from performance overhead, increased development complexity, and potential
latency issues due to the heavy reliance on IPC and context switching.

6. What is the purpose of system calls?

System calls are the controlled interface through which programs request services from the
operating system. They cover hardware access, file operations, process management,
inter-process communication, error handling, and security enforcement. Serving as the
bridge between user-level programs and the OS, they let the kernel mediate every use of
shared resources: a program cannot touch a device or another process's memory directly,
but must ask the OS through a system call. Through their return values and error codes,
system calls also report the success or failure of each operation, which makes programs
more reliable.
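
As a concrete sketch (Java 11+; the POSIX call names in the comments are the typical ones on Linux, issued by the JVM on the program's behalf), even simple file I/O is a sequence of system calls:

import java.nio.file.Files;
import java.nio.file.Path;

public class SystemCallDemo {
    public static void main(String[] args) throws Exception {
        // Each library call below is ultimately serviced by the kernel.
        Path path = Files.createTempFile("demo", ".txt"); // open()/creat()
        Files.writeString(path, "hello via system calls"); // write()
        System.out.println(Files.readString(path));        // read(), then write() to stdout
        Files.delete(path);                                 // unlink()
    }
}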

7. List the different services provided by an OS to users and to system hardware.

Services Provided by the Operating System:


To Users:

User Interface: Provides user-friendly interfaces.

File Management: Organizes and retrieves files.

Process Management: Controls running programs.

Memory Management: Allocates memory efficiently.

Device Management: Manages hardware devices.

Security: Protects data and system integrity.

To System Hardware:

Device Drivers: Facilitate software-hardware communication.

Interrupt Handling: Manages hardware signals.

Resource Allocation: Distributes CPU and memory resources.

Power Management: Regulates power usage.

Error Handling: Manages hardware errors.

Performance Monitoring: Tracks system performance.

These OS services ensure smooth operation and efficient utilization of system resources.

8. Provide two programming examples in which multithreading provides better performance than
a single-threaded solution.

We demonstrate the performance difference between a multithreaded and a single-threaded
approach using a simple CPU-bound task: calculating the sum of the elements in a large
array. We'll compare the execution time of both approaches. (A second, I/O-bound example
follows the code.)

Code:

import java.util.Random;

public class SumCalculator {

    private static final int ARRAY_SIZE = 100000000; // 100 million elements

    public static void main(String[] args) {

        int[] array = generateRandomArray(ARRAY_SIZE);

        // Measure execution time for the single-threaded approach
        long singleThreadStartTime = System.currentTimeMillis();
        long singleThreadResult = calculateSumSingleThread(array);
        long singleThreadEndTime = System.currentTimeMillis();
        System.out.println("Single-threaded sum: " + singleThreadResult);
        System.out.println("Single-threaded execution time: "
                + (singleThreadEndTime - singleThreadStartTime) + " ms");

        // Measure execution time for the multithreaded approach
        long multiThreadStartTime = System.currentTimeMillis();
        long multiThreadResult = calculateSumMultiThread(array);
        long multiThreadEndTime = System.currentTimeMillis();
        System.out.println("Multithreaded sum: " + multiThreadResult);
        System.out.println("Multithreaded execution time: "
                + (multiThreadEndTime - multiThreadStartTime) + " ms");
    }

    private static int[] generateRandomArray(int size) {
        int[] array = new int[size];
        Random random = new Random();
        for (int i = 0; i < size; i++) {
            array[i] = random.nextInt(100); // random numbers between 0 and 99
        }
        return array;
    }

    private static long calculateSumSingleThread(int[] array) {
        long sum = 0;
        for (int num : array) {
            sum += num;
        }
        return sum;
    }

    private static long calculateSumMultiThread(int[] array) {
        final int NUM_THREADS = Runtime.getRuntime().availableProcessors();
        long[] partialSums = new long[NUM_THREADS];
        Thread[] threads = new Thread[NUM_THREADS];

        for (int i = 0; i < NUM_THREADS; i++) {
            final int threadIndex = i;
            threads[i] = new Thread(() -> {
                // Each thread sums its own contiguous slice of the array;
                // the last thread also takes any leftover elements.
                int startIndex = threadIndex * (ARRAY_SIZE / NUM_THREADS);
                int endIndex = (threadIndex == NUM_THREADS - 1)
                        ? ARRAY_SIZE
                        : (threadIndex + 1) * (ARRAY_SIZE / NUM_THREADS);
                for (int j = startIndex; j < endIndex; j++) {
                    partialSums[threadIndex] += array[j];
                }
            });
            threads[i].start();
        }

        // Wait for all threads to finish
        try {
            for (Thread thread : threads) {
                thread.join();
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // Combine the partial sums
        long totalSum = 0;
        for (long partialSum : partialSums) {
            totalSum += partialSum;
        }
        return totalSum;
    }
}

In this program:

calculateSumSingleThread() calculates the sum of elements in the array using a single
thread.

calculateSumMultiThread() divides the array into equal parts and distributes the
summation task among multiple threads, each calculating the sum of its assigned portion
of the array. After all threads finish, their partial sums are combined to get the total sum.

By running this program, we'll observe that the multithreaded approach generally
completes the task faster than the single-threaded approach, demonstrating the
performance benefits of multithreading for computationally intensive tasks.
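
A second example where multithreading helps (a sketch; the 100 ms Thread.sleep stands in for a blocking network or disk operation): a server handling independent I/O-bound requests. A single thread must wait out each request in turn, roughly 800 ms for 8 requests, while a pool of threads overlaps the waits and finishes in roughly 100 ms.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class IoBoundDemo {

    // Simulated blocking I/O: the thread waits ~100 ms doing no CPU work.
    static void handleRequest(int id) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) throws InterruptedException {
        final int REQUESTS = 8;

        // Single-threaded: requests are served one after another (~800 ms).
        long t0 = System.currentTimeMillis();
        for (int i = 0; i < REQUESTS; i++) {
            handleRequest(i);
        }
        System.out.println("Single-threaded: " + (System.currentTimeMillis() - t0) + " ms");

        // Multithreaded: the waits overlap, so total time is ~100 ms.
        long t1 = System.currentTimeMillis();
        ExecutorService pool = Executors.newFixedThreadPool(REQUESTS);
        for (int i = 0; i < REQUESTS; i++) {
            final int id = i;
            pool.submit(() -> handleRequest(id));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("Multithreaded:   " + (System.currentTimeMillis() - t1) + " ms");
    }
}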

9. Provide two programming examples in which multithreading does not provide better
performance than a single-threaded solution.

Consider a task where you need to scan a large array for a target element. The work per
element is tiny, so splitting the scan among multiple threads adds thread-creation and
coordination overhead that may not be repaid by significant performance gains.

Let's compare the execution time of a single-threaded search versus a multithreaded
search for a target element in an array:
import java.util.Random;

public class SearchExample {

    private static final int ARRAY_SIZE = 100000000; // 100 million elements
    private static final int TARGET_ELEMENT = 99999999; // not in the array (values are 0-99),
                                                        // forcing a full worst-case scan

    public static void main(String[] args) {

        int[] array = generateRandomArray(ARRAY_SIZE);

        // Measure execution time for the single-threaded search
        long singleThreadStartTime = System.currentTimeMillis();
        boolean singleThreadResult = searchSingleThread(array, TARGET_ELEMENT);
        long singleThreadEndTime = System.currentTimeMillis();
        System.out.println("Single-threaded search result: " + singleThreadResult);
        System.out.println("Single-threaded execution time: "
                + (singleThreadEndTime - singleThreadStartTime) + " ms");

        // Measure execution time for the multithreaded search
        long multiThreadStartTime = System.currentTimeMillis();
        boolean multiThreadResult = searchMultiThread(array, TARGET_ELEMENT);
        long multiThreadEndTime = System.currentTimeMillis();
        System.out.println("Multithreaded search result: " + multiThreadResult);
        System.out.println("Multithreaded execution time: "
                + (multiThreadEndTime - multiThreadStartTime) + " ms");
    }

    private static int[] generateRandomArray(int size) {
        int[] array = new int[size];
        Random random = new Random();
        for (int i = 0; i < size; i++) {
            array[i] = random.nextInt(100); // random numbers between 0 and 99
        }
        return array;
    }

    private static boolean searchSingleThread(int[] array, int target) {
        for (int num : array) {
            if (num == target) {
                return true;
            }
        }
        return false;
    }

    private static boolean searchMultiThread(int[] array, int target) {
        final int NUM_THREADS = Runtime.getRuntime().availableProcessors();
        Thread[] threads = new Thread[NUM_THREADS];
        SearchTask[] tasks = new SearchTask[NUM_THREADS];
        int chunkSize = ARRAY_SIZE / NUM_THREADS;

        for (int i = 0; i < NUM_THREADS; i++) {
            int startIndex = i * chunkSize;
            int endIndex = (i == NUM_THREADS - 1) ? ARRAY_SIZE : (i + 1) * chunkSize;
            tasks[i] = new SearchTask(array, target, startIndex, endIndex);
            threads[i] = new Thread(tasks[i]);
            threads[i].start();
        }

        // Wait for all threads to finish
        try {
            for (Thread thread : threads) {
                thread.join();
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // Check whether any thread found the target
        for (SearchTask task : tasks) {
            if (task.isTargetFound()) {
                return true;
            }
        }
        return false;
    }

    private static class SearchTask implements Runnable {

        private final int[] array;
        private final int target;
        private final int startIndex;
        private final int endIndex;
        private boolean targetFound; // safely published to the caller by Thread.join()

        public SearchTask(int[] array, int target, int startIndex, int endIndex) {
            this.array = array;
            this.target = target;
            this.startIndex = startIndex;
            this.endIndex = endIndex;
        }

        @Override
        public void run() {
            // Scan only this task's chunk of the array.
            for (int i = startIndex; i < endIndex; i++) {
                if (array[i] == target) {
                    targetFound = true;
                    return;
                }
            }
        }

        public boolean isTargetFound() {
            return targetFound;
        }
    }
}

In this example:

searchSingleThread() performs a single-threaded search for the target element in the array.

searchMultiThread() divides the array into chunks and distributes the search task among
multiple threads. Each thread searches its assigned chunk for the target element.

We compare the execution time and the result of both approaches.

In scenarios like this, multithreading may not provide better performance than a
single-threaded solution: the overhead of creating, scheduling, and joining threads can
outweigh the benefits of parallel execution, especially when the work per element is
trivial and the scan is limited by memory bandwidth rather than by CPU.
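
A second example where multithreading does not help (a sketch): work dominated by updates to shared state. Every increment below must acquire the same lock, so additional threads mostly add lock contention and cache-line traffic, and the single-threaded run is typically as fast or faster.

public class ContentionDemo {

    private static final long INCREMENTS = 100_000_000L;
    private static final Object LOCK = new Object();
    private static long counter = 0;

    // Increments the shared counter under a lock the given number of times.
    private static void increment(long times) {
        for (long i = 0; i < times; i++) {
            synchronized (LOCK) {
                counter++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Single-threaded: the lock is never contended.
        long t0 = System.currentTimeMillis();
        increment(INCREMENTS);
        System.out.println("Single-threaded: " + (System.currentTimeMillis() - t0) + " ms");

        // Multithreaded: same total work, but every thread fights for one lock.
        counter = 0;
        int nThreads = Runtime.getRuntime().availableProcessors();
        Thread[] threads = new Thread[nThreads];
        long t1 = System.currentTimeMillis();
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(() -> increment(INCREMENTS / nThreads));
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("Multithreaded:   " + (System.currentTimeMillis() - t1) + " ms");
    }
}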

10. Suppose that the following processes arrive for execution at the times indicated. Each process
will run for the amount of time listed. In answering the questions, use nonpreemptive scheduling,
and base all decisions on the information you have at the time the decision must be made.

Process   Arrival Time   Burst Time
P1        0.0            8
P2        0.4            4
P3        1.0            1

a. What is the average turnaround time for these processes with the FCFS scheduling
algorithm?
We can calculate the completion time for each process using nonpreemptive FCFS
scheduling:

P1 arrives at time 0 and runs for 8 units, so it completes at time 8.


P2 arrives at time 0.4, but it can only start after P1 completes, so it starts at time 8 and runs
for 4 units, completing at time 12.

P3 arrives at time 1.0, but it can only start after P2 completes, so it starts at time 12 and
runs for 1 unit, completing at time 13.

Now, we can calculate the turnaround time for each process:

Turnaround time for P1 = Completion time of P1 - Arrival time of P1 = 8 - 0 = 8 units

Turnaround time for P2 = Completion time of P2 - Arrival time of P2 = 12 - 0.4 = 11.6 units

Turnaround time for P3 = Completion time of P3 - Arrival time of P3 = 13 - 1 = 12 units

Finally, we calculate the average turnaround time:

Average Turnaround Time = (Turnaround time of P1 + Turnaround time of P2 + Turnaround
time of P3) / Total number of processes

Average Turnaround Time = (8 + 11.6 + 12) / 3 = 31.6 / 3 ≈ 10.53 units

So, the average turnaround time for these processes with the FCFS scheduling algorithm is
approximately 10.53 units.

b. What is the average turnaround time for these processes with the SJF scheduling
algorithm?

In SJF scheduling, the shortest available job is executed first. Since scheduling is
nonpreemptive and decisions are based only on the information available at the time, the
scheduler cannot anticipate jobs that have not yet arrived.

At time 0, only P1 has arrived, so P1 starts and runs for 8 units, completing at time 8.

At time 8, both P2 (burst 4) and P3 (burst 1) are waiting. P3 is shorter, so it runs from
time 8 to time 9.

P2 then runs from time 9 for 4 units, completing at time 13.

Now, we calculate the turnaround time for each process:

Turnaround time for P1 = Completion time of P1 - Arrival time of P1 = 8 - 0 = 8 units

Turnaround time for P3 = Completion time of P3 - Arrival time of P3 = 9 - 1 = 8 units

Turnaround time for P2 = Completion time of P2 - Arrival time of P2 = 13 - 0.4 = 12.6 units

Finally, we calculate the average turnaround time:

Average Turnaround Time = (Turnaround time of P1 + Turnaround time of P3 + Turnaround
time of P2) / Total number of processes

Average Turnaround Time = (8 + 8 + 12.6) / 3 = 28.6 / 3 ≈ 9.53 units

So, the average turnaround time for these processes with the SJF scheduling algorithm is
approximately 9.53 units.

c. The SJF algorithm is supposed to improve performance but notice that we chose to run
process P1 at time 0 because we did not know that two shorter processes would arrive soon.
Compute what the average turnaround time will be if the CPU is left idle for the first 1 unit
and then SJF scheduling is used. Remember that processes P1 and P2 are waiting during
this idle time, so their waiting time may increase. This algorithm could be called
future-knowledge scheduling.

To compute the average turnaround time if the CPU is left idle for the first 1 unit and then
SJF scheduling is used, we need to consider the arrival time and burst time of each
process, as well as the waiting time for each process.

Since the CPU is left idle for the first 1 unit, processes P1 and P2 will be waiting during this
time, increasing their waiting time.

After the idle time, we apply SJF scheduling to determine the order of execution based on
burst time.
We have:

P3 with burst time 1.

P2 with burst time 4.

P1 with burst time 8.

Now, we calculate the completion time for each process:

P3 arrives at time 1.0 and runs for 1 unit, so it completes at time 2.

P2 arrives at time 0.4, but it can only start after P3 completes, so it starts at time 2 and runs
for 4 units, completing at time 6.

P1 arrives at time 0.0, but it can only start after P2 completes, so it starts at time 6 and runs
for 8 units, completing at time 14.

Now, we calculate the turnaround time for each process:

Turnaround time for P3 = Completion time of P3 - Arrival time of P3 = 2 - 1 = 1 unit

Turnaround time for P2 = Completion time of P2 - Arrival time of P2 = 6 - 0.4 = 5.6 units

Turnaround time for P1 = Completion time of P1 - Arrival time of P1 = 14 - 0 = 14 units

Finally, we calculate the average turnaround time:

Average Turnaround Time = (Turnaround time of P3 + Turnaround time of P2 + Turnaround
time of P1) / Total number of processes

Average Turnaround Time = (1 + 5.6 + 14) / 3 = 20.6 / 3 ≈ 6.87 units

So, with future-knowledge scheduling (leaving the CPU idle for the first unit and then
using SJF), the average turnaround time is approximately 6.87 units, an improvement over
the 9.53 units of ordinary SJF. Deliberately idling pays off because both short processes
get to run before the long P1, even though P1 and P2 accumulate extra waiting time during
the idle unit.
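
These figures are easy to verify programmatically. The following Java sketch (our own helper class, not part of the original exercise) replays each nonpreemptive schedule and prints the average turnaround times:

public class SchedulingDemo {

    // Each process is {arrival time, burst time}: P1, P2, P3 from the table above.
    private static final double[][] PROCS = { {0.0, 8}, {0.4, 4}, {1.0, 1} };

    // Runs the processes nonpreemptively in the given order, starting no
    // earlier than startTime, and returns the average turnaround time.
    private static double avgTurnaround(int[] order, double startTime) {
        double clock = startTime;
        double totalTurnaround = 0;
        for (int p : order) {
            clock = Math.max(clock, PROCS[p][0]); // cannot start before arrival
            clock += PROCS[p][1];                 // run to completion
            totalTurnaround += clock - PROCS[p][0];
        }
        return totalTurnaround / order.length;
    }

    public static void main(String[] args) {
        System.out.printf("FCFS:               %.2f%n", avgTurnaround(new int[] {0, 1, 2}, 0.0)); // ~10.53
        System.out.printf("SJF (no lookahead): %.2f%n", avgTurnaround(new int[] {0, 2, 1}, 0.0)); // ~9.53
        System.out.printf("Future knowledge:   %.2f%n", avgTurnaround(new int[] {2, 1, 0}, 1.0)); // ~6.87
    }
}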

Sources:

Encyclopedia Britannica: Overview of operating systems and the microkernel approach
https://www.britannica.com/technology/operating-system

University of Rhode Island: Detailed discussion of microkernel architecture and its challenges (Front)
https://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading07.htm

Ajamias Github Pages: Practical implications and development challenges of microkernels
https://ajamias.github.io/openos/textbook/intro/purpose.html

StepofWeb: System utilities and the three main purposes of an operating system
https://stepofweb.com/what-are-the-three-main-purposes-of-an-operating-system-briefly-explain-each-purpose/


PhoenixNAP: System Calls in Operating Systems
https://phoenixnap.com/kb/system-call

GeeksforGeeks: Different Types of System Calls in OS
https://www.geeksforgeeks.org/different-types-of-system-calls-in-os/

Tutorialspoint: What Are System Calls in Operating Systems
https://www.tutorialspoint.com/what-are-system-calls-in-operating-system

Operating Systems: Three Easy Pieces
https://pages.cs.wisc.edu/~remzi/OSTEP/
