
MAHARAJA INSTITUTE OF TECHNOLOGY THANDAVAPURA

(Approved by AICTE, New Delhi and Affiliated to VTU, Belagavi)

Department of AI and ML
Subject Name
Introduction to Embedded System
Sub code: BETCK205J

Prepared by
Prof. Poornima H

MIT, Thandavapura
Module-5
Real-Time Operating System (RTOS) Based Embedded System Design
1. Operating System Basics:
The operating system acts as a bridge between the user applications/tasks and the
underlying system resources through a set of system functionalities and services. The OS
manages the system resources and makes them available to the user applications/tasks on a
need basis. A normal computing system is a collection of different I/O subsystems and working and storage memory. The primary functions of an operating system are to:
 Make the system convenient to use
 Organise and manage the system resources efficiently and correctly
Figure 10.1 gives an insight into the basic components of an operating system and their interfaces with the rest of the world.

1.1 The Kernel


 The kernel is the core of the operating system and is responsible for managing the
system resources and the communication among the hardware and other system
services.
 Kernel acts as the abstraction layer between system resources and user applications.
Kernel contains a set of system libraries and services.
 For a general purpose OS, the kernel contains different services for handling the
following.

Process Management:
Process management deals with managing the processes/tasks. Process management includes
setting up the memory space for the process, loading the process’s code into the memory space,
allocating system resources, scheduling and managing the execution of the process, setting up
and managing the Process Control Block (PCB), Inter Process Communication and
synchronisation, process termination/ deletion, etc. We will look into the description of process
and process management in a later section of this chapter.

Primary Memory Management:


The term primary memory refers to the volatile memory (RAM) where processes are
loaded and variables and shared data associated with each process are stored.
The Memory Management Unit (MMU) of the kernel is responsible for
 Keeping track of which part of the memory area is currently used by which process
 Allocating and De-allocating memory space on a need basis (Dynamic memory
allocation).

File System Management:
File is a collection of related information. A file could be a program (source code or executable), text files, image files, word documents, audio/video files, etc. Each of these files differs in the kind of information it holds and the way in which the information is stored. The file operation is a useful service provided by the OS. The file system management service of the kernel is responsible for
 The creation, deletion and alteration of files
 Creation, deletion and alteration of directories
 Saving of files in the secondary storage memory (e.g. Hard disk storage)
 Providing automatic allocation of file space based on the amount of free space available
 Providing a flexible naming convention for the files
The various file system management operations are OS dependent. For example, the kernel of
Microsoft® DOS OS supports a specific set of file system management operations and they
are not the same as the file system operations supported by UNIX Kernel.

I/O System (Device) Management:


 Kernel is responsible for routing the I/O requests coming from different user applications to the appropriate I/O devices of the system. In a well-structured OS, direct accessing of I/O devices is not allowed; access to them is provided through a set of Application Programming Interfaces (APIs) exposed by the kernel.
 The kernel maintains a list of all the I/O devices of the system. This list may be available in advance, at the time of building the kernel. Some kernels dynamically update the list of available devices as and when a new device is installed (e.g. the Windows NT kernel keeps the list updated when a new plug ‘n’ play USB device is attached to the system). The service ‘Device Manager’ (the name may vary across different OS kernels) of the kernel is responsible for handling all I/O device related operations.
 The kernel talks to the I/O devices through a set of low-level system calls, which are implemented in services called device drivers. The device drivers are specific to a device or a class of devices. The Device Manager is responsible for
 Loading and unloading of device drivers
 Exchanging information and the system specific control signals to and from the device
Secondary Storage Management:
The secondary storage management deals with managing the secondary storage memory
devices, if any, connected to the system. Secondary memory is used as backup medium for
programs and data since the main memory is volatile. In most of the systems, the secondary
storage is kept in disks (Hard Disk). The secondary storage management service of kernel deals
with
 Disk storage allocation
 Disk scheduling (Time interval at which the disk is activated to backup data)
 Free Disk space management

Protection Systems:
 Most of the modern operating systems are designed in such a way to support multiple
users with different levels of access permissions (e.g. Windows 10 with user
permissions like ‘Administrator’, ‘Standard’, ‘Restricted’, etc.).

 Protection deals with implementing the security policies to restrict the access to both
user and system resources by different applications or processes or users. In multiuser
supported operating systems, one user may not be allowed to view or modify the
whole/portions of another user’s data or profile details.
 In addition, some applications may not be granted permission to make use of some of the system resources. This kind of protection is provided by the protection services running within the kernel.

Interrupt Handler:
 Kernel provides handler mechanisms for all external/internal interrupts generated by the system. These are some of the important services offered by the kernel of an operating system. This does not mean that a kernel contains no components/services other than those explained above.
 Depending on the type of the operating system, a kernel may contain fewer or more components/services. In addition to the components/services listed above, many operating systems offer a number of add-on system components/services to the kernel.
 Network communication, network management, user-interface graphics, timer services (delays, timeouts, etc.), error handlers, database management, etc. are examples of such components/services. The kernel exposes the interfaces to the various kernel applications/services, hosted by the kernel, to the user applications through a set of standard Application Programming Interfaces (APIs). User applications can avail these API calls to access the various kernel applications/services.

1.1.1 Kernel Space and User Space


 As we discussed in the earlier section, the applications/services are classified into two
categories, namely: user applications and kernel applications.
 The program code corresponding to the kernel applications/services is kept in a contiguous area (OS dependent) of primary (working) memory and is protected from unauthorised access by user programs/applications. The memory space at which the kernel code is located is known as ‘Kernel Space’. Similarly, all user applications are loaded to a specific area of primary memory and this memory area is referred to as ‘User Space’.
 User space is the memory area where user applications are loaded and executed. The partitioning of memory into kernel and user space is purely Operating System dependent. Some Operating Systems implement this kind of partitioning and protection, whereas others do not segregate the kernel and user application code storage into two separate areas.
 In an operating system with virtual memory support, the user applications are loaded into their corresponding virtual memory space using the demand paging technique; meaning, the entire code for a user application need not be loaded to the main (primary) memory at once; instead, the user application code is split into different pages and these pages are loaded into and out of the main memory area on a need basis.
 The act of loading the code into and out of the main memory is termed as ‘Swapping’.
Swapping happens between the main (primary) memory and secondary storage
memory.
 Each process runs in its own virtual memory space and is not allowed to access the memory space corresponding to another process, unless explicitly requested by the process. Each process will have certain privilege levels on accessing the memory of other processes and, based on the privilege settings, a process can request the kernel to map another process’s memory to its own or share it through some other mechanism.
 Most of the operating systems keep the kernel application code in main memory and it
is not swapped out into the secondary memory.

1.1.2 Monolithic Kernel and Microkernel: As we know, the kernel forms the heart of an
operating system. Different approaches are adopted for building an Operating System kernel.
Based on the kernel design, kernels can be classified into ‘Monolithic’ and ‘Micro’.
Monolithic Kernel
In monolithic kernel architecture, all kernel services run in the kernel space. Here all
kernel modules run within the same memory space under a single kernel thread. The tight
internal integration of kernel modules in monolithic kernel architecture allows the effective
utilisation of the low-level features of the underlying system. The major drawback of a monolithic kernel is that any error or failure in any one of the kernel modules leads to the crashing of the entire kernel application. LINUX, SOLARIS and MS-DOS kernels are examples of monolithic kernels. The architecture representation of a monolithic kernel is given in Fig. 10.2.

Microkernel:
 The microkernel design incorporates only the essential set of Operating System services into the kernel. The rest of the Operating System services are implemented in programs known as ‘Servers’, which run in user space.
 This provides a highly modular design and an OS-neutral abstraction to the kernel. Memory management, process management, timer systems and interrupt handlers are the essential services which form part of the microkernel. Mach, QNX and Minix 3 kernels are examples of microkernels. The architecture representation of a microkernel is shown in Fig. 10.3. The microkernel based design approach offers the following benefits:
 Robustness: If a problem is encountered in any of the services which run as ‘Server’ applications, the same can be reconfigured and re-started without the need for re-starting the entire OS. Thus, this approach is highly useful for systems which demand high ‘availability’. Refer Chapter 3 to get an understanding of ‘availability’. Since the services which run as ‘Servers’ are running in a different memory space, the chances of corruption of kernel services are ideally zero.
 Configurability: Any service which runs as a ‘Server’ application can be changed without the need to restart the whole system. This makes the system dynamically configurable.

TYPES OF OPERATING SYSTEMS
1. General Purpose Operating System (GPOS):
The operating systems which are deployed in general computing systems are referred to as General Purpose Operating Systems (GPOS). The kernel of such an OS is more generalised and contains all kinds of services required for executing generic applications. General-purpose operating systems are often quite non-deterministic in behaviour. Their services can inject random delays into application software and may cause slow responsiveness of an application at unexpected times. GPOSs are usually deployed in computing systems where deterministic behaviour is not an important criterion. A Personal Computer/Desktop system is a typical example of a system where a GPOS is deployed. Windows 10/8.x/XP, MS-DOS, etc. are examples of General Purpose Operating Systems.

2. Real-Time Operating System (RTOS)


There is no universal definition available for the term ‘Real-Time’ when it is used in
conjunction with operating systems. What ‘Real-Time’ means in Operating System context is
still a debatable topic and there are many definitions available. In a broad sense, ‘Real-Time’ implies deterministic timing behaviour. Deterministic timing behaviour in the RTOS context means that the OS services consume only known and expected amounts of time, regardless of the number of services. A Real-Time Operating System or RTOS implements policies and rules concerning time-critical allocation of a system’s resources. The RTOS decides which applications should run in which order and how much time needs to be allocated for each application. Predictable performance is the hallmark of a well-designed RTOS. This is best achieved by the consistent application of policies and rules. Policies guide the design of an RTOS. Rules implement those policies and resolve policy conflicts. Windows Embedded Compact, QNX, VxWorks, MicroC/OS-II, etc. are examples of Real-Time Operating Systems (RTOS).
2.1 The Real-Time Kernel
The kernel of a Real-Time Operating System is referred to as the Real-Time kernel. In contrast to the conventional OS kernel, the Real-Time kernel is highly specialised and contains only the minimal set of services required for running the user applications/tasks. The basic functions of a Real-Time kernel are listed below:
 Task/Process management
 Task/Process scheduling
 Task/Process synchronisation
 Error/Exception handling

 Memory management
 Interrupt handling
 Time management
2.2 Task/ Process management
Deals with setting up the memory space for the tasks, loading the task’s code into the memory
space, allocating system resources, setting up a Task Control Block (TCB) for the task and
task/process termination/deletion. A Task Control Block (TCB) is used for holding the
information corresponding to a task. TCB usually contains the following set of information.
Task ID: Task Identification Number
Task State: The current state of the task (e.g. State = ‘Ready’ for a task which is ready to
execute)
Task Type: Indicates the type of the task: a hard real-time, soft real-time or background task.
Task Priority: The priority of the task (e.g. Task Priority = 1 for a task with priority 1)
Task Context Pointer: Context pointer. Pointer for context saving
Task Memory Pointers: Pointers to the code memory, data memory and stack memory for the
task
Task System Resource Pointers: Pointers to system resources (semaphores, mutex, etc.) used
by the task
Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
Other Parameters: Other relevant task parameters. The parameters and implementation of the TCB are kernel dependent. The TCB parameters vary across different kernels, based on the task management implementation. The task management service utilises the TCB of a task in the following ways (a representative TCB structure is sketched after this list):
 Creates a TCB for a task on creating the task
 Deletes/removes the TCB of a task when the task is terminated or deleted
 Reads the TCB to get the state of a task
 Updates the TCB with updated parameters on a need basis (e.g. on a context switch)
 Modifies the TCB to change the priority of the task dynamically
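A minimal C sketch of how a TCB might be laid out is given below. The field names and types are purely illustrative, since the actual TCB layout is kernel dependent:

    /* Illustrative Task Control Block; actual fields are kernel dependent. */
    typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_COMPLETED } task_state_t;
    typedef enum { TASK_HARD_RT, TASK_SOFT_RT, TASK_BACKGROUND } task_type_t;

    typedef struct tcb {
        unsigned int  task_id;       /* Task Identification Number               */
        task_state_t  task_state;    /* Current state of the task                */
        task_type_t   task_type;     /* Hard real-time/soft real-time/background */
        unsigned int  task_priority; /* Task priority                            */
        void         *context_ptr;   /* Pointer for context saving               */
        void         *code_ptr;      /* Pointer to code memory                   */
        void         *data_ptr;      /* Pointer to data memory                   */
        void         *stack_ptr;     /* Pointer to stack memory                  */
        void         *resources;     /* Semaphores, mutexes, etc. used by task   */
        struct tcb   *prev, *next;   /* TCBs of preceding and next tasks         */
    } tcb_t;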

2.3 Task/ Process Scheduling:


Deals with sharing the CPU among various tasks/processes. A kernel application called the ‘Scheduler’ handles the task scheduling. The Scheduler is an implementation of a scheduling algorithm, which performs efficient and optimal scheduling of tasks to provide deterministic behaviour. We will discuss the various types of scheduling in a later section of this chapter.

2.4 Task/ Process Synchronisation:


Deals with synchronising the concurrent access of a resource, which is shared across
multiple tasks and the communication between various tasks. We will discuss the various
synchronisation techniques and inter task /process communication in a later section of this
chapter.

2.5 Error/ Exception Handling:


 Deals with registering and handling the errors/exceptions that occur during the execution of tasks. Insufficient memory, timeouts, deadlocks, deadline misses, bus errors, divide by zero, unknown instruction execution, etc. are examples of errors/exceptions.

 Errors/Exceptions can happen at the kernel level services or at the task level. Deadlock is an example of a kernel level exception, whereas timeout is an example of a task level exception. The OS kernel gives information about the error through a system call (API). The GetLastError() API provided by the Windows CE/Embedded Compact RTOS is an example of such a system call. A watchdog timer is a mechanism for handling timeouts for tasks.
 Certain tasks may involve waiting for external events from devices. Such a task will wait indefinitely when the external device is not responding, and the task will exhibit hang-up behaviour. In order to avoid these types of scenarios, a proper timeout mechanism should be implemented.
 A watchdog is normally used in such situations. The watchdog is loaded with the maximum expected wait time for the event; if the event is not triggered within this wait time, the same is informed to the task and the task is timed out. If the event happens before the timeout, the watchdog is reset. A sketch of this pattern is shown below.
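The following C sketch shows this watchdog timeout pattern. All the function names here are hypothetical stand-ins, since the actual watchdog and event-wait APIs are kernel- and hardware-specific:

    /* Hypothetical primitives; actual APIs are kernel/hardware specific. */
    #define MAX_WAIT_MS 500                /* Maximum expected wait time for the event */
    extern void wdt_load(unsigned int ms); /* Arm the watchdog                         */
    extern void wdt_reset(void);           /* Kick (reset) the watchdog                */
    extern int  wait_for_event(unsigned int timeout_ms); /* 1 = event, 0 = timed out  */
    extern void handle_timeout(void);      /* Inform the task it has timed out         */

    void device_wait_task(void)
    {
        wdt_load(MAX_WAIT_MS);             /* Load watchdog with the maximum wait time */
        if (wait_for_event(MAX_WAIT_MS)) {
            wdt_reset();                   /* Event arrived in time; reset watchdog    */
            /* ... process the event ... */
        } else {
            handle_timeout();              /* Watchdog fired; the task is timed out    */
        }
    }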
2.6 Memory Management
 Compared to General Purpose Operating Systems, the memory management function of an RTOS kernel is slightly different. In general, the memory allocation time increases depending on the size of the memory block to be allocated and the state of the allocated memory block (an initialised memory block consumes more allocation time than an un-initialised memory block).
 Since predictable timing and deterministic behaviour are the primary focus of an RTOS, the RTOS achieves this by compromising the effectiveness of memory allocation. An RTOS makes use of a ‘block’ based memory allocation technique, instead of the usual dynamic memory allocation techniques used by a GPOS.
 The RTOS kernel uses fixed-size blocks of dynamic memory, and a block is allocated for a task on a need basis.
 The blocks are stored in a ‘Free Buffer Queue’. To achieve predictable timing and avoid
the timing overheads, most of the RTOS kernels allow tasks to access any of the
memory blocks without any memory protection.
 RTOS kernels assume that the whole design is proven correct and protection is
unnecessary.
 Some commercial RTOS kernels allow memory protection as optional and the kernel
enters a fail-safe mode when an illegal memory access occurs.
 A few RTOS kernels implement the Virtual Memory concept for memory allocation if the system supports secondary memory storage (like HDD and FLASH memory). In ‘block’ based memory allocation, a fixed-size block of memory is always allocated for a task on a need basis and is treated as a unit. Hence, there will not be any memory fragmentation issues.
 The memory allocation can be implemented as constant functions, thereby consuming a fixed amount of time for memory allocation. This leaves the deterministic behaviour of the RTOS kernel untouched. The ‘block’ memory concept also avoids garbage collection overhead. (We will explore this technique under the MicroC/OS-II kernel in a later chapter.) ‘Block’ based memory allocation achieves deterministic behaviour with the trade-off of a limited choice of memory chunk sizes and suboptimal memory usage.
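A minimal sketch of such a block-based allocator is given below (the block size and count are arbitrary choices for illustration). Free blocks are kept in a singly linked ‘free buffer queue’, so both allocation and release are constant-time operations and no fragmentation can occur:

    #include <stddef.h>

    #define BLOCK_SIZE   64                            /* Block size in bytes        */
    #define BLOCK_COUNT  32                            /* Number of blocks in pool   */

    /* Pool declared as arrays of void* to keep each block pointer-aligned. */
    static void *pool[BLOCK_COUNT][BLOCK_SIZE / sizeof(void *)];
    static void *free_list = NULL;                     /* Head of free buffer queue  */

    void pool_init(void)                               /* Chain all blocks together  */
    {
        for (int i = 0; i < BLOCK_COUNT; i++) {
            pool[i][0] = free_list;                    /* First word links to next   */
            free_list = pool[i];
        }
    }

    void *block_alloc(void)                            /* O(1): pop from free list   */
    {
        void *block = free_list;
        if (block != NULL)
            free_list = *(void **)block;
        return block;
    }

    void block_free(void *block)                       /* O(1): push onto free list  */
    {
        *(void **)block = free_list;
        free_list = block;
    }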

2.7 Interrupt Handling:
 Deals with the handling of various types of interrupts. Interrupts provide Real-Time
behaviour to systems. Interrupts inform the processor that an external device or an
associated task requires immediate attention of the CPU.
 Interrupts can be either Synchronous or Asynchronous. Interrupts which occur in sync with the currently executing task are known as Synchronous interrupts. Usually software interrupts fall under the Synchronous Interrupt category.
 Divide by zero, memory segmentation error, etc. are examples of synchronous interrupts. For synchronous interrupts, the interrupt handler runs in the same context as the interrupting task. Asynchronous interrupts are interrupts which occur at any point of execution of any task and are not in sync with the currently executing task.
 The interrupts generated by external devices connected to the processor/controller (by asserting the interrupt line of the processor/controller to which the interrupt line of the device is connected), timer overflow interrupts, serial data reception/transmission interrupts, etc. are examples of asynchronous interrupts.
 For asynchronous interrupts, the interrupt handler is usually written as a separate task (depending on the OS kernel implementation) and it runs in a different context. Hence, a context switch happens while handling asynchronous interrupts.
 Priority levels can be assigned to the interrupts and each interrupt can be enabled or disabled individually. Most RTOS kernels implement a ‘Nested Interrupts’ architecture. Interrupt nesting allows the pre-emption (interruption) of an Interrupt Service Routine (ISR), servicing an interrupt, by a higher priority interrupt.

2.8 Time Management:


 Accurate time management is essential for providing precise time reference for all
applications. The time reference to kernel is provided by a high-resolution Real-Time
Clock (RTC) hardware chip (hardware timer).
 The hardware timer is programmed to interrupt the processor/controller at a fixed rate. This timer interrupt is referred to as the ‘Timer tick’. The ‘Timer tick’ is taken as the timing reference by the kernel. The ‘Timer tick’ interval may vary depending on the hardware timer.
 Usually the ‘Timer tick’ varies in the microseconds range. The time parameters for tasks are expressed as multiples of the ‘Timer tick’. The System time is updated based on the ‘Timer tick’. If the System time register is 32 bits wide and the ‘Timer tick’ interval is 1 microsecond, the System time register will reset in 2^32 × 10^-6 seconds ≈ 4295 seconds ≈ 1.19 hours.
The ‘Timer tick’ interrupt is handled by the ‘Timer Interrupt’ handler of the kernel. The ‘Timer tick’ interrupt can be utilised for implementing the following actions (a C sketch of such a handler follows the list).
 Save the current context (Context of the currently executing task).
 Increment the System time register by one. Generate timing error and reset the System
time register if the timer tick count is greater than the maximum range available for
System time register.
 Update the timers implemented in kernel (Increment or decrement the timer registers
for each timer depending on the count direction setting for each register. Increment
registers with count direction setting = ‘count up’ and decrement registers with count
direction setting = ‘count down’).
 Activate the periodic tasks, which are in the idle state.
 Invoke the scheduler and schedule the tasks again based on the scheduling algorithm.
 Delete all the terminated tasks and their associated data structures (TCBs).
 Load the context for the first task in the ready queue. Due to the re-scheduling, the ready task might be changed to a new one from the task which was pre-empted by the ‘Timer Interrupt’.
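The actions above map naturally onto a tick ISR. The following C sketch illustrates the flow; all helper functions are hypothetical stand-ins for kernel-specific routines:

    /* Hypothetical kernel primitives; names are illustrative only. */
    extern void *current_task(void);
    extern void *ready_queue_head(void);
    extern void save_context(void *task);
    extern void load_context(void *task);
    extern void update_kernel_timers(void);
    extern void activate_periodic_tasks(void);
    extern void schedule(void);
    extern void delete_terminated_tasks(void);

    volatile unsigned long system_time;    /* System time register, in timer ticks */

    void timer_tick_isr(void)
    {
        save_context(current_task());      /* Save context of the running task     */
        system_time++;                     /* Increment the System time register   */
        update_kernel_timers();            /* Count kernel timers up/down as set   */
        activate_periodic_tasks();         /* Move idle periodic tasks to Ready    */
        schedule();                        /* Re-run the scheduling algorithm      */
        delete_terminated_tasks();         /* Reclaim TCBs of terminated tasks     */
        load_context(ready_queue_head());  /* Resume the first task in Ready queue */
    }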
Apart from these basic functions, some RTOSs provide other functionalities as well (examples are file management and network functions). Some RTOS kernels provide options for selecting the required kernel functions at the time of building the kernel. The user can pick the required functions from the set of available functions and compile the same to generate the kernel binary. Windows CE is a typical example of such an RTOS. While building the target, the user can select the required components for the kernel.

2.9 Hard Real-Time

 Real-Time Operating Systems that strictly adhere to the timing constraints for a task are referred to as ‘Hard Real-Time’ systems. A Hard Real-Time system must meet the deadlines for a task without any slippage.
 Missing any deadline may produce catastrophic results for Hard Real-Time Systems, including permanent data loss and irrecoverable damage to the system/users. Hard Real-Time systems emphasise the principle ‘A late answer is a wrong answer’. A system can have several such tasks and the key to their correct operation lies in scheduling them so that they meet their time constraints.
 Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles are typical examples of Hard Real-Time Systems. The Air bag control system should swing into action and deploy the air bags when the vehicle meets a severe accident. Ideally speaking, the time for triggering the air bag deployment task, when an accident is sensed by the Air bag control system, should be zero, and the air bags should be deployed exactly within the time frame predefined for the air bag deployment task.
 Any delay in the deployment of the air bags puts the lives of the passengers under threat. When the air bag deployment task is triggered, the currently executing task must be pre-empted, the air bag deployment task should be brought into execution, and the necessary I/O systems should be made readily available for the air bag deployment task.
 To meet the strict deadline, the time between the air bag deployment event triggering and the start of the air bag deployment task execution should be minimum, ideally zero. As a rule of thumb, Hard Real-Time Systems do not implement the virtual memory model for handling memory.
 This eliminates the delay in swapping the code corresponding to the task in and out of the primary memory. In general, the presence of a Human in the Loop (HITL) for tasks introduces unexpected delays in task execution. Most Hard Real-Time Systems are automatic and do not contain a ‘human in the loop’.

2.10 Soft Real-Time


 Real-Time Operating Systems that do not guarantee meeting deadlines, but offer a best effort to meet them, are referred to as ‘Soft Real-Time’ systems.
 Missing deadlines for tasks is acceptable for a Soft Real-Time system if the frequency of deadline misses is within the compliance limit of the Quality of Service (QoS).

 A Soft Real-Time system emphasises the principle ‘A late answer is an acceptable answer, but it could have been done a bit faster’. Soft Real-Time systems most often have a ‘human in the loop (HITL)’. An Automatic Teller Machine (ATM) is a typical example of a Soft Real-Time System.
 If the ATM takes a few seconds more than the ideal operation time, nothing fatal happens. An audio-video playback system is another example of a Soft Real-Time system. No potential damage arises if a sample arrives a fraction of a second late for playback.

3. TASKS, PROCESS AND THREADS


3.1 Tasks
 The term ‘task’ refers to something that needs to be done. In our day-to-day life, we
are bound to the execution of a number of tasks.
 The task can be the one assigned by our managers or the one assigned by our
professors/teachers or the one related to our personal or family needs. In addition, we
will have an order of priority and schedule/timeline for executing these tasks.
 In the operating system context, a task is defined as the program in execution and the related information maintained by the operating system for the program.
 A task is also known as a ‘Job’ in the operating system context. A program, or part of it, in execution is also called a ‘Process’. The terms ‘Task’, ‘Job’ and ‘Process’ refer to the same entity in the operating system context and most often they are used interchangeably.

3.2 Process
 A ‘Process’ is a program, or part of it, in execution.
 Process is also known as an instance of a program in execution. Multiple instances of
the same program can execute simultaneously.
 A process requires various system resources like CPU for executing the process,
memory for storing the code corresponding to the process and associated variables, I/O
devices for information exchange, etc. A process is sequential in execution.
The Structure of a Process
 The concept of ‘Process’ leads to concurrent execution (pseudo parallelism) of tasks
and thereby the efficient utilisation of the CPU and other system resources.
 Concurrent execution is achieved through the sharing of CPU among the processes.
 A process mimics a processor in properties and holds a set of registers, process status,
a Program Counter (PC) to point to the next executable instruction of the process, a
stack for holding the local variables associated with the process and the code
corresponding to the process.
 This can be visualised as shown in Fig. 10.4. A process which inherits all the properties
of the CPU can be considered as a virtual processor, awaiting its turn to have its
properties switched into the physical processor.
 When the process gets its turn, its registers and the program counter register become mapped to the physical registers of the CPU.
 From a memory perspective, the memory occupied by the process is segregated into
three regions, namely, Stack memory, Data memory and Code memory (Fig. 10.5).
 The ‘Stack’ memory holds all temporary data such as variables local to the process.
Data memory holds all global data for the process.
 The code memory contains the program code (instructions) corresponding to the
process.
 On loading a process into the main memory, a specific area of memory is allocated for the process. The stack memory usually starts (OS kernel implementation dependent) at the highest memory address of the memory area allocated for the process.
 Say, for example, the memory map of the memory area allocated for the process is 2048 to 2100; then the stack memory starts at address 2100 and grows downwards to accommodate the variables local to the process.
3.2.1 Process States and State Transition
 The journey of a process from its creation to its termination is not a single step operation. The process traverses through a series of states during its transition from the newly created state to the terminated state.
 The cycle through which a process changes its state from ‘newly created’ to ‘execution completed’ is known as the ‘Process Life Cycle’.
 The various states through which a process traverses during a Process Life Cycle indicate the current status of the process with respect to time and also provide information on what it is allowed to do next.

 Figure 10.6 represents the various states associated with a process. The state at which a process is being created is referred to as the ‘Created State’.
 The Operating System recognises a process in the ‘Created State’, but no resources are allocated to the process. The state where a process is incepted into the memory and awaiting the processor time for execution is known as the ‘Ready State’.
 At this stage, the process is placed in the ‘Ready list’ queue maintained by the OS. The state wherein the source code instructions corresponding to the process are being executed is called the ‘Running State’.

 Running state is the state at which the process execution happens. ‘Blocked State/Wait State’ refers to a state where a running process is temporarily suspended from execution and does not have immediate access to resources.
 The state where the process completes its execution is known as the ‘Completed State’. The transition of a process from one state to another is known as ‘State transition’. When a process changes its state from Ready to Running, or from Running to Blocked or Terminated, or from Blocked to Running, the CPU allocation for the process may also change.
 It should be noted that the state representation for a process/task mentioned here is a generic representation. The states associated with a task may be known by different names, or there may be more or fewer states than the ones explained here, under different OS kernels.
 For example, under VxWorks’ kernel, the tasks may be in either one or a specific
combination of the states READY, PEND, DELAY and SUSPEND. The PEND state
represents a state where the task/process is blocked on waiting for I/O or system
resource.
 The DELAY state represents a state in which the task/process is sleeping and the
SUSPEND state represents a state where a task/process is temporarily suspended from
execution and not available for execution.

Process Management

Process management deals with the creation of a process, setting up the memory space
for the process, loading the process’s code into the memory space, allocating system resources,
setting up a Process Control Block (PCB) for the process and process termination/deletion. For
more details on Process Management, refer to the section ‘Task/Process management’ given
under the topic ‘The Real-Time Kernel’ of this chapter.

3.3 Threads
 A thread is the primitive that can execute code.
 A thread is a single sequential flow of control within a process.
 ‘Thread’ is also known as lightweight process.
A process can have many threads of execution. Different threads, which are part of a process,
share the same address space; meaning they share the data memory, code memory and heap
memory area.

 Threads maintain their own thread status (CPU register values), Program Counter (PC) and stack. The memory model for a process and its associated threads is given in Fig. 10.7.

The Concept of Multithreading
 A process/task in an embedded application may be a complex or lengthy one, and it may contain various sub-operations like getting input from I/O devices connected to the processor, performing some internal calculations/operations, updating some I/O devices, etc.
 If all the sub-functions of a task are executed in sequence, the CPU utilisation may not be efficient.
 For example, if the process is waiting for a user input, the CPU enters the wait state for the event, and the process execution also enters a wait state.
 Instead of this single sequential execution of the whole process, if the task/process is split into different threads carrying out the different sub-functionalities of the process, the CPU can be effectively utilised: when the thread corresponding to the I/O operation enters the wait state, other threads which do not require the I/O event for their operation can be switched into execution.
 This leads to speedier execution of the process and efficient utilisation of the processor time and resources.
 The multithreaded architecture of a process can be better visualised with the thread-
process diagram shown in Fig. 10.8.
 If the process is split into multiple threads, each executing a portion of the process, there will be a main thread and the rest of the threads will be created within the main thread.

Use of multiple threads to execute a process brings the following advantages:

1. Better memory utilisation. Multiple threads of the same process share the address space for data memory. This also reduces the complexity of inter-thread communication, since variables can be shared across the threads.
2. Since the process is split into different threads, when one thread enters a wait state, the CPU can be utilised by other threads of the process that do not require the event the blocked thread is waiting for. This speeds up the execution of the process.
3. Efficient CPU utilisation. The CPU is engaged all the time.

3.3.1 Thread Standards
Thread standards deal with the different standards available for thread creation and
management. These standards are utilised by the operating systems for thread creation and
thread management. It is a set of thread class libraries. The commonly available thread class
libraries are explained below.
POSIX Threads
POSIX stands for Portable Operating System Interface. The POSIX.4 standard deals
with the Real-Time extensions and POSIX.4a standard deals with thread extensions. The
POSIX standard library for thread creation and management is ‘Pthreads’. ‘Pthreads’ library
defines the set of POSIX thread creation and management functions in ‘C’ language.
The primitive

    int pthread_create(pthread_t *new_thread_ID, const pthread_attr_t *attribute,
                       void *(*start_function)(void *), void *arguments);

creates a new thread for running the function start_function. Here pthread_t is the handle to the newly created thread and pthread_attr_t is the data type for holding the thread attributes. ‘start_function’ is the function the thread is going to execute and arguments is the argument for ‘start_function’ (it is a void * in the above example). On successful creation of a Pthread, pthread_create() associates the Thread Control Block (TCB) corresponding to the newly created thread to the variable of type pthread_t (new_thread_ID in our example).
The primitive

    int pthread_join(pthread_t new_thread, void **thread_status);

blocks the current thread and waits until the completion of the thread pointed to by it (in this example, new_thread). All the POSIX ‘thread calls’ return an integer; a return value of zero indicates the success of the call. It is always good to check the return value of each call.
Win32 Threads
Win32 threads are the threads supported by various flavours of Windows Operating Systems.
The Win32 Application Programming Interface (Win32 API) libraries provide the standard
set of Win32 thread creation and management functions. Win32 threads are created with the API

    HANDLE CreateThread(LPSECURITY_ATTRIBUTES lpThreadAttributes, SIZE_T dwStackSize,
                        LPTHREAD_START_ROUTINE lpStartAddress, LPVOID lpParameter,
                        DWORD dwCreationFlags, LPDWORD lpThreadId);

The parameter lpThreadAttributes defines the security attributes for the thread and dwStackSize defines the stack size for the thread. These two parameters are not supported by the Windows CE/Embedded Compact Real-Time Operating Systems and should be kept as NULL and 0 respectively in a CreateThread API call.

The other parameters are:
lpStartAddress: Pointer to the function which is to be executed by the thread.
lpParameter: Parameter specifying an application-defined value that is passed to the thread routine.
dwCreationFlags: Defines the state of the thread when it is created. Usually it is kept as 0 (the thread runs immediately after creation) or CREATE_SUSPENDED, implying the thread is created and kept in the suspended state.
lpThreadId: Pointer to a DWORD that receives the identifier for the thread.
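A minimal usage sketch of this API (for a desktop Windows build; on Windows CE the first two parameters stay NULL and 0, as noted above) is:

    #include <windows.h>
    #include <stdio.h>

    /* Function executed by the new Win32 thread */
    DWORD WINAPI thread_function(LPVOID lpParameter)
    {
        printf("Hello from the Win32 thread\n");
        return 0;
    }

    int main(void)
    {
        DWORD thread_id;
        HANDLE hThread = CreateThread(NULL,             /* lpThreadAttributes       */
                                      0,                /* dwStackSize: default     */
                                      thread_function,  /* lpStartAddress           */
                                      NULL,             /* lpParameter              */
                                      0,                /* dwCreationFlags: run now */
                                      &thread_id);      /* lpThreadId               */
        if (hThread == NULL)
            return 1;
        WaitForSingleObject(hThread, INFINITE);         /* Wait for thread to end   */
        CloseHandle(hThread);
        return 0;
    }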
Java Threads
Java threads are the threads supported by the Java programming language. The Java thread class ‘Thread’ is defined in the package ‘java.lang’. This package needs to be imported for using the thread creation functions supported by the Java thread class. There are two ways of creating threads in Java: either by extending the base ‘Thread’ class or by implementing the ‘Runnable’ interface. Extending the Thread class allows inheriting the methods and variables of the parent class (Thread class) only, whereas an interface provides a way to achieve the requirements for a set of classes. The following piece of code illustrates the implementation of Java threads by extending the thread base class ‘Thread’.
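A minimal representative version of such code (the class name and printed message are illustrative) is:

    // Creating a thread by extending the base class 'Thread'
    class MyThread extends Thread {
        public void run() {                 // Code executed by the new thread
            System.out.println("Hello from MyThread");
        }
    }

    class ThreadDemo {
        public static void main(String[] args) {
            new MyThread().start();         // start() creates the thread and invokes run()
        }
    }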

3.3.2 Thread Pre-emption


Thread pre-emption is the act of pre-empting the currently running thread (stopping the currently running thread temporarily). Thread pre-emption ability is solely dependent on the Operating System. Thread pre-emption is performed for sharing the CPU time among all the threads. The execution switching among threads is known as ‘Thread context switching’. Thread context switching is dependent on the Operating System’s scheduler and the type of the thread. A ‘Thread’ falls into one of the following types.
User Level Thread:
 User level threads do not have kernel/Operating System support and they exist solely
in the running process.

 Even if a process contains multiple user level threads, the OS treats it as a single thread and will not switch the execution among the different threads of it.
 It is the responsibility of the process to schedule each thread as and when required.
 In summary, user level threads of a process are non-pre-emptive at thread level from the OS perspective.
Kernel/System Level Thread:
 Kernel level threads are individual units of execution, which the OS treats as separate
threads.
 The OS interrupts the execution of the currently running kernel thread and switches the
execution to another kernel thread based on the scheduling policies implemented by the
OS.
 In summary, kernel level threads are pre-emptive.
 For user level threads, the execution switching (thread context switching) happens only when the currently executing user level thread voluntarily blocks itself.
 Hence, no OS intervention and no system calls are involved in the context switching of user level threads. This makes context switching of user level threads very fast.
 On the other hand, kernel level threads involve a lot of kernel overhead and require system calls for context switching.
 However, kernel threads maintain a clear layer of abstraction and allow threads to use
system calls independently. There are many ways for binding user level threads with
system/kernel level threads. The following section gives an overview of various thread
binding models.
Many-to-One Model:
 Here many user level threads are mapped to a single kernel thread. In this model, the kernel treats all user level threads as a single thread, and the execution switching among the user level threads happens when a currently executing user level thread voluntarily blocks itself or relinquishes the CPU.
 Solaris Green threads and GNU Portable Threads are examples of this. The ‘PThread’ example given under the POSIX thread library section is an illustrative example of an application with the Many-to-One thread model.
One-to-One Model:
 In the One-to-One model, each user level thread is bound to a kernel/system level thread. Windows NT and Linux threads are examples of One-to-One thread models.
 The modified ‘PThread’ example given under the ‘Thread Pre-emption’ section is an illustrative example of an application with the One-to-One thread model.
Many-to-Many Model:
 In this model many user level threads are allowed to be mapped to many kernel threads. Windows NT/2000 with the Thread Fibre package is an example of this.

Thread v/s Process

Thread: A thread is a single unit of execution and is part of a process.
Process: A process is a program in execution and contains one or more threads.

Thread: A thread does not have its own data memory and heap memory. It shares the data memory and heap memory with other threads of the same process.
Process: A process has its own code memory, data memory and stack memory.

Thread: A thread cannot live independently; it lives within a process.
Process: A process contains at least one thread.

Thread: There can be multiple threads in a process. The first thread (main thread) calls the main function and occupies the start of the stack memory of the process.
Process: Threads within a process share the code, data and heap memory. Each thread holds a separate memory area for its stack (sharing the total stack memory of the process).

Thread: Threads are very inexpensive to create.
Process: Processes are very expensive to create and involve much OS overhead.

Thread: Context switching is inexpensive and fast.
Process: Context switching is complex, involves a lot of OS overhead and is comparatively slower.

Thread: If a thread expires, its stack is reclaimed by the process.
Process: If a process dies, the resources allocated to it are reclaimed by the OS and all the associated threads of the process also die.

4. Multiprocessing and Multitasking:


 The terms multiprocessing and multitasking are a little confusing and sound alike. In the operating system context, multiprocessing describes the ability to execute multiple processes simultaneously.
 Systems which are capable of performing multiprocessing are known as multiprocessor systems. Multiprocessor systems possess multiple CPUs and can execute multiple processes simultaneously.
 The ability of the operating system to have multiple programs in memory, which are ready for execution, is referred to as multiprogramming. In a uniprocessor system, it is not possible to execute multiple processes simultaneously.
 However, it is possible for a uniprocessor system to achieve some degree of pseudo parallelism in the execution of multiple processes by switching the execution among different processes.
 The ability of an operating system to hold multiple processes in memory and switch the processor (CPU) from executing one process to another process is known as multitasking.
 Multitasking creates the illusion of multiple tasks executing in parallel. Multitasking involves the switching of the CPU from executing one task to another. In the earlier section ‘The Structure of a Process’ of this chapter, we learned that a process is identical to the physical processor in the sense that it has its own register set, which mirrors the CPU registers, a stack and a Program Counter (PC). Hence, a ‘process’ is considered as a ‘virtual processor’, awaiting its turn to have its properties switched into the physical processor.
 In a multitasking environment, when task/process switching happens, the virtual processor (task/process) gets its properties converted into those of the physical processor.

 The switching of the virtual processor to physical processor is controlled by the
scheduler of the OS kernel.
 Whenever a CPU switch happens, the current context of execution should be saved, so that it can be retrieved at a later point of time when the CPU resumes executing the process that is currently interrupted due to the execution switching.
 The context saving and retrieval is essential for resuming a process exactly from the point where it was interrupted due to CPU switching.
 The act of switching the CPU among the processes or changing the current execution context is known as ‘Context switching’.
 The act of saving the current context, which contains the context details (register details, memory details, system resource usage details, execution details, etc.), for the currently running process at the time of CPU switching is known as ‘Context saving’.
 The process of retrieving the saved context details for a process, which is going to be executed due to CPU switching, is known as ‘Context retrieval’. Multitasking involves ‘Context switching’ (Fig. 10.11), ‘Context saving’ and ‘Context retrieval’.

Toss Juggling:
 The skilful object manipulation game of juggling is a classic real-world example of the multitasking illusion.
 The juggler uses a number of objects (balls, rings, etc.) and throws them up and catches them. At any point of time, he throws only one ball and catches only one per hand.
 However, the speed at which he switches the balls for throwing and catching creates, for the spectators, the illusion that he is throwing and catching multiple balls or using more than two hands simultaneously.

4.1 Types of Multitasking


 Co-operative Multitasking
 Pre-emptive Multitasking
 Non-pre-emptive Multitasking

Co-operative Multitasking:
Co-operative multitasking is the most primitive form of multitasking, in which a task/process gets a chance to execute only when the currently executing task/process voluntarily relinquishes the CPU. In this method, any task/process can hold the CPU for as much time as it wants. Since this type of implementation depends on the tasks being at each other’s mercy for getting CPU time for execution, it is known as co-operative multitasking. If the currently executing task is non-cooperative, the other tasks may have to wait for a long time to get the CPU.
Pre-emptive Multitasking:

Pre-emptive multitasking ensures that every task/process gets a chance to execute. When and for how much time a process gets the CPU is dependent on the implementation of the pre-emptive scheduling. As the name indicates, in pre-emptive multitasking, the currently running task/process is pre-empted to give a chance to other tasks/processes to execute. The pre-emption of a task may be based on time slots or task/process priority.

Non-pre-emptive Multitasking:

In non-preemptive multitasking, the process/task which is currently given the CPU time is allowed to execute until it terminates (enters the ‘Completed’ state) or enters the ‘Blocked/Wait’ state, waiting for an I/O or system resource. Co-operative and non-preemptive multitasking differ in their behaviour when a task is in the ‘Blocked/Wait’ state. In co-operative multitasking, the currently executing process/task need not relinquish the CPU when it enters the ‘Blocked/Wait’ state, waiting for an I/O, a shared resource access or an event to occur, whereas in non-preemptive multitasking the currently executing task relinquishes the CPU when it waits for an I/O or system resource or an event to occur.

5. Task Scheduling:
 As we already discussed, multitasking involves the execution switching among the
different tasks.
 There should be some mechanism in place to share the CPU among the different tasks
and to decide which process/task is to be executed at a given point of time.
 Determining which task/process is to be executed at a given point of time is known as
task/process scheduling. Task scheduling forms the basis of multitasking.
 Scheduling policies form the guidelines for determining which task is to be executed when. The scheduling policies are implemented in an algorithm and run by the kernel as a service.
 The kernel service/application which implements the scheduling algorithm is known as the ‘Scheduler’. The process scheduling decision may take place when a process switches its state to
1. ‘Ready’ state from ‘Running’ state
2. ‘Blocked/Wait’ state from ‘Running’ state
3. ‘Ready’ state from ‘Blocked/Wait’ state
4. ‘Completed’ state
 A process switches to the ‘Ready’ state from the ‘Running’ state when it is pre-empted. Hence, the type of scheduling in scenario 1 is pre-emptive.
 When a high priority process in the ‘Blocked/Wait’ state completes its I/O and switches to the ‘Ready’ state, the scheduler picks it for execution if the scheduling policy used is priority based pre-emptive.
 This is indicated by scenario 3. In preemptive/non-preemptive multitasking, the process relinquishes the CPU when it enters the ‘Blocked/Wait’ state or the ‘Completed’ state, and switching of the CPU happens at this stage. Scheduling under scenario 2 can be either preemptive or non-preemptive. Scheduling under scenario 4 can be preemptive, non-preemptive or co-operative.

 The selection of a scheduling criterion/algorithm should consider the following factors:
CPU Utilisation: The scheduling algorithm should always make CPU utilisation high. CPU utilisation is a direct measure of how much percentage of the CPU is being utilised.
Throughput: This gives an indication of the number of processes executed per unit of time. The throughput for a good scheduler should always be higher.
Turnaround Time: It is the amount of time taken by a process for completing its execution. It includes the time spent by the process waiting for the main memory, time spent in the ready queue, time spent on completing I/O operations, and the time spent in execution. The turnaround time should be minimal for a good scheduling algorithm.
Waiting Time: It is the amount of time spent by a process in the ‘Ready’ queue waiting to get the CPU time for execution. The waiting time should be minimal for a good scheduling algorithm.
Response Time: It is the time elapsed between the submission of a process and the first response. For a good scheduling algorithm, the response time should be as low as possible.
The Operating System maintains various queues in connection with CPU scheduling, and a process passes through these queues during the course of its journey from admittance to execution completion.

The various queues maintained by the OS in association with CPU scheduling are:


Job Queue: The job queue contains all the processes in the system.
Ready Queue: Contains all the processes which are ready for execution and waiting for the CPU to get their turn for execution. The Ready queue is empty when there is no process ready for running.
Device Queue: Contains the set of processes which are waiting for an I/O device.
A process migrates through all these queues during its journey from the ‘Admitted’ to the ‘Completed’ stage. The diagrammatic representation in Fig. 10.12 illustrates the transition of a process through the various queues. Based on the scheduling algorithm used, scheduling can be classified into the following categories.

5.1 Non-preemptive Scheduling:


Non-preemptive scheduling is employed in systems which implement the non-preemptive multitasking model. In this scheduling type, the currently executing task/process is allowed to run until it terminates or enters the ‘Wait’ state waiting for an I/O or system resource. The various types of non-preemptive scheduling adopted in task/process scheduling are listed below.

5.1.1 First-Come-First-Served (FCFS)/ FIFO Scheduling:
As the name indicates, the First-Come-First-Served (FCFS) scheduling algorithm
allocates CPU time to the processes based on the order in which they enter the ‘Ready’ queue.
The first entered process is serviced first. It is the same as any real-world application where queue systems are used, e.g. a ticket reservation system where people need to stand in a queue and the first person standing in the queue is serviced first. FCFS scheduling is also known as First In First Out (FIFO), where the process which is put first into the ‘Ready’ queue is serviced first.

Example 1
Three processes with process IDs P1, P2, P3 and estimated completion times of 10, 5 and 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and Turn Around Time (assuming there is no I/O waiting for the processes).
The sequence of execution of the processes by the CPU is represented as

P1 (0 to 10 ms) -> P2 (10 to 15 ms) -> P3 (15 to 22 ms)
Assuming the CPU is readily available at the time of arrival of P1, P1 starts executing without any waiting in the ‘Ready’ queue. Hence the waiting time for P1 is zero. The waiting time for all processes is given as
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (0+10+15)/3 = 25/3
= 8.33 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 15 ms (-Do-)
Turn Around Time (TAT) for P3 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (10+15+22)/3 = 47/3
= 15.66 milliseconds
Average Turn Around Time (TAT) is the sum of average waiting time and average execution
time.
Average Execution Time = (Execution time for all processes)/No. of processes
= (Execution time for (P1+P2+P3))/3
= (10+5+7)/3 = 22/3
= 7.33
Average Turn Around Time = Average waiting time + Average execution time
= 8.33 + 7.33
= 15.66 milliseconds
Example 2
Calculate the waiting time and Turn Around Time (TAT) for each process, and the average waiting time and Turn Around Time (assuming there is no I/O waiting for the processes), for the above example if the processes enter the ‘Ready’ queue together in the order P2, P1, P3.
The sequence of execution of the processes by the CPU is represented as

P2 (0 to 5 ms) -> P1 (5 to 15 ms) -> P3 (15 to 22 ms)
Assuming the CPU is readily available at the time of arrival of P2, P2 starts executing
without any waiting in the ‘Ready’ queue. Hence the waiting time for P2 is zero. The waiting
time for all processes is given as
Waiting Time for P2 = 0 ms (P2 starts executing first)
Waiting Time for P1 = 5 ms (P1 starts executing after completing P2)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P2 and P1)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P2+P1+P3)) / 3
= (0+5+15)/3 = 20/3
= 6.66 milliseconds
Turn Around Time (TAT) for P2 = 5 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 15 ms (-Do-)
Turn Around Time (TAT) for P3 = 22 ms (-Do-)

Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P2+P1+P3)) / 3
= (5+15+22)/3 = 42/3
= 14 milliseconds
The average waiting time and Turn Around Time (TAT) depend on the order in which the
processes enter the 'Ready' queue, regardless of their estimated completion times. From the
above two examples it is clear that the average waiting time and Turn Around Time improve
if the process with the shortest execution completion time is scheduled first. The major
drawback of the FCFS algorithm is that it favours monopolisation of the CPU by a single
process: a process which does not contain any I/O operation continues its execution until it
finishes its task, and the CPU is relinquished only when the process performs an I/O operation.
In general, FCFS favours CPU bound processes, and I/O bound processes may have to wait
until the completion of the currently executing CPU bound process. This leads to poor device
utilisation. The average waiting time is also not minimal for the FCFS scheduling algorithm.
5.1.2 Last-Come-First Served (LCFS)/LIFO Scheduling
The Last-Come-First Served (LCFS) scheduling algorithm also allocates CPU time to the
processes based on the order in which they are entered in the ‘Ready’ queue. The last entered
process is serviced first. LCFS scheduling is also known as Last In First Out (LIFO) scheduling,
where the process which is put last into the 'Ready' queue is serviced first.
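
Since the last entered process is serviced first, the 'Ready' queue of an LCFS scheduler behaves
like a stack. A minimal illustrative C sketch follows (the type and function names are
assumptions, with a fixed capacity and no overflow checks):

typedef struct {
    int pid;
    int burst_ms;   /* estimated completion time */
} lcfs_task_t;

typedef struct {
    lcfs_task_t items[16];  /* illustrative fixed capacity              */
    int top;                /* number of entries; items[top-1] is last  */
} ready_stack_t;

/* A newly arrived process is pushed on top of the stack. */
void ready_push(ready_stack_t *s, lcfs_task_t t)
{
    s->items[s->top++] = t;
}

/* The scheduler always pops the most recently entered process. */
lcfs_task_t ready_pop(ready_stack_t *s)
{
    return s->items[--s->top];
}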
Example 1
Three processes with process IDs P1, P2, P3 and estimated completion times of 10, 5 and 7
milliseconds respectively enter the 'Ready' queue in the order P1, P2, P3 (assume only
P1 is present in the 'Ready' queue when the scheduler picks it up, and P2, P3 entered the 'Ready'
queue after that). Now a new process P4 with estimated completion time 6 ms enters the
‘Ready’ queue after 5 ms of scheduling P1. Calculate the waiting time and Turn Around Time
(TAT) for each process and the Average waiting time and Turn Around Time (Assuming there
is no I/O waiting for the processes). Assume all the processes contain only CPU operation and
no I/O operations are involved. Initially there is only P1 available in the Ready queue and the
scheduling sequence will be P1, P3, P2. P4 enters the queue during the execution of P1 and
becomes the last process entered the ‘Ready’ queue. Now the order of execution changes to
P1, P4, P3, and P2 as given below.

P1 (0–10 ms) → P4 (10–16 ms) → P3 (16–23 ms) → P2 (23–28 ms)

The waiting time for all the processes is given as


Waiting Time for P1 = 0 ms (P1 starts executing first)

Waiting Time for P4 = 5 ms (P4 starts executing after completing P1; P4 arrived after 5 ms of
execution of P1, hence its waiting time = Execution start time – Arrival time = 10 – 5 = 5)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (0 + 5 + 16 + 23)/4 = 44/4
= 11 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue + Execution Time =
(Execution Start Time – Arrival Time) + Estimated Execution Time = (10 – 5) + 6 = 5 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10+11+23+28)/4 = 72/4
= 18 milliseconds
LCFS scheduling is not optimal, and it possesses the same drawback as the FCFS algorithm.

5.1.3 Shortest Job First (SJF) Scheduling


The Shortest Job First (SJF) scheduling algorithm sorts the 'Ready' queue each time a process
relinquishes the CPU (either the process terminates or enters the 'Wait' state waiting for I/O or
a system resource) to pick the process with the shortest (least) estimated completion/run time.
In SJF, the process with the shortest estimated run time is scheduled first, followed by the next
shortest process, and so on.
Example 1
Three processes with process IDs P1, P2, P3 and estimated completion times of 10, 5 and 7
milliseconds respectively enter the 'Ready' queue together. Calculate the waiting time and Turn
Around Time (TAT) for each process and the Average waiting time and Turn Around Time
(Assuming there is no I/O waiting for the processes) in SJF algorithm. The scheduler sorts the
‘Ready’ queue based on the shortest estimated completion time and schedules the process with

the least estimated completion time first and the next least one as second, and so on. The order
in which the processes are scheduled for execution is represented as

P2 (0–5 ms) → P3 (5–12 ms) → P1 (12–22 ms)

The estimated execution time of P2 is the least (5 ms) followed by P3 (7 ms) and P1 (10 ms).
The waiting time for all processes are given as
Waiting Time for P2 = 0 ms (P2 starts executing first)
Waiting Time for P3 = 5 ms (P3 starts executing after completing P2)
Waiting Time for P1 = 12 ms (P1 starts executing after completing P2 and P3)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P2+P3+P1)) / 3
= (0+5+12)/3 = 17/3
= 5.66 milliseconds
Turn Around Time (TAT) for P2 = 5 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P3 = 12 ms (-Do-)
Turn Around Time (TAT) for P1 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P2+P3+P1)) / 3
= (5+12+22)/3 = 39/3
= 13 milliseconds
Average Turn Around Time (TAT) is the sum of average waiting time and average execution
time.
The average Execution time = (Execution time for all processes)/No. of processes
= (Execution time for (P1+P2+P3))/3
= (10+5+7)/3 = 22/3 = 7.33 milliseconds
Average Turn Around Time = Average Waiting time + Average Execution time
= 5.66 + 7.33
= 13 milliseconds
From this example, it is clear that the average waiting time and Turn Around Time are much
improved with SJF scheduling for the same processes when compared to the FCFS algorithm.

Example 2

Calculate the waiting time and Turn Around Time (TAT) for each process and the Average
waiting time and Turn Around Time for the above example if a new process P4 with
estimated completion time 2 ms enters the ‘Ready’ queue after 2 ms of execution of P2.
Assume all the processes contain only CPU operation and no I/O operations are involved. At
the beginning, there are only three processes (P1, P2 and P3) available in the ‘Ready’ queue
and the SJF scheduler picks up the process with the least execution completion time (In this
example P2 with execution completion time 5 ms) for scheduling. The execution sequence
diagram for this is same as that of Example 1.
Now process P4 with estimated execution completion time 2 ms enters the ‘Ready’ queue after
2 ms of start of execution of P2. Since the SJF algorithm is non-preemptive and process P2
does not contain any I/O operations, P2 continues its execution. After 5 ms of scheduling, P2
terminates and now the scheduler again sorts the ‘Ready’ queue for process with least execution
completion time. Since the execution completion time for P4 (2 ms) is less than that of P3 (7
ms), which was supposed to be run after the completion of P2 as per the ‘Ready’ queue
available at the beginning of execution scheduling, P4 is picked up for executing. Due to the
arrival of the process P4 with execution time 2 ms, the ‘Ready’ queue is re-sorted in the order
P2, P4, P3, P1. At the beginning it was P2, P3, P1. The execution sequence now changes as per
the following diagram.

P2 (0–5 ms) → P4 (5–7 ms) → P3 (7–14 ms) → P1 (14–24 ms)

The waiting time for all the processes are given as


Waiting time for P2 = 0 ms (P2 starts executing first)
Waiting time for P4 = 3 ms (P4 starts executing after completing P2; P4 arrived after 2 ms of
execution of P2, hence its waiting time = Execution start time – Arrival time = 5 – 2 = 3)
Waiting time for P3 = 7 ms (P3 starts executing after completing P2 and P4)
Waiting time for P1 = 14 ms (P1 starts executing after completing P2, P4 and P3)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P2+P4+P3+P1)) / 4
= (0 + 3 + 7 + 14)/4 = 24/4
= 6 milliseconds
Turn Around Time (TAT) for P2 = 5 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 5 ms (Time spent in Ready Queue + Execution Time =
(Execution Start Time – Arrival Time) + Estimated Execution Time = (5 – 2) + 2 = 3 + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)

Average Turn Around Time = (Turn Around Time for all Processes) / No. of Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (5+5+14+24)/4 = 48/4
= 12 milliseconds
The average waiting time for a given set of processes is minimal in SJF scheduling, so it is
optimal compared to other non-preemptive scheduling algorithms like FCFS. The major
drawback of the SJF algorithm is that a process whose estimated execution completion time is
high may never get a chance to execute if more and more processes with lower estimated
execution times keep entering the 'Ready' queue before the process with the longest estimated
execution time starts its execution (in non-preemptive SJF). This condition is known as
'Starvation'. Another drawback of SJF is that it is difficult to know in advance the next shortest
process in the 'Ready' queue for scheduling, since new processes with different estimated
execution times keep entering the 'Ready' queue at any point of time.

5.1.4 Priority Based Scheduling


The Turn Around Time (TAT) and waiting time for processes in non-preemptive
scheduling varies with the type of scheduling algorithm. Priority based non-preemptive
scheduling algorithm ensures that a process with high priority is serviced at the earliest
compared to other low priority processes in the ‘Ready’ queue. The priority of a task/process
can be indicated through various mechanisms. The Shortest Job First (SJF) algorithm can be
viewed as a priority based scheduling where each task is prioritised in the order of the time
required to complete the task: the lower the time required for completing a process, the higher
its priority in the SJF algorithm.
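
This analogy can be made concrete: a priority based pick is the SJF pick sketched earlier with
the burst time replaced by the priority number. The illustrative C fragment below assumes the
convention used in the examples (0 = highest priority); the record and function names are
assumptions:

typedef struct {
    int pid;
    int priority;   /* 0 = highest priority                       */
    int done;       /* set once the process has run to completion */
} ptask_t;

/* Returns the index of the unfinished process with the highest
   priority (lowest priority number), or -1 when all are done. */
int priority_pick(const ptask_t q[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!q[i].done && (best < 0 || q[i].priority < q[best].priority))
            best = i;
    }
    return best;
}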

Example 1
Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5 and 7
milliseconds and priorities 0, 3, 2 (0—highest priority, 3—lowest priority) respectively enter
the ready queue together. Calculate the waiting time and Turn Around Time (TAT) for each
process and the Average waiting time and Turn Around Time (Assuming there is no I/O waiting
for the processes) in priority based scheduling algorithm. The scheduler sorts the ‘Ready’
queue based on the priority and schedules the process with the highest priority (P1 with priority
number 0) first and the next high priority process (P3 with priority number 2) as second, and
so on. The order in which the processes are scheduled for execution is represented as

P1 (0–10 ms) → P3 (10–17 ms) → P2 (17–22 ms)

The waiting time for all the processes are given as


Waiting time for P1 = 0 ms (P1 starts executing first)
Waiting time for P3 = 10 ms (P3 starts executing after completing P1)
Waiting time for P2 = 17 ms (P2 starts executing after completing P1 and P3)

Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P3+P2)) / 3
= (0+10+17)/3 = 27/3
= 9 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P3 = 17 ms (-Do-)
Turn Around Time (TAT) for P2 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P3+P2)) / 3
= (10+17+22)/3 = 49/3
= 16.33 milliseconds

Example 2
Calculate the waiting time and Turn Around Time (TAT) for each process and the Average
waiting time and Turn Around Time for the above example if a new process P4 with estimated
completion time 6 ms and priority 1 enters the ‘Ready’ queue after 5 ms of execution of P1.
Assume all the processes contain only CPU operation and no I/O operations are involved. At
the beginning, there are only three processes (P1, P2 and P3) available in the ‘Ready’ queue
and the scheduler picks up the process with the highest priority (In this example P1 with priority
0) for scheduling. The execution sequence diagram for this is same as that of Example 1. Now
process P4 with estimated execution completion time 6 ms and priority 1 enters the ‘Ready’
queue after 5 ms of execution of P1. Since the scheduling algorithm is non-preemptive and
process P1 does not contain any I/O operations, P1 continues its execution. After 10 ms of
scheduling, P1 terminates and now the scheduler again sorts the ‘Ready’ queue for process
with highest priority. Since the priority for P4 (priority 1) is higher than that of P3 (priority 2),
which was supposed to be run after the completion of P1 as per the ‘Ready’ queue available at
the beginning of execution scheduling, P4 is picked up for executing. Due to the arrival of the
process P4 with priority 1, the 'Ready' queue is re-sorted in the order P1, P4, P3, P2. At the
beginning it was P1, P3, P2. The execution sequence now changes as per the following
diagram.

P1 (0–10 ms) → P4 (10–16 ms) → P3 (16–23 ms) → P2 (23–28 ms)

The waiting time for all the processes are given as


Waiting time for P1 = 0 ms (P1 starts executing first)
Waiting time for P4 = 5 ms (P4 starts executing after completing P1; P4 arrived after 5 ms of
execution of P1, hence its waiting time = Execution start time – Arrival time = 10 – 5 = 5)
Waiting time for P3 = 16 ms (P3 starts executing after completing P1 and P4)

Waiting time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (0 + 5 + 16 + 23)/4 = 44/4
= 11 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue + Execution Time =
(Execution Start Time – Arrival Time) + Estimated Execution Time = (10 – 5) + 6 = 5 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1 + P4 + P3 + P2)) / 4
= (10 + 11 + 23 + 28)/4 = 72/4
= 18 milliseconds

Similar to the SJF scheduling algorithm, the non-preemptive priority based algorithm also
possesses the drawback of 'Starvation', where a low priority process may never get a chance
to execute if more and more processes with higher priorities enter the 'Ready' queue before the
lower priority process starts its execution. 'Starvation' can be effectively tackled in priority
based non-preemptive scheduling by dynamically raising the priority of the low priority
task/process which is under starvation (i.e. waiting in the 'Ready' queue for a long time to get
CPU time). This technique of gradually raising, as time progresses, the priority of processes
waiting in the 'Ready' queue, in order to prevent 'Starvation', is known as 'Aging'; a sketch of
an aging pass is given below.

6. Preemptive Scheduling
Preemptive scheduling is employed in systems which implement the preemptive multitasking
model. In preemptive scheduling, every task in the 'Ready' queue gets a chance
to execute. When and how often each process gets a chance to execute (gets the CPU time) is
dependent on the type of preemptive scheduling algorithm used for scheduling the processes.
In this kind of scheduling, the scheduler can preempt (stop temporarily) the currently executing
task/process and select another task from the ‘Ready’ queue for execution. When to pre-empt
a task and which task is to be picked up from the ‘Ready’ queue for execution after preempting
the current task is purely dependent on the scheduling algorithm. A task which is preempted
by the scheduler is moved to the 'Ready' queue. The act of moving a 'Running' process/task
into the 'Ready' queue by the scheduler, without the process requesting it, is known as
'Preemption'. Preemptive scheduling can be implemented through different approaches. The two
important approaches adopted in preemptive scheduling are time-based preemption and
priority-based preemption. The various types of preemptive scheduling adopted in task/process
scheduling are explained below.

6.1 Preemptive SJF Scheduling/ Shortest Remaining Time (SRT)

The non-preemptive SJF scheduling algorithm sorts the ‘Ready’ queue only after completing
the execution of the current process or when the process enters ‘Wait’ state, whereas the
preemptive SJF scheduling algorithm sorts the ‘Ready’ queue when a new process enters the
‘Ready’ queue and checks whether the execution time of the new process is shorter than the
remaining of the total estimated time for the currently executing process. If the execution time
of the new process is less, the currently executing process is preempted and the new process is
scheduled for execution. Thus preemptive SJF scheduling always compares the execution
completion time (It is same as the remaining time for the new process) of a new process entered
the ‘Ready’ queue with the remaining time for completion of the currently executing process
and schedules the process with shortest remaining time for execution. Preemptive SJF
scheduling is also known as Shortest Remaining Time (SRT) scheduling. Now let us solve
Example 2 given under the Non-preemptive SJF scheduling for preemptive SJF scheduling.
The problem statement and solution is explained in the following example.

Example 1

Three processes with process IDs P1, P2, P3 and estimated completion times of 10, 5 and 7
milliseconds respectively enter the 'Ready' queue together. A new process P4 with estimated
completion time 2 ms enters the ‘Ready’ queue after 2 ms. Assume all the processes contain
only CPU operation and no I/O operations are involved. At the beginning, there are only three
processes (P1, P2 and P3) available in the ‘Ready’ queue and the SRT scheduler picks up the
process with the shortest remaining time for execution completion (In this example, P2 with
remaining time 5 ms) for scheduling. The execution sequence diagram for this is same as that
of example 1 under non-preemptive SJF scheduling.
Now process P4 with estimated execution completion time 2 ms enters the ‘Ready’
queue after 2 ms of start of execution of P2. Since the SRT algorithm is preemptive, the
remaining time for completion of process P2 is checked with the remaining time for completion
of process P4. The remaining time for completion of P2 is 3 ms which is greater than that of
the remaining time for completion of the newly entered process P4 (2 ms). Hence P2 is
preempted and P4 is scheduled for execution. P4 continues its execution to finish since there
is no new process entered in the ‘Ready’ queue during its execution. After 2 ms of scheduling
P4 terminates and now the scheduler again sorts the ‘Ready’ queue based on the remaining
time for completion of the processes present in the ‘Ready’ queue. Since the remaining time
for P2 (3 ms), which is pre-empted by P4 is less than that of the remaining time for other
processes in the ‘Ready’ queue, P2 is scheduled for execution. Due to the arrival of the process
P4 with execution time 2 ms, the execution order changes to P2, P4, P2, P3, P1. At the
beginning it was P2, P3, P1. The execution sequence now changes as per the following
diagram.

P2 (0–2 ms) → P4 (2–4 ms) → P2 (4–7 ms) → P3 (7–14 ms) → P1 (14–24 ms)

The waiting time for all the processes are given as

Waiting time for P2 = 0 ms + (4 – 2) ms = 2 ms (P2 starts executing first, is interrupted by P4,
and has to wait till the completion of P4 to get the next CPU slot)
Waiting time for P4 = 0 ms (P4 starts executing by preempting P2, since the execution
completion time of P4 (2 ms) is less than the remaining execution time of P2 (here 3 ms))
Waiting time for P3 = 7 ms (P3 starts executing after completing P4 and P2)
Waiting time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4
= 5.75 milliseconds
Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 2 ms (Time spent in Ready Queue + Execution Time =
(Execution Start Time – Arrival Time) + Estimated Execution Time = (2 – 2) + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (7+2+14+24)/4 = 47/4
= 11.75 milliseconds

Now let’s compare the Average Waiting time and Average Turn Around Time with that of
the Average waiting time and Average Turn Around Time for non-preemptive SJF
scheduling (Refer to Example 2 given under the section Non-preemptive SJF scheduling)
Average Waiting Time in non-preemptive SJF scheduling = 6 ms
Average Waiting Time in preemptive SJF scheduling = 5.75 ms
Average Turn Around Time in non-preemptive SJF scheduling = 12 ms
Average Turn Around Time in preemptive SJF scheduling = 11.75 ms
This reveals that the average waiting time and Turn Around Time (TAT) improve with
preemptive SJF scheduling when compared to its non-preemptive counterpart.
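
The preemption decision an SRT scheduler makes on each new arrival can be sketched as a
single comparison. The C fragment below is illustrative (the names are assumptions); note
that for a newly arrived process, the remaining time equals its full estimated execution time:

typedef struct {
    int pid;
    int remaining_ms;   /* estimated time left to completion */
} srt_proc_t;

/* Returns 1 if the newly arrived process should preempt the currently
   running one, i.e. if its remaining time is strictly shorter. */
int srt_should_preempt(const srt_proc_t *running, const srt_proc_t *arrived)
{
    return arrived->remaining_ms < running->remaining_ms;
}

In Example 1, when P4 arrives, running = P2 with 3 ms remaining and arrived = P4 with 2 ms
remaining, so the function returns 1 and P2 is preempted.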

6.2 Round Robin (RR) Scheduling:


 The term Round Robin is very popular in sports and games activities.
 You might have heard about ‘Round Robin’ league or ‘Knock out’ league associated
with any football or cricket tournament.

 In the ‘Round Robin’ league each team in a group gets an equal chance to play against
the rest of the teams in the same group whereas in the ‘Knock out’ league the losing
team in a match moves out of the tournament.
 In the process scheduling context also, ‘Round Robin’ brings the same message “Equal
chance to all”.
 In Round Robin scheduling, each process in the ‘Ready’ queue is executed for a pre-
defined time slot.
 The execution starts with picking up the first process in the ‘Ready’ queue (see Fig.
10.13). It is executed for a pre-defined time and when the pre-defined time elapses or
the process completes (before the pre-defined time slice), the next process in the
‘Ready’ queue is selected for execution.
 This is repeated for all the processes in the ‘Ready’ queue. Once each process in the
‘Ready’ queue is executed for the pre-defined time period, the scheduler comes back
and picks the first process in the ‘Ready’ queue again for execution.
 The sequence is repeated. This reveals that the Round Robin scheduling is similar to
the FCFS scheduling and the only difference is that a time slice based preemption is
added to switch the execution between the processes in the ‘Ready’ queue.
 The ‘Ready’ queue can be considered as a circular queue in which the scheduler picks
up the first process for execution and moves to the next till the end of the queue and
then comes back to the beginning of the queue to pick up the first process.
 The time slice is provided by the timer tick feature of the time management unit of the
OS kernel (refer to the Time management section under the subtopic 'The Real-Time
kernel’ for more details on Timer tick).
 Time slice is kernel dependent and it varies in the order of a few microseconds to
milliseconds. Certain OS kernels may allow the time slice as user configurable. Round
Robin scheduling ensures that every process gets a fixed amount of CPU time for
execution.
 When a process gets its fixed time slice for execution is determined by the FCFS policy
(that is, the process entering the 'Ready' queue first gets its time slice first, and so
on…).
 If a process terminates before the elapse of the time slice, the process releases the
CPU voluntarily and the next process in the queue is scheduled for execution by the
scheduler. A minimal simulation of this policy is sketched below.
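
The following C sketch is illustrative only; TIME_SLICE and the burst values are taken from
Example 1 below, and all processes are assumed to arrive together with no I/O waits:

#include <stdio.h>

#define TIME_SLICE 2   /* ms; in a real kernel this is kernel dependent */

int main(void)
{
    int remaining[] = {6, 4, 2};   /* P1, P2, P3 in 'Ready' queue order */
    int n = sizeof(remaining) / sizeof(remaining[0]);
    int unfinished = n, t = 0;

    /* Cycle through the 'Ready' queue like a circular queue, giving
       each unfinished process up to one time slice per turn. */
    while (unfinished > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;
            int run = remaining[i] < TIME_SLICE ? remaining[i] : TIME_SLICE;
            printf("t = %2d ms: P%d runs for %d ms\n", t, i + 1, run);
            t += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                printf("t = %2d ms: P%d completes (TAT = %d ms)\n", t, i + 1, t);
                unfinished--;
            }
        }
    }
    return 0;
}

The printed completion times reproduce the TAT values derived in Example 1 below
(P3 = 6 ms, P2 = 10 ms, P1 = 12 ms).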

Example 1
Three processes with process IDs P1, P2, P3 and estimated completion times of 6, 4 and 2
milliseconds respectively enter the 'Ready' queue together in the order P1, P2, P3. Calculate
the waiting time and Turn Around Time (TAT) for each process and the Average waiting time
and Turn Around Time (Assuming there is no I/O waiting for the processes) in RR algorithm
with Time slice = 2 ms. The scheduler sorts the ‘Ready’ queue based on the FCFS policy and
picks up the first process P1 from the ‘Ready’ queue and executes it for the time slice 2 ms.
When the time slice expires, P1 is preempted and P2 is scheduled for execution. The time
slice expires after 2 ms of execution of P2. Now P2 is preempted and P3 is picked up for
execution. P3 completes its execution within the time slice and the scheduler picks P1 again
for execution for the next time slice. This procedure is repeated till all the processes are
serviced. The order in which the processes are scheduled for execution is represented as

P1 (0–2 ms) → P2 (2–4 ms) → P3 (4–6 ms) → P1 (6–8 ms) → P2 (8–10 ms) → P1 (10–12 ms)

The waiting time for all the processes are given as


Waiting time for P1 = 0 + (6 – 2) + (10 – 8) = 0 + 4 + 2 = 6 ms
(P1 starts executing first, waits two time slices to get the CPU back, and then waits
one more time slice for its final CPU slot)
Waiting time for P2 = (2 – 0) + (8 – 4) = 2 + 4 = 6 ms
(P2 starts executing after P1 executes for one time slice, and waits two time slices to
get the CPU time again)
Waiting time for P3 = (4 – 0) = 4 ms
(P3 starts executing after the first time slices of P1 and P2, and completes its execution
within a single time slice)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1 + P2 + P3)) / 3
= (6 + 6 + 4)/3 = 16/3

= 5.33 milliseconds
Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 10 ms (-Do-)
Turn Around Time (TAT) for P3 = 6 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1 + P2 + P3))/3
= (12 + 10 + 6)/3 = 28/3
= 9.33 milliseconds
Average Turn Around Time (TAT) is the sum of average waiting time and average execution
time.
Average Execution time = (Execution time for all the process)/No. of processes
= (Execution time for (P1 + P2 + P3))/3
= (6 + 4 + 2)/3 = 12/3 = 4
Average Turn Around Time = Average Waiting time + Average Execution time
= 5.33 + 4
= 9.33 milliseconds
RR scheduling involves a lot of overhead in maintaining the time slice information for every
process which is currently being executed.

6.3 Priority Based Scheduling (Preemptive)
In preemptive priority based scheduling, a high priority process entering the 'Ready' queue
preempts a currently running process with lower priority.

Example 1
Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5 and 7
milliseconds and priorities 1, 3, 2 (0—highest priority, 3—lowest priority) respectively enter
the 'Ready' queue together. A new process P4 with estimated completion time 6 ms and priority
0 enters the ‘Ready’ queue after 5 ms of start of execution of P1. Assume all the processes
contain only CPU operation and no I/O operations are involved. At the beginning, there are
only three processes (P1, P2 and P3) available in the ‘Ready’ queue and the scheduler picks up
the process with the highest priority (In this example P1 with priority 1) for scheduling. Now
process P4 with estimated execution completion time 6 ms and priority 0 enters the ‘Ready’
queue after 5 ms of start of execution of P1. Since the scheduling algorithm is preemptive, P1
is preempted by P4 and P4 runs to completion. After 6 ms of scheduling, P4 terminates and
now the scheduler again sorts the ‘Ready’ queue for process with highest priority. Since the
priority for P1 (priority 1), which is preempted by P4, is higher than that of P3 (priority 2) and
P2 (priority 3), P1 is again picked up for execution by the scheduler. Due to the arrival of the
process P4 with priority 0, the execution order changes to P1, P4, P1, P3, P2. At the
beginning it was P1, P3, P2. The execution sequence now changes as per the following
diagram.

P1 (0–5 ms) → P4 (5–11 ms) → P1 (11–16 ms) → P3 (16–23 ms) → P2 (23–28 ms)

The waiting time for all the processes are given as


Waiting time for P1 = 0 + (11 – 5) = 0 + 6 = 6 ms
(P1 starts executing first, gets preempted by P4 after 5 ms, and gets the CPU again
after the completion of P4)
Waiting time for P4 = 0 ms
(P4 starts executing immediately on entering the 'Ready' queue, by preempting P1)

Waiting time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (6 + 0 + 16 + 23)/4 = 45/4
= 11.25 milliseconds
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 6 ms (Time spent in Ready Queue + Execution Time =
(Execution Start Time – Arrival Time) + Estimated Execution Time = (5 – 5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1 + P4 + P3 + P2)) / 4
= (16 + 6 + 23 + 28)/4 = 73/4
= 18.25 milliseconds
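
The arrival-time check made by a preemptive priority scheduler can be sketched in the same
style as the SRT check earlier. The C fragment below is illustrative (names are assumptions),
using the 0-is-highest convention of these examples:

typedef struct {
    int pid;
    int priority;   /* 0 = highest priority */
} prio_proc_t;

/* Returns 1 if the newly arrived process carries a strictly higher
   priority (a lower number) than the running one and must preempt it. */
int prio_should_preempt(const prio_proc_t *running, const prio_proc_t *arrived)
{
    return arrived->priority < running->priority;
}

In Example 1 above, P4 (priority 0) arrives while P1 (priority 1) is running, so the check
succeeds and P1 is preempted.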

Questions

1. Briefly explain the basics of an operating system. (10M)
2. What are the types of operating systems? (10M)
3. Describe a Real-Time Operating System (RTOS). (10M)
4. What is a process? Explain the memory organisation of a process. (10M)
5. Explain briefly process states and state transitions. (10M)
6. Explain briefly threads and thread standards. (10M)
7. Differentiate between threads and processes. (5M)
8. Describe multiprocessing and multitasking. (10M)
9. What are the types of multitasking? (10M)
10. Explain briefly task scheduling. (10M)
11. First-Come-First-Served (FCFS/FIFO) scheduling problems. (10M)
12. Last-Come-First-Served (LCFS/LIFO) scheduling problems. (10M)
13. Shortest Job First (SJF) scheduling problems. (10M)
14. Priority based scheduling problems. (10M)
15. Preemptive SJF scheduling problems. (10M)
16. Explain Round Robin (RR) scheduling. (10M)
17. Round Robin (RR) scheduling problems. (10M)

