UNIT 3

Process Management System


 PROCESS CONCEPT
 PROCESS STATE
 PROCESS CONTROL BLOCK
 PROCESS SCHEDULING
 A process is a program in execution.
 The terms program and process are used almost
interchangeably.
 A process is more than the program code.
 A process includes (a sketch follows this list):
◦ Text section
◦ Program counter
◦ Stack
◦ Data section
◦ Heap
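The outline above also mentions the Process Control Block (PCB). As a rough
illustration only, a PCB can be pictured as a small C struct recording the state
the kernel must save to resume the process later, including pointers to the
sections listed above. The field names below are assumptions for this sketch,
not any real kernel's layout (Linux's real structure, task_struct, holds far more
state).

/* A simplified, hypothetical Process Control Block (PCB). */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;             /* process identifier               */
    proc_state_t  state;           /* current process state            */
    uint64_t      program_counter; /* next instruction to execute      */
    uint64_t      registers[16];   /* saved general-purpose registers  */
    void         *stack_ptr;       /* top of the process stack         */
    void         *text, *data, *heap; /* memory-section base addresses */
    int           priority;        /* scheduling priority              */
    struct pcb   *next;            /* link in a ready/waiting queue    */
} pcb_t;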
 Schedulers are special system software that
handles process scheduling in various ways.
Their main task is to select the jobs to be
submitted into the system and to decide
which process to run. Schedulers are of three
types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler
Long-Term Scheduler
 It is also called a job scheduler. A long-term
scheduler determines which programs are
admitted to the system for processing. It selects
processes from the queue and loads them into
memory for execution, where they become
available for CPU scheduling.
 The primary objective of the job scheduler is to
provide a balanced mix of jobs, such as I/O-
bound and processor-bound. It also controls the
degree of multiprogramming. If the degree of
multiprogramming is stable, then the average
rate of process creation must be equal to the
average departure rate of processes leaving the
system.
Short-Term Scheduler
 It is also called the CPU scheduler. Its main
objective is to increase system performance in
accordance with the chosen set of criteria. It
carries out the change of a process from the
ready state to the running state. The CPU
scheduler selects a process from among the
processes that are ready to execute and
allocates the CPU to it.
 Short-term schedulers, also known as
dispatchers, make the decision of which process
to execute next. Short-term schedulers are faster
than long-term schedulers.
Medium-Term Scheduler
 Medium-term scheduling is a part of swapping. It
removes processes from memory and thereby reduces
the degree of multiprogramming. The medium-term
scheduler is in charge of handling the swapped-out
processes.
 A running process may become suspended if it
makes an I/O request. A suspended process cannot
make any progress towards completion. In this
condition, to remove the process from memory and
make space for other processes, the suspended
process is moved to secondary storage. This
is called swapping, and the process is said to
be swapped out or rolled out. Swapping may be
necessary to improve the process mix.
 A context switch is the mechanism to store
and restore the state, or context, of the CPU in
the Process Control Block so that process
execution can be resumed from the same
point at a later time. Using this technique, a
context switcher enables multiple processes
to share a single CPU. Context switching is an
essential feature of a multitasking operating
system. A user-space sketch of the idea follows.
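To make the save/restore idea concrete, here is a minimal user-space sketch
using the POSIX <ucontext.h> API (getcontext, makecontext, swapcontext).
This only illustrates the concept; the kernel's own context switch is performed
differently and saves the state into the PCB.

/* Minimal user-space illustration of saving and restoring an
 * execution context with the POSIX <ucontext.h> API. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];

static void task(void) {
    puts("task: running, yielding back to main");
    swapcontext(&task_ctx, &main_ctx);   /* save task state, resume main  */
    puts("task: resumed from the same point");
}

int main(void) {
    getcontext(&task_ctx);                /* initialise the task's context */
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;  /* where to go when task ends */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);    /* save main's state, run task   */
    puts("main: task yielded, resuming it");
    swapcontext(&main_ctx, &task_ctx);    /* resume task where it stopped  */
    puts("main: done");
    return 0;
}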
 The criteria include the following:
 CPU utilisation –
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as
possible.
 Throughput –
A measure of the work done by the CPU is the number of processes executed
and completed per unit time. This is called throughput. The throughput may vary
depending upon the length or duration of processes.

 Turnaround time –
For a particular process, an important criterion is how long it takes to execute that
process. The time elapsed from the submission of a process to its completion is
known as the turnaround time. Turnaround time is the sum of the times spent
waiting to get into memory, waiting in the ready queue, executing on the CPU,
and waiting for I/O.
 TAT = process finish time – process arrival time

 Waiting time –
Time spent by a process waiting in the ready queue.
 WT = TAT – Burst time

 Response time –
Time taken from the submission of a request until the first response is
produced. This measure is called response time.
 Burst Time/Execution Time: the time
required by the process to complete its
execution. It is also called running time.
 Arrival Time: the time when a process enters
the ready state.
 Finish Time: the time when a process completes
and exits the system. (A small worked example
follows.)
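As a short worked example with assumed values: a process that arrives at
time 2, needs a CPU burst of 5, and finishes at time 10 has
TAT = 10 − 2 = 8 and WT = 8 − 5 = 3; that is, it spent 3 time units waiting
in the ready queue.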
 CPU Scheduling is the process of determining
which process will own the CPU for execution while
another process is on hold. The main task of CPU
scheduling is to make sure that whenever the
CPU becomes idle, the OS selects one of the
processes available in the ready queue for
execution. The selection is carried out by the
CPU scheduler, which selects one of the
processes in memory that are ready for
execution.

To determine whether scheduling is preemptive or non-
preemptive, consider these four conditions:
 1. A process switches from the running state to the
waiting state.
 2. A process switches from the running state to the
ready state.
 3. A process switches from the waiting state to the
ready state.
 4. A process finishes its execution and terminates.
If scheduling decisions are taken only under conditions
1 and 4, the scheduling is called non-preemptive;
otherwise it is called preemptive.
 In non-preemptive scheduling, once the CPU
has been allocated to a process, that process
keeps the CPU until it releases it, either by
switching context or by terminating.
 Non-Preemptive Scheduling therefore occurs when a
process voluntarily enters the wait state or
terminates.
 Preemptive scheduling is used when a
process switches from the running state to the ready
state or from the waiting state to the ready state. The
resources (mainly CPU cycles) are allocated to
the process for a limited amount of time
and then taken away; the process is placed
back in the ready queue if it still has CPU
burst time remaining. That process stays in
the ready queue until it gets its next chance
to execute.
Preemptive Scheduling:
 The processor can be preempted to execute a different process
in the middle of the current process's execution.
 CPU utilization is more efficient compared to non-preemptive
scheduling.
 Waiting and response times are lower.
 It is priority-driven: the highest-priority process is the one
currently using the CPU.
 It is flexible.

Non-preemptive Scheduling:
 Once the processor starts executing a process, it must finish it
before executing another; the process cannot be paused in the
middle.
 CPU utilization is less efficient compared to preemptive
scheduling.
 Waiting and response times are higher.
 Once a process enters the running state, it is not removed from
the scheduler until it finishes its job.
 It is rigid.
 In the First Come First Serve (FCFS) algorithm,
processes that request the CPU first are allocated
the CPU first. This is managed with a FIFO queue.
 It is a non-preemptive CPU scheduling algorithm.
A small sketch follows.
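A minimal FCFS sketch in C. The process IDs, arrival times and burst times
below are assumed for illustration (they are not from the notes); the point is
how completion, turnaround and waiting times fall out of the FIFO order.

/* Minimal FCFS simulation with illustrative (assumed) data.
 * Processes are served strictly in arrival (FIFO) order. */
#include <stdio.h>

typedef struct { int pid, arrival, burst; } proc_t;

int main(void) {
    /* assumed example workload, already sorted by arrival time */
    proc_t p[] = { {1, 0, 24}, {2, 1, 3}, {3, 2, 3} };
    int n = sizeof p / sizeof p[0];
    int time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival)          /* CPU idle until next arrival      */
            time = p[i].arrival;
        int finish = time + p[i].burst;   /* runs to completion (no preemption) */
        int tat = finish - p[i].arrival;  /* TAT = finish - arrival           */
        int wt  = tat - p[i].burst;       /* WT  = TAT - burst                */
        printf("P%d: finish=%d TAT=%d WT=%d\n", p[i].pid, finish, tat, wt);
        total_tat += tat;
        total_wt  += wt;
        time = finish;
    }
    printf("avg TAT=%.2f avg WT=%.2f\n", total_tat / n, total_wt / n);
    return 0;
}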
 Shortest Job First (SJF) is an algorithm in
which the process having the smallest
execution time is chosen for the next
execution. This scheduling method can be
preemptive or non-preemptive. It
significantly reduces the average waiting time
for other processes awaiting execution.
There are basically two types of SJF methods:
 Non-Preemptive SJF
 Preemptive SJF – Shortest Remaining Time
First (SRTF)
A sketch of the non-preemptive variant follows.
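A minimal non-preemptive SJF sketch in C; the workload values are assumed
for illustration. At each decision point it picks, among the processes that
have arrived and not yet run, the one with the shortest burst time.

/* Minimal non-preemptive SJF sketch with assumed example data. */
#include <stdio.h>

typedef struct { int pid, arrival, burst, done; } proc_t;

int main(void) {
    proc_t p[] = { {1, 0, 7, 0}, {2, 2, 4, 0}, {3, 4, 1, 0}, {4, 5, 4, 0} };
    int n = sizeof p / sizeof p[0];
    int time = 0;

    for (int completed = 0; completed < n; completed++) {
        int best = -1;
        for (int i = 0; i < n; i++)       /* shortest burst among arrived jobs */
            if (!p[i].done && p[i].arrival <= time &&
                (best < 0 || p[i].burst < p[best].burst))
                best = i;
        if (best < 0) { time++; completed--; continue; }  /* CPU idle, retry */
        time += p[best].burst;            /* selected job runs to completion */
        p[best].done = 1;
        printf("P%d: finish=%d TAT=%d WT=%d\n", p[best].pid, time,
               time - p[best].arrival,
               time - p[best].arrival - p[best].burst);
    }
    return 0;
}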
 Priority Scheduling is a method of scheduling
processes based on priority. In this
algorithm, the scheduler selects the task to
execute next according to its priority.
 The processes with higher priority should be
carried out first, whereas jobs with equal
priorities are carried out on a round-robin or
FCFS basis. Priority depends upon memory
requirements, time requirements, etc.
 Priority scheduling is divided into two main types:
 Preemptive Scheduling
 In Preemptive Scheduling, tasks are assigned
priorities. Sometimes it is important to run a task with
a higher priority before another, lower-priority task, even if
the lower-priority task is still running. The lower-priority task
is put on hold for some time and resumes when the higher-priority
task finishes its execution.
 Non-Preemptive Scheduling
 In this type of scheduling method, once the CPU has been
allocated to a process, the process keeps the
CPU until it releases it, either by switching context or by
terminating. It is the only method that can be used across
various hardware platforms, because it does not need
special hardware (for example, a timer) the way preemptive
scheduling does. A sketch of the selection step follows.
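A small sketch, in C, of the selection step for non-preemptive priority
scheduling. The struct, the example data, and the convention that a smaller
number means a higher priority are assumptions for illustration; conventions
differ between systems, and ties could equally be broken round-robin.

/* Sketch of the selection step in non-preemptive priority scheduling.
 * Assumption: smaller number = higher priority; ties keep the earlier
 * process, i.e. FCFS order. */
#include <stdio.h>

typedef struct { int pid, priority, ready; } proc_t;

/* Return the index of the highest-priority ready process, or -1 if none. */
static int pick_highest_priority(const proc_t *p, int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (p[i].ready && (best < 0 || p[i].priority < p[best].priority))
            best = i;
    return best;
}

int main(void) {
    proc_t p[] = { {1, 3, 1}, {2, 1, 1}, {3, 2, 0} };  /* assumed data */
    int i = pick_highest_priority(p, 3);
    if (i >= 0)
        printf("dispatch P%d (priority %d)\n", p[i].pid, p[i].priority);
    return 0;
}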
 Round Robin is a preemptive process
scheduling algorithm.
 Each process is given a fixed time to
execute, called a quantum (or time slice).
 Once a process has executed for its time
quantum, it is preempted and another process
executes for its time quantum.
 Context switching is used to save the states of
preempted processes. A small sketch follows.
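A minimal round-robin sketch in C. The quantum and the burst times are
assumed for illustration, and all processes are assumed to arrive at time 0.

/* Minimal round-robin simulation: each pass gives every unfinished
 * process at most one quantum. */
#include <stdio.h>

int main(void) {
    int burst[]     = { 10, 5, 8 };   /* assumed bursts; all arrive at time 0 */
    int remaining[] = { 10, 5, 8 };
    int n = 3, quantum = 4, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;          /* already finished   */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                            /* run for one slice  */
            remaining[i] -= slice;
            if (remaining[i] == 0) {                  /* process completes  */
                printf("P%d: finish=%d TAT=%d WT=%d\n",
                       i + 1, time, time, time - burst[i]);
                left--;
            }
        }
    }
    return 0;
}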
Process vs Thread
1. Process: heavyweight and resource-intensive. Thread: lightweight,
taking fewer resources than a process.
2. Process: switching needs interaction with the operating system.
Thread: switching does not need to interact with the operating
system.
3. Process: in multiple processing environments, each process
executes the same code but has its own memory and file resources.
Thread: all threads of a process can share the same set of open
files and child processes.
4. Process: if one process is blocked, no other process can execute
until the first process is unblocked. Thread: while one thread is
blocked and waiting, a second thread in the same task can run.
5. Process: multiple processes without using threads use more
resources. Thread: multithreaded processes use fewer resources.
 Threads minimize the context switching time.
 Use of threads provides concurrency within a
process.
 Efficient communication.
 It is more economical to create and context
switch threads.
 Threads allow utilization of multiprocessor
architectures to a greater scale and efficiency.
 Threads are implemented in the following two
ways −
 User Level Threads − threads managed by the user.
 Kernel Level Threads − threads managed by the
operating system, acting on the kernel (the
operating system core).
 User-level threads are managed by the user, and
the kernel is not aware of them.
 These threads are faster to create and manage.
 The kernel manages the process as if it were
single-threaded.
 They are implemented using user-level libraries,
not system calls, so no call to the operating
system is made when a thread switches
context.
 Each process has its own private thread table to
keep track of its threads.
 Kernel-level threads are known to the kernel and
supported by the OS.
 The threads are created and managed using
system calls.
 There is no per-process thread table here;
instead, the kernel keeps a single thread table
to keep track of all the threads present in
the system.
 Kernel-level threads are slower to create and
manage as compared to user-level threads.
 Three primary thread libraries (threads managed
without kernel support):
◦ POSIX Pthreads
◦ Windows threads
◦ Java threads
 Kernel threads − supported by the kernel and
managed directly by the OS. Examples: virtually
all general-purpose operating systems, including
Windows, Solaris, Linux, Tru64 UNIX and Mac OS.
A small Pthreads example follows.
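As a small illustration of the Pthreads library mentioned above (a minimal
sketch, not part of the original notes): two threads are created inside the same
process, share its address space, and are then joined. On Linux this would
typically be compiled with gcc's -pthread flag.

/* Minimal POSIX Pthreads example: create two threads, wait for both. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;                 /* threads share the process memory */
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = { 1, 2 };

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);  /* spawn thread   */
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);                      /* wait for it    */

    puts("all threads joined");
    return 0;
}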
