lOMoARcPSD|55704075
Module 2 OS
Operating systems (APJ Abdul Kalam Technological University)
MODULE 2
PROCESS MANAGEMENT
PROCESS CONCEPT
Process – a program in execution; process execution must progress in sequential fashion
Multiple parts
o The program code, also called text section
o Current activity including program counter, processor registers
o Stack containing temporary data
Function parameters, return addresses, local variables
o Data section containing global variables
o Heap containing memory dynamically allocated during run time
Program is passive entity stored on disk (executable file), process is active
o Program becomes process when executable file loaded into memory
Process in Memory
PROCESS STATE
As a process executes, it changes state. The state of a process is defined in part by the current activity of that
process. A process may be in one of the following states:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a
signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
Diagram of Process States
PROCESS CONTROL BLOCK
Each process is represented in the operating system by a process control block (PCB), also called a task
control block. A PCB is shown in Figure. It contains many pieces of information associated with a specific
process, including these:
Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be executed for this process.
CPU registers: The registers vary in number and type, depending on the computer architecture. They include
accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code
information. Along with the program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.
CPU-scheduling information: This information includes a process priority, pointers to scheduling queues,
and any other scheduling parameters.
Memory-management information: This information may include such items as the value of the base and
limit registers and the page tables, or the segment tables, depending on the memory system used by the
operating system.
Accounting information: This information includes the amount of CPU and real time used, time limits,
account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to the process, a list of open
files, and so on.
Fig: Process Control Block
THREADS
A thread is a flow of execution through the process code, with its own program counter that keeps track
of which instruction to execute next, system registers which hold its current working variables, and a
stack which contains the execution history.
A thread shares with its peer threads information such as the code segment, data segment, and open files.
When one thread alters a shared data item, all other threads see that change.
A thread is also called a lightweight process.
Threads provide a way to improve application performance through parallelism.
PROCESS SCHEDULING
The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can
interact with each program while it is running. To meet these objectives, the process scheduler selects an
available process (possibly from a set of several available processes) for program execution on the CPU. For a
single-processor system, there will never be more than one running process. If there are more processes, the
rest will have to wait until the CPUDownloaded
is free and can(zayikrishnan123@gmail.com)
by Sai be rescheduled.
lOMoARcPSD|55704075
Scheduling Queues
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready and waiting to execute.
Ready queue is generally stored as a linked list. A ready-queue header contains pointers to the first
and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready
queue
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues
Ready Queue And Various I/O Device Queues
A common representation of process scheduling is a queueing diagram, such as that in Figure.
Each rectangular box represents a queue.
Two types of queues are present: the ready queue and a set of device queues.
The circles represent the resources that serve the queues, and the arrows indicate the flow of processes
in the system.
Figure : Queueing diagram represents queues, resources, flows
A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched.
Once the process is allocated the CPU and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and wait for the child’s termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put
back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state and is then put
back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all
queues and has its PCB and resources deallocated.
SCHEDULERS
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and
allocates CPU
o Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be fast)
Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready
queue (main memory) from the job pool.
o Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ (may be slow)
o The long-term scheduler controls the degree of multiprogramming (the number of processes in
memory)
o If the degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system. Thus, the long-term
scheduler may need to be invoked only when a process leaves the system
Processes can be described as either:
o I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
o CPU-bound process – spends more time doing computations; few very long CPU bursts
Long-term scheduler strives for good process mix
If all processes are I/O bound, the ready queue will almost always be empty, and the short-term
scheduler will have little to do. If all processes are CPU bound, the I/O waiting queue will almost
always be empty, devices will go unused, and again the system will be unbalanced. The system with the
best performance will thus have a combination of CPU-bound and I/O-bound processes.
Medium-term scheduler can be added if the degree of multiprogramming needs to decrease
o Remove process from memory, store on disk, bring back in from disk to continue execution:
swapping
Addition of Medium Term Scheduler
CONTEXT SWITCH
When CPU switches to another process, the system must save the state of the old process and load the
saved state for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is overhead; the system does no useful work while switching
Operations on Processes
System must provide mechanisms for:
o process creation,
o process termination,
PROCESS CREATION
A parent process creates child processes, which, in turn, create other processes, forming a tree of
processes
Generally, process identified and managed via a process identifier (pid)
Resource sharing options
o Parent and children share all resources
o Children share subset of parent’s resources
o Parent and child share no resources
Execution options
o Parent and children execute concurrently
o Parent waits until children terminate
Address space
o Child duplicate of parent
o Child has a program loaded into it
Figure illustrates a typical process tree for the Linux operating system, showing the name of each
process and its pid.
The init process (which always has a pid of 1) serves as the root parent process for all user processes.
Once the system has booted, the init process can also create various user processes, such as a web or
print server, an ssh server, and the like.
In Figure we see two children of init: kthreadd and sshd.
The kthreadd process is responsible for creating additional processes that perform tasks on behalf of the
kernel (in this situation, khelper and pdflush).
The sshd process is responsible for managing clients that connect to the system by using ssh (which is
short for secure shell).
The login process is responsible for managing clients that directly log onto the system.
In this example, a client has logged on and is using the bash shell, which has been assigned pid 8416.
Using the bash command-line interface, this user has created the process ps as well as the emacs editor.
UNIX examples
o fork() system call creates new process
o The new process consists of a copy of the address space of the original process.
o Both processes (the parent and the child) continue execution at the instruction after the fork(),
with one difference: the return code for the fork() is zero for the new (child) process,
whereas the (nonzero) process identifier of the child is returned to the parent.
o exec() system call used after a fork() to replace the process' memory space with a new
program.
o The parent can then create more children; or, if it has nothing else to do while the child runs, it
can issue a wait() system call to move itself off the ready queue until the termination of the
child.
PROCESS TERMINATION
Process executes last statement and then asks the operating system to delete it using the exit()
system call.
o Returns status data from child to parent (via wait())
o Process' resources are deallocated by the operating system
Parent may terminate the execution of child processes using the abort() system call. Some
reasons for doing so:
o Child has exceeded allocated resources
o Task assigned to child is no longer required
o The parent is exiting and the operating system does not allow a child to continue if its
parent terminates
Some operating systems do not allow a child to exist if its parent has terminated. In such systems, if a
process terminates, then all its children must also be terminated.
o Cascading termination: all children, grandchildren, etc. are terminated.
o The termination is initiated by the operating system.
The parent process may wait for termination of a child process by using the wait()system call. The
call returns status information and the pid of the terminated process
pid = wait(&status);
When a process terminates, its resources are deallocated by the operating system.
However, its entry in the process table must remain there until the parent calls wait().
If no parent is waiting (it has not invoked wait()), the process is a zombie
If the parent terminated without invoking wait(), the process is an orphan
INTERPROCESS COMMUNICATION
Processes within a system may be independent or cooperating
A process is independent if it cannot affect or be affected by the other processes executing in the
system.
Any process that does not share data with any other process is independent
Cooperating process can affect or be affected by other processes, including sharing data
Reasons for cooperating processes:
o Information sharing
Since several users may be interested in the same piece of information (for instance, a
shared file), we must provide an environment to allow concurrent access to such information
o Computation speedup
If we want a particular task to run faster, we must break it into subtasks, each of which will
be executing in parallel with the others. Notice that such a speedup can be achieved only if
the computer has multiple processing cores.
o Modularity
We may want to construct the system in a modular fashion, dividing the system functions
into separate processes or threads.
o Convenience
Even an individual user may work on many tasks at the same time. For instance, a user may
be editing, listening to music, and compiling in parallel
Cooperating processes need interprocess communication (IPC)
Two models of IPC
o Shared memory
o Message passing
Communications Models
Shared memory systems
An area of memory shared among the processes that wish to communicate
The communication is under the control of the user processes, not the operating system.
A major issue is to provide a mechanism that will allow the user processes to synchronize their actions
when they access shared memory.
Producer-Consumer Problem
Paradigm for cooperating processes, producer process produces information that is consumed by a
consumer process
o unbounded-buffer places no practical limit on the size of the buffer
o bounded-buffer assumes that there is a fixed buffer size
The shared buffer is implemented as a circular array with two logical pointers: in and out.
The variable in points to the next free position in the buffer; out points to the first full position in
the buffer.
The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out.
The producer process has a local variable next_produced in which the new item to be produced is
stored.
The consumer process has a local variable next_consumed in which the item to be consumed is stored.
Bounded-Buffer – Shared-Memory Solution
Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Solution is correct, but can only use BUFFER_SIZE-1 elements
Bounded-Buffer – Producer
item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer
item next_consumed;
while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in next consumed */
}
MESSAGE PASSING
Mechanism for processes to communicate and to synchronize their actions
Message system – processes communicate with each other without resorting to shared variables
IPC facility provides two operations:
o send(message)
o receive(message)
The message size is either fixed or variable
If processes P and Q wish to communicate, they need to:
o Establish a communicationlinkbetween them
o Exchange messages via send/receive
Implementation issues:
o How are links established?
o Can a link be associated with more than two processes?
o How many links can there be between every pair of communicating processes?
o What is the capacity of a link?
o Is the size of a message that the link can accommodate fixed or variable?
o Is a link unidirectional or bi-directional?
Implementation of communication link
o Physical:
Shared memory
Hardware bus
Network
o Logical:
Direct or indirect
Synchronous or asynchronous
Automatic or explicit buffering
Direct Communication
Processes must name each other explicitly:
o send (P, message) – send a message to process P
o receive(Q, message) – receive a message from process Q
Properties of communication link
o Links are established automatically
o A link is associated with exactly one pair of communicating processes
o Between each pair there exists exactly one link
o The link may be unidirectional, but is usually bi-directional
Indirect Communication
Messages are directed to and received from mailboxes (also referred to as ports)
o Each mailbox has a unique id
o Processes can communicate only if they share a mailbox
Properties of communication link
o Link established only if processes share a common mailbox
o A link may be associated with many processes
o Each pair of processes may share several communication links
o Link may be unidirectional or bi-directional
Operations
o create a new mailbox (port)
o send and receive messages through mailbox
o destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Mailbox sharing
o P1, P2, and P3 share mailbox A
o P1 sends; P2 and P3 receive
o Who gets the message?
Solutions
o Allow a link to be associated with at most two processes
o Allow only one process at a time to execute a receive operation
o Allow the system to select the receiver arbitrarily. The sender is notified who the receiver was.
Synchronization
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
o Blocking send – the sender is blocked until the message is received
o Blocking receive – the receiver is blocked until a message is available
Non-blocking is considered asynchronous
o Non-blocking send – the sender sends the message and continues
o Non-blocking receive – the receiver receives:
A valid message, or
Null message
Different combinations possible
o If both send and receive are blocking, we have a rendezvous
Buffering
Queue of messages attached to the link.
implemented in one of three ways
o Zero capacity – no messages are queued on a link.
Sender must wait for the receiver.
o Bounded capacity – finite length of n messages
Sender must wait if link full
o Unbounded capacity – infinite length
Sender never waits
CPU SCHEDULING
Basic Concepts
Maximum CPU utilization obtained with multiprogramming
CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
CPU burst followed by I/O burst
CPU Scheduler
Short-term scheduler selects from among the processes in ready queue, and allocates the CPU to one
of them
o Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this
involves:
o switching context
o switching to user mode
o jumping to the proper location in the user program to restart that program
Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
SCHEDULING CRITERIA
CPU utilization – keep the CPU as busy as possible
Throughput – Number of processes that complete their execution per time unit
Turnaround time – The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the first response
is produced, not output (for time-sharing environment)
SCHEDULING ALGORITHMS
First- Come, First-Served (FCFS) Scheduling
The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm.
With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the
FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked
onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The
running process is then removed from the queue. The code for FCFS scheduling is simple to write and
understand.
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Suppose that the processes arrive in the order:
P2, P3, P1
The Gantt chart for the schedule is:
P2 P3 P1
0 3 6 30
Waiting time for P1 = 6;P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
o Convoy effect - all the other processes wait for the one big process to get off the CPU.
Shortest-Job-First (SJF) Scheduling
When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next
CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Note that a more
appropriate term for this scheduling method would be the shortest-next-CPU-burst algorithm, because
scheduling depends on the length of the next CPU burst of a process, rather than its total length.
Advantage : SJF is optimal – gives minimum average waiting time for a given set of processes
Disadvantage: The difficulty is knowing the length of the next CPU request
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
SJF scheduling chart
P4 P1 P3 P2
0 3 9 16 24
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
Example of Shortest-remaining-time-first
Now we add the concepts of varying arrival times and preemption to the analysis
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26
Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
o Preemptive
o Nonpreemptive
SJF is priority scheduling where priority is the inverse of predicted next CPU burst time
Problem: Starvation – low-priority processes may never execute
Solution: Aging – as time progresses, increase the priority of the process
Example of Priority Scheduling
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt chart
P2 P5 P1 P3 P4
0 1 6 16 18 19
Average waiting time = 8.2 msec
Round Robin (RR)
Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Timer interrupts every quantum to schedule next process
Performance
o q large ⇒ FIFO
o q small ⇒ q must be large with respect to context switch, otherwise overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Typically, higher average turnaround than SJF, but better response
q should be large compared to context switch time
q usually 10ms to 100ms, context switch < 10 usec