Unix Module 2
Process
A process is a program in execution.
Differences between Process and Program
Process                                  Program
A process is a dynamic object.           A program is a static object.
Process States
When a process executes, it changes state; the state of a process is generally determined by
the current activity of the process. Each process may be in one of the following states:
1. New : The process is being created.
2. Running : The process is being executed.
3. Waiting : The process is waiting for some event to occur.
4. Ready : The process is waiting to be assigned to a processor.
5. Terminated : The process has finished execution.
Only one process can be running on any processor at any time, but many processes may be in
the ready and waiting states. The ready processes are loaded into a “ready queue”.
Diagram of process state
a) New -> Ready : The OS creates the process and prepares it to be executed, then moves it into the ready queue.
b) Ready -> Running : The OS selects one of the jobs from the ready queue and moves it from ready to running.
c) Running -> Waiting : A process is put into the waiting state if it needs an event to occur or requires an I/O device.
d) Waiting -> Ready : A process in the waiting state is moved to the ready state when the event for which it has been waiting completes.
Process Control Block:
It is also called a Task Control Block. It contains many pieces of information associated with a
specific process:
Process State
Program Counter
CPU Registers
Accounting Information
Threads:
A process is divided into a number of light-weight processes; each light-weight process is said to be
a thread. A thread has a program counter (keeps track of which instruction to execute
next), registers (hold its current working variables), and a stack (execution history).
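The idea that threads share a process's memory while keeping their own execution state can be illustrated with a short sketch. Python is used here purely for illustration (the course examples are pseudocode); the names `worker` and `results` are made up for this sketch.

```python
import threading

results = {}

def worker(name, n):
    # Each thread has its own stack and program counter, but shares the
    # process's memory -- here, the `results` dict.
    results[name] = sum(range(n))

# Create two light-weight threads inside the same process.
t1 = threading.Thread(target=worker, args=("t1", 10))
t2 = threading.Thread(target=worker, args=("t2", 5))
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # both threads wrote into the shared dict
```

Because both threads write into the same `results` dict, no copying between address spaces is needed, which is exactly why thread communication is cheaper than process communication.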
Thread States:
Differences between Process and Thread

Process                                               Thread
Takes more time to create.                            Takes less time to create.
Takes more time to complete execution and terminate.  Takes less time to terminate.
Execution is very slow.                               Execution is very fast.
Takes more time to switch between two processes.      Takes less time to switch between two threads.
Communication between two processes is difficult.     Communication between two threads is easy.
Processes cannot share the same memory area.          Threads can share the same memory area.
System calls are required to communicate.             System calls are not required.
Processes are loosely coupled.                        Threads are tightly coupled.
Requires more resources to execute.                   Requires fewer resources to execute.
Multithreading
A process is divided into a number of smaller tasks; each task is called a thread. When a number of
threads within a process execute at the same time, it is called multithreading.
If a program is multithreaded, then even when some portion of it is blocked, the whole program is
not blocked: the rest of the program continues working if multiple CPUs are available.
Multithreading gives the best performance; with only a single thread, no performance benefit is
achieved no matter how many CPUs are available.
Process creation is heavy-weight while thread creation is light-weight.
Threads can simplify code and increase efficiency.
1) User Level Threads: these threads are created and managed by a thread library in user space; the kernel is not aware of them.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
OPERATING SYSTEMS NOTES II YEAR/I SEM MRCET
Disadvantages
2) Kernel Threads: the kernel creates, schedules, and manages these threads. These threads are
slower to create and manage. If one thread in a process is blocked, the overall process need not be blocked.
Advantages
The kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
If one thread in a process is blocked, the kernel can schedule another thread of the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than user threads.
Transfer of control from one thread to another within the same process requires a mode switch to
the kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is
a good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system call need not block the entire
process. Multithreading models are of three types:
Many to many relationship.
Many to one relationship.
One to one relationship.
In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads.
The number of kernel threads may be specific to either a particular application or a particular machine.
The following diagram shows the many to many model. In this model, developers can create as many user
threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
The many to one model maps many user-level threads to one kernel-level thread. Thread management is done
in user space. When a thread makes a blocking system call, the entire process is blocked. Only one
thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If the operating system does not support kernel threads, user-level thread libraries use the many to one
model.
In the one to one model there is a one to one relationship between user-level threads and kernel-level
threads. This model provides more concurrency than the many to one model: it allows another thread to run
when a thread makes a blocking system call, and it supports multiple threads executing in parallel on
multiprocessors. The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one to one model.
Process Scheduling: Foundation and Scheduling objectives, Types of Schedulers, Scheduling criteria:
CPU utilization, Throughput, Turnaround Time, Waiting Time, Response Time; Scheduling algorithms:
Pre-emptive and Non pre-emptive, FCFS, SJF, RR; Multiprocessor scheduling: Real Time scheduling: RM
and EDF.
Inter-process Communication: Critical Section, Race Conditions, Mutual Exclusion, Hardware Solution,
Strict Alternation, Peterson’s Solution, The Producer/Consumer Problem, Semaphores, Event Counters,
Monitors, Message Passing, Classical IPC Problems: Readers & Writers Problem, Dining Philosophers
Problem etc.
PROCESS SCHEDULING:
In multiprogramming the CPU is always busy, because the CPU switches from one job to another; in
simple computers the CPU sits idle until an I/O request is granted.
Scheduling is an important OS function: all resources (CPU, memory, devices, ...) are scheduled
before use.
Process scheduling is an essential part of a multiprogramming operating system. Such
operating systems allow more than one process to be loaded into executable memory at
a time, and the loaded processes share the CPU using time multiplexing.
Scheduling Objectives
Maximize throughput.
Maximize number of users receiving acceptable response times.
Be predictable.
Balance resource use.
Avoid indefinite postponement.
Enforce Priorities.
Give preference to processes holding key resources
SCHEDULING QUEUES: Just as people wait in rooms, processes wait in queues. There are 3 types:
1. Job queue: when processes enter the system, they are put into a job queue, which
consists of all processes in the system. Processes in the job queue reside on mass storage and await
the allocation of main memory.
2. Ready queue: a process that is present in main memory and is ready to be allocated the
CPU for execution is kept in the ready queue.
3. Device queue: a process that is in the waiting state, waiting for an I/O event to
complete, is said to be in a device queue.
The processes waiting for a particular I/O device form that device's queue.
Scheduler duties:
Context Switch: Assume main memory contains more than one process. If the CPU is executing a process and
its time expires, or a higher-priority process enters main memory, then the scheduler saves information
about the current process in its PCB and switches to execute another process. The act of the scheduler
moving the CPU from one process to another is known as a context switch.
Non-preemptive scheduling: the CPU is assigned to one process and is not released until the completion of
that process. The CPU is assigned to some other process only after the previous process has finished.
Preemptive scheduling: here the CPU can be released from a process even in the middle of its
execution. Suppose the CPU is executing p1 and receives a signal from process p2. The OS compares the
priorities of p1 and p2: if p1 > p2, the CPU continues executing p1; if p1 < p2, the CPU preempts p1 and is assigned to p2.
Dispatcher: The main job of the dispatcher is switching the CPU from one process to another.
The dispatcher connects the CPU to the process selected by the short-term scheduler.
Dispatch latency: The time taken by the dispatcher to stop one process and start another
is known as dispatch latency. If the dispatch latency increases, the degree of
multiprogramming decreases.
SCHEDULING CRITERIA:
1. Throughput: how many jobs are completed by the CPU within a time period.
2. Turnaround time: the time interval between the submission of a process
and the time of its completion.
TAT = waiting time in ready queue + executing time + waiting time in waiting queue for
I/O.
3. Waiting time: the time spent by a process waiting for the CPU to be allocated.
4. Response time: the time duration between submission and the first response.
5. CPU utilization: the CPU is a costly device and must be kept as busy as possible.
E.g.: CPU efficiency of 90% means the CPU is busy for 90 units and idle for 10 units.
CPU SCHEDULING ALGORITHMS:
1. First Come First Served (FCFS) scheduling: the process that requests the CPU
first holds the CPU first. When a process requests the CPU it is loaded into the ready queue,
and the CPU is connected to the processes in order of arrival.
Consider a set of processes that arrive at time 0, with the length of each CPU burst
given in milliseconds.
The burst time is the time a job requires the CPU in order to execute, given in milliseconds.
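The burst-time table for this example was not reproduced here, so the following sketch uses an assumed set of bursts (P1 = 24, P2 = 3, P3 = 3 ms, a common textbook set) purely to illustrate how FCFS waiting times are computed. Python is used for illustration; `fcfs` is a made-up helper name.

```python
def fcfs(bursts):
    """Return (waiting_times, average_waiting_time) for processes that
    all arrive at time 0 and are served in the given (arrival) order."""
    waiting, elapsed = [], 0
    for b in bursts:
        waiting.append(elapsed)  # a process waits for all earlier bursts
        elapsed += b
    return waiting, sum(waiting) / len(waiting)

waits, avg = fcfs([24, 3, 3])   # assumed burst times in ms
print(waits, avg)               # [0, 24, 27] 17.0
```

Note how the long first burst makes every later process wait (the "convoy effect"): serving the short jobs first would cut the average waiting time sharply.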
2. Shortest Job First (SJF) scheduling: the CPU is assigned to the process having the
smallest CPU burst time. If two processes have the same CPU burst time, FCFS is used to break the tie.

Process  Burst time
P1       5
P2       24
P3       16
P4       10
P5       3

P5 has the least CPU burst time (3 ms), so the CPU is assigned to P5 first. After completion of
P5, the short-term scheduler searches for the next shortest job (P1), and so on.
Average Waiting Time :
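For the SJF table above (all processes arriving at time 0), the average waiting time can be computed with a short sketch. Python is used for illustration; `sjf_average_wait` is a made-up helper name.

```python
def sjf_average_wait(bursts):
    """bursts: {process: burst}; all processes arrive at time 0.
    SJF serves them in increasing burst order (FCFS breaks ties)."""
    order = sorted(bursts, key=lambda p: bursts[p])
    waiting, elapsed = {}, 0
    for p in order:
        waiting[p] = elapsed        # time spent waiting in the ready queue
        elapsed += bursts[p]
    return waiting, sum(waiting.values()) / len(waiting)

waits, avg = sjf_average_wait({"P1": 5, "P2": 24, "P3": 16, "P4": 10, "P5": 3})
print(waits, avg)   # runs in order P5, P1, P4, P3, P2 -> average 12.6 ms
```

Each process waits exactly the sum of the bursts that run before it, which is why putting the shortest jobs first minimizes the average waiting time.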
3. Shortest Remaining Time First (SRTF) scheduling: the short-term scheduler always chooses the
process that has the shortest remaining time. When a new process joins the ready queue, the
short-term scheduler compares the remaining time of the executing process with the CPU burst time
of the new process. If the new process has the least CPU burst time, the scheduler selects that
job and connects it to the CPU; otherwise the old process continues.
Process  Burst time  Arrival time
P1       3           0
P2       6           2
P3       4           4
P4       5           6
P5       2           8
P1 arrives at time 0 and executes first. P2 arrives at time 2; compare P1's remaining time (3 - 2 = 1)
with P2's burst (6): 1 < 6, so P1 continues. After P1, P2 executes. At time 4, P3 arrives; compare P2's
remaining time (6 - 1 = 5) with P3's burst (4): 4 < 5, so P3 executes. At time 6, P4 arrives; compare
P3's remaining time (4 - 2 = 2) with P4's burst (5): 2 < 5, so P3 continues. After P3, the ready queue
contains P2, P4, and P5; P5 (2 ms) is the least of the three, so P5 executes, then P2, then P4.
FORMULA : Turnaround time = Finish time - Arrival time
Turnaround time for P1 => 3 - 0 = 3
Turnaround time for P2 => 15 - 2 = 13
Turnaround time for P3 => 8 - 4 = 4
Turnaround time for P4 => 20 - 6 = 14
Turnaround time for P5 => 10 - 8 = 2
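The finish times used above can be reproduced with a minimal millisecond-step simulation. Python is used for illustration; `srtf` is a made-up helper name, and ties are broken in favor of the earlier-listed process, as in the walkthrough.

```python
def srtf(procs):
    """procs: {name: (burst, arrival)}. Simulate Shortest Remaining
    Time First in 1 ms steps; return {name: finish_time}."""
    remaining = {p: b for p, (b, a) in procs.items()}
    arrival = {p: a for p, (b, a) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= t]
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda q: remaining[q])  # shortest remaining time
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
    return finish

procs = {"P1": (3, 0), "P2": (6, 2), "P3": (4, 4), "P4": (5, 6), "P5": (2, 8)}
print(srtf(procs))  # {'P1': 3, 'P3': 8, 'P5': 10, 'P2': 15, 'P4': 20}
```

Subtracting each process's arrival time from these finish times gives the turnaround times listed above.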
4) ROUND ROBIN SCHEDULING: It is designed especially for time-sharing systems. Here the CPU
switches between the processes: when the time quantum expires, the CPU is switched to another job.
A small unit of time is called a time quantum or time slice; a time quantum is generally from
10 to 100 ms and generally depends on the OS. Here the ready queue is a circular queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to interrupt after one time
quantum, and dispatches the process.
Process  Burst time
P1       30
P2       6
P3       8
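The quantum for this example is not stated, so the sketch below assumes a 5 ms quantum purely for illustration. Python is used here; `round_robin` is a made-up helper name.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: list of (name, burst); all arrive at time 0.
    Return {name: completion_time} under Round Robin scheduling."""
    queue = deque(bursts)          # the circular ready queue
    t, completion = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)    # run for at most one time quantum
        t += run
        if rem > run:
            queue.append((name, rem - run))  # back of the circular queue
        else:
            completion[name] = t
    return completion

print(round_robin([("P1", 30), ("P2", 6), ("P3", 8)], quantum=5))
# {'P2': 21, 'P3': 24, 'P1': 44} with the assumed 5 ms quantum
```

Note how the short jobs (P2, P3) finish early even though the long job P1 arrived first, which is the point of time slicing in a time-sharing system.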
5) PRIORITY SCHEDULING :
Process  Burst time  Priority
P2       12          4
P3       1           5
P4       3           1
P5       4           3
P4 has the highest priority, so the CPU is allocated to process P4 first, then P1, P5, P2, P3.
Disadvantage: Starvation
Starvation means only high-priority processes are executing, while low-priority
processes wait for the CPU for a very long period of time.
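A sketch of non-preemptive priority scheduling for the example above. P1's table row was lost to a page break, so its burst (10) and priority (2) are assumed values chosen only to match the stated order P4, P1, P5, P2, P3; a lower number means higher priority. Python is used for illustration; `priority_schedule` is a made-up helper name.

```python
def priority_schedule(procs):
    """procs: {name: (burst, priority)}; lower number = higher priority.
    All arrive at time 0. Return execution order and completion times."""
    order = sorted(procs, key=lambda p: procs[p][1])
    t, completion = 0, {}
    for p in order:
        t += procs[p][0]           # run the whole burst, non-preemptively
        completion[p] = t
    return order, completion

# P1's burst (10) and priority (2) are assumed; the rest are from the table.
procs = {"P1": (10, 2), "P2": (12, 4), "P3": (1, 5), "P4": (3, 1), "P5": (4, 3)}
order, completion = priority_schedule(procs)
print(order)   # ['P4', 'P1', 'P5', 'P2', 'P3']
```

If higher-priority jobs keep arriving, the lowest-priority job (P3 here) keeps sliding to the end of the order, which is exactly the starvation problem described above.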
2) Processor Affinity
Successive memory accesses by a process are often satisfied in cache memory.
What happens if the process migrates to another processor? The contents of cache
memory must be invalidated on the first processor, and the cache of the second processor
must be repopulated. Most symmetric multiprocessor systems therefore try to avoid
migrating processes from one processor to another, keeping a process
running on the same processor. This is called processor affinity.
a) Soft affinity:
Soft affinity occurs when the system attempts to keep processes on the same
processor but makes no guarantees.
b) Hard affinity:
Process specifies that it is not to be moved between processors.
3) Load balancing:
Load balancing ensures that one processor is not sitting idle while another is overloaded.
Balancing can be achieved through push migration or pull migration.
Push migration:
Push migration involves a separate process that runs periodically (e.g. every 200 ms)
and moves processes from heavily loaded processors onto less loaded processors.
Pull migration:
Pull migration involves idle processors taking processes from the ready queues of the other
processors.
Task  e (execution time)  P (period)
T1    0.5                 3
T2    1                   4
T3    2                   6
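Rate Monotonic schedulability for a task set like the one above can be checked with the Liu & Layland utilization bound. The columns are read here as execution time and period, which is an assumption; `rm_utilization_test` is a made-up helper name, and Python is used for illustration.

```python
def rm_utilization_test(tasks):
    """tasks: list of (execution_time, period).
    Liu & Layland: RM schedules the set if U <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    u = sum(e / p for e, p in tasks)        # total CPU utilization
    bound = n * (2 ** (1 / n) - 1)          # utilization bound for n tasks
    return u, bound, u <= bound

# Columns of the table above read as (e, P) -- an assumption.
u, bound, ok = rm_utilization_test([(0.5, 3), (1, 4), (2, 6)])
print(round(u, 3), round(bound, 3), ok)     # 0.75 0.78 True
```

Since the utilization (0.75) is below the three-task bound (about 0.78), this task set is guaranteed schedulable under Rate Monotonic priorities. The test is sufficient but not necessary: a set that fails it may still be schedulable.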
Example
Consider the following task set in Table 1. P represents the Period, e the Execution time and D stands
for the Deadline. Assume that the job priorities are re-evaluated at the release and deadline of a job.
Task  P    e    D
T1    2    0.5  2
T2    4    1    4
T3    5    1.5  5
Solution
At T = 10.5, the next job with the closest deadline is T3 because the next T2 job will be released
at T = 12. So T3 is scheduled until completion.
At T = 12, the next release of T1 is scheduled.
At T = 12.5, T2 is scheduled as it is the job with the closest deadline.
At T = 14, the next release of T1 is scheduled.
At T = 15, the next release of T3 is scheduled because it is now the job with the closest deadline
because the next release of T1 and T2 is at 16ms. T3 runs for 1ms.
At T = 16, T3 is pre-empted because a new release of T1, which now has the closest deadline, is
available.
At T = 16.5, T2 is the job with the closest deadline, so it is scheduled for the duration of its
execution time.
At T = 17.5, since T1 and T2 have completed, T3 resumes execution to complete its task which
ran for only 1ms the last time. T3 completes execution at T = 18.
At T = 18, a new instance of T1 is released and scheduled to run for its entire execution time.
At T = 18.5, no job is released yet because a new release of T1, T2 and T3 are at 20ms.
Fig. 2 shows the EDF schedule from T = 0 to T = 20.
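The walkthrough above can be checked with a small discrete EDF simulation. This is an illustrative sketch in Python: deadlines are taken equal to periods (as in the table), time advances in 0.5 ms steps since all execution times are multiples of 0.5, and `edf_misses` is a made-up helper name.

```python
def edf_misses(tasks, horizon, dt=0.5):
    """tasks: list of (period, exec_time); deadline = period, as in the
    table above. Simulate EDF in dt steps; count deadline misses."""
    jobs, misses = [], 0                      # jobs: [remaining, deadline]
    for k in range(int(horizon / dt)):
        t = k * dt
        for period, exec_time in tasks:
            if (t / period).is_integer():     # new job released at t
                jobs.append([exec_time, t + period])
        misses += sum(1 for j in jobs if j[1] <= t and j[0] > 1e-9)
        jobs = [j for j in jobs if j[0] > 1e-9 and j[1] > t]
        if jobs:
            min(jobs, key=lambda j: j[1])[0] -= dt  # earliest deadline runs
    return misses

print(edf_misses([(2, 0.5), (4, 1), (5, 1.5)], horizon=20))  # 0 deadline misses
```

The total utilization is 0.5/2 + 1/4 + 1.5/5 = 0.8, and since EDF is optimal on a single processor for utilization up to 1, the simulation confirms that no deadline is missed over the hyperperiod.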
Process synchronization refers to the idea that multiple processes are to join up or
handshake at a certain point, in order to reach an agreement or commit to a certain
sequence of actions. Coordination of simultaneous processes to complete a task is
known as process synchronization.
The critical section problem
Consider a system consisting of n processes, each having a segment of code. This
segment of code is said to be its critical section.
E.g.: a railway reservation system. Two persons at different stations want to reserve
tickets on the same train to the same destination, and both try to make the reservation
at the same time. Unfortunately, only one berth is available, and both are trying for
that berth. This is the critical section problem. The solution is that when one process
is executing in its critical section, no other process is to be allowed to execute in
its critical section.
The critical section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical section. The
section of code implementing this request is the entry section. The critical section
may be followed by an exit section. The remaining code is the remainder section.
Critical section:
The portion in any program that accesses a shared resource is called as critical section (or)
critical region.
Peterson’s solution:
Peterson's solution is one of the solutions to the critical section problem for two
processes. When one process is executing in its critical section, the other process
executes the rest of its code, and vice versa.
Peterson's solution requires two shared data items:
1) turn: indicates whose turn it is to enter the critical section. If turn == i, then
process i is allowed into its critical section.
2) flag: indicates when a process wants to enter its critical section.
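The mutual exclusion guarantee of Peterson's algorithm can be checked mechanically by exploring every interleaving of the two processes' atomic steps. This is an illustrative model in Python (the helper names `step` and `check_mutual_exclusion` are made up): each process sets its flag, yields the turn, spins while the other's flag is up and it is the other's turn, then enters the critical section.

```python
from collections import deque

def step(state, i):
    """Advance process i by one atomic action; return the new state."""
    pc, flag, turn = list(state[0]), list(state[1]), state[2]
    j = 1 - i
    if pc[i] == 0:            # flag[i] = True
        flag[i] = True; pc[i] = 1
    elif pc[i] == 1:          # turn = j
        turn = j; pc[i] = 2
    elif pc[i] == 2:          # busy-wait: while flag[j] and turn == j
        if not (flag[j] and turn == j):
            pc[i] = 3         # enter critical section
    elif pc[i] == 3:          # exit: flag[i] = False
        flag[i] = False; pc[i] = 4
    else:
        return None           # process is done
    return (tuple(pc), tuple(flag), turn)

def check_mutual_exclusion():
    start = ((0, 0), (False, False), 0)
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s[0] == (3, 3):    # both in the critical section at once
            return False
        for i in (0, 1):
            t = step(s, i)
            if t is not None and t not in seen:
                seen.add(t); frontier.append(t)
    return True

print(check_mutual_exclusion())  # True: no interleaving violates mutual exclusion
```

The search visits every reachable combination of program counters, flags, and turn, so returning True means no interleaving can put both processes in their critical sections simultaneously.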
Synchronization hardware
In a uniprocessor multiprogrammed system, mutual exclusion can be obtained by
disabling the interrupts before the process enters its critical section and enabling
them after it has exited the critical section.
disable interrupts
    critical section
enable interrupts
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
When a process wants to enter its critical section and the value of lock is false, TestAndSet
returns false and the value of lock becomes true. Thus, for other processes wanting
to enter their critical sections, TestAndSet returns true, and those processes do busy
waiting until the process exits its critical section and sets the value of lock back to false.
• Definition:
boolean TestAndSet(boolean *lock) {
    boolean temp = *lock;   // remember the previous value
    *lock = true;           // set the lock (done atomically in hardware)
    return temp;
}
Algorithm for TestAndSet
do {
    while (TestAndSet(&lock))
        ; // do nothing (busy wait)
    // critical section
    lock = false;
    // remainder section
} while (TRUE);
lock is a global variable initialized to false; each process has a local variable key.
When a process wants to enter its critical section, the value of lock is false and key is
true:
lock = false
key = true
After the swap instruction,
lock = true
key = false
Now key is false, so the process exits the repeat-until loop and enters its critical section.
While a process is in its critical section (lock = true), other processes wanting to enter
their critical sections will have
lock = true
key = true
Hence they do busy waiting in the repeat-until loop until the process exits its critical
section and sets the value of lock to false.
Semaphores
A semaphore is an integer variable that is accessed only through two operations:
1) wait: the wait operation decrements the count by 1.
If the resulting value is negative, the process executing the wait operation is blocked.
2) signal: the signal operation increments the count by 1. If the value is not positive,
then one of the processes blocked in a wait operation is unblocked.
wait (S) {
    while (S <= 0)
        ; // no-op
    S--;
}
signal (S) {
    S++;
}
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
The first process that executes the wait operation is immediately granted entry, and sem.count
becomes 0. If some other process wants the critical section and executes wait(), it is
blocked, since the value becomes -1. When the first process exits the critical section it executes
signal(); sem.count is incremented by 1, and a blocked process is removed from the semaphore queue
and added to the ready queue.
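The mutex usage above can be sketched with a binary semaphore protecting a shared counter. Python's `threading.Semaphore` is used for illustration: `acquire()` plays the role of wait() and `release()` the role of signal(); `worker` is a made-up name.

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore used as a mutex
counter = 0

def worker():
    global counter
    for _ in range(10000):
        sem.acquire()          # wait(): blocks if another thread holds it
        counter += 1           # critical section
        sem.release()          # signal(): wakes one blocked thread, if any

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 40000: the semaphore serialized the updates
```

Because wait() and signal() bracket the critical section, at most one thread updates the counter at a time and no increment is lost.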
Problems:
1) Deadlock
Deadlock occurs when multiple processes are blocked, each waiting for a resource
that can only be freed by one of the other blocked processes.
2) Starvation
One or more processes are blocked forever and never get a chance to take their
turn in the critical section.
3) Priority inversion
A low-priority process is running, medium-priority processes are waiting for the
low-priority process, and high-priority processes are waiting for the medium-priority
processes. This is called priority inversion.
The two most common kinds of semaphores are counting semaphores and
binary semaphores. Counting semaphores represent multiple resources,
while binary semaphores, as the name implies, represent two possible states
(generally 0 or 1; locked or unlocked).
Classic problems of synchronization
1) Bounded-buffer problem
Two processes share a common, fixed-size buffer.
The producer puts information into the buffer, and the consumer takes it out.
A problem arises when the producer wants to put a new item in the buffer but it is
already full; the solution is for the producer to wait until the consumer has
consumed at least one item. Similarly, if the consumer wants to remove an item
from the buffer and sees that the buffer is empty, it goes to sleep until the producer
puts something in the buffer and wakes it up.
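The bounded-buffer solution just described is usually built from two counting semaphores (free slots and filled slots) plus a mutex. A sketch in Python for illustration; the names `producer`, `consumer`, and the buffer size N = 4 are chosen for this example.

```python
import threading
from collections import deque

N = 4                               # buffer capacity (assumed for the sketch)
buffer = deque()
empty = threading.Semaphore(N)      # counts free slots
full = threading.Semaphore(0)       # counts filled slots
mutex = threading.Lock()            # protects the buffer itself
consumed = []

def producer():
    for item in range(20):
        empty.acquire()             # wait for a free slot (sleeps if full)
        with mutex:
            buffer.append(item)
        full.release()              # signal: one more item available

def consumer():
    for _ in range(20):
        full.acquire()              # wait for an item (sleeps if empty)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()             # signal: one more free slot

p = threading.Thread(target=producer); c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # items arrive in order: [0, 1, ..., 19]
```

The `empty` semaphore makes the producer sleep when the buffer is full, and `full` makes the consumer sleep when it is empty, exactly matching the waiting behavior described above.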
Five philosophers are seated on 5 chairs around a table. Each philosopher has a
plate full of noodles and needs a pair of forks to eat. There are only
5 forks available altogether, with exactly one fork between any two plates of
noodles.
In order to eat, a philosopher lifts two forks, one to his left and the other to his
right. If he is successful in obtaining both forks, he starts eating; after some time, he
stops eating and puts both forks down.
Several remedies:
1) Allow at most 4 philosophers to be sitting simultaneously at the table.
2) Allow a philosopher to pick up his forks only if both forks are available.
3) An odd philosopher picks up first his left fork and then his right fork, while an even
philosopher picks up his right fork first and then his left fork.
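Remedy 3 (the asymmetric fork order) can be sketched with one lock per fork. Python threads are used for illustration; `philosopher` and `meals` are made-up names, and each philosopher eats a fixed 3 times so the run terminates.

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    # Remedy 3: odd philosophers take the left fork first, even ones the
    # right fork first, which breaks the circular wait and so prevents deadlock.
    first, second = (left, right) if i % 2 else (right, left)
    for _ in range(3):
        with first:
            with second:
                meals[i] += 1      # eating, with both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # every philosopher ate 3 times, with no deadlock
```

If every philosopher instead grabbed the left fork first, each could hold one fork while waiting for the next, closing a cycle of waits; the asymmetric order makes such a cycle impossible.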
MONITORS
The disadvantage of the semaphore is that it is an unstructured construct: wait and signal operations
can be scattered throughout a program, and hence debugging becomes difficult.
A monitor is an object that contains both the data and procedures needed to perform allocation of
a shared resource. To accomplish resource allocation using monitors, a process must call a
monitor entry routine. Many processes may want to enter the monitor at the same time, but
only one process at a time is allowed to enter. Data inside a monitor may be either global to all
routines within the monitor (or) local to a specific routine. Monitor data is accessible only within
the monitor. There is no way for processes outside the monitor to access monitor data. This is a
form of information hiding.
If a process calls a monitor entry routine while no other process is executing inside the
monitor, the process acquires a lock on the monitor and enters it. While a process is in the
monitor, other processes may not enter the monitor to acquire the resource. If a process calls a
monitor entry routine while the monitor is locked, the monitor makes the calling process
wait outside the monitor until the lock on the monitor is released. The process that has the
resource will call a monitor entry routine to release the resource; this routine frees the
resource and then calls signal to allow one of the waiting processes to enter the monitor and
acquire the resource. The monitor gives higher priority to waiting processes than to newly
arriving ones.
Structure:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (...) { .... }
    ...
    procedure Pn (...) { .... }
    Initialization code (...) { ... }
}
Processes can call the procedures P1, P2, P3, ... but they cannot access the local variables of the
monitor.
Monitor provides condition variables along with two operations on them i.e. wait and signal.
wait(condition variable)
signal(condition variable)
Every condition variable has an associated queue. A process calling wait on a
particular condition variable is placed into the queue associated with that condition
variable. A process calling signal on a particular condition variable causes a process
waiting on that condition variable to be removed from the queue associated with it.
monitor producerconsumer
{
    condition full, empty;
    int count;

    procedure insert(item)
    {
        if (count == MAX)
            wait(full);
        insert_item(item);
        count = count + 1;
        if (count == 1)
            signal(empty);
    }

    procedure remove()
    {
        if (count == 0)
            wait(empty);
        remove_item(item);
        count = count - 1;
        if (count == MAX - 1)
            signal(full);
    }
}

procedure producer()
{
    producerconsumer.insert(item);
}

procedure consumer()
{
    producerconsumer.remove();
}
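A rough Python analogue of the producerconsumer monitor above, for illustration only: a lock plays the role of the monitor's implicit mutual exclusion, and `threading.Condition` objects model the `full` and `empty` condition variables. One difference is hedged in the code: Python conditions have signal-and-continue semantics, so the waits use `while` rather than the `if` of the pseudocode. The class and buffer size are made up for this sketch.

```python
import threading

class ProducerConsumer:
    """Monitor-style bounded buffer: the lock provides the monitor's
    mutual exclusion; Conditions model `full` and `empty`."""
    MAX = 3

    def __init__(self):
        self.items = []
        self.lock = threading.Lock()               # enter/exit the monitor
        self.not_full = threading.Condition(self.lock)    # ~ condition full
        self.not_empty = threading.Condition(self.lock)   # ~ condition empty

    def insert(self, item):
        with self.lock:                            # monitor entry routine
            while len(self.items) == self.MAX:
                self.not_full.wait()               # wait(full)
            self.items.append(item)
            self.not_empty.notify()                # signal(empty)

    def remove(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()              # wait(empty)
            item = self.items.pop(0)
            self.not_full.notify()                 # signal(full)
            return item

pc = ProducerConsumer()
out = []
t = threading.Thread(target=lambda: [pc.insert(i) for i in range(10)])
t.start()
for _ in range(10):
    out.append(pc.remove())
t.join()
print(out)   # [0, 1, ..., 9]
```

Because both conditions share the monitor's lock, a thread inside `insert` or `remove` always has exclusive access to `items`, just as the monitor guarantees for its internal data.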
A philosopher may pick up his forks only if both of them are available. A
philosopher can eat only if his two neighbours are not eating; otherwise he
delays himself when he is hungry.
Diningphilosophers.Take_forks( ) : acquires the forks, which may block the process.
Eat noodles ( )
Diningphilosophers.put_forks( ) : releases the forks.
Resuming processes within a monitor
If several processes are suspended on condition x and x.signal( ) is executed by some process,
how do we determine which of the suspended processes should be resumed next?
One solution is FCFS (the process that has been waiting the longest is resumed first), but in
many circumstances such a simple technique is not adequate. An alternate solution is to
assign priorities and wake up the process with the highest priority.