Chapter 4: CPU Scheduling and
Algorithms
CPU scheduling
• In a single-processor system, only one process can run at a time. Others
must wait until the CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running at all
times, to maximize CPU utilization.
• The idea is relatively simple. A process is executed until it must wait,
typically for the completion of some I/O request.
• In a simple computer system, the CPU then just sits idle. All this waiting
time is wasted; no useful work is accomplished.
• With multiprogramming, we try to use this time productively. Several
processes are kept in memory at one time. When one process has to wait,
the operating system takes the CPU away from that process and gives the
CPU to another process.
CPU scheduling
• This pattern continues. Every time one process has to wait, another process
can take over use of the CPU.
• Scheduling of this kind is a fundamental operating-system function.
• Almost all computer resources are scheduled before use. The CPU is, of
course, one of the primary computer resources. Thus, its scheduling is
central to operating-system design.
CPU and Input Output Burst cycle
• The success of CPU scheduling depends on an observed property of
processes:
• process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states.
• Process execution begins with a CPU burst. That is followed by an I/O
burst, which is followed by another CPU burst, then another I/O burst, and
so on.
• Eventually, the final CPU burst ends with a system request to terminate
execution.
CPU and Input Output Burst cycle
CPU Scheduler
• Whenever the CPU becomes idle, the operating system must select one of
the processes in the ready queue to be executed.
• The selection process is carried out by the short-term scheduler, or CPU
scheduler.
• The scheduler selects a process from the processes in memory that are
ready to execute and allocates the CPU to that process.
• Note that the ready queue is not necessarily a first-in, first-out (FIFO)
queue. It may be implemented as a FIFO queue, a priority queue, a tree,
or simply an unordered linked list.
• Conceptually, however, all the processes in the ready queue are lined up
waiting for a chance to run on the CPU.
• The records in the queues are generally process control blocks (PCBs) of
the processes.
Scheduling types
• Preemptive scheduling
• Non-preemptive scheduling
Process state diagram
Scheduling types
• CPU-scheduling decisions may take place under the following four
circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request or an invocation of wait() for the
termination of a child process)
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O)
4. When a process terminates
• For situations 1 and 4, there is no choice in terms of scheduling. A new
process (if one exists in the ready queue) must be selected for execution.
There is a choice, however, for situations 2 and 3.
Scheduling types
• When scheduling takes place only under circumstances 1 and 4, we say
that the scheduling scheme is non-preemptive or cooperative. Otherwise,
it is preemptive.
• Under non-preemptive scheduling, once the CPU has been allocated to a
process, the process keeps the CPU until it releases the CPU either by
terminating or by switching to the waiting state.
Scheduling Criteria
• CPU utilization: We want to keep the CPU as busy as possible.
Conceptually, CPU utilization can range from 0 to 100 percent. In a real
system, it should range from 40 percent (for a lightly loaded system) to 90
percent (for a heavily loaded system).
• Throughput: If the CPU is busy executing processes, then work is being
done. One measure of work is the number of processes that are completed
per time unit, called throughput. For long processes, this rate may be one
process per hour; for short transactions, it may be ten processes per second.
• Turnaround time. From the point of view of a particular process, the
important criterion is how long it takes to execute that process. The
interval from the time of submission of a process to the time of completion
is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the
CPU, and doing I/O.
Scheduling Criteria
• Waiting time: The CPU-scheduling algorithm does not affect the
amount of time during which a process executes or does I/O. It affects only
the amount of time that a process spends waiting in the ready queue.
Waiting time is the sum of the periods spent waiting in the ready queue.
• Response time. In an interactive system, turnaround time may not be the
best criterion. Often, a process can produce some output fairly early and
can continue computing new results while previous results are being output
to the user. Thus, another measure is the time from the submission of a
request until the first response is produced. This measure, called response
time, is the time it takes to start responding, not the time it takes to output
the response.
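• A quick worked example (numbers made up for illustration): suppose a process is submitted at time 0, waits 4 ms in the ready queue, runs a 6 ms CPU burst, and terminates. Its waiting time is 4 ms, its turnaround time is 4 + 6 = 10 ms, and if its first output appears 5 ms after submission, its response time is 5 ms.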
Scheduling Algorithm
• First Come First Served (FCFS)
• Shortest Job First (SJF) scheduling
1. Shortest-next-CPU-burst algorithm
2. Shortest-remaining-time-first algorithm
• Priority scheduling
• Round Robin scheduling
First Come First Served Algorithm
• The simplest CPU-scheduling algorithm is the first-come, first-served
(FCFS) scheduling algorithm. With this scheme, the process that requests
the CPU first is allocated the CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO
queue. When a process enters the ready queue, its PCB is linked onto the
tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the
queue. The running process is then removed from the queue.
• A Gantt chart is a bar chart that illustrates a particular schedule,
including the start and finish times of each of the participating processes.
First Come First Served Algorithm
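As a rough sketch (not from the slides; the process names and burst times below are made up), the FCFS schedule and its per-process waiting and turnaround times can be computed in a few lines of Python:

```python
# Minimal FCFS sketch: processes run to completion in arrival order.
# Illustrative values only; all processes are assumed to arrive at time 0.
processes = [("P1", 24), ("P2", 3), ("P3", 3)]  # (name, CPU burst in ms)

clock = 0
for name, burst in processes:
    waiting = clock              # time spent waiting in the ready queue
    clock += burst               # run the whole burst (non-preemptive)
    print(f"{name}: start={waiting}, finish={clock}, "
          f"waiting={waiting}, turnaround={clock}")
```

Note how the long first burst makes the short jobs wait (the convoy effect): average waiting time here is (0 + 24 + 27) / 3 = 17 ms.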
Shortest-Job-First algorithm
• A different approach to CPU scheduling is the shortest-job-first (SJF)
scheduling algorithm.
• This algorithm associates with each process the length of the process’s
next CPU burst. When the CPU is available, it is assigned to the process
that has the smallest next CPU burst.
• If the next CPU bursts of two processes are the same, FCFS scheduling is
used to break the tie.
• There are two types of SJF algorithm:
1. Non-Preemptive SJF (Shortest-Next-CPU-Burst)
2. Preemptive SJF (Shortest-Remaining-Time-First)
Non-Preemptive SJF (Shortest-Next-CPU-Burst)
• This algorithm associates with each process the length of the process’s
next CPU burst. When the CPU is available, it is assigned to the process
that has the smallest next CPU burst.
• Example
Non-Preemptive SJF (Shortest-Next-CPU-Burst) Example-1
Non-Preemptive SJF (Shortest-Next-CPU-Burst) Example-2
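A minimal Python sketch of the non-preemptive variant (illustrative; the arrival times and bursts below are made up, not taken from the examples above):

```python
# Non-preemptive SJF: at each scheduling point, run the arrived process
# with the smallest next CPU burst to completion. Illustrative values only.
procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]  # (name, arrival, burst)

clock, remaining = 0, list(procs)
while remaining:
    ready = [p for p in remaining if p[1] <= clock]
    if not ready:                            # CPU idle until the next arrival
        clock = min(p[1] for p in remaining)
        continue
    job = min(ready, key=lambda p: (p[2], p[1]))  # shortest burst; FCFS tie-break
    name, arrival, burst = job
    clock += burst                           # run the whole burst, no preemption
    print(f"{name}: waiting={clock - arrival - burst}, turnaround={clock - arrival}")
    remaining.remove(job)
```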
Preemptive SJF (Shortest-Remaining-Time-First)
• The SJF algorithm can be either preemptive or non-preemptive.
• The choice arises when a new process arrives at the ready queue while a
previous process is still executing. The next CPU burst of the newly arrived
process may be shorter than what is left of the currently executing
process.
• A preemptive SJF algorithm will preempt the currently executing process,
whereas a non-preemptive SJF algorithm will allow the currently running
process to finish its CPU burst. Preemptive SJF scheduling is sometimes
called shortest-remaining-time-first scheduling.
• Example
Preemptive SJF (Shortest-Remaining-Time-First) Example
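A matching sketch of the preemptive variant, simulated one time unit at a time (illustrative values again):

```python
# Shortest-remaining-time-first: after every time unit, the arrived process
# with the least remaining work runs, so a new arrival can preempt.
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # name: (arrival, burst)

remaining = {name: burst for name, (arrival, burst) in procs.items()}
finish, clock = {}, 0
while remaining:
    ready = [n for n in remaining if procs[n][0] <= clock]
    if not ready:
        clock += 1
        continue
    current = min(ready, key=lambda n: remaining[n])  # least remaining time wins
    remaining[current] -= 1
    clock += 1                                        # re-evaluate every tick
    if remaining[current] == 0:
        finish[current] = clock
        del remaining[current]

for name, (arrival, burst) in procs.items():
    tat = finish[name] - arrival
    print(f"{name}: turnaround={tat}, waiting={tat - burst}")
```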
Priority Scheduling
• The SJF algorithm is a special case of the general priority-scheduling
algorithm.
• A priority is associated with each process, and the CPU is allocated to the
process with the highest priority. Equal-priority processes are scheduled
in FCFS order.
• Priorities are generally indicated by some fixed range of numbers, such as
0 to 7 or 0 to 4,095.
• However, there is no general agreement on whether 0 is the highest or
lowest priority. Some systems use low numbers to represent low priority;
others use low numbers for high priority.
• Here, we assume that low numbers represent high priority.
Priority Scheduling Example
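A minimal non-preemptive priority sketch (illustrative burst and priority values; following the convention above, a lower number means higher priority):

```python
# Priority scheduling: the ready process with the highest priority
# (lowest number) runs first; sorted() is stable, so equal-priority
# processes keep FCFS order. Illustrative values only.
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]

clock = 0  # (name, burst, priority), all arriving at time 0
for name, burst, priority in sorted(procs, key=lambda p: p[2]):
    print(f"{name} (priority {priority}): start={clock}, finish={clock + burst}")
    clock += burst
```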
Round Robin Scheduling (RR scheduling)
• The round-robin (RR) scheduling algorithm is designed especially for
timesharing systems.
• It is similar to FCFS scheduling, but preemption is added to enable the
system to switch between processes.
• A small unit of time, called a time quantum or time slice, is defined. A
time quantum is generally from 10 to 100 milliseconds in length.
• The ready queue is treated as a circular queue.
• The CPU scheduler goes around the ready queue, allocating the CPU to
each process for a time interval of up to 1 time quantum.
• To implement RR scheduling, we again treat the ready queue as a FIFO
queue of processes. New processes are added to the tail of the ready
queue. The CPU scheduler picks the first process from the ready queue,
sets a timer to interrupt after 1 time quantum, and dispatches the
process.
Round Robin Scheduling (RR scheduling)
• One of two things will then happen. The process may have a CPU burst of
less than 1 time quantum. In this case, the process itself will release the
CPU voluntarily.
• The scheduler will then proceed to the next process in the ready queue. If
the CPU burst of the currently running process is longer than 1 time
quantum, the timer will go off and will cause an interrupt to the operating
system.
• A context switch will be executed, and the process will be put at the tail
of the ready queue. The CPU scheduler will then select the next process in
the ready queue.
Round Robin Scheduling (RR scheduling) Example
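A minimal RR sketch built on the FIFO-queue description above (illustrative bursts; a 4 ms quantum is assumed):

```python
# Round robin: each process gets at most one quantum, then goes back
# to the tail of the ready queue. Illustrative values only.
from collections import deque

quantum = 4
ready = deque([("P1", 24), ("P2", 3), ("P3", 3)])  # (name, remaining burst)

clock = 0
while ready:
    name, rem = ready.popleft()          # process at the head of the queue
    run = min(quantum, rem)              # run for up to one quantum
    clock += run
    if rem > run:
        ready.append((name, rem - run))  # quantum expired: back to the tail
    else:
        print(f"{name} finishes at t={clock}")  # burst ended early: CPU released
```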
Multilevel Queue Scheduling
• Another class of scheduling algorithms has been created for situations in
which processes are easily classified into different groups. For example, a
common division is made between foreground (interactive) processes and
background (batch) processes. These two types of processes have
different response-time requirements and so may have different
scheduling needs.
• In addition, foreground processes may have priority (externally defined)
over background processes.
Multilevel Queue Scheduling
Multilevel Queue Scheduling
• A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues.
• The processes are permanently assigned to one queue, generally based on
some property of the process, such as memory size, process priority, or
process type.
• Each queue has its own scheduling algorithm. For example, separate
queues might be used for foreground and background processes.
• The foreground queue might be scheduled by an RR algorithm, while the
background queue is scheduled by an FCFS algorithm.
• In addition, there must be scheduling among the queues, which is
commonly implemented as fixed-priority preemptive scheduling.
• For example, the foreground queue may have absolute priority over the
background queue.
Multilevel Queue Scheduling
• Let’s look at an example of a multilevel queue scheduling algorithm with
five queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
• Each queue has absolute priority over lower-priority queues. No process
in the batch queue, for example, could run unless the queues for system
processes, interactive processes, and interactive editing processes were
all empty.
• If an interactive editing process entered the ready queue while a batch
process was running, the batch process would be preempted.
Multilevel Queue Scheduling
• Another possibility is to time-slice among the queues. Here, each queue
gets a certain portion of the CPU time, which it can then schedule among
its various processes.
• For instance, in the foreground–background queue example, the
foreground queue can be given 80 percent of the CPU time for RR
scheduling among its processes, while the background queue receives 20
percent of the CPU to give to its processes on an FCFS basis.
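A minimal sketch of the fixed-priority variant described earlier (illustrative process names; the time-sliced variant would instead hand each queue its share of the CPU):

```python
# Multilevel queue with fixed priority among queues: a background process
# is chosen only while the foreground queue is empty. Illustrative only.
from collections import deque

foreground = deque(["editor", "browser"])  # interactive: would use RR internally
background = deque(["payroll_batch"])      # batch: would use FCFS internally

def pick_next():
    if foreground:                  # foreground has absolute priority
        return foreground.popleft()
    if background:
        return background.popleft()
    return None                     # nothing ready

print(pick_next())  # -> 'editor'
```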
Deadlock
• A process requests resources; if the resources are not available at that
time, the process enters a waiting state.
• Sometimes, however, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes.
This situation is called a deadlock.
• Definition of Deadlock: A set of processes is in a deadlock state when
every process in the set is waiting for an event that can be caused only by
another process in the set.
P = {P1, P2, P3, P4, …}
• Under the normal mode of operation, a process may utilize a resource in
only the following sequence:
• Request: The process requests the resource. If the request cannot be
granted immediately, then the requesting process must wait until it can
acquire the resource.
• Use: The process can operate on the resource (e.g., if the resource is a
printer, the process can print on the printer).
• Release: The process releases the resource.
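This request/use/release cycle is the same pattern a lock enforces; a minimal sketch (using a Python lock to stand in for the resource):

```python
# Request / use / release on a single resource, modeled as a lock.
import threading

printer = threading.Lock()

def job():
    printer.acquire()        # request: wait until the resource is available
    try:
        print("printing")    # use: operate on the resource
    finally:
        printer.release()    # release: give the resource back

job()
```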
Resource-Allocation Graph
• A resource-allocation graph consists of a set of vertices V and a set of
edges E.
• V is partitioned into two types:
– P = {P1, P2, …, Pn}, the set consisting of all
the processes in the system
– R = {R1, R2, …, Rm}, the set consisting of all
resource types in the system
• Request edge: a directed edge Pi → Rj
• Assignment edge: a directed edge Rj → Pi
Resource-Allocation Graph (Cont.)
• Process Pi: drawn as a circle
• Resource type Rj with 4 instances: drawn as a rectangle with one dot per
instance
• Pi requests an instance of Rj: a request edge Pi → Rj
• Pi is holding an instance of Rj: an assignment edge Rj → Pi
Example of a Resource Allocation Graph
Resource Allocation Graph With A Deadlock
Graph With A Cycle But No Deadlock
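As a rough illustration of these graphs in code (the edges are made up; single-instance resources are assumed, in which case a cycle does imply deadlock):

```python
# Resource-allocation graph as adjacency lists, plus a DFS cycle check.
# With one instance per resource type, a cycle means deadlock.
graph = {
    "P1": ["R1"],   # request edge     P1 -> R1
    "R1": ["P2"],   # assignment edge  R1 -> P2
    "P2": ["R2"],   # request edge     P2 -> R2
    "R2": ["P1"],   # assignment edge  R2 -> P1 (closes the cycle)
}

def has_cycle(g):
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in g.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in g if n not in visited)

print(has_cycle(graph))  # -> True: P1 -> R1 -> P2 -> R2 -> P1
```

With multiple instances per resource type, a cycle is necessary but not sufficient for deadlock, which matches the "cycle but no deadlock" case above.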
Necessary Condition for Deadlock
• Mutual Exclusion: At least one resource must be held in a non-sharable
mode; that is, only one process at a time can use the resource. If another
process requests that resource, the requesting process must be delayed
until the resource has been released.
Necessary Condition for Deadlock
• Hold and Wait: A process must be holding at least one resource and
waiting to acquire additional resources that are currently being held by
other processes.
Necessary Condition for Deadlock
• No Preemption: Resources cannot be preempted; that is, a resource can be
released only voluntarily by the process holding it, after that process
has completed its task.
Necessary Condition for Deadlock
• Circular Wait: A set {P0, P1, …, Pn} of waiting processes must exist such
that P0 is waiting for a resource held by P1, P1 is waiting for a resource
held by P2, …, and Pn is waiting for a resource held by P0.
Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold.
By ensuring that at least one of these conditions cannot hold, we can
prevent the occurrence of a deadlock.
• Eliminating Mutual Exclusion condition:
1. The mutual-exclusion condition must hold for non-sharable resources. For
example, a printer cannot be simultaneously shared by several processes.
2. Read-only files are a good example of a sharable resource. If several
processes attempt to open a read-only file at the same time, they can be
granted simultaneous access to the file. A process never needs to wait for
a sharable resource.
3. In general, however, we cannot prevent deadlocks by denying the mutual-
exclusion condition, because some resources are intrinsically non-sharable.
Deadlock Prevention : Eliminating Hold and Wait
condition:
• Eliminating Hold and Wait condition:
• To ensure that the hold-and-wait condition never occurs in the system,
we must guarantee that, whenever a process requests a resource, it does
not hold any other resources.
• One protocol that we can use requires each process to request and be
allocated all its resources before it begins execution.
• An alternative protocol allows a process to request resources only when it
holds none. A process may request some resources and use them.
• Before it can request any additional resources, it must release all the
resources that it is currently allocated.
• To illustrate the difference between these two protocols, we consider a
process that copies data from a DVD drive to a file on disk, sorts the file,
and then prints the results to a printer.
• If all resources must be requested at the beginning of the process, then
the process must initially request the DVD drive, disk file, and printer. It
will hold the printer for its entire execution, even though it needs the
printer only at the end.
• The second method allows the process to request initially only the DVD
drive and disk file. It copies from the DVD drive to the disk and then
releases both the DVD drive and the disk file.
• The process must then request the disk file and the printer. After copying
the disk file to the printer, it releases these two resources and terminates.
• Both these protocols have two main disadvantages. First, resource
utilization may be low, since resources may be allocated but unused for a
long period.
Deadlock Prevention : Eliminating Hold and Wait
condition:
• In the example given, for instance, we can release the DVD drive and disk
file, and then request the disk file and printer, only if we can be sure that
our data will remain on the disk file. Otherwise, we must request all
resources at the beginning for both protocols.
• Second, starvation is possible. A process that needs several popular
resources may have to wait indefinitely, because at least one of the
resources that it needs is always allocated to some other process.
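A minimal sketch of the first protocol (request everything up front), using Python locks as stand-in resources; the retry loop is a simplification of what a real system would do:

```python
# Eliminating hold-and-wait: acquire all resources in one all-or-nothing
# step, so the process never holds one resource while waiting for another.
import threading

dvd, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()

def acquire_all(*locks):
    while True:
        got = [lk for lk in locks if lk.acquire(blocking=False)]
        if len(got) == len(locks):
            return                  # got everything in one pass
        for lk in got:              # partial failure: release everything and
            lk.release()            # retry, holding nothing while we wait

acquire_all(dvd, disk, printer)  # all resources requested before execution
# ... copy from DVD, sort the file, print the results ...
for lk in (dvd, disk, printer):
    lk.release()
```

As the slides note, the cost is utilization: the printer is held for the whole run even though it is needed only at the end.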
Deadlock Prevention : Eliminating No Preemption
condition
• The third necessary condition for deadlocks is that there be no
preemption of resources that have already been allocated. To ensure that
this condition does not hold, we can use the following protocol.
• If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait),
then all resources the process is currently holding are preempted.
• In other words, these resources are implicitly released. The preempted
resources are added to the list of resources for which the process is
waiting. The process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
Deadlock Prevention : Eliminating No Preemption
condition
• Alternatively, if a process requests some resources, we first check
whether they are available. If they are, we allocate them. If they are not,
we check whether they are allocated to some other process that is
waiting for additional resources.
• If so, we preempt the desired resources from the waiting process and
allocate them to the requesting process. If the resources are neither
available nor held by a waiting process, the requesting process must wait.
• While it is waiting, some of its resources may be preempted, but only if
another process requests them. A process can be restarted only when it is
allocated the new resources it is requesting and recovers any resources
that were preempted while it was waiting.