CS 414 : Operating Systems

UNIVERSITY OF VIRGINIA
Department of Computer Science

Spring 2008

Topic 8: CPU Scheduling

• Readings for this topic: Chapter 5.

• An OS typically consists of two parts: 1) process coordination, and 2) resource management.

• Resources fall into two classes:

  • Preemptible vs. non-preemptible.

  • Is this distinction absolute? Real issue?

• The OS makes two related kinds of decisions about resources. What is the goal?

  • Allocation: who gets what. Implication?

  • Scheduling: how long they can keep it. Implication?

• Resource #1:

• Processes may be in any one of three general scheduling states:

  • Running.

  • Ready.

  • Blocked.

• Goals for scheduling disciplines:

  • Efficiency of resource utilization (keep the CPU and disks busy).

  • Minimize overhead (context switching).

  • Minimize response time. Define response time.

  • Distribute cycles equitably. What does this mean?

• FIFO (also called FCFS): run until finished.

  • Usually, "finished" means "blocked".

  • Problem?

  • Solution: limit the maximum amount of time that a process can run without a context switch. This time is called a time slice.
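One way to see the problem with pure FIFO is deterministic modeling with made-up burst times: a long job at the head of the queue makes every short job behind it wait (the burst values below are illustrative, not from the notes).

```python
# Deterministic modeling sketch: per-job waiting time under FIFO when
# all jobs arrive at time 0 and run to completion in queue order.

def fifo_waiting_times(bursts):
    """Return each job's waiting time (time spent before it first runs)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a job waits until everything ahead finishes
        clock += burst
    return waits

# One long job ahead of three short ones: every short job waits ~100 units.
print(fifo_waiting_times([100, 1, 1, 1]))   # [0, 100, 101, 102]
# Same jobs, long one last: the short jobs barely wait at all.
print(fifo_waiting_times([1, 1, 1, 100]))   # [0, 1, 2, 3]
```

The ordering alone changes the average wait from about 76 time units to 1.5, which is the intuition behind limiting how long any one job can hold the CPU.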

• Round Robin (RR): run a process for one time slice, then move it to the back of the queue. Each process gets an equal share of the CPU. What if the slice isn't chosen carefully?

  • Too long:

  • Too small:

  • Originally, Unix had 1-second time slices. Right size? Most systems today use time slices of 10,000 - 100,000 instructions.

• How to implement priorities in RR?

• Is RR always better than FIFO?
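The RR mechanics above can be sketched in a few lines (a minimal simulation, assuming CPU-bound jobs that all arrive at time 0 and never block):

```python
from collections import deque

def round_robin(bursts, slice_):
    """Simulate Round Robin; return each job's completion time.
    bursts[i] is job i's total CPU demand; slice_ is the time slice."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    done = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(slice_, remaining[i])   # run for one slice, or less if done
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # unfinished: back of the queue
        else:
            done[i] = clock               # finished: record completion time
    return done

# Three equal 3-unit jobs with slice 1: they finish nearly together,
# each much later than under FIFO ([3, 6, 9]).
print(round_robin([3, 3, 3], 1))          # [7, 8, 9]
```

Note what this illustrates about the "Is RR always better than FIFO?" question: for identical jobs, RR delays everyone relative to FIFO while adding context switches.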

• What is the best we can do? Is there a "perfect" scheduling algorithm? STCF: shortest time to completion first, with preemption. In what sense is it the best?

• Example: two processes, one doing 1 ms of computation followed by 10 ms of I/O, the other doing all computation. Suppose we use a 100 ms time slice: the I/O process runs at only 1/10th speed, and I/O devices are utilized only 10% of the time. Suppose we use a 1 ms time slice: then the compute-bound process gets interrupted 9 times unnecessarily for each valid interrupt. STCF works well.
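The sense in which STCF is "best": for a batch of jobs all available at once, running the shortest remaining job first minimizes average completion time. A sketch with made-up burst times (in that special case, preemptive STCF reduces to running jobs shortest-first):

```python
def stcf_avg_completion(bursts):
    """Average completion time under STCF when all jobs arrive at time 0:
    with no later arrivals, STCF simply runs jobs shortest-first."""
    clock, total = 0, 0
    for burst in sorted(bursts):
        clock += burst
        total += clock
    return total / len(bursts)

def fifo_avg_completion(bursts):
    """Average completion time under FIFO in the given arrival order."""
    clock, total = 0, 0
    for burst in bursts:
        clock += burst
        total += clock
    return total / len(bursts)

# Long job first under FIFO hurts everyone behind it; STCF reorders it last.
print(stcf_avg_completion([8, 1, 2]))   # 5.0  (completions: 1, 3, 11)
print(fifo_avg_completion([8, 1, 2]))   # 9.33... (completions: 8, 9, 11)
```

Finishing short jobs first gets many jobs out of the system early, which is exactly what drags the average down.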

• Why not use STCF?

• Rule of thumb: give the most to those who need the least. What's the idea here?

• The strategy?

• Exponential Queue (or Multi-Level Feedback Queues):

  • Give a newly runnable process a high priority and a very short time slice. If the process uses up the time slice without blocking, then decrease its priority by 1 and double its time slice for next time.

  • Go through the above example, where the initial values are 1 ms and priority 100.

  • Techniques like this one are called adaptive. They are common in interactive systems.

  • The CTSS system (MIT, early 1960s) was the first to use exponential queues.
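Working through that example, the rule traces out as follows (a sketch of just the priority/slice bookkeeping, for a process that keeps using its whole slice):

```python
def exponential_queue_steps(first_slice_ms=1, first_priority=100, rounds=5):
    """Trace the exponential-queue rule: each time a process burns its
    entire slice without blocking, drop priority by 1 and double the slice."""
    slice_ms, priority = first_slice_ms, first_priority
    steps = []
    for _ in range(rounds):
        steps.append((priority, slice_ms))
        priority -= 1      # used the whole slice: demote...
        slice_ms *= 2      # ...but let it run longer next time

    return steps

# Starting from the example's 1 ms slice at priority 100:
print(exponential_queue_steps())
# [(100, 1), (99, 2), (98, 4), (97, 8), (96, 16)]
```

An interactive process blocks early, so it stays near (100, 1) and gets the CPU quickly; a compute-bound process drifts toward low priority but long, efficient slices.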

• Fair-share scheduling:

  • Keep a history of recent CPU usage for each process.

  • Give highest priority to the process that has used the least CPU time recently. Highly interactive jobs, like vi, will use little CPU time and get high priority. CPU-bound jobs, like compiles, will get lower priority.

  • Priorities can be adjusted by changing "billing factors" for processes. E.g., to make a high-priority process, count only half of its recent CPU usage when computing its history.
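The selection rule and the billing-factor trick can be sketched together (process names and factor values below are illustrative):

```python
def pick_next(recent_usage, billing_factor):
    """Fair-share sketch: run the process with the lowest *billed* recent
    CPU usage. A billing factor below 1.0 under-charges a process for its
    CPU time, effectively raising its priority."""
    billed = {proc: usage * billing_factor.get(proc, 1.0)
              for proc, usage in recent_usage.items()}
    return min(billed, key=billed.get)

# The interactive job has used the least CPU recently, so it runs next.
usage = {"vi": 2, "compile": 50, "daemon": 10}
print(pick_next(usage, {}))                        # vi

# Billing "a" at half rate (40 * 0.5 = 20) lets it beat "b" (30).
print(pick_next({"a": 40, "b": 30}, {"a": 0.5}))   # a
```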

• Performance evaluation of scheduling algorithms:

  • Analytic methods:

    • Deterministic modeling

    • Queueing models

  • Simulation

  • Implementation
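As a taste of the queueing-model approach, the textbook M/M/1 formulas (a standard result, not specific to these notes) give the mean time in system as W = 1/(μ − λ) for Poisson arrivals at rate λ and exponential service at rate μ:

```python
def mm1_metrics(arrival_rate, service_rate):
    """M/M/1 queue: (utilization, mean time in system, mean wait in queue).
    Stable only when arrival_rate < service_rate."""
    assert arrival_rate < service_rate, "queue is unstable"
    rho = arrival_rate / service_rate            # utilization
    w = 1.0 / (service_rate - arrival_rate)      # mean time in system
    wq = rho / (service_rate - arrival_rate)     # mean wait before service
    return rho, w, wq

# 8 jobs/sec arriving at a CPU that serves 10 jobs/sec:
rho, w, wq = mm1_metrics(8.0, 10.0)
print(rho, w, wq)   # 0.8 utilization, 0.5 s in system, 0.4 s waiting
```

Note how steeply wait grows with load: at λ = 9 the wait jumps to 0.9 s, which is why queueing models are useful for spotting saturation before building anything.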

• Summary:

  • In principle, scheduling algorithms can be arbitrary, since the system should produce the same results in any event.

  • Scheduling algorithms have strong effects on the system's overhead, efficiency, and response time.

  • The best schemes are adaptive. To do absolutely best, we would need to predict the future.

