Virtual Memory
Soma Hazra
Outline…
Background
Demand Paging
Page fault & Page fault handling
Pure Demand Paging
Performance Analysis of Demand paging
Page Replacement
Page Replacement Algorithms
Allocation of Frames
Frame allocation Algorithm
Thrashing
Working Set Strategy & Page fault frequency
Background
Virtual memory – a technique that allows the execution of processes that may not be completely in memory.
Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers even when only a smaller physical memory is available.
Only part of the program needs to be in memory for execution.
Logical address space can therefore be much larger than physical address space.
Allows files and memory to be shared by several different processes through page sharing.
Allows for more efficient process creation.
Virtual memory can be implemented via:
Demand paging and Demand segmentation (Burroughs computer
system, IBM OS/2).
Background contd…
Virtual Memory that is larger than Physical Memory
Demand Paging
A demand-paging system is similar to a paging system with swapping.
Rather than swapping the entire process into memory, bring a page into memory only when it is needed.
Lazy swapper – never swaps a page into memory unless the page will be needed.
A swapper that deals with pages is a pager.
When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again.
Instead of swapping in a whole process, the pager brings only those necessary pages into memory.
Thus, it avoids reading into memory pages that will not be used, decreasing the swap time and the amount of memory needed.
Demand Paging contd…
Transfer of a Paged Memory to Contiguous Disk Space
Demand Paging contd…
Need hardware support to distinguish between the pages that are in memory and the pages that are on the disk.
Can use a valid–invalid bit scheme.
When the bit is set to “valid”, the associated page is both legal and in memory.
If the bit is set to “invalid”, the page either is not valid (i.e., not in the logical address space of the process) or is valid but currently on the disk (not in memory).
The page-table entry for a page that is not currently in memory is simply marked invalid.
Page Table When Some Pages Are Not in Main Memory
Page Fault
If we guess right and page in only those pages that are actually needed, the process runs exactly as though we had brought in all pages.
But what happens if the process tries to access a page that was not brought into memory?
Access to a page marked invalid causes a “page fault”.
The paging hardware, while translating the address through the page table, notices that the invalid bit is set and traps to the operating system.
Note: This trap is the result of the operating system’s failure to bring the desired page into memory, rather than an invalid-address error resulting from an attempt to use an illegal memory address.
Page Fault Handling
1. We check an internal table (usually kept with the PCB) for this process to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, terminate the process. If it was valid, but the page has not yet been brought into memory, we now page it in.
3. Next, find a free frame from the free-frame list.
4. Schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the illegal-address trap.
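The six steps above can be sketched as a small simulation. The data structures here (the page table, free-frame list, and “disk” dictionary) are hypothetical stand-ins for OS-internal state, not a real kernel interface:

```python
class PageFaultError(Exception):
    """Raised when the reference is outside the logical address space."""

def handle_page_fault(page, valid_pages, page_table, free_frames, disk):
    # 1-2. Check the internal table: is this a legal reference?
    if page not in valid_pages:
        raise PageFaultError(f"illegal reference to page {page}")
    # 3. Find a free frame from the free-frame list.
    frame = free_frames.pop()
    # 4. "Schedule a disk read" of the page into the frame.
    frame_contents = disk[page]
    # 5. Update the page table: the page is now in memory.
    page_table[page] = frame
    # 6. The caller would now restart the faulting instruction.
    return frame, frame_contents

# Usage: page 2 is legal but not yet in memory, so referencing it faults
# and the handler brings it in.
disk = {0: "code", 1: "data", 2: "stack"}
page_table = {}                # page -> frame; empty: pure demand paging
free_frames = [7, 8, 9]
frame, _ = handle_page_fault(2, {0, 1, 2}, page_table, free_frames, disk)
```

A real handler would also block the process during the disk read and validate the page-table entry before restarting the instruction; those details are elided here.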
Steps of Page Fault Handling
Pure Demand Paging
In the extreme case, we could start a process with no pages in memory.
When the OS sets the instruction pointer to the first instruction of the process, which is not in memory, the process immediately faults for the page.
After this page is brought into memory, the process continues to execute, faulting as necessary until every page that it needs is in memory.
At that point, it can execute with no more faults. This scheme is called pure demand paging: never bring a page into memory until it is required.
Performance of Demand Paging
Demand paging can have a significant effect on the
performance of a computer system.
Let p be the probability of a page fault (0 ≤ p ≤ 1)
if p = 0, there are no page faults
if p = 1, every reference is a fault
Effective Access Time (EAT):
EAT = p × page fault time + (1 – p) × memory access time
EAT = p × pf + (1 – p) × ma
Example of Demand Paging
Memory access time = 100 nanoseconds (100 to 200 ns)
Average page-fault service time = 25 milliseconds
EAT = (1 – p) × 100 + p × (25 milliseconds)
    = (1 – p) × 100 + p × 25,000,000
    = 100 + p × 24,999,900 (in nanoseconds)
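The formula is easy to evaluate directly. A minimal sketch using the numbers from the example (fault rates chosen here are illustrative):

```python
ma = 100                      # memory access time, ns
pf = 25_000_000               # page-fault service time, ns (25 ms)

def eat(p):
    """Effective access time in ns for page-fault probability p."""
    return (1 - p) * ma + p * pf

# Even one fault per 1,000 accesses (p = 0.001) gives
# EAT = 99.9 + 25,000 ≈ 25,100 ns: memory appears ~250x slower.
```

This shows why the page-fault rate must be kept very low for demand paging to perform acceptably.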
Page Replacement
Aim is to increase the degree of multiprogramming.
Example: with 40 frames, we could run 8 processes (5 frames each), rather than the 4 processes that could run if each were given 10 frames (5 of which may never be used).
Memory is not used only for program pages; buffers for I/O also consume a significant amount of memory.
Deciding how much memory to allocate to I/O and how much to program pages is a significant challenge.
Some systems allocate a fixed percentage of memory for I/O buffers.
Need For Page Replacement
While a user process is executing, a page fault occurs.
The hardware traps to the operating system, which checks its internal tables and sees that this page fault is a genuine one rather than an illegal memory access.
The operating system then determines where the desired page resides on the disk, but finds that there are no free frames on the free-frame list: all memory is in use.
Options:
It could terminate the user process.
The operating system could swap out a process, freeing all its frames and reducing the level of multiprogramming.
Page replacement.
Need for Page Replacement
Steps for Page Replacement
1. Find the location of the desired page on disk.
2. Find a free frame:
If there is a free frame, use it
If there is no free frame, use a page replacement algorithm to select
a victim frame.
Write the victim page to the disk; change the page and frame table
accordingly.
3. Read the desired page into the (newly) free frame; update the
page and frame tables.
4. Restart the process
Note: If no frames are free, two page transfers (one out and one in) are required. This situation effectively doubles the page-fault service time and increases the EAT accordingly.
Page Replacement
Page Replacement Algorithms
Page replacement is basic to demand paging.
With this mechanism, an enormous virtual memory can be provided for programmers on a smaller physical memory.
There are many page-replacement algorithms. The best algorithm is the one with the lowest page-fault rate.
Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.
In our examples, we will take the following reference
string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
First In First Out (FIFO)
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Replace the oldest page.
3 frames (3 pages can be in memory at a time per process):
1 1 4 5
2 2 1 3     9 page faults
3 3 2 4

4 frames:
1 1 5 4
2 2 1 5     10 page faults
3 3 2
4 4 3

Belady’s Anomaly: with FIFO, more frames can mean more page faults.
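The FIFO counts above can be reproduced with a short simulation. A minimal sketch (not from the slides), using a queue with the oldest page at the front:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames = deque()            # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()    # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# fifo_faults(refs, 3) -> 9, fifo_faults(refs, 4) -> 10:
# adding a frame *increases* the fault count (Belady's anomaly).
```

Running it on the string above confirms 9 faults with 3 frames and 10 faults with 4 frames.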
FIFO illustration of Belady’s Anomaly
FIFO Example
Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Optimal Page Replacement Algorithm
It has the lowest page-fault rate of all the algorithms and never suffers
from Belady’s anomaly.
Replace the page that will not be used for longest period of time
3 frames example
Difficult to implement.
Used for measuring how well your algorithm performs.
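Because the full reference string is known when evaluating algorithms offline, the optimal policy can be simulated directly. A sketch (hypothetical helper, not from the slides) that evicts the page whose next use lies farthest in the future:

```python
def opt_faults(refs, nframes):
    """Optimal replacement: evict the page whose next use is farthest away."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            def next_use(p):
                future = refs[i + 1:]
                # pages never used again are the best victims
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# opt_faults(refs, 3) -> 7, versus 9 faults for FIFO with 3 frames.
```

The look-ahead into `refs[i + 1:]` is exactly what a real OS cannot do, which is why this algorithm serves only as a benchmark.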
Least Recently Used (LRU) Algorithm
Replace the page that has not been used for the longest period of time.
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
4 frames:
1 1 1 1 5
2 2 2 2 2     8 page faults
3 5 5 4 4
4 4 3 3 3
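The trace above can be checked with a simple simulation. A sketch (illustrative, not from the slides) that keeps pages ordered by recency of use:

```python
def lru_faults(refs, nframes):
    """Count page faults for LRU: evict the least recently used page."""
    frames = []                 # most recently used page at the end
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)     # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# lru_faults(refs, 4) -> 8, matching the 4-frame trace above.
```

Note that this list-scanning version is O(n) per reference; the counter and stack implementations discussed next exist precisely to make LRU cheap enough for hardware or OS use.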
Implementation of LRU
Counter implementation
– Every page entry has a counter; every time page is referenced
through this entry, copy the clock into the counter (time-of-use
field).
– When a page needs to be changed, look at the counters to
determine which are to change
Stack implementation – keep a stack of page numbers in a double
link form:
– Page referenced:
• So, top is the most recently used (MRU) page and the
bottom is the LRU page.
– No search for replacement
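The doubly linked stack can be sketched with Python's `OrderedDict`, which maintains insertion order and supports O(1) "move to top" and "pop bottom" operations (the class name and API here are illustrative):

```python
from collections import OrderedDict

class LRUStack:
    """Stack-based LRU: last key = top (MRU) page, first key = bottom
    (LRU) page; a reference moves a page to the top in O(1)."""

    def __init__(self, nframes):
        self.nframes = nframes
        self.stack = OrderedDict()          # insertion order = recency order

    def reference(self, page):
        """Return True on a page fault, False on a hit."""
        if page in self.stack:
            self.stack.move_to_end(page)    # hit: move to the top of the stack
            return False
        if len(self.stack) == self.nframes:
            self.stack.popitem(last=False)  # evict the bottom: the LRU page
        self.stack[page] = True
        return True

s = LRUStack(3)
faults = sum(s.reference(p) for p in [1, 2, 3, 1, 4])
# pages 1, 2, 3, 4 fault; re-referencing 1 is a hit, so 2 becomes the victim
```

As the slide notes, no search is needed at replacement time: the victim is always at the bottom of the stack.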
Allocation of Frames
How do we allocate the fixed amount of free memory among the various processes? If we have 93 free frames and two processes, how many frames does each process get?
Example: consider a single-user system with 128 KB of memory and a page size of 1 KB. Of the 128 frames, the OS may take 35, leaving 93 frames for the user process.
With pure demand paging, the first 93 page faults would get frames from the free-frame list.
A page-replacement algorithm is then required once the free-frame list is exhausted.
Each process needs a minimum number of frames, defined by the given computer architecture.
Two major allocation schemes:
– fixed allocation
– priority allocation
Frame Allocation Algorithms
Equal allocation – split m frames among n processes equally; each process gets m/n frames.
Example: with 100 frames and 5 processes, each process gets 100/5 = 20 frames.
Proportional allocation – allocate available frames according to the size of each process.
Let the size of the virtual memory for process pi be si, and define S = Σ si.
Then, if the total number of available frames is m, we allocate ai frames to process pi, where ai is approximately
ai = (si / S) × m
Example: if we split 62 frames between two processes, one of 10 pages and one of 127 pages, then
How many frames will be allocated to each process?
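The question can be answered by applying the formula directly. A minimal sketch (the helper name is hypothetical), using integer division to round ai down:

```python
def proportional_allocation(sizes, m):
    """Allocate m frames proportionally to the given process sizes."""
    S = sum(sizes)
    return [s * m // S for s in sizes]   # a_i = floor((s_i / S) * m)

alloc = proportional_allocation([10, 127], 62)
# S = 137; 10/137 * 62 ≈ 4.52 and 127/137 * 62 ≈ 57.47,
# so the small process gets about 4 frames and the large one about 57.
```

Rounding down may leave a frame or two unassigned (4 + 57 = 61 here); a real allocator would distribute the remainder, subject to each process's architectural minimum.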
Frame Allocation Algorithms contd…
Priority Allocation:
In both equal and proportional allocation, the allocation to each process may vary with the multiprogramming level.
If the multiprogramming level increases, each process will lose some frames to provide the memory needed for the new process.
On the other hand, if the multiprogramming level decreases, the frames that had been allocated to the departed process can be given to the remaining processes.
In both equal and proportional allocation, a high-priority process is treated the same as a low-priority process.
With priority allocation, frames are allocated according to process priorities, or to a combination of size and priority, rather than size alone.
Global vs. Local Allocation
How do we select a frame from among the various processes when page replacement takes place?
Global replacement – allows a process to select a replacement frame from the set of all frames, even if that frame is currently allocated to some other process; one process can take a frame from another.
Local replacement – each process selects from only its own set of allocated frames.
One problem with a global replacement algorithm is that a process cannot control its own page-fault rate.
The set of pages in memory for a process depends not only on the paging behavior of that process, but also on the paging behavior of other processes. Therefore the same process may perform quite differently from run to run.
Thrashing
If a process does not have “enough” frames, it will quickly page-fault. At that point, it must replace some page. However, since all its pages are in active use, it may replace a page that will be needed again right away. Consequently, it quickly faults again, and again, and again. The process continues to fault, replacing pages that it then faults on and brings back right away. This high paging activity is called thrashing.
Note: A process is thrashing if it is spending more time paging than executing.
Causes of thrashing:
If CPU utilization is too low, the OS increases the degree of multiprogramming by introducing a new process.
A global replacement algorithm is used, so a faulting process takes frames from other processes, causing them to fault in turn.
Thrashing contd…
Note: We can limit the effects of thrashing by using a local replacement algorithm or a priority replacement algorithm, and we can prevent it by providing a process with as many frames as it needs.
Working-Set Model
To prevent thrashing, we must provide a process with as many frames as it needs.
One such method is the working-set strategy.
The working-set model is based on the assumption of locality.
The locality model states that, as a process executes, it moves from locality to locality (a locality is a set of pages that are actively used together).
Δ = working-set window = a fixed number of page references.
The idea is to examine the most recent Δ page references.
The set of pages in the most recent Δ page references is the working set.
If a page is in active use, it will be in the working set. If it is no longer being used, it will drop from the working set.
Thus the working set is an approximation of the program’s locality.
Example: if Δ = 10 memory references, the working set at time t1 is {1,2,5,6,7}, and it may change to {3,4} at time t2.
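The example can be made concrete with a short sketch. The reference string below is hypothetical, constructed so that the first Δ = 10 references touch pages {1, 2, 5, 6, 7} and a later window touches only {3, 4}:

```python
def working_set(refs, t, delta):
    """Set of distinct pages in the delta references ending at time t."""
    return set(refs[max(0, t - delta + 1):t + 1])

refs = [1, 2, 1, 5, 6, 2, 1, 2, 7, 6,      # first locality
        7, 7, 7, 7, 5, 1,                  # transition
        3, 4, 4, 4, 3, 4, 3, 4, 4, 4]      # second locality
# working_set(refs, 9, 10)  -> {1, 2, 5, 6, 7}   (time t1)
# working_set(refs, 25, 10) -> {3, 4}            (time t2)
```

As the process moves from one locality to the next, pages of the old locality age out of the window and the working set shrinks to the new locality.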
Working-set model
The accuracy of the working set depends on the selection of Δ:
if Δ is too small, it will not encompass the entire locality;
if Δ is too large, it may overlap several localities;
if Δ = ∞, it will encompass the entire program.
The most important property of the working set is its size.
WSSi = working-set size of process Pi.
D = Σ WSSi = total demand for frames.
If D > m (the total number of available frames), thrashing occurs.
Policy: if D > m, suspend one of the processes.
Page-Fault Frequency Scheme
Establish an “acceptable” page-fault rate:
– If the actual page-fault rate is too low, the process loses a frame.
– If the actual page-fault rate is too high, the process gains a frame.
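This control loop can be sketched in a few lines. The threshold band and one-frame step size below are illustrative assumptions, not values from the slides:

```python
LOW, HIGH = 0.02, 0.10          # acceptable page-fault-rate band (assumed)

def adjust_frames(fault_rate, frames, free_frames):
    """One step of the page-fault-frequency policy for a single process."""
    if fault_rate > HIGH and free_frames > 0:
        return frames + 1, free_frames - 1   # faulting too often: give a frame
    if fault_rate < LOW and frames > 1:
        return frames - 1, free_frames + 1   # faulting rarely: take a frame back
    return frames, free_frames               # within the band: leave as is

# e.g. a process faulting 20% of the time gains a frame from the free pool
```

If the rate is too high and no free frames remain, a real system would suspend a process, as in the working-set policy above.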