Paging

■ Compared to segmentation, paging:
  ● Makes allocation and swapping easier
  ● No external fragmentation

■ Each process is divided into a number of small, fixed-size partitions called pages
  ● Physical memory is divided into a large number of small, fixed-size partitions called frames
  ● Pages have nothing to do with segments
  ● Page size = frame size
    ■ Usually 512 bytes to 16K bytes
  ● The whole process is still loaded into memory, but the pages of a process do not have to be loaded into a contiguous set of frames
  ● Virtual address consists of page number and offset from beginning of that page


Implementing Paging

[Figure: the page number of a virtual address indexes the page table; the frame number found there, combined with the unchanged offset, forms the physical address used to access physical memory.]

■ A page table keeps track of every page in a particular process (the translation is sketched in C below)
  ● Each entry contains the corresponding frame in main (physical) memory
  ● Can add protection bits, but not as useful

■ Additional hardware support required is slightly less than for segmentation
  ● No need to keep track of, and compare to, a limit. Why not?
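To make the translation concrete, here is a minimal C sketch of what the hardware (or a simulator) does on every memory reference. The 1 KB page size, the table size, and the page_table array are assumptions for illustration only; they are not taken from the lecture.

    #include <stdint.h>

    #define PAGE_SIZE   1024u     /* assumed page size: 1 KB                         */
    #define OFFSET_BITS 10u       /* log2(PAGE_SIZE)                                 */
    #define NUM_PAGES   64u       /* assumed number of pages in this process         */

    /* Hypothetical per-process page table: index = page number, value = frame number. */
    static uint32_t page_table[NUM_PAGES];

    /* Translate a virtual address into a physical address. */
    uint32_t translate(uint32_t vaddr)
    {
        uint32_t page   = vaddr >> OFFSET_BITS;      /* upper bits select the page    */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* lower bits pass through as-is */
        uint32_t frame  = page_table[page];          /* one table lookup per access   */
        return (frame << OFFSET_BITS) | offset;      /* frame number replaces page #  */
    }

The offset is copied through untouched; only the page number is replaced by a frame number. That also suggests one common answer to the "Why not?" above: since every page is exactly one frame in size, the offset field can never index past the end of a frame, so there is no limit to compare against.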
Paging Example

[Figure: fifteen available frames of main memory, filled step by step:
  load process A (4 pages): A.0 to A.3 go into frames 0 to 3
  load process B (3 pages): B.0 to B.2 go into frames 4 to 6
  load process C (4 pages): C.0 to C.3 go into frames 7 to 10
  swap B out (blocked): frames 4 to 6 become free again
  load process D (5 pages): D.0 to D.2 go into frames 4 to 6, D.3 into frame 11, D.4 into frame 12]

■ Resulting page tables (page → frame):
  ● Process A: 0 → 0, 1 → 1, 2 → 2, 3 → 3
  ● Process B: swapped out, no frames assigned
  ● Process C: 0 → 7, 1 → 8, 2 → 9, 3 → 10
  ● Process D: 0 → 4, 1 → 5, 2 → 6, 3 → 11, 4 → 12
  ● List of free frames: 13, 14


Paging Example (cont.)

■ Process D is 4292 bytes long; with 1024-byte pages it needs 5 pages (5120 bytes), leaving the last 828 bytes of page 4 unused
■ Translating a relative address within program D (worked in C below):
  ● Relative address 1502 = binary 0000010111011110
  ● Virtual address: page # = 1 (000001), offset = 478 (0111011110)
  ● The page table for process D maps page 1 to frame 5
  ● Physical address: frame # = 5, offset = 478, i.e. binary 000101 0111011110
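The worked example can be checked with a few lines of C. The page-table contents come straight from the example above; the variable names and output format are just for illustration.

    #include <stdio.h>

    int main(void)
    {
        /* Page table for process D, as in the example: page number -> frame number. */
        unsigned d_table[5] = { 4, 5, 6, 11, 12 };

        unsigned vaddr  = 1502;                 /* relative address in program D      */
        unsigned page   = vaddr >> 10;          /* 1502 / 1024 = 1                    */
        unsigned offset = vaddr & 0x3FF;        /* 1502 % 1024 = 478                  */
        unsigned paddr  = (d_table[page] << 10) | offset;

        /* Prints: page 1, offset 478 -> physical address 5598 (= frame 5 * 1024 + 478). */
        printf("page %u, offset %u -> physical address %u\n", page, offset, paddr);
        return 0;
    }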
Managing Pages and Frames

■ OS usually keeps track of free frames in memory using a bit map
  ● A bit map is just an array of bits
    ■ 1 means the frame is free
    ■ 0 means the frame is allocated to a page
  ● To find a free frame, look for the first 1 bit in the bit map (sketched in C below)
    ■ Most modern instruction sets have an instruction that returns the offset of the first 1 bit in a register

■ Page table base pointer (special register) points to page table of active process
  ● Saved/restored as part of context switch
  ● Page table also contains:
    ■ Valid bit — indicates page is in the process’s virtual address space
    ■ Other bits for demand paging (discussed next time)


Evaluation of Paging

■ Advantages:
  ● Easy to allocate memory — keep a list of available frames, and simply grab the first one that’s free
  ● Easy to swap — pages, frames, and often disk blocks as well, are all the same size
  ● One frame is just as good as another!

■ Disadvantages:
  ● Page tables are fairly large
    ■ Most page tables are too big to fit in registers, so they must live in physical memory
    ■ This table lookup adds an extra memory reference for every address translation
  ● Internal fragmentation
    ■ Always get a whole page, even for 1 byte
    ■ Larger pages make the problem worse
    ■ Average waste = 1/2 page per process
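Returning to the bit map from "Managing Pages and Frames": here is a minimal sketch, assuming a 32-bit word and a made-up number of frames, of how the OS might scan it. A real kernel would use the hardware "find first set bit" instruction mentioned above (exposed to C as, e.g., GCC's __builtin_ffs) instead of the portable inner loop shown here.

    #include <stdint.h>

    #define NUM_FRAMES 1024u       /* assumed size of physical memory, in frames     */

    /* One bit per frame: 1 = frame is free, 0 = frame is allocated (as in the lecture).
       At boot the map would be initialized to all 1s, i.e. every frame free.           */
    static uint32_t frame_bitmap[NUM_FRAMES / 32];

    /* Return the number of the first free frame, or -1 if no frame is free. */
    int find_free_frame(void)
    {
        for (unsigned w = 0; w < NUM_FRAMES / 32; w++) {
            if (frame_bitmap[w] == 0)
                continue;                       /* all 32 frames in this word are taken */
            for (unsigned b = 0; b < 32; b++)   /* look for the first 1 bit             */
                if (frame_bitmap[w] & (1u << b))
                    return (int)(w * 32 + b);
        }
        return -1;
    }

    /* Allocating a frame clears its bit; freeing it would set the bit again. */
    void allocate_frame(unsigned frame)
    {
        frame_bitmap[frame / 32] &= ~(1u << (frame % 32));
    }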
Address Translation, Revisited

■ A modern microprocessor has, say, a 32-bit virtual address space (2³² bytes = 4 GB)
  ● If page size is 1K (2¹⁰), that means all the page tables combined could have over 2²² (approximately 4 million) page entries, each at least a couple of bytes long
  ● Problem: if the main memory is only, say, 16 Mbytes, storing these page tables there presents a problem!

■ Solution: store page tables in virtual memory, bring in pieces as necessary
  ● New problem: memory access time may be doubled, since the page tables are now themselves subject to paging
    ■ (one access to get info from page table, plus one access to get data from memory)

■ New solution: use a special cache (called a Translation Lookaside Buffer (TLB)) to cache page table entries


Translation Lookaside Buffer (TLB)
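A minimal sketch of the idea, assuming 1 KB pages, a small 4 MB virtual address space, and a fully associative TLB. The structure, the sizes, and the crude replacement policy are all made up for illustration; a real TLB is a hardware cache, not C code.

    #include <stdint.h>

    #define TLB_ENTRIES 64
    #define OFFSET_BITS 10u                     /* assumed 1 KB pages                  */
    #define NUM_PAGES   4096u                   /* assumed 4 MB virtual address space  */

    struct tlb_entry {
        uint32_t page;                          /* virtual page number (the tag)       */
        uint32_t frame;                         /* cached page-table entry             */
        int      valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];
    static uint32_t page_table[NUM_PAGES];      /* per-process page table, in memory   */

    uint32_t translate_with_tlb(uint32_t vaddr)
    {
        uint32_t page   = vaddr >> OFFSET_BITS;
        uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);

        /* TLB hit: the frame number comes from the cache, with no extra memory
           reference to the page table.                                          */
        for (int i = 0; i < TLB_ENTRIES; i++)
            if (tlb[i].valid && tlb[i].page == page)
                return (tlb[i].frame << OFFSET_BITS) | offset;

        /* TLB miss: consult the page table in memory, then cache the entry
           (crudely overwriting a slot chosen by the page number).               */
        uint32_t frame = page_table[page];
        int slot = page % TLB_ENTRIES;
        tlb[slot] = (struct tlb_entry){ .page = page, .frame = frame, .valid = 1 };
        return (frame << OFFSET_BITS) | offset;
    }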
Paging and Segmentation

■ Use two levels of mapping:
  ● Process is divided into variable-size segments
    ■ Segments are logical divisions as before
  ● Each segment is divided into many small fixed-size pages
    ■ Pages are easy for the OS to manage
    ■ Eliminates external fragmentation
  ● Virtual address = segment, page, offset
  ● One segment table per process, one page table per segment (the two-level lookup is sketched in C below)

■ Sharing at two levels: segment, page
  ● Share a frame by having the same frame reference in two page tables
  ● Share a segment by having the same base in two segment tables
  ● Still need protection bits (sharing, r/o, r/w)
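A minimal sketch of the two-level lookup, assuming 1 KB pages, at most 256 pages per segment, and illustrative names throughout (the bit-field split and the seg_entry structure are not from the lecture).

    #include <stdint.h>

    #define OFFSET_BITS 10u        /* assumed 1 KB pages                               */
    #define PAGE_BITS    8u        /* assumed: at most 256 pages per segment           */

    struct seg_entry {
        uint32_t *page_table;      /* one page table per segment                       */
        uint32_t  length;          /* number of pages actually in the segment          */
        uint32_t  prot;            /* protection bits: shared, r/o, r/w                */
    };

    /* One segment table per process; the virtual address is (segment, page, offset). */
    uint32_t translate_seg_page(const struct seg_entry *seg_table, uint32_t vaddr)
    {
        uint32_t seg    = vaddr >> (PAGE_BITS + OFFSET_BITS);
        uint32_t page   = (vaddr >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
        uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);

        /* First level: the segment table gives this segment's page table.
           Second level: that page table gives the frame holding this page.  */
        uint32_t frame = seg_table[seg].page_table[page];
        return (frame << OFFSET_BITS) | offset;
    }

In this sketch, sharing a segment amounts to two processes' segment tables pointing at the same page table, and sharing a single frame amounts to two page tables holding the same frame number, matching the two levels of sharing listed above.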