Memory Management

 Memory consists of a large array of words or bytes, each with its own
address. The CPU fetches instructions from memory according to the
value of the program counter. These instructions may cause additional
loading from and storing to specific memory addresses.
 The memory unit sees only a stream of memory addresses; it does not know how
they are generated.
 Program must be brought into memory and placed within a process for it to be
run.
 Input queue – collection of processes on the disk that are waiting to be
brought into memory for execution.
 User programs go through several steps before being run.

Address binding of instructions and data to memory addresses can happen at three
different stages.
 Compile time: If the memory location is known a priori, absolute code can
be generated; the code must be recompiled if the starting location changes.
Example: .COM-format programs in MS-DOS.
 Load time: Must generate relocatable code if memory location is not known at
compile time.
 Execution time: Binding delayed until run time if the process can be
moved during its execution from one memory segment to another.
Need hardware support for address maps (e.g., relocation registers).

Logical Versus Physical Address Space


 The concept of a logical address space that is bound to a separate
physical address space is central to proper memory management.
o Logical address – address generated by the CPU; also referred to as
virtual address.
o Physical address – address seen by the memory unit.

 The set of all logical addresses generated by a program is a logical
address space; the set of all physical addresses corresponding to these
logical addresses is a physical address space.
 Logical and physical addresses are the same in compile-time and
load-time address-binding schemes; logical (virtual) and physical
addresses differ in execution-time address-binding scheme.
 The run-time mapping from virtual to physical addresses is done by a
hardware device called the memory management unit (MMU).

 This method requires hardware support slightly different from the basic
hardware configuration: the base register is now called a relocation
register. The value in the relocation register is added to every address
generated by a user process at the time it is sent to memory.
 The user program never sees the real physical addresses. The program
can create a pointer to location 346, store it in memory, manipulate it and
compare it to other addresses. The user program deals with logical
addresses. The memory mapping hardware converts logical addresses
into physical addresses. The final location of a referenced memory
address is not determined until the reference is made.
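
To make this concrete, here is a minimal sketch in C of what the relocation-register mapping does. The register values are illustrative, not any specific hardware:

#include <stdio.h>
#include <stdlib.h>

/* Sketch of relocation-register translation: the relocation register is
 * added to every logical address; the limit register guards the space. */
typedef struct {
    unsigned relocation; /* smallest physical address of the process */
    unsigned limit;      /* size of the process's logical address space */
} mmu_t;

unsigned translate(const mmu_t *mmu, unsigned logical) {
    if (logical >= mmu->limit) {            /* protection check */
        fprintf(stderr, "trap: address %u out of range\n", logical);
        exit(EXIT_FAILURE);
    }
    return mmu->relocation + logical;       /* dynamic relocation */
}

int main(void) {
    mmu_t mmu = { .relocation = 14000, .limit = 4000 };
    /* the pointer to logical location 346 maps to physical 14346 */
    printf("logical 346 -> physical %u\n", translate(&mmu, 346));
    return 0;
}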
Dynamic Loading
 Routine is not loaded until it is called.
 All routines are kept on disk in a relocatable load format.
 The main program is loaded into memory and is executed. When a routine
needs to call another routine, the calling routine first checks to see
whether the other routine has been loaded. If not, the relocatable linking
loader is called to load the desired routine into memory and to update the
program’s address tables to reflect this change. Then control is passed to
the newly loaded routine.
 Better memory-space utilization; unused routine is never loaded.
 Useful when large amounts of code are needed to handle infrequently occurring
cases.
 No special support from the operating system is required.
 Implemented through program design.
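
On POSIX systems, this idea can be illustrated with the dlopen/dlsym interface. A hedged sketch follows; the library name libmath_helpers.so and the symbol rarely_used_routine are hypothetical, and the program is compiled with -ldl on Linux:

#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* The routine is loaded only when needed, not at program start. */
    void *handle = dlopen("./libmath_helpers.so", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    void (*routine)(void) = (void (*)(void))dlsym(handle, "rarely_used_routine");
    if (routine) routine();          /* control passes to the loaded code */

    dlclose(handle);                 /* unload when no longer needed */
    return 0;
}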

Dynamic Linking
 Linking is postponed until execution time.
 Small piece of code, stub, is used to locate the appropriate memory-
resident library routine, or to load the library if the routine is not already
present.
 When this stub is executed, it checks to see whether the needed routine
is already in memory. If not, the program loads the routine into memory.
 Stub replaces itself with the address of the routine, and executes the routine.
 Thus the next time that code segment is reached, the library
routine is executed directly, incurring no cost for dynamic linking.
 Operating-system support is needed to check whether the routine is in another process’s memory address space.
 Dynamic linking is particularly useful for libraries.
Swapping
 A process can be swapped temporarily out of memory to a backing store,
and then brought back into memory for continued execution. For example,
assume a multiprogramming environment with a round robin CPU
scheduling algorithm. When a quantum expires, the memory manager will
start to swap out the process that just finished, and to swap in another
process to the memory space that has been freed. In the meantime, the
CPU scheduler will allocate a time slice to some other process in memory.
When each process finishes its quantum, it will be swapped with another
process. Ideally, the memory manager can swap processes fast enough
that some processes will be in memory, ready to execute, when the CPU
scheduler wants to reschedule the CPU. The quantum must also be
sufficiently large that reasonable amounts of computing are done between
swaps.

 Roll out, roll in – swapping variant used for priority-based scheduling
algorithms. If a higher-priority process arrives and wants service, the
memory manager can swap out the lower-priority process and then load and
execute the higher-priority process. When the higher-priority process
finishes, the lower-priority process can be swapped back in and continued.
Normally a process that is swapped out will be swapped back into the same
memory space that it occupied previously. This restriction is dictated by
the method of address binding: if binding is done at assembly or load
time, the process cannot be moved to a different location. If execution-time
binding is being used, then a process can be swapped into a different
memory space, because the physical addresses are computed during
execution time.
 Backing store – fast disk large enough to accommodate copies of all
memory images for all users; it must provide direct access to these
memory images. The system maintains a ready queue consisting of all
processes whose memory images are on the backing store or in memory and
are ready to run. Whenever the CPU scheduler decides to execute a
process, it calls the dispatcher. The dispatcher checks to see whether the
next process in the queue is in memory. If not, and if there is no free
memory region, the dispatcher swaps out a process currently in memory
and swaps in the desired process. It then reloads registers as normal
and transfers control to the selected process.
 Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped.
 Modified versions of swapping are found on many systems (e.g., UNIX, Linux,
and Windows).

Contiguous Memory Allocation


 Main memory is usually divided into two partitions:
o Resident operating system, usually held in low memory with interrupt
vector.
o User processes, held in high memory.
 In contiguous memory allocation, each process is contained in a single
contiguous section of memory.
 Single-partition allocation
o Relocation-register scheme used to protect user processes from
each other, and from changing operating-system code and data.
o Relocation register contains value of smallest physical address;
limit register contains range of logical addresses – each logical
address must be less than the limit register.

 Multiple-partition allocation
o Hole – block of available memory; holes of various size are
scattered throughout memory.
o When a process arrives, it is allocated memory from a
hole large enough to accommodate it.
o Operating system maintains information about:
a) allocated partitions, and b) free partitions (holes)
o A set of holes of various sizes is scattered throughout memory at
any given time. When a process arrives and needs memory, the
system searches this set for a hole that is large enough for this
process. If the hole is too large, it is split into two: one part is
allocated to the arriving process; the other is returned to the set of
holes. When a process terminates, it releases its block of memory,
which is then placed back in the set of holes. If the new hole is
adjacent to other holes, these adjacent holes are merged to form
one larger hole.
o This procedure is a particular instance of the general dynamic
storage allocation problem, which is how to satisfy a request of
size n from a list of free holes. There are many solutions to this
problem. The set of holes is searched to determine which hole is
best to allocate. The first-fit, best-fit and worst-fit strategies are the
most common ones used to select a free hole from the set of
available holes.
o First-fit: Allocate the first hole that is big enough.
o Best-fit: Allocate the smallest hole that is big enough; must
search entire list, unless ordered by size.
o Worst-fit: Allocate the largest hole; must also search entire list.
Fragmentation
 External Fragmentation – total memory space exists to satisfy a
request, but it is not contiguous.
 Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition,
but not being used.
 Reduce external fragmentation by compaction
o Shuffle memory contents to place all free memory together in one large
block.
o Compaction is possible only if relocation is dynamic, and is done at
execution time.
Paging
 Paging is a memory management scheme that permits the physical
address space of a process to be noncontiguous.
 Divide physical memory into fixed-sized blocks called frames (the size is
a power of 2, for example 512 bytes).
 Divide logical memory into blocks of same size called pages. When a
process is to be executed, its pages are loaded into any available memory
frames from the backing store. The backing store is divided into fixed
sized blocks that are of the same size as the memory frames.
 The hardware support for paging is illustrated in the figure below.
 Every address generated by the CPU is divided into two parts: a page
number (p) and a page offset (d). The page number is used as an index
into a page table. The page table contains the base address of each
page in physical memory. This base address is combined with the page
offset to define the physical memory address that is sent to the memory
unit.
 The paging model of memory is shown in the figure below. The page size is
defined by the hardware. The size of a page is typically a power of 2,
varying between 512 bytes and 16 MB per page, depending on the
computer architecture. The selection of a power of 2 as a page size makes
the translation of a logical address into a page number and page offset
particularly easy (see the sketch after this list). If the size of the
logical address space is 2^m, and the page size is 2^n addressing units,
then the high-order m−n bits of a logical address designate the page
number, and the n low-order bits designate the page offset.

 Keep track of all free frames.


 To run a program of size n pages, need to find n free frames and load program.
 Set up a page table to translate logical to physical addresses.

 Internal fragmentation may occur.
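
The page-number/offset split described above can be sketched as follows, assuming 512-byte pages (n = 9); the constants and the sample address are illustrative:

#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 9
#define PAGE_SIZE   (1u << OFFSET_BITS)            /* 512 bytes */

int main(void) {
    uint32_t logical = 2000;
    uint32_t page    = logical >> OFFSET_BITS;     /* high m-n bits */
    uint32_t offset  = logical & (PAGE_SIZE - 1);  /* low n bits */
    printf("page %u, offset %u\n", page, offset);  /* page 3, offset 464 */
    return 0;
}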

Let us take an example. Suppose a program needs 32 KB of memory. The whole
program is divided into smaller units of 4 KB each, and each unit is
assigned an address. The address consists of two parts:
 A page number in the higher-order positions, and
 A displacement or offset in the lower-order bits.
Page sizes are typically a power of 2 to simplify extraction of page
numbers and offsets. To access a piece of data at a given address, the
system first extracts the page number and the offset. Then it translates
the page number to a physical page frame and accesses the data at that
offset in the physical page frame. The translation of the address is done
by the OS using a page table. A page table is a linear array indexed by
virtual page number which provides the physical page frame that contains
the particular page. The lookup process extracts the page number and the
offset, checks that the page number is within the address space of the
process, and retrieves the frame number from the page table. The physical
address is then calculated using the formula:
Physical address = (frame number × page size) + offset
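For instance (with hypothetical values): for a 4 KB page size, an address whose page number maps to frame 7 and whose offset is 100 translates to the physical address 7 × 4096 + 100 = 28,772.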

When a process arrives in the system to be executed, its size expressed in
pages is examined. Each page of the process needs one frame. Thus if the
process requires n pages, at least n frames must be available in memory.
If n frames are available, they are allocated to this
arriving process. The first page of the process is loaded into one of the
allocated frames, and the frame number is put in the page table for this
process. The next page is loaded into another frame, and its frame number
is put into the page table and so on as in below figure. An important aspect
of paging is the clear separation between the user’s view of memory and the
actual physical memory. The user program views memory as one single
contiguous space, containing only this one program. In fact, the user
program is scattered throughout physical memory, which also holds other
programs. The difference between the user’s view of memory and the actual
physical memory is reconciled by the address-translation hardware. The
logical addresses are translated into physical addresses. This mapping is
hidden from the user and is controlled by the operating system.

Implementation of Page Table


 Page table is kept in main memory.
 Page-table base register (PTBR) points to the page table.
 In this scheme every data/instruction-byte access requires two memory
accesses: one for the page-table entry and one for the byte.
 The two-memory-access problem can be solved by the use of a special
fast-lookup hardware cache called associative registers, associative
memory, or translation look-aside buffers (TLBs).
 Typically, the number of entries in a TLB is between 32 and 1024.

 The TLB contains only a few of the page table entries. When a logical
address is generated by the CPU, its page number is presented to the
TLB. If the page number is found, its frame number is immediately
available and is used to access memory. The whole task may take less
than 10 percent longer than it would if an unmapped memory
reference were used.
 If the page number is not in the TLB (known as a TLB miss), a
memory reference to the page table must be made. When the frame
number is obtained, we can use it to access memory.
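
The TLB-then-page-table lookup can be sketched as follows. The sizes and the linear scan are illustrative (a real TLB is an associative hardware cache), and the FIFO replacement of TLB entries is an assumption:

#define TLB_SIZE   8
#define NUM_PAGES  64

typedef struct { int page, frame, valid; } tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];
static int page_table[NUM_PAGES];   /* page -> frame, kept in main memory */
static int next_victim = 0;         /* trivial FIFO replacement in the TLB */

int lookup(int page) {
    for (int i = 0; i < TLB_SIZE; i++)           /* TLB hit: fast path */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;

    int frame = page_table[page];                /* TLB miss: extra memory
                                                    reference for the entry */
    tlb[next_victim] = (tlb_entry_t){ page, frame, 1 };
    next_victim = (next_victim + 1) % TLB_SIZE;  /* cache it for next time */
    return frame;
}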
Hit Ratio
 Hit Ratio: the percentage of times that a page number is found in the
associative registers.
 For example, suppose it takes 20 nanoseconds to search the associative
memory and 100 nanoseconds to access memory. If the page number is in the
TLB, a mapped access takes 20 + 100 = 120 nanoseconds; on a TLB miss we
must also read the page-table entry from memory, for 20 + 100 + 100 = 220
nanoseconds. For a 98-percent hit ratio, we have
Effective memory-access time = 0.98 x 120 + 0.02 x 220
= 122 nanoseconds.
 The Intel 80486 CPU has 32 associative registers, and claims a 98-percent hit
ratio.
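In general, if α is the hit ratio, t the TLB search time, and m the memory-access time, then
Effective access time = α × (t + m) + (1 − α) × (t + 2m)
Substituting α = 0.98, t = 20 ns, and m = 100 ns reproduces the 122 nanoseconds above.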
Valid or invalid bit in a page table
 Memory protection is implemented by associating a protection bit with each frame.
 Valid-invalid bit attached to each entry in the page table:
o “Valid” indicates that the associated page is in the process’ logical
address space, and is thus a legal page.
o “Invalid” indicates that the page is not in the process’ logical address
space.

67 | P a g e
 Pay attention to the following figure. Suppose the program extends only to
address 10,468; any reference beyond that address is illegal. However,
references to page 5 are classified as valid because the valid–invalid bit
covers whole pages, so accesses to addresses up to 12,287 are valid. This
reflects the internal fragmentation of paging.

Structure of the Page Table


Hierarchical Paging:
 A logical address (on 32-bit machine with 4K page size) is divided into:
o A page number consisting of 20 bits.
o A page offset consisting of 12 bits.
 Since the page table is paged, the page number is further divided into:
o A 10-bit page number.
o A 10-bit page offset.
 Thus, a logical address is as follows:

Where p1 is an index into the outer page table, and p2 is the displacement
within the page of the outer page table. The figure below shows a two-level
page table scheme.
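
A sketch of how the two 10-bit indices and the 12-bit offset are extracted from a 32-bit logical address; the sample address is arbitrary:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t addr   = 0x00ABCDEF;
    uint32_t offset = addr & 0xFFF;           /* low 12 bits */
    uint32_t p2     = (addr >> 12) & 0x3FF;   /* next 10 bits: inner index */
    uint32_t p1     = (addr >> 22) & 0x3FF;   /* top 10 bits: outer index */
    printf("p1=%u p2=%u offset=%u\n", p1, p2, offset);
    return 0;
}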

The address-translation scheme for a two-level 32-bit paging architecture is
shown in the figure below.

Hashed Page Table:


A common approach for handling address spaces larger than 32 bits is to
use a hashed page table, with the hash value being the virtual page number.
Each entry in the hash table contains a linked list of elements that hash
to the same location. Each element consists of three fields: (a) the virtual page
number, (b) the value of the mapped page frame, and (c) a pointer to the
next element in the linked list. The algorithm works as follows: The virtual
page number in the virtual address is hashed into the hash table. The
virtual page number is compared to field (a) in the first element in the
linked list. If there is a match, the corresponding page frame (field (b)) is
used to form the desired physical address. If there is no match, subsequent
entries in the linked list are searched for a matching virtual page number.
The scheme is shown in the figure below.
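
A minimal sketch of the lookup just described, assuming a 1024-bucket table with chaining; insertion and deletion are omitted:

#include <stdint.h>

#define HASH_SIZE 1024

typedef struct hpt_entry {
    uint32_t vpn;               /* (a) virtual page number */
    uint32_t frame;             /* (b) mapped page frame */
    struct hpt_entry *next;     /* (c) next element in the chain */
} hpt_entry;

static hpt_entry *table[HASH_SIZE];

/* Returns the frame for vpn, or -1 if the page is unmapped. */
long hpt_lookup(uint32_t vpn) {
    for (hpt_entry *e = table[vpn % HASH_SIZE]; e; e = e->next)
        if (e->vpn == vpn)      /* compare against field (a) */
            return e->frame;    /* match: use field (b) */
    return -1;                  /* no match anywhere in the chain */
}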

Inverted Page Table:
 One entry for each real page (frame) of memory.
 Entry consists of the virtual address of the page stored in that real
memory location, with information about the process that owns that
page.
 There is only one page table in the system, not one per process.
 Decreases memory needed to store each page table, but increases time
needed to search the table when a page reference occurs.
 Use a hash table to limit the search to one (or at most a few) page-table
entries.
Each virtual address in the system consists of a triple <process-id, page-
number, offset>. Each inverted page table entry is a pair <process-id, page-
number> where the process-id assumes the role of the address space
identifier. When a memory reference occurs, part of the virtual address,
consisting of <process-id, page-number>, is presented to the memory
subsystem. The inverted page table is then searched for a match. If a match
is found, say at entry i, then the physical address <i, offset> is generated. If
no match is found, then an illegal address access has been attempted.
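
A sketch of the inverted-table search; the linear scan mirrors the description above, though as just noted a hash table would normally limit the search:

#include <stdint.h>

#define NUM_FRAMES 256

typedef struct { int pid; uint32_t page; int used; } ipt_entry;
static ipt_entry ipt[NUM_FRAMES];   /* one entry per physical frame */

/* If <pid, page> is found at entry i, the physical address is
 * <i, offset>; a return of -1 means an illegal access was attempted. */
int ipt_find_frame(int pid, uint32_t page) {
    for (int i = 0; i < NUM_FRAMES; i++)
        if (ipt[i].used && ipt[i].pid == pid && ipt[i].page == page)
            return i;
    return -1;
}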
Shared Page:
 Shared code

o One copy of read-only (reentrant) code shared among
processes (i.e., text editors, compilers, window systems).
o Shared code must appear in same location in the logical address space of
all processes.
 Private code and data
o Each process keeps a separate copy of the code and data.
o The pages for the private code and data can appear anywhere
in the logical address space.
Reentrant code, or pure code, is non-self-modifying code. If the code is
reentrant, then it never changes during execution. Thus, two or more
processes can execute the same code at the same time. Each process has its
own copy of registers and data storage to hold the data for the process’
execution. The data for two different processes will of course vary for each
process.

Segmentation
 Memory-management scheme that supports user view of memory.
 A program is a collection of segments. A segment is a logical unit such as:
 Main program,
 Procedure,
 Function,
 Method,
 Object,

 Local variables, global variables,
 Common block,
 Stack,

 Symbol table, arrays

 Segmentation is a memory management scheme that supports this user view of
memory.
 A logical address space is a collection of segments. Each segment has a name
and a length.
 The addresses specify both the segment name and the offset within the
segment.
 The user therefore specifies each address by two quantities: a segment
name and an offset. For simplicity of implementation, segments
are numbered and are referred to by a segment number, rather than by a
segment name.
 Logical address consists of a two-tuple:
 <segment-number, offset>
 Segment table – maps two-dimensional user-defined addresses into
one-dimensional physical addresses; each table entry has:
o Base – contains the starting physical address where the segments reside
in memory.
o Limit – specifies the length of the segment.
 Segment-table base register (STBR) points to the segment table’s location in
memory.
 Segment-table length register (STLR) indicates number of segments used by a
program;
 Segment number s is legal if s < STLR.

 When the user program is compiled, the compiler constructs the
segments.
 The loader takes all the segments and assigns them segment numbers.
 The mapping between the logical and physical address using the
segmentation technique is shown in the figure above.
 Each entry in the segment table has a limit and a base address.
 The base address contains the starting physical address of a segment,
while the limit specifies the length of the segment.
 The logical address consists of 2 parts: a segment number and an offset.
 The segment number is used as an index into the segment table.
Consider the example given below.
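
As a concrete illustration, here is a minimal sketch of the translation, using hypothetical base/limit values:

#include <stdio.h>
#include <stdlib.h>

typedef struct { unsigned base, limit; } seg_entry;

/* Check the segment number against STLR, then the offset against the
 * segment's limit; a violation traps instead of returning an address. */
unsigned seg_translate(const seg_entry table[], unsigned stlr,
                       unsigned s, unsigned offset) {
    if (s >= stlr) { fprintf(stderr, "trap: bad segment\n"); exit(1); }
    if (offset >= table[s].limit) {
        fprintf(stderr, "trap: offset beyond segment length\n"); exit(1);
    }
    return table[s].base + offset;        /* physical address */
}

int main(void) {
    seg_entry table[] = { {1400, 1000}, {6300, 400} };
    /* segment 1, offset 53 -> 6300 + 53 = 6353 */
    printf("%u\n", seg_translate(table, 2, 1, 53));
    return 0;
}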

Segmentation with Paging


 Both paging and segmentation have advantages and disadvantages,
so the two methods can be combined to improve memory allocation.
 This combination is best illustrated by the architecture of the Intel 386.
 IBM OS/2 is an operating system for the Intel 386 architecture. In
this technique both a segment table and page tables are required.
 The program consists of various segments given by the segment table,
where the segment table contains a different entry for each
segment.

 Then each segment is divided into a number of pages of equal size
whose information is maintained in a separate page table.

 If a process has four segments, that is, 0 to 3, then there will be 4 page
tables for that process, one for each segment.
 The size field in the segment map table (SMT) gives the total number of
pages, and therefore the maximum page number, in that segment
(starting from 0).
 For example, the page map table for a segment may have entries for pages 0 to 5.
 The address of the entry in the PMT for the desired page p in a given
segment s can be obtained as B + p, where B is obtained from the
entry in the segmentation table.
 Using the address (B + p) as an index into the page map table (page table),
the page frame (f) can be obtained, and the physical address can be
obtained by adding the offset to the page frame (see the sketch below).
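
A sketch of the combined lookup under these assumptions (a single PMT area indexed by B + p, and 4 KB pages; both are illustrative):

#include <stdint.h>

#define PAGE_SIZE 4096
#define PMT_AREA  1024

typedef struct { uint32_t pmt_base; uint32_t num_pages; } smt_entry;
static uint32_t page_map[PMT_AREA];   /* all PMTs live in one area */

/* Segment s, page p, offset d -> physical address, or -1 if p is
 * beyond the size recorded in the segment map table. */
long translate(const smt_entry smt[], uint32_t s, uint32_t p, uint32_t d) {
    if (p >= smt[s].num_pages) return -1;        /* beyond segment size */
    uint32_t f = page_map[smt[s].pmt_base + p];  /* PMT entry at B + p */
    return (long)f * PAGE_SIZE + d;              /* frame * size + offset */
}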

Virtual Memory
 It is a technique which allows execution of processes that may not be
completely loaded in primary memory.
 It separates the user’s logical memory from the physical memory. This
separation allows an extremely large memory to be provided for
programs when only a small physical memory is available.
 Virtual memory makes the task of programming much easier because the
programmer no longer needs to worry about the amount of physical
memory available.

 Virtual memory allows files and memory to be shared by
different processes through page sharing.
 It is most commonly implemented by demand paging.

Demand Paging
A demand paging system is similar to a paging system with swapping.
When we want to execute a process, we swap it into memory. A
swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process. Demand paging uses a pager
rather than a swapper. When a process is to be swapped in, the pager guesses
which pages will be used before the process is swapped out again. Instead
of swapping in a whole process, the pager brings only those necessary
pages into memory. The transfer of a paged memory to contiguous disk
space is shown in the figure below.

Thus it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed. In
this technique we need some hardware support to distinguish between the
pages that are in memory and those that are on the disk. A valid–invalid
bit is used for this purpose. When this bit is set to valid, it indicates
that the associated page is in memory. If the bit is set to invalid, it
indicates that the page is either not valid (not in the logical address
space of the process) or valid but currently on the disk.

Marking a page invalid will have no effect if the process never attempts to
access that page. So while a process executes and accesses pages that are
memory resident, execution proceeds normally. Access to a page marked
invalid causes a page fault trap. It is the result of the OS’s failure to bring
the desired page into memory.
Procedure to handle page fault
If a process refers to a page that is not in physical memory then
 We check an internal table (page table) for this process to determine
whether the reference was valid or invalid.
 If the reference was invalid, we terminate the process; if it was valid but
not yet brought in, we page it in from the backing store (disk).
 Now we find a free frame in memory.
 Then we read the desired page into the newly allocated frame.
 When the disk read is complete, we modify the internal table to indicate
that the page is now in memory.
 We restart the instruction that was interrupted by the illegal address
trap. Now the process can access the page as if it had always been in
memory. (A minimal sketch of this routine is given below.)
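
A self-contained sketch of these steps in C, with a deliberately naive frame allocator (no replacement, free frames assumed) and disk I/O reduced to a message:

#include <stdio.h>
#include <stdlib.h>

#define NUM_PAGES  8
#define NUM_FRAMES 4

typedef struct { int frame; int valid; } pte;
static pte page_table[NUM_PAGES];
static int next_free = 0;   /* naive: assumes free frames remain */

void access_page(int page) {
    if (page < 0 || page >= NUM_PAGES) {     /* invalid reference */
        fprintf(stderr, "illegal address: terminating process\n");
        exit(EXIT_FAILURE);
    }
    if (!page_table[page].valid) {           /* page-fault trap */
        int frame = next_free++;             /* find a free frame */
        printf("fault: reading page %d from disk into frame %d\n",
               page, frame);
        page_table[page].frame = frame;      /* update the internal table */
        page_table[page].valid = 1;          /* then restart the reference */
    }
    printf("page %d -> frame %d\n", page, page_table[page].frame);
}

int main(void) { access_page(3); access_page(3); access_page(5); return 0; }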
Page Replacement
 Each process is allocated frames (memory) which hold the process’s pages
(data)

 Frames are filled with pages as needed – this is called demand paging

 Over-allocation of memory is prevented by modifying the page-fault
service routine to replace pages
 The job of the page replacement algorithm is to decide which page gets
victimized to make room for a new page
 Page replacement completes separation of logical and physical memory
Page Replacement Algorithm
Optimal algorithm
 Ideally we want to select an algorithm with the lowest page-fault rate
 Such an algorithm exists, and is called (unsurprisingly) the optimal algorithm:
 Procedure: replace the page that will not be used for the longest time (or
at all) – i.e. replace the page with the greatest forward distance in the
reference string
 Example using 4 frames:

Reference #       1  2  3  4  5  6  7  8  9 10 11 12
Page referenced   1  2  3  4  1  2  5  1  2  3  4  5
Frame 1           1  1  1  1  1  1  1  1  1  1  4* 4
Frame 2              2  2  2  2  2  2  2  2  2  2  2
Frame 3                 3  3  3  3  3  3  3  3  3  3
Frame 4                    4  4  4  5* 5  5  5  5  5
(* = page replacement; faults occur at references 1, 2, 3, 4, 7, and 11)

 Analysis: 12 page references, 6 page faults, 2 page replacements. Page
faults per number of frames = 6/4 = 1.5
 Unfortunately, the optimal algorithm requires special hardware (crystal
ball, magic mirror, etc.) not typically found on today’s computers
 Optimal algorithm is still used as a metric for judging other page replacement
algorithms
FIFO algorithm
 Replaces pages based on their order of arrival: oldest page is replaced
 Example using 4 frames:

Reference #       1  2  3  4  5  6  7  8  9 10 11 12
Page referenced   1  2  3  4  1  2  5  1  2  3  4  5
Frame 1           1  1  1  1  1  1  5* 5  5  5  4* 4
Frame 2              2  2  2  2  2  2  1* 1  1  1  5*
Frame 3                 3  3  3  3  3  3  2* 2  2  2
Frame 4                    4  4  4  4  4  4  3* 3  3
(* = page replacement; faults occur at references 1, 2, 3, 4, 7, 8, 9, 10, 11, and 12)

 Analysis: 12 page references, 10 page faults, 6 page replacements. Page
faults per number of frames = 10/4 = 2.5
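
The FIFO trace above can be reproduced with a short simulation; the reference string and frame count are taken directly from the example:

#include <stdio.h>

#define FRAMES 4

int main(void) {
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof ref / sizeof ref[0];
    int frames[FRAMES] = {0};           /* 0 means an empty frame */
    int oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == ref[i]) hit = 1;
        if (!hit) {
            frames[oldest] = ref[i];            /* evict oldest arrival */
            oldest = (oldest + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);        /* prints 10 */
    return 0;
}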
LFU algorithm (page-based)
 procedure: replace the page which has been referenced least often
 For each page in the reference string, we need to keep a reference count.
All reference counts start at 0 and are incremented every time a page is
referenced.
 example using 4 frames:

[Table: LFU (page-based) trace of reference string 1 2 3 4 1 2 5 1 2 3 4 5
with 4 frames, showing each page's reference count; the key victimization
decisions are described below.]

 At the 7th page in the reference string, we need to select a page to be
victimized. Either 3 or 4 will do since they have the same reference count
(1). Let’s pick 3.
 Likewise at the 10th page reference; pages 4 and 5 have been referenced
once each. Let’s pick page 4 to victimize. Page 3 is brought in, and its
reference count (which was 1 before we paged it out a while ago) is
updated to 2.
 Analysis: 12 page references, 7 page faults, 3 page replacements. Page
faults per number of frames = 7/4 = 1.75

LFU algorithm (frame-based)
 Procedure: replace the page in the frame which has been referenced least often
 Need to keep a reference count for each frame which is initialized to 1
when the page is paged in, incremented every time the page in the frame
is referenced, and reset every time the page in the frame is replaced
 Example using 4 frames:

[Table: LFU (frame-based) trace of the same reference string with 4 frames,
showing each frame's reference count; the key victimization decisions are
described below.]
 At the 7th reference, we victimize the page in the frame which has been
referenced least often -- in this case, pages 3 and 4 (contained within
frames 3 and 4) are candidates, each with a reference count of 1. Let’s
pick the page in frame 3. Page 5 is paged in and frame 3’s reference count
is reset to 1.
 At the 10th reference, we again have a page fault. Pages 5 and 4 (contained
within frames 3 and 4) are candidates, each with a count of 1. Let’s pick
page 4. Page 3 is paged into frame 4, and frame 4’s reference count is
reset to 1.
 Analysis: 12 page references, 7 page faults, 3 page replacements. Page
faults per number of frames = 7/4 = 1.75
LRU algorithm
 Replaces pages based on their most recent reference – replace the page
with the greatest backward distance in the reference string
 Example using 4 frames:

Reference #       1  2  3  4  5  6  7  8  9 10 11 12
Page referenced   1  2  3  4  1  2  5  1  2  3  4  5
Frame 1           1  1  1  1  1  1  1  1  1  1  1  5*
Frame 2              2  2  2  2  2  2  2  2  2  2  2
Frame 3                 3  3  3  3  5* 5  5  5  4* 4
Frame 4                    4  4  4  4  4  4  3* 3  3
(* = page replacement; faults occur at references 1, 2, 3, 4, 7, 10, 11, and 12)

 Analysis: 12 page references, 8 page faults, 4 page replacements. Page
faults per number of frames = 8/4 = 2
 One possible implementation (not necessarily the best):
o Every frame has a time field; every time a page is referenced, copy
the current time into its frame’s time field
o When a page needs to be replaced, look at the time stamps to find the
oldest
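
A sketch of that time-stamp implementation, run on the same reference string; it reports the 8 faults counted above:

#include <stdio.h>

#define FRAMES 4

int main(void) {
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof ref / sizeof ref[0];
    int page[FRAMES] = {0}, stamp[FRAMES] = {0};
    int faults = 0;

    for (int t = 1; t <= n; t++) {
        int cur = ref[t - 1], slot = -1;
        for (int j = 0; j < FRAMES; j++)
            if (page[j] == cur) slot = j;        /* hit */
        if (slot < 0) {                          /* fault */
            faults++;
            slot = 0;
            for (int j = 1; j < FRAMES; j++)     /* oldest stamp is victim */
                if (stamp[j] < stamp[slot]) slot = j;
            page[slot] = cur;
        }
        stamp[slot] = t;                         /* record reference time */
    }
    printf("page faults: %d\n", faults);         /* prints 8 */
    return 0;
}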
Thrashing
• If a process does not have “enough” pages, the page-fault rate is very high
– low CPU utilization
– OS thinks it needs increased multiprogramming
– adds another process to system
• Thrashing is when a process is busy swapping pages in and out
• Thrashing results in severe performance problems. Consider the
following scenario, which is based on the actual behaviour of early
paging systems. The operating system monitors CPU utilization. If CPU
utilization is too low, we increase the degree of multiprogramming by
introducing a new process to the system. A global page replacement
algorithm is used; it replaces pages with no regard to the process to
which they belong. Now suppose that a process enters a new phase in
its execution and needs more frames.

