
OPERATING SYSTEM (OS)

A.SWAMY GOUD,
ASSISTANT PROFESSOR,
BVRIT, NSP
UNIT - IV
MEMORY MANAGEMENT
AND VIRTUAL MEMORY
OS Syllabus - Unit IV

MEMORY MANAGEMENT AND VIRTUAL MEMORY


Basic concept, Logical and Physical Address Space,
Swapping, Memory allocation - Contiguous
Allocation, Paging, Segmentation.
Virtual Memory - Basic concept, Demand Paging,
Page fault, Page Replacement, Page Replacement
Algorithms: First In First Out (FIFO), Optimal, Least
Recently Used (LRU), Allocation of Frames,
Thrashing.
Introduction
Memory is central to the operation of a modern computer
system.
Memory consists of a large array of words or bytes, each
with its own address.
Program must be brought (from disk) into memory and
placed within a process for it to be run
The CPU fetches instructions from memory according to the
value of the program counter.
Main memory and registers are only storage CPU can
access directly
Cache sits between main memory and CPU registers
Protection of memory required to ensure correct operation
Basic Hardware
Main memory and the registers built into the processor itself
are the only storage that the CPU can access directly.
There are machine instructions that take memory addresses
as arguments, but none that take disk addresses.
Memory unit only sees a stream of addresses + read
requests, or address + data and write requests
Register access in one CPU clock (or less)
Completing a memory access may take many cycles of the
CPU clock.
In such cases, the processor normally needs to stall, since it
does not have the data required to complete the instruction
that it is executing.
Base and Limit Registers
❖ We first need to make
sure that each process
has a separate memory
space.
❖ We can provide
protection by using two
registers, usually a base
and a limit
❖ CPU must check every
memory access
generated in user mode
to be sure it is between
base and limit for that
user
Hardware Address
Protection
❖ Protection of memory space is accomplished by having the CPU
hardware compare every address generated in user mode with
the registers.
❖ Any attempt by a program executing in user mode to access
operating-system memory or other users' memory results in error
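❖ As a rough sketch (Python, purely illustrative, not actual hardware), the comparison performed on every user-mode access looks like this:

    # Sketch of the hardware protection check: the access is legal
    # only if base <= address < base + limit; otherwise the hardware
    # traps to the operating system (modeled here as an exception).
    def check_access(address, base, limit):
        if base <= address < base + limit:
            return True                      # access proceeds to memory
        raise MemoryError("trap: addressing error (fatal)")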
Address Binding
❖ Programs on disk, ready to be brought into memory to execute
form an input queue.
❖ Input queue – collection of processes on the disk that are waiting
to be brought into memory to run the program
❖ User programs go through several steps before being run (Fig on
next slide)
❖ Addresses may be represented in different ways during these
steps.
❖ Addresses in the source program are generally symbolic (such
as count).
❖ A compiler will typically bind these symbolic addresses to
relocatable addresses (such as "14 bytes from the beginning of
this module").
❖ The linkage editor or loader will in turn bind the relocatable
addresses to absolute addresses (such as 74014).
❖ Each binding is a mapping from one address space to another.
Multistep Processing of a
User Program
Binding of Instructions and
Data to Memory
Address binding of instructions and data to memory addresses
can happen at three different stages
Compile time: If memory location known a priori, absolute
code can be generated; must recompile code if starting
location changes
Load time: Must generate relocatable code if memory
location is not known at compile time
Execution time: Binding delayed until run time if the
process can be moved during its execution from one
memory segment to another
Special hardware must be available for this scheme to
work
Most general-purpose operating systems use this
method.
Logical vs. Physical
Address Space
❖ The concept of a logical address space that is bound to a
separate physical address space is central to proper
memory management
❖ Logical address – generated by the CPU; also referred
to as virtual address
❖ Physical address – address seen by the memory unit
❖ Logical and physical addresses are the same in compile-
time and load-time address-binding schemes; logical
(virtual) and physical addresses differ in execution-time
address-binding scheme
❖ Logical address space is the set of all logical addresses
generated by a program
❖ Physical address space is the set of all physical
addresses generated by a program
Memory-Management Unit
(MMU)
❖ The run-time mapping from virtual to physical addresses is
done by a hardware device called the Memory-Management
Unit (MMU)
❖ Many different methods are available to accomplish such
mapping
❖ To start, consider simple scheme where the value in the
relocation register is added to every address generated by a
user process at the time it is sent to memory
❖ Base register now called relocation register
❖ For example, if the base is at 14000, then an attempt by the
user to address location 0 is dynamically relocated to
location 14000; an access to location 286 is mapped to
location 14286.
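❖ A minimal sketch of this scheme in Python (the limit value of 40000 is an assumption for illustration; the 14000 and 286/14286 numbers come from the example above):

    # Relocation-register scheme: every logical address has the value
    # of the relocation register added to it before reaching memory.
    def mmu_translate(logical, relocation=14000, limit=40000):
        if logical >= limit:                 # illustrative limit check
            raise MemoryError("trap: address beyond limit")
        return logical + relocation

    print(mmu_translate(0))    # -> 14000
    print(mmu_translate(286))  # -> 14286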
Dynamic relocation using a
relocation register
❖ The user program deals
with logical addresses; it
never sees the real
physical addresses
❖ The memory-mapping
hardware converts logical
addresses into physical
addresses.
❖ Execution-time binding occurs when a reference is made to a location in memory.
❖ The final location of a referenced memory address is not determined until the reference is made.
(Figure: dynamic relocation using a relocation register; logical address 286 + relocation register 14000 ⇒ physical address 14286.)
Dynamic Loading
In our discussion so far, it has been necessary for the entire
program and all data of a process to be in physical memory
for the process to execute.
The size of a process has been limited to the size of physical
memory.
To obtain better memory-space utilization, we can use
dynamic loading.
With dynamic loading, routine is not loaded until it is called
Better memory-space utilization; unused routine is never
loaded
All routines kept on disk in relocatable load format
The main program is loaded into memory and is executed.
Dynamic Loading
When a routine needs to call another routine, the calling
routine first checks to see whether the other routine has
been loaded.
If it has not, the relocatable linking loader is called to load the
desired routine into memory and to update the program's
address tables to reflect this change.
Useful when large amounts of code are needed to handle
infrequently occurring cases
No special support from the operating system is required
Dynamic Linking
❖ Some operating systems support only static linking.
❖ Static linking – system language libraries are treated like
any other object module and are combined by the loader into
the binary program image.
❖ Dynamic linking – linking postponed until execution time
❖ This feature is usually used with system libraries, such as
language subroutine libraries.
❖ System also known as shared libraries
❖ Without this facility, each program on a system must include
a copy of its language library (or at least the routines
referenced by the program) in the executable image.
❖ This requirement wastes both disk space and main memory.
Dynamic Linking
❖ Small piece of code, stub, used to locate the appropriate
memory-resident library routine
❖ Stub replaces itself with the address of the routine, and
executes the routine
❖ Operating system checks if routine is in the process's memory address space
❖ If not in address space, add to address space
❖ This feature can be extended to library updates (such as bug
fixes).
❖ A library may be replaced by a new version, and all
programs that reference the library will automatically use the
new version.
❖ Dynamic linking generally requires help from the operating
system.
Swapping
❖ A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution
❖ Total physical memory space of processes can exceed
physical memory
❖ Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide direct
access to these memory images
❖ For example, assume a multiprogramming environment with
a round-robin CPU-scheduling algorithm.
❖ When a quantum expires, the memory manager will start to
swap out the process that just finished and to swap another
process into the memory space that has been freed.
Schematic View of Swapping
(Figure: swapping of two processes using a disk as a backing store.)
Swapping
❖ Roll out, roll in : A variant of this swapping policy is used for
priority-based scheduling algorithms.
❖ If a higher-priority process arrives and wants service, the
memory manager can swap out the lower-priority process and
then load and execute the higher-priority process.
❖ Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped
❖ Normally, a process that is swapped out will be swapped back
into the same memory space it occupied previously.
❖ If binding is done at assembly or load time, then the process
cannot be easily moved to a different location.
❖ If execution-time binding is being used, then a process can be
swapped into a different memory space, because the physical
addresses are computed during execution time.
❖ System maintains a ready queue of ready-to-run processes
which have memory images on disk
Contiguous Allocation
❖ Main memory must support both OS and user processes
❖ Limited resource, need to allocate main memory in the
most efficient way possible.
❖ Contiguous allocation is one early method
❖ Main memory usually into two partitions:
❖ Resident operating system, usually held in low memory
with interrupt vector
❖ User processes then held in high memory
❖ We usually want several user processes to reside in memory at the same time.
❖ Each process is contained in a single contiguous section of memory.
Memory Mapping and
Protection
❖ Relocation registers used to protect user processes from
each other, and from changing operating-system code and
data, just to recall….
➢ Base register contains value of smallest physical
address
➢ Limit register contains range of logical addresses –
each logical address must be less than the limit register
➢ MMU maps logical address dynamically
➢ The relocation-register scheme provides an effective
way to allow the operating system's size to change
dynamically.
➢ This flexibility is desirable in many situations.
Memory Allocation
❖ One of the simplest methods for allocating memory is to
divide memory into several Fixed-sized partitions.
❖ Each partition may contain exactly one process.
❖ Degree of multiprogramming limited by number of partitions
❖ In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition.
❖ When the process terminates, the partition becomes
available for another process.
❖ This method was originally used by the IBM OS/360
Operating System (called MFT - Multiprogramming with Fixed
number of Tasks); it is no longer in use.
❖ The method described next is a generalization of the fixed-
partition scheme (called MVT - Multiprogramming with
Variable number of Tasks)
Multiple-Partition allocation
❖ In the Variable-Partition scheme, the operating system
keeps a table indicating which parts of memory are available
and which are occupied.
❖ Initially, all memory is available for user processes and is
considered one large block of available memory a hole.
❖ Hole – block of available memory; holes of various size are
scattered throughout memory
❖ As processes enter the system, they are put into an input
queue.
❖ The operating system takes into account the memory
requirements of each process and the amount of available
memory space in determining which processes are allocated
memory.
❖ When a process is allocated space, it is loaded into memory,
and it can then compete for CPU time.
Multiple-Partition
allocation
❖ Process exiting frees its partition, adjacent free partitions
combined
❖ Operating system maintains information about :
a) allocated partitions b) free partitions (hole)
Dynamic Storage -
Allocation Problem
❖ The memory blocks available comprise a set of holes of
various sizes scattered throughout memory.
❖ When a process arrives and needs memory, the system
searches the set for a hole that is large enough for this
process. If the hole is too large, it is split into two parts.
❖ One part allocated to the arriving process; the other is
returned to the set of holes.
❖ How to satisfy a request of size n from a list of free holes?
❖ There are many solutions to this problem.
❖ First-fit: Allocate the first hole that is big enough.
❖Searching can start either at the beginning of the set of
holes or at the location where the previous first-fit search
ended.
Dynamic Storage -
Allocation Problem
❖ Best-fit: Allocate the smallest hole that is big enough; must
search entire list, unless ordered by size.
❖ Produces the smallest leftover hole
❖ Worst-fit: Allocate the largest hole; must also search entire
list.
❖ Produces the largest leftover hole
❖ First-fit and best-fit better than worst-fit in terms of speed
and storage utilization
❖ Example: given free holes of sizes 1000, 2500, 300, and 250 (in that order), a request of size 200 is satisfied by the 1000 hole under first-fit, the 250 hole under best-fit, and the 2500 hole under worst-fit.
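❖ The three strategies can be sketched in a few lines of Python (hole sizes and the request of 200 are taken from the example above; returning None models "no hole fits"):

    # First-fit, best-fit, and worst-fit over a list of hole sizes.
    def first_fit(holes, n):
        return next((h for h in holes if h >= n), None)  # first big enough

    def best_fit(holes, n):
        fits = [h for h in holes if h >= n]
        return min(fits) if fits else None               # smallest big enough

    def worst_fit(holes, n):
        fits = [h for h in holes if h >= n]
        return max(fits) if fits else None               # largest hole

    holes = [1000, 2500, 300, 250]
    print(first_fit(holes, 200), best_fit(holes, 200), worst_fit(holes, 200))
    # -> 1000 250 2500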
Fragmentation
Both the first-fit and best-fit strategies for memory allocation
suffer from External fragmentation.
As processes are loaded and removed from memory, the free
memory space is broken into little pieces.
External fragmentation exists when there is enough total
memory space to satisfy a request but the available spaces
are not contiguous; storage is fragmented into a large
number of small holes.
Depending on the total amount of memory storage and the
average process size, external fragmentation may be a minor
or a major problem.
Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is memory
internal to a partition, but not being used.
Fragmentation
One solution to the problem of external fragmentation is
Compaction
The goal is to shuffle the memory contents so as to place
all free memory together in one large block.
Compaction is possible only if relocation is dynamic, and is
done at execution time
If addresses are relocated dynamically, relocation requires
only moving the program and data and then changing the
base register to reflect the new base address.
Another possible solution to the external-fragmentation
problem is to permit the logical address space of the
processes to be noncontiguous
Two techniques achieve this solution:
Paging and Segmentation
Paging
Paging is a memory-management scheme that permits the
physical address space of a process to be noncontiguous.
Paging avoids external fragmentation
Avoids problem of varying sized memory chunks
Divide physical memory into fixed-sized blocks called
frames
Size is power of 2, between 512 bytes and 16 Mbytes
Divide logical memory into blocks of same size called pages
Keep track of all free frames
To run a program of size N pages, need to find N free
frames and load program
Set up a page table to translate logical to physical
addresses
Backing store likewise split into pages
Still have Internal fragmentation
Address Translation Scheme
Address generated by CPU is divided into:
Page Number (p) – used as an index into a Page Table
which contains base address of each page in physical
memory
Page Offset (d) – combined with base address to define the
physical memory address that is sent to the memory unit
page number | page offset
     p      |      d
   m - n    |      n

If the size of the logical address space is 2^m and the page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number, and the n low-order bits designate the page offset.
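The split is simple bit arithmetic, as this small Python sketch shows (the sample address is an assumption for illustration):

    # Split a logical address into page number p and offset d,
    # given n offset bits (page size = 2^n).
    def split_address(addr, n):
        p = addr >> n               # high-order m - n bits
        d = addr & ((1 << n) - 1)   # low-order n bits
        return p, d

    # Example: 4-byte pages (n = 2); address 13 = 0b1101.
    print(split_address(13, 2))     # -> (3, 1): page 3, offset 1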
Paging Hardware
Paging Model of Logical and
Physical Memory
Paging Example
❖ n = 2 and m = 4
❖ Memory size: 32 bytes; page size: 4 bytes
❖ The 4-bit logical address is split into a 2-bit page number (p) and a 2-bit page offset (d).
Paging
❖ In paging scheme, no external fragmentation: any free frame
can be allocated to a process that needs it.
❖ However, there may be some internal fragmentation.
❖ Calculating internal fragmentation
❖ Page size = 2,048 bytes
❖ Process size = 72,766 bytes
❖ 35 pages + 1,086 bytes
❖ Internal fragmentation of 2,048 - 1,086 = 962 bytes
❖ In the worst case, a process would need n pages plus 1 byte.
❖ If process size is independent of page size, we expect
internal fragmentation to average one-half page per process.
❖ This consideration suggests that small page sizes are
desirable.
❖ But each page table entry takes memory to track
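❖ The arithmetic above can be checked directly (a sketch using the numbers from this slide):

    # Internal fragmentation for the last, partially filled frame.
    page_size = 2048
    process_size = 72766
    full_pages, leftover = divmod(process_size, page_size)
    print(full_pages, leftover)   # -> 35 1086 (35 pages + 1,086 bytes)
    print(page_size - leftover)   # -> 962 bytes wasted in the 36th frame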
Free Frames
(Figure: free-frame list before allocation and after allocation.)
Shared Pages
❖ An advantage of paging is the possibility of sharing common
code.
❖ This consideration is particularly important in a time-sharing
environment
❖ Shared code
❖ One copy of read-only (reentrant) code shared among
processes (Ex: text editors, compilers, window systems)
❖ Similar to multiple threads sharing the same process
space
❖ Also useful for interprocess communication if sharing of
read-write pages is allowed
❖ Private code and data
❖ Each process keeps a separate copy of the code and data
❖ The pages for the private code and data can appear
anywhere in the logical address space
Shared Pages Example
Structure of the Page Table
❖ Most modern computer systems support a large logical
address space (2^32 to 2^64).
❖ In such an environment, the page table itself becomes
excessively large.
❖ Consider a 32-bit logical address space with a page size of 4
KB (2^12).
❖ The page table would have 1 million entries (2^32 / 2^12 = 2^20).
❖ If each entry is 4 bytes, the page table alone needs 4 MB of
physical memory.
❖ We would not want to allocate the page table contiguously in
main memory.
❖ Different solutions are available to solve this issue.
Hierarchical Paging
Hashed Page Tables
Inverted Page Tables
Hierarchical Page Tables
❖ One way is to use a two-
level paging algorithm, in
which the page table itself
is also paged
❖ Break up the logical
address space into multiple
page tables
❖ A simple technique is a two-
level page table

Two-Level Page-Table Scheme
Two-Level Paging Example
❖ A logical address (on 32-bit machine with 1K page size) is
divided into:
❖ A page number consisting of 22 bits
❖ A page offset consisting of 10 bits
❖ Since the page table is paged, the page number is further divided
into:
❖ A 12-bit page number
❖ A 10-bit page offset
❖ Thus, a logical address is as follows:

    page number | page offset
     p1  |  p2  |     d
     12  |  10  |    10

❖ where p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table
❖ This scheme is also known as forward-mapped page table
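❖ A sketch of this split in Python (bit positions follow the 12/10/10 layout above; the sample address is an assumption for illustration):

    # Extract p1, p2, and d from a 32-bit logical address.
    def split_two_level(addr):
        d  = addr & 0x3FF           # low 10 bits: page offset
        p2 = (addr >> 10) & 0x3FF   # next 10 bits: index into inner table
        p1 = addr >> 20             # high 12 bits: index into outer table
        return p1, p2, d

    print(split_two_level(0x00ABCDEF))  # -> (10, 747, 1007)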
Address-Translation
Scheme
Hashed Page Tables
❖ A common approach for handling address spaces larger
than 32 bits is to use a hashed page table
❖ The virtual page number is hashed into a page table
❖ Each element contains (1) the virtual page number (2) the
value of the mapped page frame (3) a pointer to the next
element
❖ The algorithm works as follows:
❖ The virtual page number in the virtual address is hashed into
the hash table.
❖ The virtual page number is compared with field 1 in the first
element in the linked list.
❖ If there is a match, the corresponding page frame (field 2) is
used to form the desired physical address.
❖ If there is no match, subsequent entries in the linked list are
searched for a matching virtual page number.
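❖ The lookup described above might be sketched as follows (the table layout, hash function, and names are assumptions for illustration, not a real OS structure):

    # Hashed page table: each bucket holds a chain of
    # (virtual page number, frame number) pairs.
    def lookup(hash_table, vpn, offset, offset_bits):
        bucket = hash_table[vpn % len(hash_table)]  # simple illustrative hash
        for entry_vpn, frame in bucket:             # walk the chain (field 3)
            if entry_vpn == vpn:                    # compare field 1
                return (frame << offset_bits) | offset  # field 2 -> physical
        raise KeyError("no match in chain: page fault")

    table = [[], [(5, 2), (9, 7), (13, 1)], [], []]  # toy 4-bucket table
    print(lookup(table, 9, 100, 12))   # -> 28772 (frame 7, offset 100)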
Hashed Page Table
Segmentation
❖ An important aspect of memory management that became
unavoidable with paging is the separation of the user's view
of memory from the actual physical memory.
❖ The user's view is mapped onto physical memory. This
mapping allows differentiation between logical memory and
physical memory.
❖ Users prefer to view program/memory as a collection of
variable-sized segments, instead of linear array of bytes,
containing instructions and data
❖ Segmentation is a memory-management scheme that
supports user view of memory.
❖ A logical address space is a collection of segments.
❖ Each segment has a name and a length.
Segmentation
❖ How do you think of a program
when you are writing it?
❖ You think of it as a main
program consisting of:
Methods
Procedures
Functions
Objects
Local Variables
Global Variables
Common Block
Stack
Symbol Table
Arrays, etc.
User’s View of a Program
Logical View of Segmentation
(Figure: segments 1-4 in user space mapped to non-contiguous regions of physical memory space.)
Example of Segmentation
VIRTUAL MEMORY
Introduction
❖ The instructions being executed must be in physical
memory.
❖ The first approach to meeting this requirement is to place the
entire logical address space in physical memory.
❖ Dynamic loading can help to ease this restriction, but it
generally requires special precautions and extra work by the
programmer.
❖ Code needs to be in memory to execute, but entire program
rarely used
❖ Error code, unusual routines, large data structures (Arrays,
lists, tables are often allocated more memory than they
actually need.)
❖ Even in those cases where the entire program is needed, it
may not all be needed at the same time.
Introduction
The ability to execute a program that is only partially in memory
would confer many benefits:
❖ A program would no longer be constrained by the amount of
physical memory that is available.
❖ As each user program could take less physical memory,
more programs could be run at the same time, with a
corresponding increase in CPU utilization and throughput
but with no increase in response time or turnaround time.
❖ Less I/O would be needed to load or swap user programs
into memory, so each user program would run faster.
❖ Running a program that is not entirely in memory would
benefit both the system and the user.
Introduction
❖ Virtual Memory – separation of user logical memory from
physical memory
❖ Only part of the program needs to be in memory for
execution
❖ Logical address space can therefore be much larger than
physical address space
❖ Allows address spaces to be shared by several processes
❖ Allows for more efficient process creation
❖ More programs running concurrently
❖ Less I/O needed to load or swap processes
❖ Virtual memory can be implemented via:
❖ Demand paging
❖ Demand segmentation
Virtual Memory that is
Larger than Physical Memory
Virtual-Address Space
❖ The virtual address space of a process refers to
the logical (or virtual) view of how a process is
stored in memory.
❖ Typically, this view is that a process begins at a
certain logical address (say, address 0) and exists
in contiguous memory.
❖ Recall, though, that in fact physical memory may
be organized in page frames and that the physical
page frames assigned to a process may not be
contiguous.
❖ It is up to the memory management unit (MMU) to
map logical pages to physical page frames in
memory.
❖ In the figure, the heap grows upward in memory and
the stack grows downward in memory.
❖ Virtual address spaces that include holes are
known as sparse address spaces.
Virtual Address Space
❖ In addition to separating logical memory from physical
memory, virtual memory allows files and memory to be
shared by two or more processes through page sharing.
❖ This leads to the following benefits:
System libraries can be shared by several processes through
mapping of the shared object into a virtual address space.
Virtual memory enables processes to share memory.
Two or more processes can communicate through the use of
shared memory. Virtual memory allows one process to create
a region of memory that it can share with another process.
Virtual memory can allow pages to be shared during process
creation with the fork() system call thus speeding up process
creation.
Shared Library using Virtual
Memory
Demand Paging
❖ Consider how an executable program might be loaded from
disk into memory.
❖ One option is to load the entire program in physical memory
at program execution time.
❖ However, we may not initially need the entire program in
memory.
❖ An alternative strategy is to load pages only as they are
needed.
❖ This technique is known as Demand Paging and is
commonly used in Virtual Memory Systems.
❖ With demand-paged virtual memory, pages are only loaded
when they are demanded during program execution; pages
that are never accessed are thus never loaded into physical
memory.
Demand Paging
❖ When we bring a page into memory only when it is needed
Less I/O needed, no unnecessary I/O
Less memory needed
Faster response
More users
❖ A demand-paging system is similar to a paging system with
swapping, where processes reside in secondary memory.
❖ Page is needed ⇒ reference to it
❖ Invalid reference ⇒ abort
❖ Not-in-memory ⇒ bring to memory
❖ Lazy swapper – never swaps a page into memory unless
page will be needed.
❖ Swapper that deals with pages is a Pager.
Transfer of a Paged Memory to
Contiguous Disk Space
Demand Paging
❖ When a process is to be swapped in, the pager guesses
which pages will be used before the process is swapped out
again.
❖ The valid - invalid bit scheme can be used for this purpose.
❖ When this bit is set to "valid", the associated page is both
legal and in memory.
❖ If the bit is set to "invalid", the page either is not valid (that is,
not in the logical address space of the process) or is valid
but is currently on the disk.
Valid - Invalid Bit
❖ With each page table entry a valid–invalid bit is associated
(v ⇒ in-memory, memory resident; i ⇒ not-in-memory)
❖ Initially the valid–invalid bit is set to i on all entries
❖ Example of a page table snapshot (each entry holds a
frame # and a valid–invalid bit):
v, v, v, v, i, …, i, i
❖ During address translation,
❖ if valid–invalid bit in page table entry is i ⇒ page fault
Page Table When Some Pages
Are Not in Main Memory
Page Fault
❖ If there is a reference to a page, the first reference to that
page will trap to the operating system:
Page Fault
1. Operating system looks at another table to decide:
➢ Invalid reference ⇒ abort
➢ Just not in memory
2. Get empty frame
3. Swap page into frame via scheduled disk operation
4. Reset tables to indicate page now in memory
Set validation bit = v
5. Restart the instruction that caused the page fault
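❖ The sequence can be mirrored in a tiny Python simulation (all structures and names here are illustrative stand-ins, not real OS objects):

    # Minimal simulation of servicing a page fault.
    free_frames = [3, 7]                     # free-frame list
    backing_store = {5: "page-5-data"}       # pages resident on disk
    memory = {}                              # frame number -> contents
    page_table = {5: (None, 'i')}            # page -> (frame, valid bit)

    def handle_page_fault(page):
        if page not in backing_store:        # 1. invalid reference?
            raise RuntimeError("abort: invalid reference")
        frame = free_frames.pop()            # 2. get an empty frame
        memory[frame] = backing_store[page]  # 3. swap page into frame
        page_table[page] = (frame, 'v')      # 4. reset table, bit = v
        # 5. restart the instruction that caused the fault

    handle_page_fault(5)
    print(page_table[5])                     # -> (7, 'v')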
Steps in Handling a Page Fault
What Happens if there is no
Free Frame?
❖ In our discussion of the page-fault rate, we assumed that
each page faults at most once, when it is first referenced.
❖ This representation is not strictly accurate
❖ How much to allocate to each?
❖ If a process of ten pages actually uses only half of them,
then demand paging saves the I/0 necessary to load the
five pages that are never used.
❖ We could also increase our degree of multiprogramming by
running many processes.
❖ If we increase our degree of multiprogramming, we are
over-allocating memory; a problem arises if a process
suddenly tries to use all of its pages.
What Happens if there is no
Free Frame?
❖ System memory is not used only for holding program
pages; buffers for I/O also consume a considerable amount
of memory.
❖ This use can increase the strain on memory-placement
algorithms.
❖ While a user process is executing, a page fault occurs.
❖ The operating system determines where the desired page is
residing on the disk but then finds that there are no free
frames on the free-frame list; all memory is in use
❖ The operating system has several options at this point.
❖ It could terminate the user process, or it could swap out a process.
❖ But, demand paging is the operating system's attempt to
improve the computer system's utilization and throughput.
Page Replacement
❖ Algorithm – Terminate? Swap out? Replace the page?
❖ Page replacement – find some page in memory, but not
really in use, page it out
➢ Performance – want an algorithm which will result in
minimum number of page faults
❖ Same page may be brought into memory several times
❖ Prevent over-allocation of memory by modifying page-fault
service routine to include page replacement
❖ Use modify (dirty) bit to reduce overhead of page transfers
– only modified pages are written to disk
❖ Page Replacement completes separation between Logical
Memory and Physical Memory – large virtual memory can
be provided on a smaller physical memory.
Need For Page Replacement
Basic Page Replacement
❖ Page replacement takes the following approach.
❖ If no frame is free, we find one that is not currently being used and
free it.
1. Find the location of the desired page on disk
2. Find a free frame:
If there is a free frame, use it
If there is no free frame, use a page replacement algorithm to
select a victim frame
Write victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the
page and frame tables
4. Continue the process by restarting the instruction that caused the
trap
❖ Note now potentially 2 page transfers for page fault – increasing
Effective Access Time
❖ We can reduce this overhead by using a modify bit (or dirty bit).
Page Replacement
Page and Frame Replacement
Algorithms
❖ We can reduce this overhead by using a modify bit (or dirty bit)
❖ It completes the separation between logical memory and physical
memory.
❖ Frame-allocation algorithm determines
❖ How many frames to give each process?
❖ Which frames to replace?
❖ Page-replacement algorithm
❖ Want lowest page-fault rate on both first access and re-access
❖ Evaluate algorithm by running it on a particular string of memory
references (reference string) and computing the number of page faults
on that string
❖ String is just page numbers, not full addresses
❖ Repeated access to the same page does not cause a page fault
❖ In all our examples, the reference string is
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
First-In-First-Out (FIFO)
Algorithm
❖ Reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
❖ 3 Frames (3 pages can be in memory at a time per process)
❖ The number of page faults is 15.
❖ Similarly, find the number of page faults with 4 frames (10
page faults) and 5 frames (9 page faults)
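❖ A short simulator (a sketch, not the only way to implement FIFO) reproduces these counts:

    from collections import deque

    # FIFO replacement: on a fault with full frames, evict the oldest page.
    def fifo_faults(refs, nframes):
        frames, faults = deque(), 0
        for p in refs:
            if p not in frames:
                faults += 1
                if len(frames) == nframes:
                    frames.popleft()         # victim = page loaded earliest
                frames.append(p)
        return faults

    refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
    print(fifo_faults(refs, 3))   # -> 15
    print(fifo_faults(refs, 4))   # -> 10
    print(fifo_faults(refs, 5))   # -> 9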
FIFO Illustrating Belady’s
Anomaly
Optimal Page Replacement
❖ One result of the discovery of Belady's anomaly was the
search for an Optimal Page Replacement algorithm, which
has the lowest page-fault rate of all algorithms and will
never suffer from Belady's anomaly.
❖ Replace page that will not be used for longest period of time
❖ Use of this page-replacement algorithm guarantees the
lowest possible page fault rate for a fixed number of frames.
❖ Optimal replacement is much better than a FIFO algorithm
❖ Unfortunately, the optimal page-replacement algorithm is
difficult to implement, because it requires future knowledge
of the reference string.
❖ The optimal algorithm is used mainly for comparison studies.
Optimal Page Replacement
❖ Reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
❖ 3 Frames (3 pages can be in memory at a time per process)
❖ The number of page faults is 9.
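❖ A sketch of OPT in Python: on a fault, evict the resident page whose next use lies farthest in the future (a page never referenced again is evicted first).

    # Optimal (OPT) replacement: requires the full future reference string.
    def opt_faults(refs, nframes):
        frames, faults = [], 0
        for i, p in enumerate(refs):
            if p in frames:
                continue
            faults += 1
            if len(frames) == nframes:
                future = refs[i + 1:]
                def next_use(q):             # distance to q's next reference
                    return future.index(q) if q in future else float('inf')
                frames.remove(max(frames, key=next_use))
            frames.append(p)
        return faults

    refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
    print(opt_faults(refs, 3))    # -> 9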
Least Recently Used (LRU)
Algorithm
❖ If the optimal algorithm is not feasible, perhaps an
approximation of the optimal algorithm is possible.
❖ If we use the recent past as an approximation of the near
future, then we can replace the page that has not been used
for the longest period of time.
❖ Use past knowledge rather than future
❖ This approach is the Least Recently Used (LRU) Algorithm.
❖ LRU replacement associates with each page the time of
that page's last use.
❖ When a page must be replaced, LRU chooses the page
that has not been used for the longest period of time.
❖ This algorithm works better than FIFO but worse than OPT
❖ Generally good algorithm and frequently used
Least Recently Used (LRU)
Algorithm
❖ Reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
❖ 3 Frames (3 pages can be in memory at a time per process)
❖ The number of page faults is 12.
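❖ LRU can be sketched with an ordered dictionary that tracks recency of use; it reproduces the count of 12 faults above:

    from collections import OrderedDict

    # LRU replacement: evict the page unused for the longest time.
    def lru_faults(refs, nframes):
        frames, faults = OrderedDict(), 0
        for p in refs:
            if p in frames:
                frames.move_to_end(p)        # mark as most recently used
            else:
                faults += 1
                if len(frames) == nframes:
                    frames.popitem(last=False)   # evict least recently used
                frames[p] = True
        return faults

    refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
    print(lru_faults(refs, 3))    # -> 12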
Allocation of Frames
❖ How do we allocate the fixed amount of free memory among
the various processes?
❖ If we have 93 free frames and two processes, how many
frames does each process get?
❖ We can require that the operating system allocate all its
buffer and table space from the free-frame list.
❖ Our strategies for the allocation of frames are constrained in
various ways.
❖ We cannot, for example, allocate more than the total number
of available frames (unless there is page sharing).
❖ Also we must allocate at least a minimum number of frames.
❖ As the number of frames allocated to each process
decreases, the page-fault rate increases, slowing process
execution.
Allocation of Frames
❖ Each process needs minimum number of frames
❖ Example: IBM 370 – 6 pages to handle SS MOVE
instruction:
➢ instruction is 6 bytes, might span 2 pages
➢ 2 pages to handle from
➢ 2 pages to handle to
❖ Maximum of course is total frames in the system
❖ Two major allocation schemes
➢ Fixed allocation
➢ Priority allocation
❖ Many variations
Fixed Allocation
❖ Equal allocation – The easiest way to split m frames among n
processes is to give everyone an equal share, m/n frames.
❖ For example, if there are 93 frames (after allocating frames for the
OS) and 5 processes, give each process 18 frames
❖ Keep some as free frame buffer pool (in this case 3 free frames)
❖ Proportional allocation – Allocate according to the size of
process.
❖ If a small student process of 10 KB and an interactive database of
127 KB are the only two processes running in a system with 62
free frames, it does not make much sense to give each process 31 frames.
❖ In both equal and proportional allocation the allocation may vary
according to the multiprogramming level.
❖ If the multiprogramming level is increased, each process will lose
some frames to provide the memory needed for the new process.
❖ If the multiprogramming level decreases, the frames that were
allocated to the departed process can be spread over the
remaining processes.
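❖ Proportional allocation computes a_i = (s_i / S) × m for each process; a sketch with the sizes above (10 KB and 127 KB, 62 free frames):

    # Proportional allocation: frames in proportion to process size.
    def proportional(sizes, m):
        S = sum(sizes)
        return [s * m // S for s in sizes]   # truncate to whole frames

    print(proportional([10, 127], 62))       # -> [4, 57]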
Global vs. Local Allocation
❖ Another important factor in the way frames are allocated to
the various processes is page replacement.
❖ Global replacement – process selects a replacement frame
from the set of all frames; one process can take a frame from
another
❖ Process execution time can vary greatly
❖ Greater throughput so more common
❖ Local replacement – each process selects from only its own
set of allocated frames
❖ More consistent per-process performance
❖ But possibly underutilized memory
Global vs. Local Allocation
❖ For example, consider an allocation scheme wherein we
allow high-priority processes to select frames from low-
priority processes for replacement.
❖ A process can select a replacement from among its own
frames or the frames of any lower-priority process.
❖ This approach allows a high-priority process to increase its
frame allocation at the expense of a low-priority process.
❖ With a local replacement strategy, the number of frames
allocated to a process does not change.
❖ With global replacement, a process may happen to select
only frames allocated to other processes, thus increasing the
number of frames allocated to it
Thrashing
❖ If the number of frames allocated to a low-priority process falls
below the minimum number required by the computer
architecture, we must suspend that process's execution.
❖ If a process does not have “enough” pages, the page-fault rate is
very high
➢ Page fault to get page
➢ Replace existing frame
➢ But quickly need replaced frame back
➢ This leads to:
Low CPU utilization
Operating system thinking that it needs to increase the
degree of multiprogramming
Another process added to the system
❖ Thrashing ⇒ a process is busy swapping pages in and out
❖ A process is thrashing if it is spending more time paging than
executing.
Thrashing
❖ As the degree of multiprogramming increases, CPU utilization
also increases, although more slowly, until a maximum is
reached.
❖ If the degree of multiprogramming is increased even further,
thrashing sets in, and CPU utilization drops sharply.
Thrashing
❖ How to eliminate thrashing?
❖ To resolve hard drive thrashing, you can do any of the
suggestions below:
1. Increase the amount of RAM in the computer.
2. Decrease the number of programs being run on the
computer.
3. Adjust the size of the swap file.
