Chapter 3 Memory Management
Memory Management Unit (MMU)
— Hardware device that maps virtual addresses to physical addresses.
— In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
— The user program deals with logical addresses; it never sees the real physical addresses.
What's the difference between logical and physical addresses?
12/31/2023 School of Computing, DDUIoT 7
Swapping
A process can be swapped temporarily out of memory
to a backing store and then brought back into memory
for continued execution.
[Figure: an input queue on disk holds processes of 4 KB, 8 KB, 9 KB, and 7 KB; memory is divided into fixed 10 KB partitions holding Process 1 (7 KB), Process 2 (9 KB), and Process 3 (8 KB), with internal fragmentation in each partition.]
As shown:
• This method suffers from internal fragmentation.
• The degree of multiprogramming is bounded to 3, although it could be 4.
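The arithmetic of the example can be sketched as follows; the partition size and process sizes are taken from the figure above:

```python
# Fixed (equal-size) partitioning: each partition is 10 KB.
PARTITION_KB = 10
processes = {"Process 1": 7, "Process 2": 9, "Process 3": 8}  # sizes in KB

# Internal fragmentation = unused space inside each allocated partition.
internal_frag = {name: PARTITION_KB - size for name, size in processes.items()}
total_wasted = sum(internal_frag.values())

print(internal_frag)   # wasted KB per partition
print(total_wasted)    # 3 + 1 + 2 = 6 KB lost to internal fragmentation
```

The 6 KB wasted here is enough for the 4 KB process in the input queue, yet that process still cannot run: the waste is trapped inside occupied partitions.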
Dynamic Partitioning
• Partitions are of variable length and number
• Initially, all memory is available for user processes,
and is considered as one large block of available
memory.
Using First Fit
[Figure: the OS occupies low memory; the loader places a 20-word executable at address 1040, the first hole large enough. The base register holds the start address 1040 and the limit register the size 20, defining the process's legal range.]
Using Best Fit
[Figure: the loader places the 20-word executable at address 1100, the smallest hole that is large enough; base = 1100, limit = 20.]
Using Worst Fit
[Figure: the loader places the 20-word executable at address 1150, the largest available hole; base = 1150, limit = 20.]
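The three placement policies can be sketched as below; the hole list is hypothetical, chosen so that each policy picks the start address shown in the corresponding figure:

```python
def pick_hole(holes, size, strategy):
    """Return the index of the hole chosen for a request of `size` words.

    holes    -- list of (start_address, length) free blocks
    strategy -- 'first', 'best', or 'worst' fit
    """
    candidates = [(i, h) for i, h in enumerate(holes) if h[1] >= size]
    if not candidates:
        return None                                        # no hole fits
    if strategy == "first":
        return candidates[0][0]                            # first hole big enough
    if strategy == "best":
        return min(candidates, key=lambda c: c[1][1])[0]   # smallest adequate hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1][1])[0]   # largest hole

# Hypothetical holes matching the figures: request of 20 words.
holes = [(1040, 30), (1100, 25), (1150, 50)]
for s in ("first", "best", "worst"):
    print(s, "fit loads at", holes[pick_hole(holes, 20, s)][0])
```

With these holes, first fit picks 1040, best fit picks 1100 (size 25, the tightest fit), and worst fit picks 1150 (size 50, the loosest).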
Dynamic Partitioning
o The degree of multiprogramming changes according to the number of processes.
o This method suffers from external fragmentation of memory.
External fragmentation: the phenomenon in which memory that is external to all partitions becomes increasingly fragmented.
[Figure: memory holds the OS and Processes 4, 2, 3, 9, and 20, separated by small holes; the processes of 4 KB, 8 KB, 9 KB, and 7 KB in the input queue on disk cannot fit in any single hole.]
As shown, dynamic partitioning suffers from external fragmentation.
[Figure: after compaction, Processes 4, 2, 3, 9, and 20 sit contiguously, leaving a single new hole large enough to store a new process.]
Compaction:
Is the movement of memory contents to place all free memory together in one large block sufficient to store a new process.
It is a solution for external fragmentation, but it is expensive and not always possible.
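A minimal sketch of compaction, assuming illustrative process sizes (the figure does not give exact sizes):

```python
def compact(segments, os_size=100):
    """Slide memory contents down so all free memory forms one block.

    segments -- list of (name, size) allocated blocks in address order
    Returns the new (name, start_address) placements and the start of
    the single remaining hole.
    """
    addr = os_size                  # memory below os_size belongs to the OS
    placements = []
    for name, size in segments:
        placements.append((name, addr))   # move the block down to `addr`
        addr += size
    return placements, addr         # everything above `addr` is one free block

# Hypothetical sizes for the processes in the figure.
placements, hole_start = compact([("Process 4", 40), ("Process 2", 60), ("Process 3", 30)])
print(placements)
print("single free block starts at", hole_start)   # 100 + 40 + 60 + 30 = 230
```

The cost hinted at in the text is visible here: every allocated block must actually be copied to its new address, which is why compaction is expensive at run time.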
Paging and Segmentation
Paging Example
Assume:
o page size = 4 bytes
o physical memory = 32 bytes (8 frames)
How can a logical memory address be mapped into physical memory?
Example: logical address 0 (containing 'a')
00000 — address of 'a'.
Page number = 000 (the 3 higher-order bits)
Offset = 00 (the 2 lower-order bits)
o So address 'a':
i. is on page 0.
ii. is at offset 0.
Indexing into the page table with the page number shows that page 0 is in frame 5. The page number is replaced by the frame number; the offset is used as it is.
Finally, logical address 0 is mapped to physical address 20, i.e. 20 = (5 × 4) + 0.
Similarly:
Logical address 3 maps to (5 × 4) + 3 = 23.
Logical address 4 maps to (6 × 4) + 0 = 24.
Logical address 13 maps to (2 × 4) + 1 = 9.
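The translation can be sketched directly. The entries for pages 0, 1, and 3 follow from the mappings above; the frame for page 2 is an assumption added to complete the table:

```python
PAGE_SIZE = 4                             # bytes, as in the example
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page number -> frame number

def translate(logical):
    # High-order bits select the page, low-order bits are the offset.
    page, offset = divmod(logical, PAGE_SIZE)
    # Swap the page number for its frame number; keep the offset as is.
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))    # (5 * 4) + 0 = 20
print(translate(3))    # (5 * 4) + 3 = 23
print(translate(4))    # (6 * 4) + 0 = 24
print(translate(13))   # (2 * 4) + 1 = 9
```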
More on Paging
o In the paging scheme, pages are allocated as units.
o In this scheme there is no external fragmentation.
o But internal fragmentation is inevitable.
Example:
― assume page size = 2048 bytes.
― a process of 61,442 bytes needs 30 pages plus 2 bytes.
― since units are managed in terms of pages, 31 pages are allocated.
o In the worst case a process needs n pages plus 1 byte, so it will be allocated n + 1 pages, wasting almost an entire page.
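The page count in the example is a simple ceiling division, and the internal fragmentation falls out of it:

```python
import math

PAGE_SIZE = 2048          # bytes, as in the example
process_size = 61442      # bytes = 30 full pages plus 2 bytes

pages = math.ceil(process_size / PAGE_SIZE)          # whole pages allocated
internal_frag = pages * PAGE_SIZE - process_size     # unused bytes in last page

print(pages)           # 31 pages allocated
print(internal_frag)   # 31*2048 - 61442 = 2046 bytes wasted
```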
Page table
o Page table is used to map virtual pages onto page
frames.
o A page table entry can have the following important fields:
Page frame number: represents the page frame number in physical memory.
Present/absent bit: indicates whether the page to which the entry belongs is currently in memory.
Bit 1 — the entry is valid and can be used.
Bit 0 — the virtual page is not currently in memory. Accessing a page table entry with this bit set to 0 causes a page fault.
Protection bits: tell what kinds of access are permitted. In the simplest form, this field contains 1 bit, with 0 for read/write and 1 for read-only.
Modified bit: is set when the page is written to. If the page has been modified, it must be written back to the disk when evicted. If it has not been modified, it can simply be discarded, since the disk copy is still valid.
Page table(con’t…)
Referenced bit: is set whenever a page is referenced, either for reading or writing.
Its value helps the operating system choose a page to evict when a page fault occurs, and it plays an important role in several of the page replacement algorithms.
Cache disable bit: with this bit, caching can be turned off. This feature is important for pages that map onto device registers rather than memory.
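A sketch of how these fields might be packed into a single page-table-entry word; the bit positions here are illustrative, not those of any real architecture:

```python
# Flag bits of a hypothetical page-table entry (PTE).
PRESENT    = 1 << 0   # 1 = page is in memory
PROTECTION = 1 << 1   # 0 = read/write, 1 = read-only
MODIFIED   = 1 << 2   # set when the page is written to
REFERENCED = 1 << 3   # set on any read or write
CACHE_DIS  = 1 << 4   # 1 = caching disabled (e.g. device registers)
FRAME_SHIFT = 5       # the frame number lives in the high bits

def make_pte(frame, present=True, read_only=False):
    """Pack a frame number and flags into one integer PTE."""
    pte = frame << FRAME_SHIFT
    if present:
        pte |= PRESENT
    if read_only:
        pte |= PROTECTION
    return pte

pte = make_pte(frame=5)
print(bool(pte & PRESENT))    # True: the entry is valid
print(pte >> FRAME_SHIFT)     # 5: the frame number
```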
Implementation of Page Table
o Two options: Page table can be kept in registers or main memory
o The page table is kept in main memory due to its bigger size.
– Ex: address space = 2^32 to 2^64 bytes
– Page size = 2^12 bytes
– Number of page-table entries = 2^32 / 2^12 = 2^20
– If each entry consists of 4 bytes, the page table size = 4 MB.
o Page-table base register (PTBR) points to the page table.
o Page-table length register (PTLR) indicates size of the page table.
o PTBR, and PTLR are maintained in the registers.
o In this scheme every data/instruction access requires two memory
accesses. One for the page table and one for the data/instruction.
– Memory access is slowed by a factor of 2.
– Swapping might be better !
• The two memory access problem can be solved by the use of a special
fast-lookup hardware cache called associative memory or translation
look-aside buffers (TLBs)
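A standard back-of-the-envelope effective-access-time calculation shows how a TLB mitigates the two-access problem; the hit ratio and timings below are illustrative:

```python
mem_access = 100    # ns for one memory access (illustrative)
tlb_lookup = 20     # ns for a TLB lookup (illustrative)
hit_ratio = 0.80    # fraction of translations found in the TLB

# TLB hit:  TLB lookup + one memory access for the data.
# TLB miss: TLB lookup + page-table access + data access (two memory accesses).
eat = hit_ratio * (tlb_lookup + mem_access) + \
      (1 - hit_ratio) * (tlb_lookup + 2 * mem_access)

print(eat)   # 0.8 * 120 + 0.2 * 220 = 140.0 ns, versus 200 ns without a TLB
```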
User’s View of a Program
Logical View of Segmentation
[Figure: segments 1, 2, 3, and 4 of the user's logical view mapped into separate regions of physical memory.]
Segmentation Architecture
o Logical address consists of a two-tuple:
<segment-number, offset>
o Segment Table
• Maps two-dimensional user-defined addresses into one-
dimensional physical addresses.
• Each table entry has
– Base - contains the starting physical address where the segment resides in memory.
– Limit - specifies the length of the segment.
• Segment-table base register (STBR) points to the segment
table’s location in memory.
• Segment-table length register (STLR) indicates the number of segments used by a program.
Note: segment number s is legal if s < STLR .
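Address translation under this architecture can be sketched as follows; the base/limit values are illustrative:

```python
# Segment table: each entry is (base, limit). Values are illustrative.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]
STLR = len(segment_table)   # number of segments used by the program

def translate(seg, offset):
    if seg >= STLR:                 # segment number s must satisfy s < STLR
        raise Exception("trap: invalid segment number")
    base, limit = segment_table[seg]
    if offset >= limit:             # offset must fall within the segment
        raise Exception("trap: offset beyond segment limit")
    return base + offset            # physical address

print(translate(1, 53))    # 6300 + 53 = 6353
```

Note how the limit check gives segmentation its protection for free: any reference past the end of a segment traps before it touches memory.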
Segmentation Architecture (cont.)
o Relocation is dynamic - by segment table
o Sharing
―Code sharing occurs at the segment level.
―Shared segments must have the same segment number.
o Allocation- dynamic storage allocation problem
―use best fit/first fit, may cause external fragmentation.
o Protection
• protection bits associated with segments
– read/write/execute privileges
– array in a separate segment - hardware can check
for illegal array indexes.
Segmented Paged Memory
o Segment-table entry contains not the base
address of the segment, but the base
address of a page table for this segment.
―Overcomes external fragmentation problem of
segmented memory.
―Paging also makes allocation simpler; time to
search for a suitable segment (using best-fit etc.)
reduced.
―Introduces some internal fragmentation and table
space overhead.
o Multics - single-level page table
o IBM OS/2 - OS on top of the Intel 386; uses a two-level paging scheme
Virtual Memory
Background
o The first requirement for execution is that instructions must be in physical memory.
o One approach is to place the entire logical address space in main memory.
o Then the size of a program is limited to the size of main memory.
o Normally the entire program may not be needed in main memory:
Programs have error-handling code that is seldom executed.
Arrays, lists, and tables may be declared as 100 by 100 elements, but are seldom used with more than 10 by 10 elements.
An assembler may have room for 3000 symbols, although the average program contains fewer than 200 symbols.
Certain options or features of the program are used rarely.
o Benefits of the ability to execute a program that is only partially in memory:
Users can write programs for an extremely large virtual address space.
More programs can run at the same time.
Less I/O would be needed to load or swap each user program into
memory.
Background(con’t…)
Virtual memory is a technique that allows the execution of processes that may not be completely in memory.
– Programs can be larger than main memory.
– VM abstracts main memory into an extremely large, uniform array of storage.
Separation of user logical memory from physical memory.
– Only part of the program needs to be in memory for
execution.
– Logical address space can therefore be much larger than
physical address space.
– Allows address spaces to be shared by several processes.
– Allows for more efficient process creation.
– Frees the programmer from memory constraints.
Virtual memory can be implemented via:
– Demand paging
– Demand segmentation
Virtual Memory That is Larger Than Physical Memory
Demand Paging
o Demand paging is a paging system with swapping.
– When we execute a process, we swap it into memory.
o For demand paging, we use a lazy swapper:
– Never swaps a page into memory unless it is required.
– Brings a page into memory only when it is needed.
• Less I/O needed
• Less memory needed
• Faster response
• More users
o Page is needed ⇒ reference it:
– invalid reference ⇒ abort
– not in memory ⇒ bring to memory
Presence Bit
With each page table entry a presence bit is associated
(1 in-memory, 0 not-in-memory)
Initially the presence bit is set to 0 on all entries.
Example of a page table snapshot (frame numbers omitted; entries with presence bit 0 hold no frame):
presence bits, top to bottom: 1, 1, 1, 0, 1, 0, 0
During address translation, if the presence bit in the page table entry is 0 ⇒ page fault.
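The presence-bit check during translation can be sketched as below; the frame numbers and page size are illustrative:

```python
class PageFault(Exception):
    """Raised when a translation hits an entry with presence bit 0."""
    pass

PAGE_SIZE = 4096
# Each entry is (frame, present); the frame is meaningless when present is 0.
page_table = [(3, 1), (7, 1), (0, 0), (2, 1)]

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame, present = page_table[page]
    if not present:                     # presence bit 0 -> page fault
        raise PageFault(f"page {page} not in memory")
    return frame * PAGE_SIZE + offset

print(translate(5))                     # page 0, frame 3: 3*4096 + 5 = 12293
try:
    translate(2 * PAGE_SIZE)            # page 2 has presence bit 0
except PageFault as e:
    print("fault:", e)
```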
Page Table When Some Pages Are Not in Main
Memory
Steps in Handling a Page Fault
1. Check an internal table to determine whether the reference is valid or invalid.
2. If the reference is invalid, terminate the process; if it is valid but the page is not yet in memory, trap to the OS (page fault).
3. Find a free frame.
4. Schedule a disk operation to read the desired page into the frame.
5. Modify the internal table to indicate that the page is now in main memory.
6. Restart the interrupted instruction.
Page Replacement
o Prevent over-allocation of memory by modifying page-
fault service routine to include page replacement.
Basic Page Replacement
1. Find the location of the desired page on disk.
2. Find a free frame:
– If there is a free frame, use it.
– If there is no free frame, use a page-replacement algorithm to select a victim frame, and change the page table accordingly.
3. Read the desired page into the (newly) free frame and update the page table.
4. Restart the process.
Page Replacement algorithm
o A page fault forces a choice:
– In steady state there is no room for a new page.
– Which page must be removed to make room for the incoming page?
o How is a page removed from physical memory?
– If the page is unmodified, simply overwrite it: a copy already exists on disk.
– If the page has been modified, it must be written back to disk first: prefer unmodified pages?
o It is better not to choose an often-used page:
– It will probably need to be brought back in soon.
Page Replacement Algorithms(con’t…)
o Goal: the lowest possible page-fault rate.
o Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.
o In all our examples, the reference string is: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
[Figure: graph of page faults versus the number of frames.]
Optimal page replacement algorithm
o replace the page that will be used furthest in the future
o Only works if we know the whole sequence!
o Nice, but not achievable in real systems!
Example: 4 frames and the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
[Figure: frame contents over time — the optimal algorithm incurs 6 page faults on this string.]
o Used for measuring how well your algorithm performs.
o How can you know what the future references will be?
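A sketch of the optimal algorithm, reproducing the 6 page faults of the example. It is runnable only offline, because it must see the full reference string in advance:

```python
def optimal_faults(refs, nframes):
    """Count page faults under optimal (Belady's) replacement."""
    frames = []
    faults = 0
    for i, p in enumerate(refs):
        if p in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < nframes:
            frames.append(p)              # still a free frame
        else:
            # Evict the page whose next use lies furthest in the future
            # (or that is never used again).
            def next_use(q):
                try:
                    return refs.index(q, i + 1)
                except ValueError:
                    return float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = p
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 4))   # 6 page faults, matching the example
```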
Allocation of Frames
o Each process needs a minimum number of allocated frames.
o If there is a single process, the entire available memory can be allocated to it.
o Multiprogramming puts two or more processes in memory at the same time.
o We must allocate a minimum number of frames to each process.
o Two major allocation schemes.
– fixed allocation
– priority allocation
Allocation of Frames(con’t…)
1. fixed partition
o Equal allocation – e.g., with 100 frames and 5 processes, give each process 20 frames.
o Proportional allocation – allocate frames according to the size of each process.
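Proportional allocation gives process i roughly a_i = (s_i / S) × m frames, where s_i is its size, S the total size, and m the number of free frames. A truncating sketch, with an illustrative 62-frame example:

```python
def proportional(sizes, m):
    """Allocate m frames to processes in proportion to their sizes.

    sizes -- list of process sizes (pages)
    m     -- total free frames
    """
    S = sum(sizes)
    return [s * m // S for s in sizes]   # a_i = floor(s_i / S * m)

# Illustrative: 62 free frames split between a 10-page and a 127-page process.
print(proportional([10, 127], 62))   # [4, 57]
```

Truncation can leave a few frames unassigned; a fuller implementation would distribute the remainder, subject to each process's minimum.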
Global vs. Local Allocation
o Global replacement – process selects a replacement frame
from the set of all frames; one process can take a frame
from another.
o Local replacement – each process selects from only its own
set of allocated frames.
o With local replacement # of frames does not change.
– Performance depends on the paging behavior of the
process.
– Free frames may not be used
o With global replacement, a process can take a frame from
another process.
– Performance depends not only on the paging behavior of that process, but also on the paging behavior of other processes.
o In practice global replacement is used.
Thrashing
o If a process does not have “enough” pages, the page-
fault rate is very high. This leads to:
– low CPU utilization.
– operating system thinks that it needs to increase the
degree of multiprogramming.
– another process is added to the system.
o Thrashing is this high paging activity: a thrashing process spends more time swapping pages in and out than executing.
o If a process does not have a number of frames equal to its number of actively used pages, it will very quickly page-fault.
o Since all its pages are in active use, it will page-fault again immediately.
Causes of thrashing
The OS monitors CPU utilization; if it is low, it increases the degree of multiprogramming.
Consider a process that enters a new execution phase and starts faulting:
– It takes frames from other processes.
– Since those processes need the pages taken from them, they also fault, taking frames from yet other processes.
– The queue for the paging device grows while the ready queue empties.
– CPU utilization decreases, so the OS admits even more processes.
Solution: provide process as many frames as it needs.
But how we know how many frames it needs ?
Locality model provides hope.
Locality model
Locality is a set of pages that are actively used together.
A program is composed of several different localities which
may overlap.
Ex: when a subroutine is called, it defines a new locality.
The locality model states that all the programs exhibit this
memory reference structure.
This is the main reason for caching and virtual memory!
If we allocate enough frames to a process to accommodate its current locality, it will fault until all the pages in that locality are in main memory; then it will not fault again until it changes locality.
If we allocate fewer frames than the current locality requires, the process will thrash.
Working-Set Model
Based on locality. Define a parameter Δ:
Δ ≡ working-set window ≡ a fixed number of recent page references.
Example: Δ = 10,000 instructions.
Only the most recent Δ references are examined.
WSSi (working-set size of process Pi) = number of distinct pages referenced in the most recent Δ references (it varies in time).
– If Δ is too small, it will not encompass the entire locality.
– If Δ is too large, it will encompass several localities.
– If Δ = ∞, it will encompass the entire program.
D = Σ WSSi = total demand for frames.
If D > m (the number of available frames) ⇒ thrashing.
Policy: if D > m, then suspend one of the processes.
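The working set at time t can be sketched as the set of distinct pages in the last Δ references; the reference string below is illustrative:

```python
def working_set(refs, t, delta):
    """Distinct pages referenced in the window of `delta` references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 5, 6, 2, 1, 2, 3, 7, 6, 7, 3, 2, 1, 2, 3, 6]
ws = working_set(refs, t=9, delta=10)
print(sorted(ws))   # pages in the window ending at t = 9
print(len(ws))      # WSS at time t
```

Summing `len(working_set(...))` over all processes gives D, which the OS compares against m to decide whether to admit or suspend a process.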
Working Set(con’t…)
The OS monitors the working set of each process and allocates to it enough frames to cover its working-set size.
If there are enough extra frames, another process can be initiated.
If D > m, the OS suspends a process, and its frames are reallocated to other processes.
The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible.
However, we have to keep track of the working set.
Keeping track of the working set:
Approximate with an interval timer plus a reference bit. Example: Δ = 10,000 references.
– The timer interrupts after every 5,000 time units.
– Keep 2 bits in memory for each page.
– Whenever the timer interrupts, copy the reference bits into the in-memory bits, then clear all reference bits to 0.
– If one of the in-memory bits is 1 ⇒ the page is in the working set.
Why is this not completely accurate?
– We cannot tell exactly when within the interval a reference occurred.
– Accuracy can be increased by increasing the frequency of interrupts, which also increases the cost.