
Main Memory

Chapter 9



Objectives

◼ Explain the difference between a logical and a physical address and the role
of the memory management unit (MMU) in translating addresses.
◼ Apply first-, best-, and worst-fit strategies for allocating memory
contiguously.
◼ Explain the distinction between internal and external fragmentation.
◼ Translate logical to physical addresses in a paging system that includes a
translation look-aside buffer (TLB).
◼ Describe hierarchical paging, hashed paging, and inverted page tables.



Chapter Outline

◼ Background

◼ Contiguous Memory Allocation

◼ Paging

◼ Structure of the Page Table

◼ Swapping (self read)

◼ Summary



- Background

◼ Program must be brought (from disk) into memory and placed within a
process for it to be run
◼ Main memory and registers are the only storage the CPU can access directly
◼ Register access is done in one CPU clock cycle
◼ Main memory access can take many cycles, causing a stall.
◼ The remedy is to add a cache that sits between main memory and the CPU
registers
◼ Protection of memory is required to ensure correct operation



-- Memory Space of a Process

◼ Each process has a separate memory space.


◼ Separate per-process memory space protects the processes from each other and is
fundamental to having multiple processes loaded in memory for concurrent
execution.
◼ To separate memory spaces, we need the ability to determine the range of legal
addresses that the process may access and to ensure that the process can access
only these legal addresses.
◼ This protection is usually provided by two registers, namely a base and a limit.
◼ The base register holds the smallest legal physical memory address.
◼ The limit register specifies the size of the range.



-- Base and Limit Registers

◼ For example: if the base register holds 300040 and the limit register is 120900,
then the program can legally access all addresses from 300040 through 420939
(inclusive).
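
◼ A minimal C sketch of this check, using the base and limit values from the example
above (the helper name is illustrative, not from the text):

```c
#include <stdbool.h>
#include <stdio.h>

/* Example values from the slide above. */
static const unsigned base  = 300040;
static const unsigned limit = 120900;

/* An address is legal iff base <= addr < base + limit. */
static bool is_legal(unsigned addr) {
    return addr >= base && addr < base + limit;
}

int main(void) {
    printf("%d %d %d\n", is_legal(300040), is_legal(420939), is_legal(420940));
    /* prints 1 1 0: 420939 is the last legal address, 420940 would trap */
    return 0;
}
```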



-- Protection of Memory Space

◼ Protection of memory space is accomplished by having the CPU hardware
compare every address generated in user mode with the base and limit
registers.
◼ Any attempt by a program executing in user mode to access OS memory or
other users' memory results in a trap to the OS, which treats the attempt as a
fatal error.
◼ This scheme prevents a user program from (accidentally or deliberately)
modifying the code or data structures of either the operating system or other
users.



-- Hardware Address Protection

◼ CPU must check every memory access generated in user mode to be sure
it is between base and limit for that user.

◼ The instructions to load the base and limit registers are privileged and
thus can be executed only by the OS.



-- Address Binding

◼ Usually, a program resides on a disk as a binary executable file.


◼ To run, the program must be brought into memory and placed within the context of
a process
◼ Most systems allow a user process to reside in any part of the physical memory.
◼ Although the address space of the computer may start at 00000, the first address of
the user process need not be 00000.
◼ Addresses in the source program are generally symbolic (such as the variable
count).
◼ A compiler typically binds these symbolic addresses to relocatable addresses (such
as "14 bytes from the beginning of this module").
◼ The linker or loader in turn binds the relocatable addresses to absolute addresses
(such as 74014).



-- Multistep Processing of a User Program



-- Binding of Instructions and Data to Memory

◼ Address binding of instructions and data to memory addresses can happen at
three different stages
◼ Compile time: If it is known at compile time where the process will reside in
memory, then absolute code can be generated.
◼ must recompile code if starting location changes
◼ Load time: If it is not known at compile time where the process will reside in
memory, then the compiler must generate relocatable code. In this case,
final binding is delayed until load time.
◼ Execution time: Binding is delayed until run time if the process can be
moved during its execution from one memory segment to another.
◼ Special hardware must be available for this scheme to work (e.g., base
and limit registers)



-- Logical vs. Physical Address Space

◼ An address generated by the CPU is commonly referred to as a logical or
virtual address, whereas an address seen by the memory unit is commonly
referred to as a physical address.
◼ The user program deals with logical addresses; it never sees the physical
addresses
◼ The run-time mapping from virtual to physical addresses is done by a
hardware device called the memory-management unit (MMU)



-- A Simple MMU Scheme

◼ There are many different methods to accomplish logical to physical address mapping.
◼ Consider a simple MMU scheme which is a generalization of the base-register scheme.
◼ The base register is now called the relocation register
◼ The value in the relocation register is added to every address generated by a user
process at the time it is sent to memory
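
◼ A small sketch of the relocation step; the 14000/346 values are only illustrative:

```c
#include <stdio.h>

int main(void) {
    unsigned relocation = 14000;   /* value in the relocation register */
    unsigned logical    = 346;     /* address generated by the user process */

    unsigned physical = logical + relocation;   /* done by the MMU on every access */
    printf("logical %u -> physical %u\n", logical, physical);  /* 346 -> 14346 */
    return 0;
}
```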



-- Dynamic Loading

◼ All routines are kept on disk in a relocatable load format

◼ The advantage of dynamic loading is that a routine is loaded only when it
is needed.

◼ Useful when large amounts of code are needed to handle infrequently
occurring cases.

◼ No special support from the operating system is required; dynamic loading is
implemented through program design.

◼ However, OSs may help the programmer by providing library routines
to implement dynamic loading.



-- Dynamic Linking

◼ Linking postponed until execution time

◼ Small piece of code, stub, used to locate the appropriate memory-resident
library routine

◼ Stub replaces itself with the address of the routine, and executes the
routine

◼ Without this facility, each program on a system must include a copy of its
language library

◼ Dynamically linked libraries (DLLs) are system libraries that are linked to
user programs when the programs are run.

◼ DLLs also known as shared libraries
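
◼ A minimal sketch of run-time loading and linking on a POSIX system using
dlopen/dlsym; the library and symbol names are illustrative (link with -ldl on Linux):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);     /* load the shared library */
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));         /* call the resolved routine */

    dlclose(handle);                                    /* unload when done */
    return 0;
}
```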



- Contiguous Allocation …

◼ Main memory is usually divided into two partitions:


◼ Resident OS, usually held in low memory with interrupt vector
◼ User processes then held in high memory
◼ Contiguous allocation is one early method
◼ Each process contained in single contiguous section of memory
◼ Relocation registers used to protect user processes from each other, and
from changing operating-system code and data
◼ Base register contains value of smallest physical address
◼ Limit register contains range of logical addresses – each logical address
must be less than the limit register
◼ MMU maps logical address dynamically
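
◼ Putting the two registers together, a sketch of the check the MMU performs on every
logical address under contiguous allocation (register values are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

static unsigned relocation = 100040;   /* smallest physical address of the process */
static unsigned limit      = 74600;    /* size of the logical address space */

unsigned mmu(unsigned logical) {
    if (logical >= limit) {            /* outside the process's legal range */
        fprintf(stderr, "trap: addressing error\n");
        exit(1);
    }
    return logical + relocation;       /* dynamically relocated physical address */
}
```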



-- Base and Limit Registers

◼ A pair of base and limit registers defines the logical address space



--- HW address protection with base and limit registers



-- Contiguous Allocation

◼ Multiple-partition allocation
◼ Hole – block of available memory; holes of various size are scattered
throughout memory
◼ When a process arrives, it is allocated memory from a hole large enough
to accommodate it
◼ Operating system maintains information about allocated partitions and
free partitions (holes)



-- Dynamic Storage-Allocation Problem

◼ How to satisfy a request of size n from a list of free holes

◼ First-fit: Allocate the first hole that is big enough

◼ Best-fit: Allocate the smallest hole that is big enough; must search entire list,
unless ordered by size

◼ Produces the smallest leftover hole

◼ Worst-fit: Allocate the largest hole; must also search entire list

◼ Produces the largest leftover hole

◼ First-fit and best-fit better than worst-fit in terms of speed and storage utilization
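
◼ A minimal sketch of the three strategies over a free-hole list; the data layout and
names are illustrative, not from the text:

```c
#include <stddef.h>

typedef struct { size_t start; size_t size; } hole_t;

/* Each function returns the index of the chosen hole, or -1 if none fits. */

int first_fit(hole_t *h, int n, size_t req) {
    for (int i = 0; i < n; i++)
        if (h[i].size >= req) return i;        /* first hole big enough */
    return -1;
}

int best_fit(hole_t *h, int n, size_t req) {
    int best = -1;
    for (int i = 0; i < n; i++)                /* must scan the whole list */
        if (h[i].size >= req && (best < 0 || h[i].size < h[best].size))
            best = i;                          /* smallest hole that fits */
    return best;
}

int worst_fit(hole_t *h, int n, size_t req) {
    int worst = -1;
    for (int i = 0; i < n; i++)                /* must scan the whole list */
        if (h[i].size >= req && (worst < 0 || h[i].size > h[worst].size))
            worst = i;                         /* largest hole that fits */
    return worst;
}
```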



-- Fragmentation

◼ External Fragmentation – total memory space exists to satisfy a request, but it is
not contiguous

◼ Internal Fragmentation – allocated memory may be slightly larger than requested
memory; this size difference is memory internal to a partition, but not being used

◼ Reduce external fragmentation by compaction

◼ Shuffle memory contents to place all free memory together in one large block

◼ Compaction is possible only if relocation is dynamic, and is done at execution
time



- Paging

◼ Logical address space of a process can be noncontiguous; process is allocated
physical memory whenever the latter is available

◼ Divide physical memory into fixed-sized blocks called frames (size is power of
2, between 512 bytes and 16 Mbytes)

◼ Divide logical memory into blocks of same size called pages

◼ Keep track of all free frames

◼ To run a program of size n pages, need to find n free frames and load program

◼ Backing store likewise split into pages

◼ Set up a page table to translate logical to physical addresses

◼ Still have Internal fragmentation
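
◼ A small worked example of the internal fragmentation noted in the last bullet; the
process and page sizes are chosen only for illustration:

```c
#include <stdio.h>

int main(void) {
    unsigned long page_size    = 2048;     /* bytes per page/frame */
    unsigned long process_size = 72766;    /* bytes */

    unsigned long pages = (process_size + page_size - 1) / page_size;  /* 36 frames */
    unsigned long internal_frag = pages * page_size - process_size;    /* 962 bytes */

    printf("frames needed: %lu, internal fragmentation: %lu bytes\n",
           pages, internal_frag);
    return 0;
}
```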



-- Address Translation Scheme

◼ In paging, address generated by CPU is divided into:

◼ Page number (p) – used as an index into a page table which contains
base address of each page in physical memory

◼ Page offset (d) – combined with base address to define the physical
memory address that is sent to the memory unit

page number p (m - n bits) | page offset d (n bits)

◼ For given logical address space 2^m and page size 2^n
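
◼ A sketch of the split for the illustrative values m = 16 and n = 10 (so the offset is
the low 10 bits and the page number is the remaining high bits):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    const unsigned m = 16, n = 10;             /* 2^16-byte logical space, 1 KB pages */
    uint32_t logical = 0x2ABC & ((1u << m) - 1);

    uint32_t d = logical & ((1u << n) - 1);    /* low n bits: page offset */
    uint32_t p = logical >> n;                 /* high m-n bits: page number */

    printf("p = %u, d = %u\n", p, d);          /* p = 10, d = 700 */
    return 0;
}
```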



-- Paging Hardware



-- Paging Model of Logical and Physical Memory



-- Paging Example

◼ Logical address:
◼ n=2
◼ m=4
◼ page size of 4 bytes
◼ physical memory of 32 bytes (8 frames)
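
◼ A worked translation for this example; the page-table contents (page 0 → frame 5,
1 → 6, 2 → 1, 3 → 2) are assumed here for illustration:

```c
#include <stdio.h>

int main(void) {
    const unsigned page_size = 4;                 /* 2^n bytes, n = 2 */
    const unsigned page_table[4] = {5, 6, 1, 2};  /* assumed mapping */

    unsigned logical = 13;                        /* page 3, offset 1 */
    unsigned p = logical / page_size;
    unsigned d = logical % page_size;
    unsigned physical = page_table[p] * page_size + d;

    printf("logical %u -> physical %u\n", logical, physical);  /* 13 -> 9 */
    return 0;
}
```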



-- Free Frames

(Figure: free-frame list before allocation and after allocation)



-- Implementation of Page Table

◼ Page table is kept in main memory

◼ Page-table base register (PTBR) points to the page table

◼ Page-table length register (PTLR) indicates size of the page table

◼ In this scheme every data/instruction access requires two memory accesses:
one for the page table and one for the data/instruction.

◼ The two-memory access problem can be solved by the use of a special fast-
lookup hardware cache called associative memory or translation look-aside
buffers (TLBs)

◼ TLBs are typically small (64 to 1,024 entries), so they may not hold a complete page
table.

◼ TLB entries can be searched in parallel
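
◼ A software model of the lookup (a real TLB compares all entries in parallel in
hardware; the names and sizes here are illustrative):

```c
#include <stdbool.h>

#define TLB_SIZE 64

typedef struct { unsigned page; unsigned frame; bool valid; } tlb_entry_t;
static tlb_entry_t tlb[TLB_SIZE];

/* Look up page p; fall back to the in-memory page table on a miss. */
unsigned translate(unsigned p, const unsigned *page_table, bool *hit) {
    for (int i = 0; i < TLB_SIZE; i++)        /* hardware checks all entries at once */
        if (tlb[i].valid && tlb[i].page == p) { *hit = true; return tlb[i].frame; }
    *hit = false;
    return page_table[p];                     /* the extra memory access on a miss */
}
```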



-- Paging Hardware With TLB



-- Effective Access Time (EAT)

◼ Hit ratio – percentage of times that a page number is found in the TLB

◼ An 80% hit ratio means that we find the desired page number in the TLB 80%
of the time.

◼ Suppose that it takes 10 nanoseconds to access memory.

◼ If we find the desired page number in the TLB, then a mapped-memory access takes
10 ns; otherwise we need two memory accesses, so it takes 20 ns

◼ Effective Access Time (EAT) for a hit ratio of 80% is:

◼ EAT = 0.80 x 10 + 0.20 x 20 = 12 nanoseconds

◼ Consider a more realistic hit ratio of 99%

◼ EAT = 0.99 x 10 + 0.01 x 20 = 10.1ns
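
◼ In general form (neglecting the TLB search time, as the numbers above do):
EAT = α × (hit-case access time) + (1 − α) × (miss-case access time), where α is the
hit ratio; here that is α × 10 + (1 − α) × 20 ns.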



-- Memory Protection

◼ Memory protection implemented by associating protection bit with each
frame

◼ Valid-invalid bit attached to each entry in the page table:

◼ “valid” indicates that the associated page is in the process’ logical
address space, and is thus a legal page

◼ “invalid” indicates that the page is not in the process’ logical address
space

◼ Any violations result in a trap to the kernel



--- Valid (v) or Invalid (i) Bit In A Page Table



-- Shared Pages

◼ Shared code
◼ One copy of read-only (reentrant) code shared among processes (e.g.,
text editors, compilers, window systems).
◼ Similar to multiple threads sharing the same process space
◼ Also useful for interprocess communication if sharing of read-write
pages is allowed

◼ Private code and data


◼ Each process keeps a separate copy of the code and data
◼ The pages for the private code and data can appear anywhere in the
logical address space



--- Shared Pages Example



- Structure of the Page Table

◼ Memory structures for paging can get huge using straight-forward methods

◼ Consider a 32-bit logical address space as on modern computers


◼ Page size of 4 KB (2^12)
◼ Page table would have 1 million entries (2^32 / 2^12 = 2^20)
◼ If each entry is 4 bytes, then each process needs 4 MB of physical address
space for the page table alone
◼ Don’t want to allocate that contiguously in main memory

◼ One simple solution is to divide the page table into smaller units and use:
◼ Hierarchical Page Tables
◼ Hashed Page Tables
◼ Inverted Page Tables



-- Hierarchical Page Tables

◼ Break up the logical address space into multiple page tables
◼ A simple technique is a two-level page table
◼ We then page the page table



--- Two-Level Paging Example
◼ A logical address (on 32-bit machine with 4K page size) is divided into:
◼ a page number consisting of 20 bits
◼ a page offset consisting of 12 bits
◼ Since the page table is paged, the page number is further divided into:
◼ a 10-bit page number (p1)
◼ a 10-bit page offset (p2)
◼ Thus, a logical address is laid out as: p1 (10 bits) | p2 (10 bits) | d (12 bits)

◼ where p1 is an index into the outer page table, and p2 is the displacement within
the page of the outer page table
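
◼ A minimal sketch of the two lookups for this 32-bit layout (the structures are
illustrative, not the book's code):

```c
#include <stdint.h>

/* outer[p1] points to an inner page table of 1024 frame numbers. */
uint32_t translate(uint32_t logical, uint32_t *outer[]) {
    uint32_t p1 = (logical >> 22) & 0x3FF;   /* top 10 bits: outer index */
    uint32_t p2 = (logical >> 12) & 0x3FF;   /* next 10 bits: inner index */
    uint32_t d  = logical & 0xFFF;           /* low 12 bits: page offset */

    uint32_t frame = outer[p1][p2];          /* two page-table accesses per reference */
    return (frame << 12) | d;
}
```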



--- Address-Translation Scheme



-- Hashed Page Tables

◼ Common in address spaces > 32 bits


◼ The virtual page number is hashed into a page table
◼ This page table contains a chain of elements hashing to the same location
◼ Each element contains:
◼ the virtual page number
◼ the value of the mapped page frame
◼ a pointer to the next element
◼ Virtual page numbers are compared in this chain searching for a match
◼ If a match is found, the corresponding physical frame is extracted
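
◼ A minimal sketch of a hashed page table lookup; the types, sizes, and hash function
are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

#define BUCKETS 4096

typedef struct hpt_entry {
    uint64_t vpn;              /* virtual page number */
    uint64_t frame;            /* mapped physical frame */
    struct hpt_entry *next;    /* next element hashing to the same bucket */
} hpt_entry_t;

static hpt_entry_t *hpt[BUCKETS];

/* Return 1 and set *frame on a match, 0 if the page is not mapped. */
int hpt_lookup(uint64_t vpn, uint64_t *frame) {
    for (hpt_entry_t *e = hpt[vpn % BUCKETS]; e != NULL; e = e->next)
        if (e->vpn == vpn) { *frame = e->frame; return 1; }
    return 0;
}
```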



--- Hashed Page Table



-- Inverted Page Table

◼ Rather than each process having a page table and keeping track of all possible
logical pages, track all physical pages

◼ One entry for each real page of memory

◼ Entry consists of the virtual address of the page stored in that real memory
location, with information about the process that owns that page

◼ Decreases memory needed to store each page table, but increases time needed
to search the table when a page reference occurs

◼ To limit search use:


◼ hash table
◼ TLB
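
◼ A minimal sketch of an inverted page table: one entry per physical frame, searched
by (pid, virtual page); the structures are illustrative:

```c
#include <stdint.h>

#define NFRAMES 65536

static struct ipt_entry { uint32_t pid; uint64_t vpn; } ipt[NFRAMES];

/* Linear search; returns the frame number, or -1 if the page is not resident. */
long ipt_lookup(uint32_t pid, uint64_t vpn) {
    for (long f = 0; f < NFRAMES; f++)
        if (ipt[f].pid == pid && ipt[f].vpn == vpn)
            return f;               /* the entry's index is the frame number */
    return -1;                      /* not in memory: page fault */
}
```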



--- Inverted Page Table Architecture



- Summary

◼ Background
◼ Logical & physical address
◼ Dynamic loading, dynamic linking
◼ Binding
◼ Compilation time, Loading time, Execution time
◼ Contiguous Allocation
◼ Base and Limit register
◼ First, best, worst fits, External fragmentation
◼ Paging
◼ Frame, Page, PTBR, PTLR, TLB, valid-invalid bit, Internal fragmentation
◼ Page table, Hierarchical, Hash, Inverted



Reading List

◼ 9.1 – 9.4
◼ The rest of the sections are self read.



Disclaimer

◼ Parts of the lecture slides contain original work of Abraham Silberschatz, Peter B.
Galvin, Greg Gagne, Andrew S. Tanenbaum, and Gary Nutt. The slides are intended
for the sole purpose of instruction of Operating Systems course at KFUPM. All
copyrighted materials belong to their original owner(s).


