Lecture 8
Topics covered:
Memory subsystem
Memory hierarchy
Memory hierarchy (contd..)
Cache memories
Locality of reference
Locality of reference (contd..)
Cache memories
[Figure: the processor communicates with the main memory through the cache.]
•When the processor issues a Read request, a block of words is transferred from the main memory
to the cache, one word at a time.
•Subsequent references to the data in this block of words are found in the cache.
•At any given time, only some blocks in the main memory are held in the cache. Which
blocks in the main memory are in the cache is determined by a “mapping function”.
•When the cache is full, and a block of words needs to be transferred from the main
memory, some block of words in the cache must be replaced. This is determined by a
“replacement algorithm”.
Cache memories (contd..)
•The processor does not need to know explicitly about the existence of the cache.
The processor issues Read and Write requests in the same manner, using normal
memory addresses.
•Read hit:
- The data is obtained from the cache.
•Write hit:
- Cache is a replica of the contents of the main memory.
- Contents of the cache and the main memory may be updated simultaneously.
This is the write-through protocol.
- Update the contents of the cache, and mark it as updated by setting a bit known
as the dirty bit or modified bit. The contents of the main memory are updated
when this block is replaced. This is write-back or copy-back protocol.
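The difference between the two protocols can be sketched in C. This is a minimal illustration, not material from the lecture; the cache_line_t structure and function names are hypothetical.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical cache line for illustration: a tag, a valid bit,
 * and a dirty (modified) bit used by the write-back protocol. */
typedef struct {
    uint32_t tag;
    bool     valid;
    bool     dirty;
    uint8_t  data[16 * 4];   /* 16 words of 4 bytes each */
} cache_line_t;

/* Write hit under the write-through protocol:
 * update the cache and the main memory at the same time. */
void write_hit_through(cache_line_t *line, int offset, uint8_t value,
                       uint8_t *main_memory, uint32_t mem_addr) {
    line->data[offset]    = value;   /* update cache       */
    main_memory[mem_addr] = value;   /* update main memory */
}

/* Write hit under the write-back protocol:
 * update only the cache and set the dirty bit; the main memory
 * is updated later, when this block is replaced. */
void write_hit_back(cache_line_t *line, int offset, uint8_t value) {
    line->data[offset] = value;
    line->dirty        = true;       /* mark block as modified */
}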
Cache memories (contd..)
•If the data is not present in the cache, then a Read miss or Write miss occurs.
•Read miss:
- Block of words containing this requested word is transferred from the memory.
- After the block is transferred, the desired word is forwarded to the processor.
- The desired word may also be forwarded to the processor as soon as it arrives,
without waiting for the entire block to be transferred. This is called
load-through or early-restart.
•Write miss:
- If Write-through protocol is used, then the contents of the main memory are
updated directly.
- If write-back protocol is used, the block containing the addressed word is first
brought into the cache. The desired word is overwritten with new information.
Cache memories (contd..)
•Data transfers between the main memory and disk occur directly, bypassing the cache.
Mapping functions
Direct mapping
[Figure: direct-mapped cache with 128 blocks; the main memory has 4096 blocks, and any one of 32 memory blocks can be mapped into each cache block.]
•Block j of the main memory maps to block j modulo 128 of the cache. Thus, block 0
maps to cache block 0, and block 129 maps to cache block 1.
•More than one memory block is mapped onto the same position in the cache.
•May lead to contention for cache blocks even if the cache is not full.
•Contention is resolved by allowing the new block to replace the old block, leading
to a trivial replacement algorithm.
•Memory address is divided into three fields:
- Low-order 4 bits determine one of the 16 words in a block.
- When a new block is brought into the cache, the next 7 bits determine which
cache block this new block is placed in.
- High-order 5 bits determine which of the possible 32 (4096/128) memory blocks
is currently present in the cache. These are the tag bits.
•Simple to implement, but not very flexible.

Main memory address: Tag (5 bits) | Block (7 bits) | Word (4 bits)
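For illustration (not part of the slides), the 5/7/4 field split corresponds to simple shift-and-mask operations on a 16-bit word address; the example address is arbitrary:

#include <stdint.h>
#include <stdio.h>

/* Split a 16-bit main memory address into the three fields of the
 * direct-mapped organization above: 5 tag bits, 7 block bits, 4 word bits. */
int main(void) {
    uint16_t addr  = 0x1234;               /* arbitrary example address */
    unsigned word  =  addr        & 0xF;   /* low-order 4 bits  */
    unsigned block = (addr >> 4)  & 0x7F;  /* next 7 bits       */
    unsigned tag   = (addr >> 11) & 0x1F;  /* high-order 5 bits */
    printf("tag=%u block=%u word=%u\n", tag, block, word);
    return 0;
}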
Associative mapping
[Figure: fully associative cache; any block of the 4096-block main memory can be placed in any of the 128 cache blocks.]
•Main memory block can be placed into any cache position.
•Memory address is divided into two fields:
- Low-order 4 bits identify the word within a block.
- High-order 12 bits, or tag bits, identify a memory block when it is resident
in the cache.
•Flexible, and uses cache space efficiently.
•Replacement algorithms can be used to replace an existing block in the cache
when the cache is full.
•Cost is higher than for a direct-mapped cache because of the need to search all
128 tag patterns to determine whether a given block is in the cache.

Main memory address: Tag (12 bits) | Word (4 bits)
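A minimal sketch of the associative search, assuming the same 16-word blocks; real hardware compares all tags in parallel, and the cache_line_t and find_block names are hypothetical:

#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 128

typedef struct {
    uint16_t tag;      /* 12-bit tag */
    bool     valid;
} cache_line_t;

/* Associative lookup: the 12-bit tag of the address is compared
 * against the tags of all 128 cache blocks. */
int find_block(const cache_line_t cache[NUM_BLOCKS], uint16_t addr) {
    uint16_t tag = addr >> 4;              /* high-order 12 bits */
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (cache[i].valid && cache[i].tag == tag)
            return i;                      /* hit: block i holds the data */
    }
    return -1;                             /* miss */
}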
Set-Associative mapping
[Figure: two-way set-associative cache; the 128 cache blocks are grouped into 64 sets of two blocks each.]
•Blocks of the cache are grouped into sets.
•Mapping function allows a block of the main memory to reside in any block of a
specific set.
•Divide the cache into 64 sets, with two blocks per set.
•Memory blocks 0, 64, 128, etc. map to set 0, and they can occupy either of the two
positions in that set.
•Memory address is divided into three fields:
- Low-order 4 bits identify the word within a block.
- The next 6-bit field determines the set number.
- The high-order 6-bit tag field is compared to the tag fields of the two blocks
in the set.
•Set-associative mapping is a combination of direct and associative mapping.
•Number of blocks per set is a design parameter.
- One extreme is to have all the blocks in one set, requiring no set bits
(fully associative mapping).
- The other extreme is to have one block per set, which is the same as direct
mapping.

Main memory address: Tag (6 bits) | Set (6 bits) | Word (4 bits)
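A corresponding sketch for the two-way set-associative lookup, again with hypothetical names and a sequential loop standing in for the parallel tag comparison:

#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS 64
#define WAYS      2

typedef struct {
    uint8_t tag;       /* 6-bit tag */
    bool    valid;
} cache_line_t;

/* Two-way set-associative lookup: the 6-bit set field selects one of
 * 64 sets, and the 6-bit tag is compared only against the two blocks
 * in that set. */
int find_block(const cache_line_t cache[NUM_SETS][WAYS], uint16_t addr) {
    unsigned set = (addr >> 4)  & 0x3F;    /* 6-bit set field */
    unsigned tag = (addr >> 10) & 0x3F;    /* 6-bit tag field */
    for (int way = 0; way < WAYS; way++) {
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return way;                    /* hit in this way */
    }
    return -1;                             /* miss */
}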
Replacement algorithms
•In a direct-mapped cache, the position that each memory block occupies in the cache
is fixed. As a result, the replacement strategy is trivial.
•Associative and set-associative mapping provide some flexibility in deciding which
memory block occupies which cache block.
•When a new block is to be transferred to the cache, and all the positions it may
occupy are full, which block in the cache should be replaced?
•Locality of reference suggests that it may be okay to replace the block that has
gone the longest time without being referenced.
•This block is called the Least Recently Used (LRU) block, and the replacement
strategy is called the LRU replacement algorithm.
•The LRU algorithm has been used extensively. It provides poor performance in some
cases.
•Performance of the LRU algorithm may be improved by introducing a small amount of
randomness into the choice of which block to replace.
•Other replacement algorithms include removing the "oldest" block. However, they
disregard the locality of reference principle and do not perform as well.
LRU replacement algorithm
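A minimal sketch of one common LRU implementation, using per-block age counters; the scheme and names here are illustrative, not taken from the lecture:

#include <stdint.h>

#define WAYS 4

/* LRU bookkeeping for one cache set: on every reference the referenced
 * block's age is reset to 0 and the others are incremented; the victim
 * is the block with the largest age. */
typedef struct {
    uint32_t age[WAYS];
} lru_set_t;

void lru_touch(lru_set_t *set, int way) {
    for (int i = 0; i < WAYS; i++)
        set->age[i]++;
    set->age[way] = 0;             /* most recently used */
}

int lru_victim(const lru_set_t *set) {
    int victim = 0;
    for (int i = 1; i < WAYS; i++)
        if (set->age[i] > set->age[victim])
            victim = i;            /* least recently used */
    return victim;
}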
Performance considerations
Interleaving
Interleaving (contd..)
MM address: Module (k bits) | Address in module (m bits)
•The high-order k bits select the module and the low-order m bits select the word
within the module, so consecutive addresses lie in the same module.
Interleaving (contd..)
m bits k bits
21
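The two address splits can be compared with a small C example; the module count (4) and words per module (16) are arbitrary choices for illustration:

#include <stdio.h>

/* Compare the two ways of splitting an address among 2^k = 4 modules
 * with 2^m = 16 words per module (k = 2, m = 4). */
int main(void) {
    unsigned addr = 0x2B;                      /* arbitrary example address */
    /* High-order bits select the module: consecutive words in one module. */
    unsigned mod_hi  = (addr >> 4) & 0x3;
    unsigned word_hi =  addr       & 0xF;
    /* Low-order bits select the module: consecutive words are interleaved. */
    unsigned mod_lo  =  addr       & 0x3;
    unsigned word_lo = (addr >> 2) & 0xF;
    printf("consecutive: module=%u word=%u\n", mod_hi, word_hi);
    printf("interleaved: module=%u word=%u\n", mod_lo, word_lo);
    return 0;
}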
Memory interleaving example
Performance enhancements
Write buffer
Write-through:
•Each write operation involves writing to the main memory.
•If the processor has to wait for each write operation to complete, it is slowed
down.
•The processor does not immediately depend on the result of a write operation, so it
need not wait for it to complete.
•Write buffer can be included for temporary storage of write requests.
•Processor places each write request into the buffer and continues execution.
•If a subsequent Read request references data which is still in the write buffer, then
this data is referenced in the write buffer.
Write-back:
•Block is written back to the main memory when it is replaced.
•If the processor waits for this write to complete before reading the new block, it
is slowed down.
•Fast write buffer can hold the block to be written, and the new block can be read first.
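A toy model of the read-path check against a write buffer, with hypothetical names and a word-addressed memory array for simplicity:

#include <stdbool.h>
#include <stdint.h>

#define BUF_SIZE 8

/* A tiny write buffer: pending writes wait here while the processor
 * continues execution. A subsequent Read must check the buffer first,
 * since it may hold a newer value than the main memory. */
typedef struct {
    uint32_t addr[BUF_SIZE];
    uint32_t data[BUF_SIZE];
    int      count;
} write_buffer_t;

bool read_with_buffer(const write_buffer_t *wb, const uint32_t *memory,
                      uint32_t addr, uint32_t *value) {
    for (int i = wb->count - 1; i >= 0; i--) {   /* newest entry first  */
        if (wb->addr[i] == addr) {
            *value = wb->data[i];                /* serviced by buffer  */
            return true;
        }
    }
    *value = memory[addr];                       /* fall back to memory */
    return false;
}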
Performance enhancements
Prefetching
•New data are brought into the processor when they are first needed.
•The processor has to wait until the data transfer is complete.
•Prefetch the data into the cache before they are actually needed, or before a Read
miss occurs.
•Ideally, prefetching occurs while the processor is busy executing instructions that
do not result in a read miss.
•Prefetching can be accomplished through software by including a special instruction
in the machine language of the processor.
- Inclusion of prefetch instructions increases the length of the programs.
•Prefetching can also be accomplished using hardware:
- Circuitry that attempts to discover patterns in memory references and then
prefetches according to this pattern.
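As an example of the software approach, GCC and Clang expose the __builtin_prefetch intrinsic, which compiles to the target's prefetch instruction where one exists; the look-ahead distance of 16 elements below is an arbitrary choice:

#define N 1024

/* Sum an array, prefetching elements 16 iterations ahead so the data
 * arrives in the cache before it is actually needed. */
double sum_with_prefetch(const double a[N]) {
    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        if (i + 16 < N)
            __builtin_prefetch(&a[i + 16], 0, 1);  /* fetch ahead, for reading */
        sum += a[i];
    }
    return sum;
}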
Performance enhancements
Lockup-Free Cache
•The prefetching scheme does not work well if it stops other accesses to the cache
until the prefetch is completed.
•A cache of this type is said to be "locked" while it services a miss.
•A cache structure that supports multiple outstanding misses is called a lockup-free
cache.
•Since only one miss can actually be serviced at a time, a lockup-free cache must
include circuitry that keeps track of all the outstanding misses.
•Special registers may hold the necessary information about these misses.
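Such registers are commonly called miss status holding registers (MSHRs). A rough model, with hypothetical fields and names:

#include <stdbool.h>
#include <stdint.h>

#define MAX_MISSES 4

/* One register per outstanding miss, recording which block is being
 * fetched and where the data goes when it arrives. */
typedef struct {
    bool     valid;        /* entry tracks an outstanding miss */
    uint32_t block_addr;   /* which block is being fetched     */
    uint8_t  dest_reg;     /* destination of the missed word   */
} mshr_t;

/* Returns a free MSHR index, or -1 if the maximum number of misses is
 * already outstanding, in which case the cache must stall like an
 * ordinary locked cache. */
int allocate_mshr(mshr_t mshrs[MAX_MISSES]) {
    for (int i = 0; i < MAX_MISSES; i++)
        if (!mshrs[i].valid)
            return i;
    return -1;
}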
Virtual memories
Virtual memories (contd..)
Virtual memory
Virtual (logical) address
Virtual memory organization
[Figure: the processor issues a virtual address to the MMU; the MMU produces a physical address that is used to access the cache and the main memory; data moves between disk storage and the main memory by DMA transfer.]
•Memory management unit (MMU) translates virtual addresses into physical addresses.
•If the desired data or instructions are in the main memory, they are fetched as
described previously.
•If the desired data or instructions are not in the main memory, they must be
transferred from secondary storage to the main memory.
•The MMU causes the operating system to bring the data from the secondary storage
into the main memory.
Address translation
Address translation (contd..)
[Figure: the page table base register (PTBR) holds the address of the page table. The virtual address from the processor is interpreted as a virtual page number plus an offset; the page table address and the virtual page number are added to index the PAGE TABLE.]
•PTBR holds the address of the page table.
•Virtual address is interpreted as a page number and an offset.
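A minimal sketch of this translation for a single-level page table, assuming 4 KB pages; the pte_t layout and names are hypothetical:

#include <stdint.h>

#define PAGE_SHIFT 12                 /* 4 KB pages: 12-bit offset */
#define PAGE_MASK  0xFFFu

/* A hypothetical page table entry: a frame number plus a validity bit. */
typedef struct {
    uint32_t frame;                   /* physical frame number */
    int      valid;
} pte_t;

/* Translate a virtual address as in the figure: the virtual page number
 * indexes the page table whose base address is held in the (simulated)
 * page table base register. */
uint32_t translate(const pte_t *page_table /* from PTBR */, uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;    /* virtual page number */
    uint32_t offset = vaddr & PAGE_MASK;      /* offset within page  */
    const pte_t *pte = &page_table[vpn];
    if (!pte->valid)
        return (uint32_t)-1;                  /* page fault: OS must load the page */
    return (pte->frame << PAGE_SHIFT) | offset;
}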
Address translation (contd..)
❑ Page table entry for a page also includes some control bits
which describe the status of the page while it is in the main
memory.
❑ One bit indicates the validity of the page.
◆ Indicates whether the page is actually loaded into the main
memory.
◆ Allows the operating system to invalidate the page without actually
removing it (much as a quick disk format marks contents invalid without
erasing them, unlike a full format).
❑ One bit indicates whether the page has been modified
during its residency in the main memory.
◆ This bit determines whether the page should be written back to
the disk when it is removed from the main memory.
◆ Similar to the dirty or modified bit in case of cache memory.
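A sketch of how such control bits might be packed into a page table entry; the bit positions and names are hypothetical:

#include <stdint.h>

#define PTE_VALID    (1u << 0)   /* page is loaded in main memory   */
#define PTE_MODIFIED (1u << 1)   /* page was written while resident */

/* On a write to the page, set the modified (dirty) bit. */
void mark_modified(uint32_t *pte) { *pte |= PTE_MODIFIED; }

/* When the page is evicted, write it back to disk only if modified. */
int needs_writeback(uint32_t pte) {
    return (pte & PTE_VALID) && (pte & PTE_MODIFIED);
}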
Address translation (contd..)
For a 4 KB page, you require 12 bits of offset (4K = 4 * 1024 = 4096 = 2^12).
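A quick check of this split on an arbitrary example address:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t vaddr  = 0x12345ABC;        /* arbitrary virtual address     */
    uint32_t offset = vaddr & 0xFFF;     /* low-order 12 bits: 0xABC      */
    uint32_t vpn    = vaddr >> 12;       /* remaining bits:    0x12345    */
    printf("vpn=0x%X offset=0x%X\n", (unsigned)vpn, (unsigned)offset);
    return 0;
}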