
Large and Fast:

Exploiting Memory
Hierarchy
CS353 – Computer Architecture

Najeeb-Ur-Rehman
Assistant Professor
Department of Computer Science
Faculty of Computing & IT
University of Gujrat
§5.1 Introduction
Memory Technology
• Static RAM (SRAM)
• 0.5ns – 2.5ns, $2000 – $5000 per GB
• Dynamic RAM (DRAM)
• 50ns – 70ns, $20 – $75 per GB
• Magnetic disk
• 5ms – 20ms, $0.20 – $2 per GB
• Ideal memory
• Access time of SRAM
• Capacity and cost/GB of disk

15
Principle of Locality

• Programs access a small proportion of their address space at any time


• Temporal locality (locality in time)
• Items accessed recently are likely to be accessed again soon
• e.g., instructions in a loop, induction variables
• Spatial locality (locality in space)
• Items near those accessed recently are likely to be accessed soon
• E.g., sequential instruction access, array data

16
Principle of Locality of Reference

• Programs access small portion of their address space


• At any time, only a small set of instructions & data is needed
• Temporal Locality (in time)
• If an item is accessed, probably it will be accessed again soon
• Same loop instructions are fetched each iteration
• Same procedure may be called and executed many times
• Spatial Locality (in space)
• Tendency to access contiguous instructions/data in memory
• Sequential execution of Instructions
• Traversing arrays element by element

17
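An illustrative sketch (Python, not from the slides) of both kinds of locality in one loop: the running total and the loop body are reused every iteration (temporal locality), and the list is swept in address order (spatial locality).

# Hypothetical example: both kinds of locality in one loop.
data = list(range(1_000_000))

total = 0                 # total and the loop instructions are reused
for x in data:            # every iteration -> temporal locality
    total += x            # elements visited in consecutive memory order -> spatial locality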
Taking Advantage of Locality

• Memory hierarchy
• Store everything on disk
• Copy recently accessed (and nearby) items from disk to smaller DRAM
memory
• Main memory
• Copy more recently accessed (and nearby) items from DRAM to
smaller SRAM memory
• Cache memory attached to CPU

21
Memory Hierarchy Levels
• Block (aka line): unit of copying
• May be multiple words
• If accessed data is present in upper
level
• Hit: access satisfied by upper level
• Hit ratio: hits/accesses
• If accessed data is absent
• Miss: block copied from lower level
• Time taken: miss penalty
• Miss ratio: misses/accesses
= 1 – hit ratio
• Then accessed data supplied from upper
level
Basic Terminology
• Cache -- name given to the first level of memory seen
from the CPU
• Miss rate -- fraction of memory accesses that are not in
the cache
• Miss penalty -- additional clock cycles needed to service a
cache miss
• Hit time -- time to hit in the cache
• Block -- smallest unit of data that can be accessed from
main memory
23
§5.2 The Basics of Caches
Cache Memory
• Cache memory
• The level of the memory hierarchy closest to the CPU
• Given accesses X1, …, Xn–1, Xn

• How do we know if the data is present?
• Where do we look?

24
The Need for Cache Memory
• Widening speed gap between CPU and main memory
• Processor operation takes less than 1 ns
• Main memory requires more than 50 ns to access

• Each instruction involves at least one memory access


• One memory access to fetch the instruction
• A second memory access for load and store instructions

• Memory bandwidth limits the instruction execution rate


• Cache memory can help bridge the CPU-memory gap
• Cache memory is small in size but fast

25
Typical Memory Hierarchy
• Registers are at the top of the hierarchy
• Typical size < 1 KB
• Access time < 0.5 ns
• Level 1 Cache (8 – 64 KB)
• Access time: 1 ns
• L2 Cache (512 KB – 8 MB)
• Access time: 3 – 10 ns
• Main Memory (4 – 16 GB)
• Access time: 50 – 100 ns
• Disk Storage (> 200 GB)
• Access time: 5 – 10 ms
(Figure: Microprocessor with Registers → L1 Cache → L2 Cache, connected over the Memory Bus to Main Memory and over the I/O Bus to Magnetic or Flash Disk; levels get bigger going down and faster going up.)
26
What is a Cache Memory ?
• Small and fast (SRAM) memory technology
• Stores the subset of instructions & data currently being accessed
• Used to reduce average access time to memory
• Caches exploit temporal locality by …
• Keeping recently accessed data closer to the processor
• Caches exploit spatial locality by …
• Moving blocks consisting of multiple contiguous words
• Goal is to achieve
• Fast speed of cache memory access
• Balance the cost of the memory system

27
Almost Everything is a Cache !

• In computer architecture, almost everything is a cache!


• Registers: a cache on variables – software managed
• First-level cache: a cache on second-level cache
• Second-level cache: a cache on memory
• Memory: a cache on hard disk
• Stores recent programs and their data
• Hard disk can be viewed as an extension to main memory
• Branch target and prediction buffer
• Cache on branch target and prediction information

28
Four Basic Questions on Caches
• Q1: Where can a block be placed in a cache?
• Block placement
• Direct Mapped, Set Associative, Fully Associative
• Q2: How is a block found in a cache?
• Block identification
• Block address, tag, index
• Q3: Which block should be replaced on a miss?
• Block replacement
• FIFO, Random, LRU
• Q4: What happens on a write?
• Write strategy
• Write Back or Write Through (with Write Buffer)

29
Block Placement: Direct Mapped
• Block: unit of data transfer between cache and memory
• Direct Mapped Cache:
• A block can be placed in exactly one location in the cache

(Figure: an 8-entry cache with indices 000–111 and a 32-word main memory with 5-bit addresses 00000–11111; each memory address maps to exactly one cache entry.)

In this example:
Cache index = least significant 3 bits of Memory address

30
Direct-Mapped Cache
• A memory address is divided into
• Block address: identifies the block in memory
• Block offset: to access bytes within a block
• A block address is further divided into
• Index: used for direct cache access
• Tag: most-significant bits of the block address
• Index = Block Address mod #Cache Blocks
• Tag must also be stored inside the cache
• For block identification
• A valid bit is also required to indicate
• Whether a cache block is valid or not
(Figure: address split into Tag | Index | Offset; the index selects a cache entry holding V, Tag, and Block Data; the stored tag is compared against the address tag to produce Hit and Data.)

31
Direct Mapped Cache – cont’d
• Cache hit: block is stored inside cache
• Index is used to access the cache block
• Address tag is compared against the stored tag
• If equal and the cache block is valid, then hit
• Otherwise: cache miss
• If the number of cache blocks is 2^n
• n bits are used for the cache index
• If the number of bytes in a block is 2^b
• b bits are used for the block offset
• If 32 bits are used for an address
• 32 – n – b bits are used for the tag
• Cache data size = 2^(n+b) bytes
(Figure: same Tag | Index | Offset datapath as the previous slide.)

32
Direct Mapping: 6-bit Address Example

Cache:
Index  Valid  Tag  Data
00     Y      00   5600
01     Y      11   775
10     Y      01   845
11     N      00   33234

Main Memory (6-bit address = Tag | Index | word offset; the offset bits are always zero for word-aligned words):
Address    Data
00 00 00   5600
00 01 00   3223
00 10 00   23
00 11 00   1122
01 00 00   0
01 01 00   32324
01 10 00   845
01 11 00   43
10 00 00   976
10 01 00   77554
10 10 00   433
10 11 00   7785
11 00 00   2447
11 01 00   775
11 10 00   433
11 11 00   3649

In a direct-mapped cache:
• Each memory address corresponds to one location in the cache
• There are many different memory locations for each cache entry (four in this case)

33
Direct Mapped Cache
• Location determined by address
• Direct mapped: only one choice
• (Block address) modulo (#Blocks in cache)

• #Blocks is a power of 2
• Use low-order address bits

34
Direct Mapping - Address Structure

Tag (s–r): 8 bits | Line or Slot (r): 14 bits | Word/Offset (w): 2 bits

• Find Cache Size?
• 24-bit address
• 2-bit word identifier (4-byte block)
• 22-bit block identifier
• 14-bit slot or line
• 8-bit tag (= 22 – 14)
• Cache size = 2^14 lines × 4 bytes/line = 64 KB
• No two blocks in the same line have the same Tag field
• Check contents of cache by finding line and checking Tag

35
Direct Mapping - Cache Line Table

• The mapping is expressed as:

i = j modulo m

where
i = cache line number
j = main memory block number
m = number of lines in the cache

For example:
Cache line   Main memory blocks held
0            0, m, 2m, 3m, …, 2^s – m
1            1, m+1, 2m+1, …, 2^s – m + 1
…
m–1          m–1, 2m–1, 3m–1, …, 2^s – 1

37
Tags and Valid Bits

• How do we know which particular block is stored in a cache location?


• Store block address as well as the data
• Actually, only need the high-order bits
• Called the tag
• What if there is no data in a location?
• Valid bit: 1 = present, 0 = not present
• Initially 0

38
Cache Example
• 8-blocks, 1 word/block, direct mapped
• Initial state

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 N
111 N

39
Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Miss 110

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

40
Cache Example
Word addr Binary addr Hit/miss Cache block
26 11 010 Miss 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

41
Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Hit 110
26 11 010 Hit 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

42
Cache Example
Word addr Binary addr Hit/miss Cache block
16 10 000 Miss 000
3 00 011 Miss 011
16 10 000 Hit 000

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 11 Mem[11010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N

43
Cache Example
Word addr Binary addr Hit/miss Cache block
18 10 010 Miss 010

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 10 Mem[10010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N

44
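The access sequence above can be replayed in a few lines; a minimal Python sketch of this 8-block, one-word-per-block direct-mapped cache (the function name is illustrative):

# Direct-mapped cache: 8 blocks, 1 word/block (word addresses, as above).
cache = [None] * 8                      # each entry holds a tag, or None if invalid

def access(word_addr):
    index, tag = word_addr % 8, word_addr // 8
    hit = cache[index] == tag
    if not hit:
        cache[index] = tag              # miss: copy the block in from memory
    return "Hit" if hit else "Miss"

for a in [22, 26, 22, 26, 16, 3, 16, 18]:
    print(a, format(a, "05b"), access(a))   # reproduces the Miss/Hit column above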
Mapping an Address to a Cache Block

• Example
• Consider a direct-mapped cache with 256 blocks
• Block size = 16 bytes
• Compute tag, index, and block offset of address: 0x01FFF8AC

• Solution
• 32-bit address is divided into: Tag (20 bits) | Index (8 bits) | Offset (4 bits)
• 4-bit byte offset field, because block size = 2^4 = 16 bytes
• 8-bit cache index, because there are 2^8 = 256 blocks in cache
• 20-bit tag field
• Byte offset = 0xC = 12 (least significant 4 bits of address)
• Cache index = 0x8A = 138 (next lower 8 bits of address)
• Tag = 0x01FFF (upper 20 bits of address)

45
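The field extraction can be checked with shifts and masks; a minimal Python sketch, assuming the example's 16-byte blocks and 256-block cache:

# Decompose a 32-bit address into tag / index / offset (direct-mapped cache).
ADDR = 0x01FFF8AC
OFFSET_BITS = 4            # block size = 16 bytes -> 4 offset bits
INDEX_BITS  = 8            # 256 blocks in cache   -> 8 index bits

offset = ADDR & ((1 << OFFSET_BITS) - 1)
index  = (ADDR >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
tag    = ADDR >> (OFFSET_BITS + INDEX_BITS)

print(hex(tag), hex(index), hex(offset))   # 0x1fff 0x8a 0xc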
Example: Larger Block Size
• 64 blocks, 16 bytes/block
• Tag? Index? Block Offset?
• To what block number does address 1200 map?
• Block address = 1200/16 = 75
• Block number = 75 modulo 64 = 11

Address layout: Tag = bits 31–10 (22 bits), Index = bits 9–4 (6 bits), Offset = bits 3–0 (4 bits)
47
Example: Derive Index (Block) and Offset (Word) bits
Given:
Memory size = 256 KB = 2^18
Cache size = 64 KB = 2^16
Block size = 32 bytes = 2^5

Number of blocks in cache = Cache size / Block size
= 64 KB / 32 B = 2^16 / 2^5 = 2^11

Number of bits in Tag = Total bits – Index bits – Offset bits = 18 – 11 – 5 = 2
48
Block Size Considerations
• Larger blocks should reduce miss rate
• Due to spatial locality
• But in a fixed-sized cache
• Larger blocks → fewer of them
• More competition → increased miss rate
• Larger blocks → pollution
• Larger miss penalty
• Can override benefit of reduced miss rate
• Early restart and critical-word-first can help
49
Cache Misses
• On cache hit, CPU proceeds normally
• On cache miss
• Stall the CPU pipeline
• Fetch block from next level of hierarchy
• Instruction cache miss
• Restart instruction fetch
• Data cache miss
• Complete data access
50
Write-Through

• On data-write hit, could just update the block in cache


• But then cache and memory would be inconsistent
• Write through: also update memory
• But makes writes take longer
• e.g., if base CPI = 1, 10% of instructions are stores, write to memory takes 100
cycles
• Effective CPI = 1 + 0.1×100 = 11
• Solution: write buffer
• Holds data waiting to be written to memory
• CPU continues immediately
• Only stalls on write if write buffer is already full

51
Write-Back

• Alternative: On data-write hit, just update the block in cache


• Keep track of whether each block is dirty
• When a dirty block is replaced
• Write it back to memory
• Can use a write buffer to allow replacing block to be read first

52
Write Allocation
• What should happen on a write miss?
• Alternatives for write-through
• Allocate on miss: fetch the block
• Write around: don’t fetch the block
• Since programs often write a whole block before reading it (e.g., initialization)
• For write-back
• Usually fetch the block

53
Example: Intrinsity FastMATH

• Embedded MIPS processor


• 12-stage pipeline
• Instruction and data access on each cycle
• Split cache: separate I-cache and D-cache
• Each 16KB: 256 blocks × 16 words/block
• D-cache: write-through or write-back
• SPEC2000 miss rates
• I-cache: 0.4%
• D-cache: 11.4%
• Weighted average: 3.2%

54
Main Memory Supporting Caches
• Use DRAMs for main memory
• Fixed width (e.g., 1 word)
• Connected by fixed-width clocked bus
• Bus clock is typically slower than CPU clock
• Example cache block read
• 1 bus cycle for address transfer
• 15 bus cycles per DRAM access
• 1 bus cycle per data transfer
• For 4-word block, 1-word-wide DRAM
• Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles
• Bandwidth = 16 bytes / 65 cycles = 0.25 B/cycle

56
Increasing Memory Bandwidth
• 4-bank interleaved memory
• Miss penalty = 1 + 15 + 4×1 = 20 bus cycles
• Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle
57
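A small sketch of the arithmetic behind the two miss penalties (all values taken from the examples above):

# Miss penalty in bus cycles for a 4-word block.
ADDR, DRAM, XFER, WORDS = 1, 15, 1, 4

narrow      = ADDR + WORDS * DRAM + WORDS * XFER   # 1-word-wide DRAM: 65 cycles
interleaved = ADDR + DRAM + WORDS * XFER           # 4 banks overlap DRAM access: 20 cycles

for penalty in (narrow, interleaved):
    print(penalty, "cycles,", 16 / penalty, "B/cycle")   # slide rounds 16/65 to 0.25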
Advanced DRAM Organization
• Bits in a DRAM are organized as a
rectangular array
• DRAM accesses an entire row
• Burst mode: supply successive words from a
row with reduced latency
• Double data rate (DDR) DRAM
• Transfer on rising and falling clock edges
• Quad data rate (QDR) DRAM
• Separate DDR inputs and outputs

58
DRAM Generations

Year   Capacity   $/GB
1980   64 Kbit    $1,500,000
1983   256 Kbit   $500,000
1985   1 Mbit     $200,000
1989   4 Mbit     $50,000
1992   16 Mbit    $15,000
1996   64 Mbit    $10,000
1998   128 Mbit   $4,000
2000   256 Mbit   $1,000
2004   512 Mbit   $250
2007   1 Gbit     $50

(Figure: Trac (row access) and Tcac (column access) times in ns, declining steadily across these generations.)
59
Measuring Cache Performance
• Components of CPU time
• Program execution cycles
• Includes cache hit time
• Memory stall cycles
• Mainly from cache misses
• With simplifying assumptions:

Memory stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty
                    = (Instructions / Program) × (Misses / Instruction) × Miss penalty

60
Cache Performance Example
• Given
• I-cache miss rate = 2%
• D-cache miss rate = 4%
• Miss penalty = 100 cycles
• Base CPI (ideal cache) = 2
• Load & stores are 36% of instructions
• Miss cycles per instruction
• I-cache: 0.02 × 100 = 2
• D-cache: 0.36 × 0.04 × 100 = 1.44
• Actual CPI = 2 + 2 + 1.44 = 5.44
• Ideal CPU is 5.44/2 = 2.72 times faster

61
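The same calculation as a Python sketch (all numbers from the example above; variable names are illustrative):

# CPI including memory stall cycles.
base_cpi     = 2
miss_penalty = 100
i_miss, d_miss, ls_frac = 0.02, 0.04, 0.36   # I-miss rate, D-miss rate, load/store fraction

cpi = base_cpi + i_miss * miss_penalty + ls_frac * d_miss * miss_penalty
print(cpi, "-> slowdown vs ideal cache:", cpi / base_cpi)   # 5.44, 2.72x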
Average Access Time
• Hit time is also important for performance
• Average Memory Access Time (AMAT)
• AMAT = Hit time + Miss rate × Miss penalty
• Example
• CPU with 1ns clock, hit time = 1 cycle, miss penalty = 20
cycles, I-cache miss rate = 5%
• AMAT = ?
• AMAT = 1 + 0.05 × 20 = 2 ns (2 cycles per access at the 1 ns clock)
62
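AMAT as a one-line helper (illustrative sketch), applied to the example above:

def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in the same units as hit_time."""
    return hit_time + miss_rate * miss_penalty

print(amat(1, 0.05, 20))   # 2 cycles = 2 ns with a 1 ns clock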
Performance Summary

• When CPU performance increased


• Miss penalty becomes more significant
• Decreasing base CPI
• Greater proportion of time spent on memory stalls
• Increasing clock rate
• Memory stalls account for more CPU cycles
• Can’t neglect cache behavior when evaluating system performance

63
Associative Caches
• Fully associative
• Allow a given block to go in any cache entry
• Requires all entries to be searched at once
• Comparator per entry (expensive)
• n-way set associative
• Each set contains n entries
• Block number determines which set
• (Block number) modulo (#Sets in cache)
• Search all entries in a given set at once
• n comparators (less expensive)
64
Associative Cache Example

65
Spectrum of Associativity
• For a cache with 8 entries

66
Associativity Example
• Compare 4-block caches
• Direct mapped, 2-way set associative, fully associative
• Block access sequence: 0, 8, 0, 6, 8

• Direct mapped

Block addr  Cache index  Hit/miss  Cache content after access (blocks 0–3)
0           0            miss      Mem[0]
8           0            miss      Mem[8]
0           0            miss      Mem[0]
6           2            miss      Mem[0], Mem[6]
8           0            miss      Mem[8], Mem[6]

67
Associativity Example
• 2-way set associative (all five blocks map to set 0; set 1 stays empty)

Block addr  Cache index  Hit/miss  Set 0 contents after access
0           0            miss      Mem[0]
8           0            miss      Mem[0], Mem[8]
0           0            hit       Mem[0], Mem[8]
6           0            miss      Mem[0], Mem[6]
8           0            miss      Mem[8], Mem[6]

• Fully associative

Block addr  Hit/miss  Cache content after access
0           miss      Mem[0]
8           miss      Mem[0], Mem[8]
0           hit       Mem[0], Mem[8]
6           miss      Mem[0], Mem[8], Mem[6]
8           hit       Mem[0], Mem[8], Mem[6]

68
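All three organizations can be replayed with one small LRU simulator; an illustrative Python sketch where num_sets = 4 gives direct mapped, 2 gives 2-way set associative, and 1 gives fully associative:

# n-way set-associative cache of 4 blocks with LRU replacement per set.
def run(num_sets, seq=(0, 8, 0, 6, 8), blocks=4):
    ways = blocks // num_sets
    sets = [[] for _ in range(num_sets)]        # each set lists block addrs, LRU first
    for b in seq:
        s = sets[b % num_sets]                  # block number modulo #sets picks the set
        if b in s:
            s.remove(b); s.append(b); print(b, "hit")
        else:
            if len(s) == ways:
                s.pop(0)                        # evict the least recently used block
            s.append(b); print(b, "miss")

run(4)   # direct mapped:       5 misses
run(2)   # 2-way set assoc.:    4 misses
run(1)   # fully associative:   3 misses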
How Much Associativity
• Increased associativity decreases miss rate
• But with diminishing returns
• Simulation of a system with 64KB D-cache, 16-word blocks, SPEC2000
• 1-way: 10.3%
• 2-way: 8.6%
• 4-way: 8.3%
• 8-way: 8.1%
69
Associative Mapping

• A main memory block can load into any line of cache


• Memory address is interpreted as tag and word
• Tag uniquely identifies block of memory
• Every line’s tag is examined for a match
• Cache searching gets expensive
Main memory address
Tag Word

70
Example: Given M.M = 16 MB, cache has 16K lines, block size = 4 words; devise the memory address format.

(a) Main memory capacity = 16 MByte = 2^4 × 2^20 = 2^24
Therefore the number of bits for the main memory address = 24 bits
(b) Cache size = 16K lines = 2^4 × 2^10 = 2^14 locations
The line number is not part of the address, since any block can be loaded into any cache line
(c) Each line stores a block of words
Block size = 4 words (4 × 8 bits = 32 bits) = 2^2 words, therefore 2 bits for the word field
(d) The memory address format consists of tag and word fields
Tag bits + word bits = 24 bits, so tag bits = 24 – 2 = 22 bits

Format: TAG = 22 bits | Word = 2 bits
73


Set Associative Mapping
• Cache is divided into a number of sets
• Each set contains a number of lines
• A given block maps to any line in a given set
• e.g. Block B can be in any line of set i
• e.g. 2 lines per set
• 2 way set associative mapping
• A given block can be in one of 2 lines in only one set
Main memory address
Tag Set/Index Word
74
Set Associative Mapping: Example 1
• Given memory size of 256 KB, cache size of 64 KB, and memory block size of 2 bytes, find the address format using 4-way set associative mapping.
• Solution:
Memory size = 256 KB = 2^18
Block size (offset) = 2 bytes = 2^1
Cache size = 64 KB = 2^16
Set size = 4 blocks
Number of sets in cache (index) = 2^16 / (4 × 2) = 2^13
Number of bits in tag = Total – Index – Offset = 18 – 13 – 1 = 4
Tag: 4 bits | Index: 13 bits | Offset: 1 bit
77
Set Associative Mapping: Example 2
• Given memory size of 256 KB, cache size of 64 KB, and memory block size of 2 bytes, find the address format using 8-way set associative mapping.
• Solution:
Memory size = 256 KB = 2^18
Block size (offset) = 2 bytes = 2^1
Cache size = 64 KB = 2^16
Set size = 8 blocks
Number of sets in cache (index) = 2^16 / (8 × 2) = 2^12
Number of bits in tag = Total – Index – Offset = 18 – 12 – 1 = 5
Tag: 5 bits | Index: 12 bits | Offset: 1 bit

78
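Both solutions follow the same formula; an illustrative sketch that derives the field widths, assuming power-of-two sizes given in bytes:

from math import log2

def fields(mem_bytes, cache_bytes, block_bytes, ways):
    offset = int(log2(block_bytes))
    index  = int(log2(cache_bytes // (ways * block_bytes)))  # index bits = log2(#sets)
    tag    = int(log2(mem_bytes)) - index - offset
    return tag, index, offset

print(fields(256*1024, 64*1024, 2, 4))   # (4, 13, 1)  4-way
print(fields(256*1024, 64*1024, 2, 8))   # (5, 12, 1)  8-way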
Replacement Policy
• Direct mapped: no choice
• Set associative
• Prefer non-valid entry, if there is one
• Otherwise, choose among entries in the set
• Least-recently used (LRU)
• Choose the one unused for the longest time
• Simple for 2-way, manageable for 4-way, too hard beyond that
• Random
• Gives approximately the same performance as LRU for high
associativity

80
Multilevel Caches
• Primary cache attached to CPU
• Small, but fast
• Level-2 cache services misses from primary
cache
• Larger, slower, but still faster than main memory
• Main memory services L-2 cache misses
• Some high-end systems include L-3 cache

81
Multilevel Cache Example
•Given
• CPU base CPI = 1, clock rate = 4GHz
• Miss rate/instruction = 2%
• Main memory access time = 100ns
•With just primary cache
• Miss penalty = 100ns/0.25ns = 400 cycles
• Effective CPI = 1 + 0.02 × 400 = 9

82
Example (cont.)
• Now add L-2 cache
• Access time = 5ns
• Global miss rate to main memory = 0.5%
• Primary miss with L-2 hit
• Penalty = 5ns/0.25ns = 20 cycles
• Primary miss with L-2 miss
• Extra penalty = 500 cycles
• CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
• Performance ratio = 9/3.4 = 2.6

83
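The one- and two-level CPIs side by side, as a sketch (all numbers from the example):

# Effective CPI with one and two cache levels.
base, l1_miss, mem_pen = 1, 0.02, 400        # 100 ns / 0.25 ns clock = 400 cycles

one_level = base + l1_miss * mem_pen                          # 9.0
l2_pen, global_miss = 20, 0.005                               # 5 ns L2 hit; 0.5% go to memory
two_level = base + l1_miss * l2_pen + global_miss * mem_pen   # 3.4
print(one_level, two_level, one_level / two_level)            # 9.0 3.4 ~2.6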
Multilevel Cache Considerations
• Primary cache
• Focus on minimal hit time
• L-2 cache
• Focus on low miss rate to avoid main memory
access
• Hit time has less overall impact
• Results
• L-1 cache usually smaller than a single cache
• L-1 block size smaller than L-2 block size
84
Interactions with Advanced CPUs

• Out-of-order CPUs can execute instructions during


cache miss
• Pending store stays in load/store unit
• Dependent instructions wait in reservation stations
• Independent instructions continue
• Effect of miss depends on program data flow
• Much harder to analyze
• Use system simulation

85
Cache Design Trade-offs

Design Change           Effect on Miss Rate          Negative Performance Effect
Increase cache size     Decreases capacity misses    May increase access time
Increase associativity  Decreases conflict misses    May increase access time
Increase block size     Decreases compulsory misses  Increases miss penalty. For very
                                                     large block size, may increase
                                                     miss rate due to pollution.

86
§5.8 Parallelism and Memory Hierarchies: Cache Coherence
Cache Coherence Problem
• Suppose two CPU cores share a physical address space
• Write-through caches

Time step  Event                CPU A’s cache  CPU B’s cache  Memory
0                                                             0
1          CPU A reads X        0                             0
2          CPU B reads X        0              0              0
3          CPU A writes 1 to X  1              0              1

87
Coherence Defined
• Informally: Reads return most recently written value
• Formally:
• P writes X; P reads X (no intervening writes)
→ read returns written value
• P1 writes X; P2 reads X (sufficiently later)
→ read returns written value
• c.f. CPU B reading X after step 3 in example
• P1 writes X, P2 writes X
→ all processors see writes in the same order
• End up with the same final value for X
88
Cache Coherence Protocols
• Operations performed by caches in multiprocessors to
ensure coherence
• Migration of data to local caches
• Reduces bandwidth for shared memory
• Replication of read-shared data
• Reduces contention for access
• Snooping protocols
• Each cache monitors bus reads/writes
• Directory-based protocols
• Caches and memory record sharing status of blocks in a
directory 89
Invalidating Snooping Protocols
• Cache gets exclusive access to a block when it is to be written
• Broadcasts an invalidate message on the bus
• Subsequent read in another cache misses
• Owning cache supplies updated value

CPU activity         Bus activity       CPU A’s cache  CPU B’s cache  Memory
                                                                      0
CPU A reads X        Cache miss for X   0                             0
CPU B reads X        Cache miss for X   0              0              0
CPU A writes 1 to X  Invalidate for X   1                             0
CPU B reads X        Cache miss for X   1              1              1

90
Memory Consistency
• When are writes seen by other processors
• “Seen” means a read returns the written value
• Can’t be instantaneously
• Assumptions
• A write completes only when all processors have seen it
• A processor does not reorder writes with other accesses
• Consequence
• P writes X then writes Y
→ all processors that see new Y also see new X
• Processors can reorder reads, but not writes

91
§5.10 Real Stuff: The AMD Opteron X4 and Intel Nehalem
Multilevel On-Chip Caches
Intel Nehalem 4-core processor

Per core: 32KB L1 I-cache, 32KB L1 D-cache, 256KB L2 cache

92
Memory System Performance

• The hit time is how long it takes data to be sent from the cache to the
processor. This is usually fast, on the order of 1-3 clock cycles
• The miss penalty is the time to copy data from main memory to the
cache. This often requires dozens of clock cycles (at least)
• The miss rate is the percentage of misses

94
Average Memory Access Time
• The average memory access time, or AMAT, can then be
computed
AMAT = Hit time + (Miss rate x Miss penalty)
• This is just averaging the amount of time for cache hits and the
amount of time for cache misses
• Obviously, a lower AMAT is better
• Miss penalties are usually much greater than hit times, so the
best way to lower AMAT is to reduce the miss penalty or the
miss rate
95
Performance Example 1
• Assume that 33% of the instructions in a program are data
accesses. The cache hit ratio is 97% and the hit time is one
cycle, but the miss penalty is 20 cycles.
AMAT = Hit time + (Miss rate x Miss penalty)
= 1 cycle + (3% x 20 cycles)
= 1.6 cycles
• If the cache were perfect and never missed, the AMAT would be one cycle. But even with just a 3% miss rate, the AMAT here is 1.6× the ideal value!
96
Performance Example 2
A CPU has access to 2 levels of memory. Level 1 contains
1000 words and has access time 0.01 ms; level 2 contains
100,000 words and has access time 0.1 ms. Assume that
if a word to be accessed is in level 1, then, the CPU
accesses it directly. If it is in level 2, then the word is first
transferred to level 1 and then accessed by the CPU. For
simplicity, we ignored the time required for the CPU to
determine whether the word is in level 1 or level 2.
Suppose 95% of the memory access are found in the
cache. Then, the average access to a word can be
expressed as:
97
Performance Example 2 (cont.)

(0.95)(0.01 ms) + (0.05)(0.01 ms +0.1 ms)


= 0.0095 + 0.0055
= 0.015 ms.

98
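The same average as a quick sketch (times in ms, values from the example above):

# Average access time for the two-level memory example.
h, t1, t2 = 0.95, 0.01, 0.1
avg = h * t1 + (1 - h) * (t1 + t2)
print(avg)   # 0.015 ms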
§5.4 Virtual Memory
Virtual Memory

• Use main memory as a “cache” for secondary (disk) storage


• Managed jointly by CPU hardware and the operating system (OS)
• Programs share main memory
• Each gets a private virtual address space holding its frequently used code and
data
• Protected from other programs
• CPU and OS translate virtual addresses to physical addresses
• VM “block” is called a page
• VM translation “miss” is called a page fault

99
Address Translation
• Fixed-size pages (e.g., 4K)

100
Interactions with Software
• Misses depend on
memory access
patterns
• Algorithm behavior
• Compiler optimization
for memory access

101
Page Fault Penalty

• On page fault, the page must be fetched from disk


• Takes millions of clock cycles
• Handled by OS code
• Try to minimize page fault rate
• Fully associative placement
• Smart replacement algorithms

102
Page Tables

• Stores placement information


• Array of page table entries, indexed by virtual page number
• Page table register in CPU points to page table in physical memory
• If page is present in memory
• PTE stores the physical page number
• Plus other status bits (referenced, dirty, …)
• If page is not present
• PTE can refer to location in swap space on disk

103
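A minimal sketch of the lookup a page table performs, assuming 4 KB pages and a flat table indexed by virtual page number; the table contents and names are hypothetical, and a real PTE also carries referenced/dirty bits:

PAGE_BITS = 12                       # 4 KB pages

# Hypothetical page table: virtual page number -> (valid, physical page number)
page_table = {0: (True, 5), 1: (False, None), 2: (True, 9)}

def translate(vaddr):
    vpn, offset = vaddr >> PAGE_BITS, vaddr & ((1 << PAGE_BITS) - 1)
    valid, ppn = page_table.get(vpn, (False, None))
    if not valid:
        raise RuntimeError("page fault: OS fetches the page from disk")
    return (ppn << PAGE_BITS) | offset    # page number translated, offset kept

print(hex(translate(0x2ABC)))        # VPN 2 -> PPN 9: 0x9abc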
Translation Using a Page Table

104
Mapping Pages to Storage

105
Replacement and Writes

• To reduce page fault rate, prefer least-recently used (LRU)


replacement
• Reference bit (aka use bit) in PTE set to 1 on access to page
• Periodically cleared to 0 by OS
• A page with reference bit = 0 has not been used recently
• Disk writes take millions of cycles
• Block at once, not individual locations
• Write through is impractical
• Use write-back
• Dirty bit in PTE set when page is written

106
Fast Translation Using a TLB

• Address translation would appear to require extra memory references


• One to access the PTE
• Then the actual memory access
• But access to page tables has good locality
• So use a fast cache of PTEs within the CPU
• Called a Translation Look-aside Buffer (TLB)
• Typical: 16–512 PTEs, 0.5–1 cycle for hit, 10–100 cycles for miss, 0.01%–1%
miss rate
• Misses could be handled by hardware or software

107
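A sketch of the TLB fast path in front of that page-table lookup (hypothetical, fully associative, no replacement policy shown):

PAGE_BITS = 12
page_table = {0: (True, 5), 2: (True, 9)}    # VPN -> (valid, PPN), as in the earlier sketch
tlb = {}                                     # small cache of recent translations

def translate(vaddr):
    vpn, offset = vaddr >> PAGE_BITS, vaddr & ((1 << PAGE_BITS) - 1)
    if vpn in tlb:                           # TLB hit: no page-table access needed
        return (tlb[vpn] << PAGE_BITS) | offset
    valid, ppn = page_table.get(vpn, (False, None))   # TLB miss: load the PTE
    if not valid:
        raise RuntimeError("page fault")     # OS fetches page, updates page table
    tlb[vpn] = ppn                           # cache the PTE, then complete the access
    return (ppn << PAGE_BITS) | offset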
Fast Translation Using a TLB

108
TLB Misses

• If page is in memory
• Load the PTE from memory and retry
• Could be handled in hardware
• Can get complex for more complicated page table structures
• Or in software
• Raise a special exception, with optimized handler
• If page is not in memory (page fault)
• OS handles fetching the page and updating the page table
• Then restart the faulting instruction

109
TLB Miss Handler

• TLB miss indicates


• Page present, but PTE not in TLB
• Page not present
• Must recognize TLB miss before destination register overwritten
• Raise exception
• Handler copies PTE from memory to TLB
• Then restarts instruction
• If page not present, page fault will occur

110
Page Fault Handler

• Use faulting virtual address to find PTE


• Locate page on disk
• Choose page to replace
• If dirty, write to disk first
• Read page into memory and update page table
• Make process runnable again
• Restart from faulting instruction

111
TLB and Cache Interaction
• If cache tag uses physical
address
• Need to translate before
cache lookup
• Alternative: use virtual
address tag
• Complications due to
aliasing
• Different virtual
addresses for shared
physical address
Memory Protection

• Different tasks can share parts of their virtual address spaces


• But need to protect against errant access
• Requires OS assistance
• Hardware support for OS protection
• Privileged supervisor mode (aka kernel mode)
• Privileged instructions
• Page tables and other state information only accessible in supervisor mode
• System call exception (e.g., syscall in MIPS)

113
§5.5 A Common Framework for Memory Hierarchies
The Memory Hierarchy
The BIG Picture
• Common principles apply at all levels of the memory
hierarchy
• Based on notions of caching
• At each level in the hierarchy
• Block placement
• Finding a block
• Replacement on a miss
• Write policy

114
Block Placement

• Determined by associativity
• Direct mapped (1-way associative)
• One choice for placement
• n-way set associative
• n choices within a set
• Fully associative
• Any location
• Higher associativity reduces miss rate
• Increases complexity, cost, and access time

115
Finding a Block
Associativity            Location method               Tag comparisons
Direct mapped            Index                         1
n-way set associative    Set index, then search the    n
                         n entries within the set
Fully associative        Search all entries            #entries
                         Full lookup table             0

• Hardware caches
• Reduce comparisons to reduce cost
• Virtual memory
• Full table lookup makes full associativity feasible
• Benefit in reduced miss rate

116
Replacement

• Choice of entry to replace on a miss


• Least recently used (LRU)
• Complex and costly hardware for high associativity
• Random
• Close to LRU, easier to implement
• Virtual memory
• LRU approximation with hardware support

117
Write Policy

• Write-through
• Update both upper and lower levels
• Simplifies replacement, but may require write buffer
• Write-back
• Update upper level only
• Update lower level when block is replaced
• Need to keep more state
• Virtual memory
• Only write-back is feasible, given disk write latency

118
Sources of Misses

• Compulsory misses (aka cold start misses)


• First access to a block
• Capacity misses
• Due to finite cache size
• A replaced block is later accessed again
• Conflict misses (aka collision misses)
• In a non-fully associative cache
• Due to competition for entries in a set
• Would not occur in a fully associative cache of the same total size

119
§5.6 Virtual Machines
Virtual Machines

• Host computer emulates guest operating system and machine


resources
• Improved isolation of multiple guests
• Avoids security and reliability problems
• Aids sharing of resources
• Virtualization has some performance impact
• Feasible with modern high-performance computers
• Examples
• IBM VM/370 (1970s technology!)
• VMWare
• Microsoft Virtual PC

120
Virtual Machine Monitor

• Maps virtual resources to physical resources


• Memory, I/O devices, CPUs
• Guest code runs on native machine in user mode
• Traps to VMM on privileged instructions and access to protected resources
• Guest OS may be different from host OS
• VMM handles real I/O devices
• Emulates generic virtual I/O devices for guest

121
Example: Timer Virtualization

• In native machine, on timer interrupt


• OS suspends current process, handles interrupt, selects and resumes next
process
• With Virtual Machine Monitor
• VMM suspends current VM, handles interrupt, selects and resumes next VM
• If a VM requires timer interrupts
• VMM emulates a virtual timer
• Emulates interrupt for VM when physical timer interrupt occurs

122
Instruction Set Support

• User and System modes


• Privileged instructions only available in system mode
• Trap to system if executed in user mode
• All physical resources only accessible using privileged instructions
• Including page tables, interrupt controls, I/O registers
• Renaissance of virtualization support
• Current ISAs (e.g., x86) adapting

123
§5.7 Using a Finite State Machine to Control A Simple Cache
Cache Control
• Example cache characteristics
• Direct-mapped, write-back, write allocate
• Block size: 4 words (16 bytes)
• Cache size: 16 KB (1024 blocks)
• 32-bit byte addresses
• Valid bit and dirty bit per block
• Blocking cache
• CPU waits until access is complete

Address layout: Tag = bits 31–14 (18 bits), Index = bits 13–4 (10 bits), Offset = bits 3–0 (4 bits)

124
Interface Signals

CPU ↔ Cache: Read/Write, Valid, Address (32 bits), Write Data (32 bits), Read Data (32 bits), Ready
Cache ↔ Memory: Read/Write, Valid, Address (32 bits), Write Data (128 bits), Read Data (128 bits), Ready

Multiple cycles per access

125
Finite State Machines
• Use an FSM to sequence
control steps
• Set of states, transition on
each clock edge
• State values are binary
encoded
• Current state stored in a
register
• Next state
= fn (current state,
current inputs)
• Control output signals
= fo (current state)

126
Cache Controller FSM
Could
partition into
separate
states to
reduce clock
cycle time

127
2-Level TLB Organization

Intel Nehalem:
• Virtual addr: 48 bits; Physical addr: 44 bits
• Page size: 4KB, 2/4MB
• L1 TLB (per core): I-TLB 128 entries for small pages, 7 per thread (2×) for large pages; D-TLB 64 entries for small pages, 32 for large pages; both 4-way, LRU replacement
• L2 TLB (per core): single L2 TLB, 512 entries, 4-way, LRU replacement
• TLB misses handled in hardware

AMD Opteron X4:
• Virtual addr: 48 bits; Physical addr: 48 bits
• Page size: 4KB, 2/4MB
• L1 TLB (per core): I-TLB 48 entries; D-TLB 48 entries; both fully associative, LRU replacement
• L2 TLB (per core): I-TLB 512 entries; D-TLB 512 entries; both 4-way, round-robin LRU
• TLB misses handled in hardware

128
3-Level Cache Organization

Intel Nehalem:
• L1 (per core): I-cache 32KB, 64-byte blocks, 4-way, approx LRU replacement, hit time n/a; D-cache 32KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a
• L2 unified (per core): 256KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a
• L3 unified (shared): 8MB, 64-byte blocks, 16-way, replacement n/a, write-back/allocate, hit time n/a

AMD Opteron X4:
• L1 (per core): I-cache 32KB, 64-byte blocks, 2-way, LRU replacement, hit time 3 cycles; D-cache 32KB, 64-byte blocks, 2-way, LRU replacement, write-back/allocate, hit time 9 cycles
• L2 unified (per core): 512KB, 64-byte blocks, 16-way, approx LRU replacement, write-back/allocate, hit time n/a
• L3 unified (shared): 2MB, 64-byte blocks, 32-way, replace block shared by fewest cores, write-back/allocate, hit time 32 cycles

n/a: data not available

129
Miss Penalty Reduction

• Return requested word first
• Then back-fill rest of block
• Non-blocking miss processing
• Hit under miss: allow hits to proceed
• Miss under miss: allow multiple outstanding misses
• Hardware prefetch: instructions and data
• Opteron X4: bank-interleaved L1 D-cache
• Two concurrent accesses per cycle

130
§5.11 Fallacies and Pitfalls
Pitfalls

• Byte vs. word addressing


• Example: 32-byte direct-mapped cache,
4-byte blocks
• Byte 36 maps to block 1
• Word 36 maps to block 4
• Ignoring memory system effects when writing or generating code
• Example: iterating over rows vs. columns of arrays
• Large strides result in poor locality

131
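Both mappings in this pitfall can be checked directly; a small sketch using the parameters above:

# 32-byte direct-mapped cache with 4-byte blocks -> 8 blocks.
BLOCKS, BLOCK_BYTES = 8, 4

byte_addr = 36
print((byte_addr // BLOCK_BYTES) % BLOCKS)   # byte address 36 -> block 1

word_addr = 36                               # the same number read as a *word* address
print(word_addr % BLOCKS)                    # word address 36 -> block 4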
Pitfalls

• In multiprocessor with shared L2 or L3 cache


• Less associativity than cores results in conflict misses
• More cores → need to increase associativity
• Using AMAT to evaluate performance of out-of-order processors
• Ignores effect of non-blocked accesses
• Instead, evaluate performance by simulation

132
Pitfalls

• Extending address range using segments


• E.g., Intel 80286
• But a segment is not always big enough
• Makes address arithmetic complicated
• Implementing a VMM on an ISA not designed for virtualization
• E.g., non-privileged instructions accessing hardware resources
• Either extend ISA, or require guest OS not to use problematic instructions

133
§5.12 Concluding Remarks
Concluding Remarks

• Fast memories are small, large memories are slow
• We really want fast, large memories
• Caching gives this illusion
• Principle of locality
• Programs use a small part of their memory space frequently
• Memory hierarchy
• L1 cache ↔ L2 cache ↔ … ↔ DRAM memory ↔ disk
• Memory system design is critical for multiprocessors

134
