+
Chapter 4 Cache Memory
William Stallings, Computer Organization and Architecture, 9th Edition
+
Objectives
How are internal memory elements of a computer
structured?
After studying this chapter, you should be able to:
Present an overview of the main characteristics of
computer memory systems and the use of a memory
hierarchy.
Describe the basic concepts and intent of cache
memory. Discuss the key elements of cache design.
Distinguish among direct mapping, associative
mapping, and set-associative mapping.
Explain the reasons for using multiple levels of cache.
Understand the performance implications of multiple
levels of memory.
+
Contents
4.1- Computer Memory Systems Overview
4.2- Cache Memory Principles
4.3- Elements of Cache Design
+
4.1- Computer Memory Systems Overview
Characteristics of Memory Systems
The Memory Hierarchy
Key Characteristics of Computer Memory Systems
+ Characteristics of Memory Systems
Location
Refers to whether memory is internal and external to the computer
Internal memory is often equated with main memory
Processor requires its own local memory, in the form of registers
Cache is another form of internal memory
External memory consists of peripheral storage devices that are
accessible to the processor via I/O controllers
Capacity
Memory is typically expressed in terms of bytes
Unit of transfer
For internal memory the unit of transfer is equal to the number of
electrical lines into and out of the memory module
Method of Accessing Units of Data
Sequential access
Memory is organized into units of data called records
Access must be made in a specific linear sequence
Access time is variable

Direct access
Involves a shared read-write mechanism
Individual blocks or records have a unique address based on physical location
Access time is variable

Random access
Each addressable location in memory has a unique, physically wired-in addressing mechanism
The time to access a given location is independent of the sequence of prior accesses and is constant
Any location can be selected at random and directly addressed and accessed
Main memory and some cache systems are random access

Associative access
A word is retrieved based on a portion of its contents rather than its address
Each location has its own addressing mechanism and retrieval time is constant independent of location or prior access patterns
Cache memories may employ associative access
Capacity and Performance:
The two most important characteristics of
memory
Three performance parameters are used:
Access time (latency)
•For random-access memory it is the time it takes to perform a read or write operation
•For non-random-access memory it is the time it takes to position the read-write mechanism at the desired location
Memory cycle time
•Access time plus any additional time required before a second access can commence
•Additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively
•Concerned with the system bus, not the processor
Transfer rate
•The rate at which data can be transferred into or out of a memory unit
•For random-access memory it is equal to 1/(cycle time)
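A quick worked example of the transfer-rate formula (illustrative numbers, not from the slides): with a memory cycle time of 10 ns, a random-access memory sustains 1/(10 ns) = 100 million transfers per second; at 4 bytes per transfer that is 400 MB/s.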
+ Memory
The most common forms are:
Semiconductor memory
Magnetic surface memory
Optical
Magneto-optical
Several physical characteristics of data storage are important:
Volatile memory
Information decays naturally or is lost when electrical power is switched off
Nonvolatile memory
Once recorded, information remains without deterioration until deliberately
changed
No electrical power is needed to retain information
Magnetic-surface memories are nonvolatile
Semiconductor memory may be either volatile or nonvolatile
Nonerasable memory
Cannot be altered, except by destroying the storage unit
Semiconductor memory of this type is known as read-only memory (ROM)
For random-access memory the organization is a key design issue
Organization refers to the physical arrangement of bits to form words
+ Memory Hierarchy
Design constraints on a computer’s memory can be summed up by three questions:
How much, how fast, how expensive
There is a trade-off among capacity, access time, and cost:
Faster access time, greater cost per bit
Greater capacity, smaller cost per bit
Greater capacity, slower access time
The way out of the memory dilemma is not to rely on a single memory component or technology, but to employ a memory hierarchy:
a large, cheap, slower memory combined with one or more small, fast, more expensive memories (cache).
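The hierarchy trade-off can be made concrete with the standard two-level average-access-time formula. The following minimal C sketch assumes the simple model in which a hit costs only the fast level's time t1 and a miss costs t1 + t2; the numbers are illustrative, not from the slides.

    #include <stdio.h>

    /* Average access time of a two-level hierarchy: a hit is served by the
       fast level (t1); a miss pays for both levels (t1 + t2). */
    static double avg_access_time(double hit_ratio, double t1, double t2)
    {
        return hit_ratio * t1 + (1.0 - hit_ratio) * (t1 + t2);
    }

    int main(void)
    {
        /* Illustrative values: 10 ns cache, 100 ns main memory, 95% hit ratio.
           Result: 0.95*10 + 0.05*110 = 15 ns, close to the cache speed. */
        printf("average access time = %.1f ns\n", avg_access_time(0.95, 10.0, 100.0));
        return 0;
    }

With a high hit ratio, the average access time stays near the speed of the small, fast memory while the capacity is that of the large, cheap one.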
+ Memory Hierarchy…
+
4.2- Cache Memory Principles
What is cache?
Cache and Main Memory
What is a Cache?
Cache: a small, expensive, high-speed memory located between the CPU and RAM (which is large, cheaper, and slower).
L0: the cache inside the CPU
Cache/Main Memory Structure
Main memory is divided into fixed-size blocks.
One or more blocks are loaded into the cache at a time.
Each cache line includes a tag that identifies which particular block is currently being stored.
An address in the cache is different from the corresponding address in main memory, so a mapping is needed.
(Figure: cache addresses vs. main memory addresses)
Cache/Main Memory: Some concepts
1- Instruction length: a specific CPU's registers all have the same length, and all instructions have the same length. The instruction length of a 32-bit CPU is 32 bits, and 64 bits for a 64-bit CPU. So, the OS can manage the execution order of a program using only the index of the current instruction (stored in the PC, the program counter register). From the PC and the base address where the program is loaded, the address of the executing instruction is determined.
(Figure: program loaded at base address 3000 in MEM; PC holds instruction index 10.)
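A minimal C sketch of the instruction-address calculation just described, using the figure's numbers; the 4-byte instruction size of a 32-bit CPU and the function name are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define INSTR_SIZE 4u  /* 32-bit CPU: every instruction is 4 bytes long */

    /* Address of the executing instruction = base address of the loaded
       program + PC index scaled by the fixed instruction length. */
    static uint32_t instr_address(uint32_t base, uint32_t pc_index)
    {
        return base + pc_index * INSTR_SIZE;
    }

    int main(void)
    {
        /* Figure's numbers: program loaded at 3000, PC index 10 -> 3040. */
        printf("%u\n", instr_address(3000, 10));
        return 0;
    }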
Cache/Main Memory: Some concepts
2- A specific OS must conform to a specific CPU.
3- A program includes high-level instructions and data. Compilers translate instructions to machine code, and data to the addresses where their values are stored, appropriately for a specific OS.
5- Almost all instructions access data. So, a binary instruction contains a data address.
6- Loading an instruction from memory needs the instruction address.
7- Executing an instruction needs data addresses.
8- Old OSs use physical addresses or virtual addresses of the form <base, offset>, e.g. (3000, 13).
(Figure: X=10 stored at offset 13 from base address 3000 in MEM.)
Cache/Main Memory: Some concepts
9- Contemporary OSs allow many programs to run concurrently although system memory is limited. The solution is that program content and memory are divided into pages (same size, e.g. 4KB) or segments (different sizes). Only the needed pages/segments are loaded into system memory.
10- Compilers must translate a program's addresses into a suitable form (virtual addresses). Virtual address format: <page, offset>.
(Figure: memory pages 0-4; X=10 resides in page 3 at offset 4, so the address of X is (3,4).)
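A small C sketch of the <page, offset> split for the 4 KB pages described above; the 32-bit virtual address width is an assumption.

    #include <stdint.h>

    #define PAGE_SHIFT 12u                      /* 4 KB pages: 12 offset bits */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

    /* Split a virtual address into the <page, offset> form used above. */
    static void split_vaddr(uint32_t vaddr, uint32_t *page, uint32_t *offset)
    {
        *page   = vaddr >> PAGE_SHIFT;          /* high bits select the page   */
        *offset = vaddr & PAGE_MASK;            /* low 12 bits locate the word */
    }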
Cache/Main Memory: Some concepts
11- When an instruction or data item is accessed, a physical address must be supplied. A mapping is needed as a means of determining physical addresses from virtual addresses. This mapping is implemented by the OS as a page table.
12- Hardware is needed to translate virtual addresses to physical addresses: the MMU (Memory Management Unit).
(Figure source: Operating Systems - Tanenbaum)
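A minimal sketch of the translation step the MMU performs, assuming the 4 KB <page, offset> split above and a flat, single-level page table; real MMUs use multi-level tables plus a TLB, so this shows only the idea.

    #include <stdint.h>

    /* One page-table entry: the physical frame holding the page, if present. */
    typedef struct {
        uint32_t frame;    /* physical frame number                         */
        int      present;  /* 0 => page fault: the OS must swap the page in */
    } pte_t;

    /* Virtual-to-physical translation via the OS's page table
       (returns -1 on a page fault, 0 on success). */
    static int translate(const pte_t *page_table, uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t page   = vaddr >> 12;          /* 4 KB pages, as above */
        uint32_t offset = vaddr & 0xFFFu;

        if (!page_table[page].present)
            return -1;                          /* page fault */
        *paddr = (page_table[page].frame << 12) | offset;
        return 0;
    }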
Cache/Main Memory: Some concepts
13- How do OSs permit many programs to run concurrently? A time-sharing mechanism.
14- Advantages of memory paging: many apps can run concurrently in limited memory. A page of a program can be loaded into an arbitrary physical memory location.
15- Disadvantages of memory paging: a cost must be paid when an in-memory frame must be swapped from memory to disk (swap out) and then a page from disk is loaded into memory (swap in) - page replacement.
4.3- Elements of Cache Design
Overview of cache design parameters
+
Cache Addresses: Virtual Address
Virtual memory
Facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available
When used, the address fields of machine instructions contain virtual addresses
For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory
+
Logical and Physical Caches
Virtual cache stores data using virtual addresses
Physical cache stores data using physical addresses
Mapping Function
Because there are fewer cache lines than main memory
blocks, an algorithm is needed for mapping main memory
blocks into cache lines
Three techniques can be used:
Direct
• The simplest technique
• Maps each block of main memory into only one possible cache line

Associative
• Permits each main memory block to be loaded into any line of the cache
• The cache control logic interprets a memory address simply as a Tag and a Word field
• To determine whether a block is in the cache, the cache control logic must simultaneously examine every line’s Tag for a match

Set Associative
• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
+ Direct Mapping
Block j of main memory is loaded into line i of the cache:
i = j mod m (m = number of lines in the cache)
Each main memory block can therefore be loaded into only one possible cache line
Direct Mapping Cache Organization
s: Block index
r: Line index
w: word index
+
Direct Mapping Example
READ BY YOURSELF
+
Direct Mapping Summary
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in cache = m = 2^r
Size of tag = (s - r) bits
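The summary above translates directly into bit manipulation. A hedged C sketch of the Tag | Line | Word address split for direct mapping, with r and w as parameters; the struct and function names are illustrative.

    #include <stdint.h>

    /* Direct mapping: an (s + w)-bit address is read as
       Tag (s - r bits) | Line (r bits) | Word (w bits). */
    typedef struct { uint32_t tag, line, word; } dm_fields;

    static dm_fields dm_split(uint32_t addr, unsigned r, unsigned w)
    {
        dm_fields f;
        f.word = addr & ((1u << w) - 1u);        /* word within the block     */
        f.line = (addr >> w) & ((1u << r) - 1u); /* line i = j mod m, m = 2^r */
        f.tag  = addr >> (w + r);                /* stored with the line      */
        return f;
    }
    /* A hit: cache line f.line is valid and its stored tag equals f.tag. */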
+
Victim Cache
Originally proposed as an approach to reduce the conflict misses of direct-mapped caches without affecting their fast access time
Fully associative cache
Typical size is 4 to 16 cache lines
Resides between the direct-mapped L1 cache and the next level of memory
Fully Associative Cache Organization
A block can be loaded into any cache line
The address tag must be compared against every line's tag simultaneously
+
Associative Mapping Example
READ BY YOURSELF
+
Associative Mapping Summary
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in cache = undetermined
Size of tag = s bits
+
Set Associative Mapping READ BY YOURSELF
Compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
Cache consists of a number of sets
Each set contains a number of lines
A given block maps to any line in a given set
e.g. 2 lines per set
2-way associative mapping
A given block can be in one of 2 lines in only one set
+
Mapping From Main Memory to Cache: k-Way Set Associative
READ BY YOURSELF
k-Way Set Associative Cache Organization
READ BY YOURSELF
+
Set Associative Mapping Summary
READ BY YOURSELF
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in set = k
Number of sets = v = 2^d
Number of lines in cache = m = k * v = k * 2^d
Size of cache = k * 2^(d+w) words or bytes
Size of tag = (s - d) bits
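The same split for set-associative mapping, as a hedged C sketch: Tag (s - d) | Set (d) | Word (w). Direct mapping is the special case k = 1, and fully associative mapping is the case of a single set (d = 0); the names here are illustrative.

    #include <stdint.h>

    /* Set-associative mapping: Tag (s - d bits) | Set (d bits) | Word (w bits). */
    typedef struct { uint32_t tag, set, word; } sa_fields;

    static sa_fields sa_split(uint32_t addr, unsigned d, unsigned w)
    {
        sa_fields f;
        f.word = addr & ((1u << w) - 1u);
        f.set  = (addr >> w) & ((1u << d) - 1u); /* selects one of v = 2^d sets          */
        f.tag  = addr >> (w + d);                /* compared with the k tags of that set */
        return f;
    }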
+
Varying Associativity Over Cache Size
READ BY YOURSELF
+
Replacement Algorithms
Once the cache has been filled, when a new block is brought into the cache, one of the existing blocks must be replaced
For direct mapping there is only one possible line for any particular block and no choice is possible
For the associative and set-associative techniques a replacement algorithm is needed
To achieve high speed, the algorithm must be implemented in hardware
+
The four most common replacement algorithms are:
Least recently used (LRU)
Most effective
Replace that block in the set that has been in the cache longest with no reference to it
Because of its simplicity of implementation, LRU is the most popular replacement algorithm (see the sketch after this list)
First-in-first-out (FIFO)
Replace that block in the set that has been in the cache longest
Easily implemented as a round-robin or circular buffer technique
Least frequently used (LFU)
Replace that block in the set that has experienced the fewest references
Could be implemented by associating a counter with each line
Random
Pick a line at random from among the candidate lines; simulation studies show performance only slightly inferior to the usage-based algorithms
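A minimal sketch of LRU victim selection within one set, assuming each line carries a last-used stamp updated on every reference; real hardware uses compact age bits, so this captures only the logic, and all names are illustrative.

    #include <stdint.h>

    #define K 4  /* k lines per set (illustrative) */

    typedef struct {
        uint32_t tag;
        int      valid;
        uint64_t last_used;  /* stamp written on every reference to the line */
    } line_t;

    /* Choose the line to replace: a free line if one exists, otherwise the
       line whose last reference is oldest (least recently used). */
    static int lru_victim(const line_t set[K])
    {
        int victim = 0;
        for (int i = 0; i < K; i++) {
            if (!set[i].valid)
                return i;                   /* empty line: no replacement needed */
            if (set[i].last_used < set[victim].last_used)
                victim = i;                 /* older stamp => less recently used */
        }
        return victim;
    }
    /* On each reference that hits line i: set[i].last_used = ++global_clock; */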
Write Policy
When a block that is resident in the cache is to be replaced, there are two cases to consider:
If the old block in the cache has not been altered, then it may be overwritten with a new block without first writing out the old block
If at least one write operation has been performed on a word in that line of the cache, then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block
There are two problems to contend with:
More than one device may have access to main memory
A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache - if a word is altered in one cache, it could conceivably invalidate a word in other caches
+
Write Through
and Write Back
Write through
Simplest technique
All write operations are made to main memory as well as to the cache
The main disadvantage of this technique is that it generates substantial memory traffic and may create a bottleneck
Write back
Minimizes memory writes
Updates are made only in the cache
Portions of main memory are invalid, and hence accesses by I/O modules can be allowed only through the cache
This makes for complex circuitry and a potential bottleneck
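A hedged C sketch contrasting the two policies on a cache write hit; the dirty flag is the usual mechanism behind write back, and all names here are illustrative.

    #include <stdint.h>

    typedef struct {
        uint32_t data;
        int      dirty;  /* write back: set when the line diverges from memory */
    } cline_t;

    /* Write through: the cache line and main memory are updated together,
       so memory stays consistent but every write creates bus traffic. */
    static void write_through(cline_t *line, uint32_t *mem_word, uint32_t value)
    {
        line->data = value;
        *mem_word  = value;
    }

    /* Write back: only the cache is updated and the line is marked dirty;
       main memory is written once, when the dirty line is later replaced. */
    static void write_back(cline_t *line, uint32_t value)
    {
        line->data  = value;
        line->dirty = 1;
    }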
Line Size
When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved
As the block size increases, the hit ratio will at first increase because of the principle of locality: more useful data are brought into the cache
The hit ratio will begin to decrease as the block becomes bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced
Two specific effects come into play:
• Larger blocks reduce the number of blocks that fit into a cache
• As a block becomes larger, each additional word is farther from the requested word
Larger line size: cache hits increase, but the cache is more expensive and more of the data held in it goes unused (typical line size: 64-128 bytes)
+
Multilevel Caches
As logic density has increased it has become possible to have a cache
on the same chip as the processor
The on-chip cache reduces the processor’s external bus activity, speeding up execution time and increasing overall system performance
When the requested instruction or data is found in the on-chip cache, the bus
access is eliminated
On-chip cache accesses will complete appreciably faster than would even
zero-wait state bus cycles
During this period the bus is free to support other transfers
Two-level cache:
Internal cache designated as level 1 (L1)
External cache designated as level 2 (L2)
Potential savings due to the use of an L2 cache depends on the hit rates
in both the L1 and L2 caches
The use of multilevel caches complicates all of the design issues related
to caches, including size, replacement algorithm, and write policy
(Figure: Hit ratio (L1 & L2) for 8-Kbyte and 16-Kbyte L1)
+
Unified Versus Split Caches
It has become common to split the cache:
One dedicated to instructions
One dedicated to data
Both exist at the same level, typically as two L1 caches
Advantages of unified cache:
Higher hit rate - it balances the load of instruction and data fetches automatically
Only one cache needs to be designed and implemented
Advantages of split cache:
Eliminates cache contention between the instruction fetch/decode unit and the execution unit
Important in pipelining (where the output of one stage is the input of the next)
Trend is toward split caches at the L1 level and unified caches for higher levels
+
Exercises
4.1- What are the differences among sequential access, direct
access, and random access?
4.2- What is the general relationship among access time, memory cost, and capacity?
4.3- How does the principle of locality relate to the use of
multiple memory levels?
4.4- What are the differences between direct mapping and associative mapping?
4.5- For a direct-mapped cache, a main memory address is
viewed as consisting of three fields. List and define the three
fields.
4.6- For an associative cache, a main memory address is
viewed as consisting of two fields. List and define the two fields.
+ Summary: Cache Memory
Chapter 4
Characteristics of Memory Systems
Location
Capacity
Unit of transfer
Memory Hierarchy
How much?
How fast?
How expensive?
Cache memory principles
Elements of cache design
Cache addresses
Cache size
Mapping function
Replacement algorithms
Write policy
Line size
Number of caches