COMPUTER ORGANIZATION AND DESIGN
RISC-V Edition
The Hardware/Software Interface
Chapter 1
Computer Abstractions
and Technology
The Computer Revolution
§1.1 Introduction
Progress in computer technology
Underpinned by Moore’s Law
Makes novel applications feasible
Computers in automobiles
Cell phones
Human genome project
World Wide Web
Search Engines
Computers are pervasive
Classes of Computers
Personal computers
General purpose, variety of software
Subject to cost/performance tradeoff
Server computers
Network based
High capacity, performance, reliability
Range from small servers to building sized
Classes of Computers
Supercomputers
High-end scientific and engineering calculations
Highest capability but represent a small fraction of the overall computer market
Embedded computers
Hidden as components of systems
Stringent power/performance/cost constraints
The PostPC Era
Personal Mobile Device (PMD)
Battery operated
Connects to the Internet
Hundreds of dollars
Smart phones, tablets, electronic glasses
Cloud computing
Warehouse Scale Computers (WSC)
Software as a Service (SaaS)
A portion of the software runs on a PMD and a portion runs in the Cloud
Amazon and Google
What You Will Learn
How programs are translated into machine language
And how the hardware executes them
The hardware/software interface
What determines program performance
And how it can be improved
How hardware designers improve performance
What parallel processing is
Understanding Performance
Algorithm
Determines number of operations executed
Programming language, compiler, architecture
Determine number of machine instructions executed per operation
Processor and memory system
Determine how fast instructions are executed
I/O system (including OS)
Determines how fast I/O operations are executed
Eight Great Ideas
§1.2 Eight Great Ideas in Computer Architecture
Design for Moore’s Law
Use abstraction to simplify design
Make the common case fast
Performance via parallelism
Performance via pipelining
Performance via prediction
Hierarchy of memories
Dependability via redundancy
Below Your Program
§1.3 Below Your Program
Application software
Written in high-level language
System software
Compiler: translates HLL code to machine code
Operating System: service code
Handling input/output
Managing memory and storage
Scheduling tasks & sharing resources
Hardware
Processor, memory, I/O controllers
Levels of Program Code
High-level language
Level of abstraction closer to problem domain
Provides for productivity and portability
Assembly language
Textual representation of instructions
Hardware representation
Binary digits (bits)
Encoded instructions and data
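To make the three levels concrete, here is the book's classic swap example, sketched in C with one possible RISC-V rendering in the comment:

/* High-level language (C): closer to the problem domain */
void swap(long v[], long k) {
    long temp = v[k];
    v[k] = v[k + 1];
    v[k + 1] = temp;
}

/* Assembly language, one possible RV64 rendering
   (v in x10, k in x11, per the RISC-V calling convention):
   swap: slli x6, x11, 3     # x6 = k * 8 (bytes per long)
         add  x6, x10, x6    # x6 = address of v[k]
         ld   x5, 0(x6)      # temp = v[k]
         ld   x7, 8(x6)      # x7 = v[k+1]
         sd   x7, 0(x6)      # v[k] = v[k+1]
         sd   x5, 8(x6)      # v[k+1] = temp
         jalr x0, 0(x1)      # return
   The assembler then encodes each instruction as 32 binary digits. */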
Components of a Computer
§1.4 Under the Covers
The BIG Picture: Same components for all kinds of computer
Desktop, server, embedded
Input/output includes
User-interface devices
Display, keyboard, mouse
Storage devices
Hard disk, CD/DVD, flash
Network adapters
For communicating with other computers
Touchscreen
PostPC device
Supersedes keyboard and mouse
Resistive and capacitive types
Most tablets, smart phones use capacitive
Capacitive allows multiple touches simultaneously
Through the Looking Glass
LCD screen: picture elements (pixels)
Mirrors content of frame buffer memory
Opening the Box
Capacitive multitouch LCD screen
3.8 V, 25 Watt-hour battery
Computer board
Inside the Processor (CPU)
Datapath: performs operations on data
Control: sequences datapath, memory, ...
Cache memory
Small fast SRAM memory for immediate access to data
Inside the Processor
Apple A5
Abstractions
The BIG Picture
Abstraction helps us deal with complexity
Hide lower-level detail
Instruction set architecture (ISA)
The hardware/software interface
Application binary interface
The ISA plus system software interface
Implementation
The details underlying the interface
A Safe Place for Data
Volatile main memory
Loses instructions and data when power off
Non-volatile secondary memory
Magnetic disk
Flash memory
Optical disk (CDROM, DVD)
Networks
Communication, resource sharing, nonlocal access
Local area network (LAN): Ethernet
Wide area network (WAN): the Internet
Wireless network: WiFi, Bluetooth
Technology Trends
§1.5 Technologies for Building Processors and Memory
Electronics technology continues to evolve
Increased capacity and performance
Reduced cost
[Figure: growth of DRAM capacity over time]

Year  Technology                   Relative performance/cost
1951  Vacuum tube                                1
1965  Transistor                                35
1975  Integrated circuit (IC)                  900
1995  Very large scale IC (VLSI)         2,400,000
2013  Ultra large scale IC         250,000,000,000
Semiconductor Technology
Silicon: semiconductor
Add materials to transform properties:
Conductors
Insulators
Switch
Manufacturing ICs
Yield: proportion of working dies per wafer
Intel Core i7 Wafer
300mm wafer, 280 chips, 32nm technology
Each chip is 20.7 × 10.5 mm
Integrated Circuit Cost
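The slide's cost relations (the text's standard formulas):
Cost per die = Cost per wafer / (Dies per wafer × Yield)
Dies per wafer ≈ Wafer area / Die area
Yield = 1 / (1 + (Defects per area × Die area / 2))²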
Nonlinear relation to area and defect rate
Wafer cost and area are fixed
Defect rate determined by manufacturing process
Die area determined by architecture and circuit design
Defining Performance
§1.6 Performance
Which airplane has the best performance?
Response Time and Throughput
Response time
How long it takes to do a task
Throughput
Total work done per unit time
e.g., tasks/transactions/… per hour
How are response time and throughput affected by
Replacing the processor with a faster version?
Adding more processors?
We’ll focus on response time for now…
Relative Performance
Define Performance = 1/Execution Time
“X is n times faster than Y”
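Formally:
Performance_X / Performance_Y = Execution Time_Y / Execution Time_X = n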
Example: time taken to run a program
10s on A, 15s on B
Execution Time_B / Execution Time_A = 15 s / 10 s = 1.5
So A is 1.5 times faster than B
Measuring Execution Time
Elapsed time
Total response time, including all aspects
Processing, I/O, OS overhead, idle time
Determines system performance
CPU time
Time spent processing a given job
Discounts I/O time, other jobs’ shares
Comprises user CPU time and system CPU time
Different programs are affected differently by CPU and system performance
CPU Clocking
Operation of digital hardware governed by a constant-rate clock
[Figure: clock signal; data transfer and computation occur during each clock cycle, state updates on the clock edge]
Clock period: duration of a clock cycle
e.g., 250 ps = 0.25 ns = 250×10⁻¹² s
Clock frequency (rate): cycles per second
e.g., 4.0 GHz = 4000 MHz = 4.0×10⁹ Hz
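Note that the two examples are reciprocals: 1 / (250×10⁻¹² s) = 4.0×10⁹ Hz = 4.0 GHz.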
CPU Time
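The governing relation (the text's standard formula):
CPU Time = CPU Clock Cycles × Clock Cycle Time
         = CPU Clock Cycles / Clock Rate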
Performance improved by
Reducing number of clock cycles
Increasing clock rate
Hardware designer must often trade off clock rate against cycle count
CPU Time Example
Computer A: 2GHz clock, 10s CPU time
Designing Computer B
Aim for 6s CPU time
Can do faster clock, but causes 1.2 × clock cycles
How fast must Computer B clock be?
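Worked solution (using the CPU-time formula above):
Clock Cycles_A = CPU Time_A × Clock Rate_A = 10 s × 2 GHz = 20×10⁹
Clock Rate_B = 1.2 × Clock Cycles_A / CPU Time_B = 1.2 × 20×10⁹ / 6 s = 4 GHz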
Instruction Count and CPI
Instruction Count for a program
Determined by program, ISA and compiler
Average cycles per instruction
Determined by CPU hardware
If different instructions have different CPI
Average CPI affected by instruction mix
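In formula form (the text's standard relations):
Clock Cycles = Instruction Count × CPI
CPU Time = Instruction Count × CPI × Clock Cycle Time
         = Instruction Count × CPI / Clock Rate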
CPI Example
Computer A: Cycle Time = 250ps, CPI = 2.0
Computer B: Cycle Time = 500ps, CPI = 1.2
Same ISA
Which is faster, and by how much?
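Worked comparison (reconstructed from the CPU-time formula; I is the common instruction count):
CPU Time_A = I × 2.0 × 250 ps = 500 × I ps
CPU Time_B = I × 1.2 × 500 ps = 600 × I ps
CPU Time_B / CPU Time_A = 600 / 500 = 1.2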
So A is faster, by a factor of 1.2.
CPI in More Detail
If different instruction classes take different numbers of cycles
Weighted average CPI:
Clock Cycles = Σ_i (CPI_i × IC_i)
Average CPI = Clock Cycles / Instruction Count = Σ_i (CPI_i × IC_i / IC)
where IC_i / IC is the relative frequency of class i in the program
CPI Example
Alternative compiled code sequences using instructions in classes A, B, C
Class             A   B   C
CPI for class     1   2   3
IC in sequence 1  2   1   2
IC in sequence 2  4   1   1
Sequence 1: IC = 5
Clock Cycles = 2×1 + 1×2 + 2×3 = 10
Avg. CPI = 10/5 = 2.0
Sequence 2: IC = 6
Clock Cycles = 4×1 + 1×2 + 1×3 = 9
Avg. CPI = 9/6 = 1.5
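A minimal C sketch of the same calculation (my illustration, not part of the slide); the CPI values and instruction counts come from the table above:

#include <stdio.h>

int main(void) {
    double cpi[3] = {1.0, 2.0, 3.0};    /* CPI for classes A, B, C        */
    int seq1[3]   = {2, 1, 2};          /* instruction counts, sequence 1 */
    int seq2[3]   = {4, 1, 1};          /* instruction counts, sequence 2 */
    int *seqs[2]  = {seq1, seq2};

    for (int s = 0; s < 2; s++) {
        double cycles = 0.0;            /* weighted sum: CPI_i × IC_i */
        int ic = 0;                     /* total instruction count    */
        for (int i = 0; i < 3; i++) {
            cycles += cpi[i] * seqs[s][i];
            ic     += seqs[s][i];
        }
        printf("Sequence %d: IC = %d, cycles = %.0f, avg CPI = %.1f\n",
               s + 1, ic, cycles, cycles / ic);
    }
    return 0;
}

This prints an average CPI of 2.0 for sequence 1 and 1.5 for sequence 2, matching the hand calculation.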
Performance Summary
The BIG Picture
Performance depends on
Algorithm: affects IC, possibly CPI
Programming language: affects IC, CPI
Compiler: affects IC, CPI
Instruction set architecture: affects IC, CPI, Tc
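In equation form (the text's summary):
CPU Time = Instructions/Program × Clock cycles/Instruction × Seconds/Clock cycle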
Power Trends
§1.7 The Power Wall
In CMOS IC technology:
Power = Capacitive load × Voltage² × Frequency
[Figure: over the period shown, clock frequency grew roughly ×1000 while supply voltage fell from 5V to 1V, so power grew only about ×30]
Reducing Power
Suppose a new CPU has
85% of capacitive load of old CPU
15% voltage and 15% frequency reduction
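By the power equation, the ratio is:
P_new / P_old = (0.85 C) × (0.85 V)² × (0.85 F) / (C × V² × F) = 0.85⁴ ≈ 0.52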
The power wall
We can’t reduce voltage further
We can’t remove more heat
How else can we improve performance?
Uniprocessor Performance
§1.8 The Sea Change: The Switch to Multiprocessors
Constrained by power, instruction-level parallelism, memory latency
Multiprocessors
Multicore microprocessors
More than one processor per chip
Requires explicitly parallel programming
Compare with instruction level parallelism
Hardware executes multiple instructions at once
Hidden from the programmer
Hard to do
Programming for performance
Load balancing
Optimizing communication and synchronization
SPEC CPU Benchmark
Programs used to measure performance
Supposedly typical of actual workload
Standard Performance Evaluation Corp (SPEC)
Develops benchmarks for CPU, I/O, Web, …
SPEC CPU2006
Elapsed time to execute a selection of programs
Negligible I/O, so focuses on CPU performance
Normalize relative to reference machine
Summarize as geometric mean of performance ratios (formula below)
CINT2006 (integer) and CFP2006 (floating-point)
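The geometric mean of n ratios (the standard SPEC summary statistic):
Geometric mean = (ratio_1 × ratio_2 × … × ratio_n)^(1/n)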
CINT2006 for Intel Core i7 920
SPEC Power Benchmark
Power consumption of server at different workload levels
Performance: ssj_ops/sec
Power: Watts (Joules/sec)
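Overall metric (standard SPECpower definition): sum the ssj_ops over all load levels and divide by the summed power:
Overall ssj_ops per Watt = (Σ ssj_ops_i) / (Σ power_i)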
SPECpower_ssj2008 for Xeon X5650
Pitfall: Amdahl’s Law
§1.10 Fallacies and Pitfalls
Improving an aspect of a computer and expecting a proportional improvement in overall performance
Example: multiply accounts for 80s/100s
How much improvement in multiply performance to get 5× overall?
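Amdahl's Law (the text's formula) shows why:
T_improved = T_affected / improvement factor + T_unaffected
A 5× overall speedup means T_improved = 100 s / 5 = 20 s, so
20 = 80/n + 20, which requires 80/n = 0: no finite n works.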
Can’t be done!
Corollary: make the common case fast
Fallacy: Low Power at Idle
Look back at the X5650 power benchmark
At 100% load: 258W
At 50% load: 170W (66%)
At 10% load: 121W (47%)
Google data center
Mostly operates at 10% – 50% load
At 100% load less than 1% of the time
Consider designing processors to make power proportional to load
Pitfall: MIPS as a Performance Metric
MIPS: Millions of Instructions Per Second
Doesn’t account for
Differences in ISAs between computers
Differences in complexity between instructions
CPI varies between programs on a given CPU
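For reference, the definition (standard):
MIPS = Instruction count / (Execution time × 10⁶)
     = Clock rate / (CPI × 10⁶)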
Concluding Remarks
§1.11 Concluding Remarks
Cost/performance is improving
Due to underlying technology development
Hierarchical layers of abstraction
In both hardware and software
Instruction set architecture
The hardware/software interface
Execution time: the best performance measure
Power is a limiting factor
Use parallelism to improve performance