Computer Architecture
By: Mahrukh Batool
Computer architecture
What: Computer Architecture is the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.
Why: Computer science is the base and platform for countless industries and disciplines. Today's employers need graduates with both a solid foundation in the principles of computer science and specialized computing skills and backgrounds: individuals with a generalist's knowledge but an expert's eye for innovation and problem solving.
Architecture
Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and
then designing to meet those needs as effectively as possible within economic and technological constraints.
Building Architecture
Materials: Steel, Concrete, Brick, Wood, Glass
Plans and Construction produce Buildings: Houses, Offices, Apartments, Stadiums, Museums
Goals: Function, Cost, Safety, Ease of Construction, Energy Efficiency, Fast Build Time, Aesthetics

Computer Architecture
Technology: Logic Gates, SRAM, DRAM, Circuit Techniques, Packaging, Magnetic Storage, Flash Memory
Plans and Manufacture produce Computers: Desktops, Servers, Mobile Phones, Supercomputers, Game Consoles, Embedded
Goals: Function, Performance, Reliability, Cost/Manufacturability, Energy Efficiency, Time to Market
Computer evolution
Subcategories of Architecture
The discipline of computer architecture has three main subcategories:
Instruction Set Architecture: The ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory addressing modes, processor registers, and data types.
Microarchitecture: Also called computer organization, the microarchitecture describes how a particular processor implements the ISA; the size of a computer's CPU cache, for instance, is a microarchitectural decision.
System Design: Includes all of the other hardware components within a computing system, such as:
Data processing other than the CPU, for example direct memory access (DMA)
Other issues such as virtualization, multiprocessing, and software features.
Basic Architecture
Central Processing Unit
The fundamental operation of most CPUs, regardless of the
physical form they take, is to execute a sequence of stored
instructions that is called a program. The instructions to be
executed are kept in some kind of computer memory. Nearly all
CPUs follow the fetch, decode and execute steps in their
operation, which are collectively known as the instruction cycle.
Fetch: The first step, fetch, involves retrieving an instruction (which is represented by a number
or sequence of numbers) from program memory. The instruction's location (address) in program
memory is determined by a program counter (PC), which stores a number that identifies the
address of the next instruction to be fetched. After an instruction is fetched, the PC is
incremented by the length of the instruction so that it will contain the address of the next
instruction in the sequence.
Decode: The instruction that the CPU fetches from memory determines what the CPU will do.
In the decode step, performed by the circuitry known as the instruction decoder, the
instruction is converted into signals that control other parts of the CPU.
Execute: After fetch and decode steps, the execute step is performed. Depending on the CPU
architecture, this may consist of a single action or a sequence of actions. During each action,
various parts of the CPU are electrically connected so they can perform all or part of the desired
operation and then the action is completed, typically in response to a clock pulse. Very often the
results are written to an internal CPU register for quick access by subsequent instructions.
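As a rough illustration of the instruction cycle, the sketch below steps a hypothetical accumulator-style toy CPU through fetch, decode, and execute. The opcodes, operand encoding, and memory layout are all invented for the example and do not correspond to any real ISA.

    # Minimal sketch of the fetch-decode-execute cycle for a hypothetical toy CPU.
    # The instruction set (LOAD/ADD/STORE/HALT) and registers are assumptions
    # made for illustration only.
    LOAD, ADD, STORE, HALT = range(4)

    program = [
        (LOAD, 5),    # acc = memory[5]
        (ADD, 6),     # acc += memory[6]
        (STORE, 7),   # memory[7] = acc
        (HALT, 0),
    ]

    memory = {5: 10, 6: 32, 7: 0}   # toy data memory
    pc = 0                          # program counter
    acc = 0                         # accumulator register

    while True:
        # Fetch: read the instruction addressed by the PC, then advance the PC.
        opcode, operand = program[pc]
        pc += 1

        # Decode and execute: select and perform the action implied by the opcode.
        if opcode == LOAD:
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == HALT:
            break

    print(memory[7])   # 42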
System Bus
The control unit of the processor communicates with the rest of the architecture through a processor bus, which can be viewed as consisting of three distinct sets of wires denoted as the address bus, data bus, and control bus. The address bus is a set of wires used to communicate an address. The data bus is a set of wires used to communicate data. The control bus is a set of wires with functions other than those of the address and data buses, especially signals that tell when the information on the address and data buses is valid.
Figure: processor bus with a bus address latch enable (ALE) signal
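The sketch below models a bus cycle in a very simplified way: the master supplies an address, control signals (reduced here to read/write flags) indicate when that address and the data are valid, and the data lines carry the value. The SimpleBus class and its interface are assumptions made for illustration, not any real bus standard.

    # Hypothetical model of address, data, and control lines cooperating in
    # single bus cycles; not any specific bus standard.
    class SimpleBus:
        def __init__(self, size=256):
            self.mem = [0] * size        # a memory device attached to the bus

        def cycle(self, address, read, write, data=None):
            # Control lines (read/write strobes) say when address/data are valid.
            if write:
                self.mem[address] = data     # device latches the data bus
                return None
            if read:
                return self.mem[address]     # device drives the data bus
            return None                      # idle cycle

    bus = SimpleBus()
    bus.cycle(address=0x10, read=False, write=True, data=0xAB)   # write cycle
    print(hex(bus.cycle(address=0x10, read=True, write=False)))  # 0xab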
Direct Memory Access Controller
To provide a means of transferring data without processor attention, the processor is equipped with hold and hold acknowledge signals. The control unit of the processor checks whether the hold signal is set, and if it is, the control unit responds by setting the hold acknowledge signal and holding off its own access to the processor bus until the hold signal is reset, effectively relinquishing control of the processor bus.
To better cope with situations where multiple devices can transfer data without processor attention, the handling of the hold signal is delegated to a direct memory access (DMA) controller. The controller has several transfer request inputs associated with transfer counters, and it takes care of taking over the processor bus and setting the address and control signals during a transfer.
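A highly simplified sketch of the hold / hold acknowledge handshake described above follows. The CPU and DMAController classes, their method names, and the block-copy transfer are invented for illustration and do not model any particular controller's programming interface.

    # Hypothetical hold / hold-acknowledge handshake: the DMA controller requests
    # the bus, the CPU grants it, and the controller then drives the bus itself.
    class CPU:
        def __init__(self):
            self.hold = False          # asserted by a DMA controller
            self.hold_ack = False      # asserted by the CPU in response

        def step(self):
            # Before using the bus, the CPU honours a pending hold request.
            self.hold_ack = self.hold

    class DMAController:
        def __init__(self, cpu, memory):
            self.cpu, self.memory = cpu, memory

        def transfer(self, src, dst, count):
            self.cpu.hold = True            # request the bus
            self.cpu.step()                 # CPU responds with hold acknowledge
            assert self.cpu.hold_ack
            for i in range(count):          # controller drives address/data/control
                self.memory[dst + i] = self.memory[src + i]
            self.cpu.hold = False           # release the bus
            self.cpu.step()

    memory = list(range(16)) + [0] * 16
    dma = DMAController(CPU(), memory)
    dma.transfer(src=0, dst=16, count=16)
    print(memory[16:20])   # [0, 1, 2, 3]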
Interrupt Controller
To provide a means of requesting attention from outside, the processor is equipped with interrupt and interrupt acknowledge signals. Before executing an instruction, the control unit of the processor checks whether the interrupt signal is set, and if it is, the control unit responds by setting the interrupt acknowledge signal and setting the program counter to a predefined address, effectively executing a subroutine call instruction.
To better cope with situations where multiple devices can request attention, the handling of the interrupt request signal is delegated to an interrupt controller. The controller has several interrupt request inputs and takes care of aggregating those inputs into the processor's single interrupt request input, using priorities and queuing, and of providing the processor with information to distinguish among the interrupt requests.
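The sketch below shows one way such aggregation could look: several request lines, a single interrupt output to the processor, and an acknowledge step that identifies the highest-priority pending request. The class, the fixed priority order, and the line numbering are assumptions chosen for the example.

    # Hypothetical interrupt controller that aggregates several request lines
    # into one CPU interrupt; lower line numbers have higher priority here.
    class InterruptController:
        def __init__(self, num_inputs=8):
            self.pending = [False] * num_inputs   # one request line per device

        def request(self, line):
            self.pending[line] = True

        def irq(self):
            # The aggregated interrupt line seen by the CPU.
            return any(self.pending)

        def acknowledge(self):
            # On interrupt acknowledge, return the highest-priority pending line
            # so the CPU can select the right handler.
            for line, pending in enumerate(self.pending):
                if pending:
                    self.pending[line] = False
                    return line
            return None

    pic = InterruptController()
    pic.request(5)
    pic.request(2)
    if pic.irq():                       # CPU samples the interrupt line
        print(pic.acknowledge())        # 2: higher-priority request serviced first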
Memory
Instruction And Data Caching
Memory accesses used to fetch instructions and their operands and results exhibit locality of
reference. A processor can keep a copy of the recently accessed instructions and data in a
cache that can be accessed faster than other memory.
A cache is limited in size by factors such as access speed and chip area. A cache may be
fully associative or combine a limited degree of associativity with hashing. Multiple levels of
caches with different sizes and speeds are typically present to accommodate various
memory access patterns.
The copy of the instructions and data in a cache must be coherent with the original. This becomes a problem when devices other than the processor access memory. A cache coherency protocol solves the problem by snooping on the processor bus and either invalidating or updating the cached copy when an access by another device is observed.
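To make the caching and invalidation ideas concrete, the sketch below implements a tiny direct-mapped cache in front of a slower memory, plus the invalidate half of a snooping scheme. The cache size, the mapping, and the method names are arbitrary choices for illustration, not a description of any real design.

    # Hypothetical direct-mapped cache with snoop-based invalidation.
    class DirectMappedCache:
        def __init__(self, memory, num_lines=8):
            self.memory = memory
            self.num_lines = num_lines
            self.tags = [None] * num_lines
            self.data = [0] * num_lines

        def read(self, address):
            index, tag = address % self.num_lines, address // self.num_lines
            if self.tags[index] == tag:           # hit: serve from the cache
                return self.data[index]
            self.tags[index] = tag                # miss: fill the line from memory
            self.data[index] = self.memory[address]
            return self.data[index]

        def snoop_invalidate(self, address):
            # Called when another bus master is observed writing this address.
            index, tag = address % self.num_lines, address // self.num_lines
            if self.tags[index] == tag:
                self.tags[index] = None

    memory = list(range(64))
    cache = DirectMappedCache(memory)
    print(cache.read(3))          # 3: miss, line filled from memory
    memory[3] = 99                # e.g. a DMA write behind the cache
    cache.snoop_invalidate(3)     # coherence protocol invalidates the stale copy
    print(cache.read(3))          # 99: re-read from memory after invalidation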
Thank you