
IFT 212

COMPUTER ARCHITECTURE AND ORGANIZATION

LECTURE NOTE

1.0 COMPUTER ARCHITECTURE AND ORGANIZATION


Computer architecture can be considered as a catalogue of tools or operational
attributes (components) that are visible to the user. It deals with details like physical
memory, the CPU, the processor's ISA (Instruction Set Architecture), the number of
bits used to represent data types, input-output mechanisms, and techniques for
addressing memory.

Computer Organization is the realization of what is specified by the computer
architecture. It deals with how operational attributes are linked together to meet the
requirements specified by the computer architecture. Some organizational attributes
are hardware details, control signals, and peripherals.

Differences between Computer Architecture and Organization


1. Computer Architecture is concerned with the structure and behavior of a
computer system as seen by the user; Computer Organization is concerned with
the way hardware components are connected together to form a computer system.

2. Architecture acts as the interface between hardware and software; Organization
deals with the components of a connection in a system.

3. Architecture helps us to understand the functionalities of a system; Computer
Organization tells us how exactly all the units in the system are arranged and
interconnected.

4. A programmer can view architecture in terms of instructions, addressing modes
and registers; Organization expresses the realization of the architecture.

5. While designing a computer system, architecture is considered first; an
organization is done on the basis of the architecture.

6. Architecture deals with high-level design issues; Computer Organization deals
with low-level design issues.

7. Architecture involves logic (instruction sets, addressing modes, data types,
cache optimization); Organization involves physical components (circuit design,
adders, signals, peripherals).

1. Instruction set architecture (ISA)


Instruction set architecture (ISA) is a bridge between the software and hardware of
a computer. It functions as a programmer's viewpoint on a machine. Computers can
only comprehend binary language (0s and 1s), whereas humans work in high-level
languages (if-else, while, conditions, and the like). Consequently, the ISA plays a
crucial role in user-computer communication: it defines the binary instructions into
which high-level language is translated.

In addition, ISA outlines the architecture of a computer in terms of the fundamental
operations it must support. It is not concerned with implementation-specific
computer features. Instruction set architecture dictates that the computer must
support:

 Arithmetic/logic instructions: These instructions perform mathematical or
logical operations on one or more operands (data inputs).
 Data transfer instructions: These instructions move data from memory
into the processor registers, or vice versa.
 Branch and jump instructions: These instructions interrupt the logical
sequence of instructions and jump to other destinations.

Another definition of computer architecture is built on four basic viewpoints: the
structure, the organization, the implementation, and the performance.
In this definition,
 the structure defines the interconnection of various hardware components,
 the organization defines the dynamic interplay and management of the various
components,
 the implementation defines the detailed design of hardware components, and
 the performance specifies the behavior of the computer system.

1.1 GENERATIONS OF A COMPUTER


A generation, in computer terminology, is a change in the technology on which a
computer is built. Initially, the term was used to distinguish between varying
hardware technologies, but nowadays a generation includes both hardware and
software, which together make up an entire computer system.
There are five computer generations known to date. Each generation is discussed
in detail below along with its time period and characteristics. The dates given
against each generation are approximate but generally accepted.

Following are the main five generations of computers:

S/N  Generation & Description
1    First Generation (1946-1959): vacuum tube based.
2    Second Generation (1959-1965): transistor based.
3    Third Generation (1965-1971): integrated circuit based.
4    Fourth Generation (1971-1980): VLSI microprocessor based.
5    Fifth Generation (1980 onwards): ULSI microprocessor based.

First generation
The period of the first generation was 1946-1959. The computers of the first
generation used vacuum tubes as the basic components for memory and for the
circuitry of the CPU (Central Processing Unit). These tubes, like electric bulbs,
produced a lot of heat and were prone to frequent burn-outs; the installations were
therefore very expensive and could be afforded only by very large organizations. In
this generation, mainly batch processing operating systems were used. Punched
cards, paper tape, and magnetic tape were used as input and output devices. The
computers in this generation used machine code as the programming language.

The main features of first generation are:


- Vacuum tube technology
- Unreliable
- Supported machine language only
- Huge size
- Very costly
- Needed air conditioning (A.C.)
- Generated a lot of heat
- Non-portable
- Slow input and output devices
- Consumed a lot of electricity

Some computers of this generation were:


- ENIAC
- EDVAC
- UNIVAC
- IBM-701
- IBM-650

Second generation
The period of the second generation was 1959-1965. In this generation transistors
were used; they were cheaper, consumed less power, were more compact in size,
and were more reliable and faster than the first generation machines made of
vacuum tubes. In this generation, magnetic cores were used as primary memory,
and magnetic tape and magnetic disks as secondary storage devices. Assembly
language and high-level programming languages like FORTRAN and COBOL
were used. The computers used batch processing and multiprogramming
operating systems.

The main features of second generation are:


 Use of transistors
 Reliable in comparison to first generation computers
 Smaller size as compared to first generation computers
 Generated less heat as compared to first generation computers
 Consumed less electricity as compared to first generation computers
 Faster than first generation computers
 Still very costly
 A.C. needed
 Supported machine and assembly languages

Some computers of this generation were:


 IBM 1620
 IBM 7094
 CDC 1604
 CDC 3600
 UNIVAC 1108
Third generation

The period of the third generation was 1965-1971. The computers of the third
generation used integrated circuits (ICs) in place of transistors. A single IC has
many transistors, resistors and capacitors along with the associated circuitry.
The IC was invented by Jack Kilby. This development made computers smaller
in size, more reliable and more efficient. In this generation remote processing,
time-sharing and multiprogramming operating systems were used. High-level
languages (FORTRAN-II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68,
etc.) were used during this generation.

The main features of third generation are:


 IC used
 More reliable in comparison to previous two generations
 Smaller size
 Generated less heat
 Faster
 Lesser maintenance
 Still costly
 A.C needed
 Consumed lesser electricity
 Supported high-level language

Some computers of this generation were:


 IBM-360 series
 Honeywell-6000 series
 PDP (Programmed Data Processor)
 IBM-370/168
 TDC-316

Fourth generation

The period of the fourth generation was 1971-1980. The computers of the fourth
generation used Very Large Scale Integrated (VLSI) circuits. VLSI circuits, having
about 5000 transistors and other circuit elements with their associated circuits on a
single chip, made it possible to have the microcomputers of the fourth generation.
Fourth generation computers became more powerful, compact, reliable, and
affordable. As a result, they gave rise to the personal computer (PC) revolution. In
this generation time-sharing, real-time, network and distributed operating systems
were used. High-level languages like C, C++, DBASE, etc., were used in this
generation.

The main features of fourth generation are:


 VLSI technology used
 Very cheap
 Portable and reliable
 Use of PC's
 Very small size
 Pipeline processing
 No A.C. needed
 Concept of internet was introduced
 Great developments in the fields of networks
 Computers became easily available

Some computers of this generation were:


 DEC 10
 STAR 1000
 PDP 11
 CRAY-1(Super Computer)
 CRAY-X-MP(Super Computer)

Fifth generation
The period of the fifth generation is 1980 till date. In the fifth generation, VLSI
technology became ULSI (Ultra Large Scale Integration) technology, resulting in
the production of microprocessor chips having ten million electronic components.
This generation is based on parallel processing hardware and AI (Artificial
Intelligence) software. AI is an emerging branch of computer science concerned
with ways and means of making computers think like human beings. High-level
languages like C, C++, Java, .Net, etc., are used in this generation.
AI includes:
 Robotics
 Neural Networks
 Game Playing
 Development of expert systems to make decisions in real life situations.
 Natural language understanding and generation.

The main features of fifth generation are:


 ULSI technology
 Development of true artificial intelligence
 Development of Natural language processing
 Advancement in Parallel Processing
 Advancement in Superconductor technology
 More user friendly interfaces with multimedia features
 Availability of very powerful and compact computers at cheaper rates

Some computer types of this generation are:


Desktop, Laptop, NoteBook, UltraBook, ChromeBook
1.2 CPU INTERNAL ORGANIZATION
Inside every computer is a central processing unit (CPU), and inside every CPU are
small components that carry out the instructions of every program that runs. These
components include AND gates, OR gates, NOT gates, a clock, multiplexers, an
ALU (arithmetic logic unit), etc. Buses perform data transfer within the CPU and
across the computer. As shown in Fig. 1, the CPU is organized around a Program
Counter (PC), Instruction Register (IR), Instruction Decoder, Control Unit,
Arithmetic Logic Unit (ALU), registers, and buses.

Components of the CPU

1. Registers: These are temporary storage locations within the CPU used to hold
data and instructions during execution.
a. Program Counter (PC): This holds the address of the next instruction to
be fetched from Memory.
b. Instruction Register (IR): This holds the instructions that are currently
being executed
c. Instruction Decoder: It decodes and interprets the contents of the IR,
and splits a whole instruction into fields for the Control Unit to
interpret.
2. Control Unit: It co-ordinates all activities within the CPU, has connections to
all parts of the CPU, and includes a sophisticated timing circuit.
3. Arithmetic and Logic Unit (ALU): It carries out arithmetic and logical
operations.
 Arithmetic operations like addition, subtraction, multiplication,
division and
 Logical operations like AND, OR, NOT operations.
Within ALU, input registers hold the input operands and output register holds
the result of an ALU operation. Once completing ALU operation, the result is
copied from the ALU output register to its final destination.
4. Buses: The internal bus of a CPU includes three buses, which are the Address
bus, the data bus and the control bus.
a. Data Bus:
Data bus is the most common type of bus. It is used to transfer data
between different components of computer. The number of lines in data
bus affects the speed of data transfer between different components.
The data bus consists of 8, 16, 32, or 64 lines. A 64-line data bus can
transfer 64 bits of data at one time. The data bus lines are bi-directional:
the CPU can read data from memory using these lines, and it can also
write data to memory locations using them.
b. Address Bus:
Many components are connected to one another through buses. Each
component is assigned a unique ID. This ID is called the address of
that component. If a component wants to communicate with another
component, it uses address bus to specify the address of that
component. The address bus is a unidirectional bus. It can carry
information only in one direction. It carries address of memory
location from microprocessor to the main memory.
c. Control Bus:
Control bus is used to transmit commands or control signals from one
component to another. Suppose the CPU wants to read data from main
memory; it will issue the request over the control bus. The control bus is
also used to transmit control signals such as ACK (acknowledgement)
signals. A control signal contains the following:
 Timing information: It specifies the time for which a device can
use data and address bus.
 Command Signal: It specifies the type of operation to be
performed. Suppose that CPU gives a command to the main
memory to write data, the memory sends acknowledgement
signal to CPU after writing the data successfully. CPU receives
the signal and then moves to perform some other action.
5. Memory (will be discussed later)

Figure 1: CPU organization


Types of CPU organization
There are three main types of CPU organization in computer architecture: Single
Accumulator Organization, General Register Organization, and Stack
Organization. Each type utilizes a different approach for storing and manipulating
data within the CPU.

1. Single Accumulator Organization


This organization uses a single register, called the accumulator, to store the data
being manipulated. Instructions often assume that the accumulator is used as the
primary operand for arithmetic and logical operations. It's a simpler design but can
be less flexible for complex operations.

2. General Register Organization

This organization provides multiple general-purpose registers. These registers can
be used to store different types of data, intermediate results, and even pointers. It
offers more flexibility and allows for more complex and efficient operations.

3. Stack Organization

This organization uses a stack, a data structure where the last element added is the
first element retrieved (LIFO). The stack is primarily used for managing data during
subroutine calls, local variables, and expression evaluation. It can be efficient for
certain types of operations and simplifies some programming tasks.

1.3 MEMORY
Memory unit enables us to store data inside the computer. The memory can be
referred to as the storage area in which running programs are kept, and it also
contains data needed by the running programs. It enables a processor to access
running applications and services that are temporarily stored in a specific memory
location.

The memory is divided into a large number of small parts called cells. Each location
or cell has a unique address, which varies from zero to memory size minus one. For
example, if the computer has 64K words, then the memory unit has 64 * 1024 =
65536 memory locations, whose addresses run from 0 to 65535. The memory unit
can be categorized into three kinds, namely cache, primary memory and secondary
memory.
1.3.1 Primary Memory
Primary storage is a volatile memory that holds only the data and instructions on
which the computer is currently working. It contains a large number of
semiconductor storage cells, each capable of storing a bit of information. The data
and instructions required to be processed reside in the main memory. Primary
memory is divided into two: Random Access Memory (RAM) and Read Only
Memory (ROM).
Characteristics
 It has a limited capacity
 It is generally made up of semiconductor devices
 It is a volatile form of memory, meaning that when the computer is shut down,
anything contained in RAM is lost.
 These memories are not as fast as the cache.

1. Random Access Memory (RAM)


RAM is the internal memory of the CPU for storing data, program and program
result. It is a read/write memory which stores data when the machine is working and
as soon as the machine is switched off, data is erased. Access time in RAM is
independent of the address, that is, each storage location inside the memory is as
easy to reach as other locations and takes the same amount of time. RAM is of two
types: Static RAM (SRAM) and dynamic RAM (DRAM).
o Static RAM (SRAM)
The word static indicates that the memory retains its contents as long as power is
being supplied; data is still lost when power is removed, because SRAM is volatile.
SRAM chips use a matrix of six transistors per cell and no capacitors. Since the
transistors do not leak charge, SRAM need not be refreshed on a regular basis. The
six-transistor cell takes extra space in the matrix, so SRAM requires more chips than
DRAM for the same amount of storage, making the manufacturing costs higher.
SRAM is thus used as cache memory and has very fast access.

o Dynamic RAM (DRAM)


DRAM, unlike SRAM, must be continually refreshed in order to maintain the data.
This is done by placing the memory on a refresh circuit that rewrites the data several
hundred times per second. DRAM is used for most system memory as it is cheap
and small. All DRAMs are made up of memory cells, which are composed of one
capacitor and one transistor.

2. Read Only Memory (ROM)


ROM is the type of memory from which data can only be read but cannot be written.
This type of memory is non-volatile and the information is stored permanently
during manufacture. A ROM stores instructions that are required to start a computer.
This operation is referred to as bootstrap. ROM chips are not only used in the
computer but also in other electronic items like washing machine and microwave
oven. The various types of ROMs and their characteristics are as follows.
o MROM (Masked ROM)

The very first ROMs were hard-wired devices that contained a pre-programmed set
of data or instructions. These ROMs are known as masked ROMs, and they are
inexpensive.

o PROM (Programmable Read Only Memory)

PROM is read-only memory that can be modified only once by a user. The user buys
a blank PROM and enters the desired contents using a PROM programmer. Inside
the PROM chip, there are small fuses which are burnt open during programming. It
can be programmed only once and is not erasable.

o EPROM (Erasable and Programmable Read Only Memory)


EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40
minutes; an EPROM eraser usually achieves this function. During programming,
an electrical charge is trapped in an insulated gate region. The charge is retained for
more than 10 years because it has no leakage path. To erase this charge, ultra-violet
light is passed through a quartz crystal window (lid); the exposure dissipates the
charge. During normal use, the quartz lid is sealed with a sticker.

o EEPROM (Electrically Erasable and Programmable Read Only Memory)
EEPROM is programmed and erased electrically; it can be erased and
reprogrammed about ten thousand times. Both erasing and programming take about
4 to 10 ms (milliseconds), and any location can be selectively erased and
programmed. Hence, the process of reprogramming is flexible but slow. Figure 3
shows the different types of memory.

Figure 3: Classification of memory


1.3.2 Secondary Memory

This type of memory is also known as external memory. It is a non-volatile form of
memory and is slower than the main memory. Secondary memories are used for
storing data/information permanently and on a long-term basis. The CPU does not
access these memories directly; instead, they are accessed via input-output routines.
The contents of secondary memories are first transferred to the main memory before
the CPU can access them. The most common examples of secondary memory are
magnetic disks, magnetic tapes, and optical disks.

1.3.3 Cache memory


Cache memory (CPU memory) is a high-speed SRAM that the processor can access
more quickly than it can access regular RAM. This memory is typically integrated
directly into the CPU chip or placed on a separate chip that has a separate bus
interconnect with the CPU. It is a very high-speed semiconductor memory from
which data can be fetched very fast, and it speeds up the CPU by holding those parts
of data and program which are most frequently used by the CPU.
Characteristics:
 Cache memory is faster than main memory.
 It consumes less access time as compared to main memory.
 It stores programs that can be executed within a short period of time.
 It stores data for temporary use.
 It acts as a buffer between the CPU and the main memory.

Terminologies in Cache
Split cache: It has separate data cache and a separate instruction cache. The two
caches work in parallel, one transferring data and the other transferring instructions.
A dual or unified cache: The data and the instructions are stored in the same cache.
A combined cache with a total size equal to the sum of the two split caches will
usually have a better hit rate.
Mapping Function: A mapping function specifies the correspondence between the
main memory blocks and those in the cache.

Cache Replacement: When the cache is full and a memory word that is not in the
cache is referenced, the cache control hardware must decide which block should be
removed to create space for the new block that contains the referenced word. The
collection of rules for making this decision is the replacement algorithm.

Cache performance
When the processor needs to read or write a location in main memory, it first checks
for a corresponding entry in the cache. If the processor finds that the memory
location is in the cache, a cache hit is said to have occurred. If the processor does
not find the memory location in the cache, a cache miss has occurred.

When a cache miss occurs, the cache performs a replacement by allocating a new
entry and copying in data from main memory. The performance of cache memory
is frequently measured in terms of a quantity called the hit ratio.
Cache performance can be enhanced by:

 using a larger cache block size,
 higher associativity,
 reducing the miss rate,
 reducing the miss penalty, and
 reducing the time to hit in the cache.

CPU execution time of a given task is defined as the time spent by the system
executing that task, including the time spent executing run-time or system
services.

1.4 Memory Hierarchy Design


Memory is one of the important units in any computer system. It serves as a storage
for all the processed and unprocessed data or programs in a computer system.
However, because computer users often store large amounts of files in their
computers, the use of a single memory device has become inefficient and
unsatisfactory. One memory alone cannot contain all the files needed by users, and
when a single memory is made large, it decreases the speed of the processor and
the general performance of the computer system.

Therefore, to curb these challenges, the memory unit must be divided into smaller
memories for more storage, speedy program execution and enhanced processor
performance. Recently accessed files or programs must be placed in the fastest
memory, since memory with large capacity is cheap but slow, while memory with
smaller capacity is fast but costly. The organization of smaller memories that hold
the recently accessed files or programs closer to the CPU is termed the memory
hierarchy. These memories become successively larger as they move away from
the CPU. The hierarchy encompasses all the storage devices used in a computer
system, ranging from the cache memory, which is smaller in size but faster in speed,
to the relatively large auxiliary memory, which is larger in size but slower in speed.
The smaller the memory, the costlier it becomes.
The Memory Hierarchy Design is classified into 2 major types:
1. Internal Memory
It is also known as Primary memory. It is directly accessible by the processor. It
includes three basic levels, as given below
 Zero level (CPU Registers)
 level 1 (Cache Memory)
 Level 2 (Main memory)
2. External Memory
It is also known as Secondary memory. It is accessible by the processor via the I/O
module. It includes level 3 (magnetic disk) and level 4 (optical disk and magnetic
tape). The memory hierarchy is illustrated with the block diagram shown below.

Figure 2: Memory Hierarchy Design


The Computer memory hierarchy looks like a pyramid structure which is used to
describe the differences among memory types. It separates the computer storage
based on hierarchy.

In the memory hierarchy, cost and speed decrease while capacity increases as we
move down the levels: the fastest memories are the smallest and the most expensive.
The levels are discussed below:

Level 0 − Registers
The registers are present inside the CPU and therefore have the least access time.
Registers are the most expensive and the smallest in size, generally measured in
kilobytes. They are used to store data/instructions currently in execution and are
implemented using flip-flops.

Level 1 – Cache

Cache memory is used to store the segments of a program that are frequently
accessed by the processor. It is expensive and small in size, generally in megabytes,
and is implemented using static RAM. It is very costly compared to the main
memory and the auxiliary memory.

Level 2 − Primary or Main Memory


It directly communicates with the CPU and with auxiliary memory devices through
an I/O processor. Main memory is less expensive than cache memory and larger in
size generally in Gigabytes. This memory is implemented by using dynamic RAM.
During program execution, the files that are not currently needed by the CPU are
often moved to the auxiliary storage devices in order to create space in the main
memory for the currently needed files to be stored. The main memory is made up of
Random Access Memory (RAM) and Read Only Memory (ROM).

Level 3 − Secondary storage


Secondary storage devices like Magnetic Disks are present at level 3. They are used
as backup storage. They are cheaper than both cache and main memory and larger
in size generally in a few TB. They are relatively slow in speed. They store programs
that are not currently needed by the CPU.

Level 4 − Tertiary storage


Tertiary storage devices like magnetic tapes and optical disks are present at level 4.
They are used to store removable files and are the cheapest and largest in size (1-20
TB).

1.5 Principle of Locality


Memory is accessed through unique, system-assigned addresses, and the accessing
of data from memory is based on the principle of locality.
The principle of locality, or locality of reference, is the tendency of a processor to
access the same set of memory locations repetitively over a short period of time.
There are three basic types of locality of reference:

Temporal locality: A resource that is referenced at one point in time is referenced
again soon afterwards.

Spatial locality: The likelihood of referencing a storage location is greater if a
storage location near it has been recently referenced.

Sequential locality: Storage is accessed sequentially, in ascending or descending
order.

Locality of reference is what gives rise to the memory hierarchy.
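
Spatial locality can be made concrete with a short sketch. The Python fragment
below (the matrix size is arbitrary, and the effect is far more pronounced in
languages with contiguous arrays such as C; Python lists only approximate the
layout) sums the same matrix in row-major and column-major order; the row-major
loop touches neighbouring elements in memory order and therefore exhibits better
spatial locality.

N = 500
matrix = [[1] * N for _ in range(N)]

def row_major(m):
    # Good spatial locality: consecutive accesses touch adjacent
    # elements of the same row.
    total = 0
    for i in range(N):
        for j in range(N):
            total += m[i][j]
    return total

def column_major(m):
    # Poor spatial locality: consecutive accesses jump between rows.
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total

print(row_major(matrix) == column_major(matrix))  # True: same result, different access pattern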

Terminologies in memory access


Block or line: The minimum unit of information that can be either present in or
absent from a level of the memory hierarchy.

Hit: If the requested data is found in the upper levels of the memory hierarchy, it is
called a hit.
Miss: If the requested data is not found in the upper levels of the memory hierarchy,
it is called a miss.

Hit rate or hit ratio: The fraction of memory accesses found in the upper level;
it is a performance metric. Hit Ratio = Hits / (Hits + Misses)

Miss rate: The fraction of memory accesses not found in the upper level
(1 - hit rate).

Hit Time: The time required for accessing a level of memory hierarchy, including
the time needed for finding whether the memory access is a hit or miss.

Miss penalty: The time required for fetching a block into a level of the memory
hierarchy from the lower level; this includes the time to access the block, transmit
it, insert it into the new level and pass it to the requestor.

Bandwidth: The data transfer rate of the memory.


Latency or access time: Memory latency is the length of time between the
memory’s receipt of a read request and its release of data corresponding with the
request.

Cycle time: It is the minimum time between requests to memory.
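
The terms above combine into the standard average memory access time (AMAT)
relation, AMAT = hit time + miss rate x miss penalty. The short Python sketch
below works through the hit-ratio formula from this section on invented counts and
timings (the numbers are illustrative, not from these notes):

# Hit ratio and average memory access time (AMAT); all values invented.
hits = 950           # accesses satisfied by the upper level (cache)
misses = 50          # accesses that had to go to the lower level
hit_time = 1         # cycles to access the cache
miss_penalty = 100   # extra cycles to fetch a block from the lower level

hit_ratio = hits / (hits + misses)          # Hit / (Hit + Miss) = 0.95
miss_rate = 1 - hit_ratio                   # 0.05
amat = hit_time + miss_rate * miss_penalty  # 1 + 0.05 * 100 = 6.0 cycles

print(f"hit ratio = {hit_ratio:.2f}, AMAT = {amat:.1f} cycles")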

1.6 Virtual Memory


Virtual memory is a memory management capability of an operating system that
uses hardware and software to allow a computer to compensate for physical
memory shortages by temporarily transferring data from RAM to disk storage.

2.0 DATA REPRESENTATION


Registers are made up of flip-flops and flip-flops are two-state devices that can store
only 1’s and 0’s.
A computer does not understand human language. Any data, viz. letters, symbols,
pictures, audio, video, etc., fed to a computer must first be converted to machine
language, which is a sequence of zeros and ones.
Computers not only process numbers, letters and special symbols but also complex types of data
such as sound and pictures. However, these complex types of data take a lot of memory and
processor time when coded in binary form. This limitation necessitates the need to develop better
ways of handling long streams of binary digits. Higher number systems are used in computing to
reduce these streams of binary digits into manageable form. This helps to improve the processing
speed and optimize memory usage.

2.1 NUMBER SYSTEM REPRESENTATION


Number systems can be classified into four major categories: decimal, binary,
octal, and hexadecimal.

There are many methods or techniques which can be used to convert numbers from
one base to another. These include:
 Decimal to Other Base System
 Other Base System to Decimal
 Other Base System to Non-Decimal
 Shortcut method − Binary to Octal
 Shortcut method − Octal to Binary
 Shortcut method − Binary to Hexadecimal
 Shortcut method − Hexadecimal to Binary
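
As a quick sketch of these conversions (Python; the sample value 101101 is
arbitrary), the built-in int() and format() functions carry out the base changes listed
above, including the binary-to-octal and binary-to-hexadecimal shortcuts of
grouping bits in threes and fours:

# Base conversions with Python built-ins.
n = int("101101", 2)            # binary -> decimal: 45
print(n)                        # 45
print(bin(n), oct(n), hex(n))   # 0b101101 0o55 0x2d

# Shortcut methods: 101 101 -> 5 5 (octal), 0010 1101 -> 2 d (hex).
print(format(n, "o"))           # 55
print(format(n, "x"))           # 2d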

2.2 REPRESENTATION OF SIGNED BINARY NUMBERS


There are three types of representations for signed binary numbers
● Sign-Magnitude form
● 1’s complement form
● 2’s complement form

a. Sign-Magnitude
The Most Significant Bit (MSB) of signed binary numbers is used to indicate the sign of the
numbers. Hence, it is also called a sign bit. In sign-magnitude, the MSB is used for representing
sign of the number and the remaining bits represent the magnitude of the number. The positive
sign is represented by placing ‘0’ in the sign bit. Similarly, the negative sign is represented by
placing ‘1’ in the sign bit.
If the signed binary number contains 'N' bits, then only N-1 bits represent the
magnitude of the number, since one bit (the MSB) is reserved for representing the
sign. An 8-bit representation therefore has the following layout:

[ sign bit | 7-bit magnitude ]

Example
Consider the negative decimal number -108. The magnitude of this number is 108,
and the unsigned binary representation of 108 is 1101100, which has 7 bits. All
these bits represent the magnitude.
Since the given number is negative, the sign bit is 1, placed on the left-most side of
the magnitude:

-108 (decimal) = 11101100 (sign-magnitude)

Therefore, the sign-magnitude representation of -108 is 11101100.

Signed decimal    Sign-magnitude
+6                0110
-6                1110
+0                0000
-0                1000
+7                0111
-7                1111

The above are examples of sign-magnitude representation. Note that a 4-bit
representation is used.
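
The encoding can be sketched in a few lines of Python (encode_sm is a
hypothetical helper name, not from the notes):

# N-bit sign-magnitude encoding: MSB is the sign, remaining N-1 bits
# hold the magnitude.
def encode_sm(value, bits=8):
    sign = "1" if value < 0 else "0"
    magnitude = format(abs(value), f"0{bits - 1}b")
    if len(magnitude) > bits - 1:
        raise ValueError("magnitude does not fit in the given width")
    return sign + magnitude

print(encode_sm(-108))    # 11101100, matching the worked example
print(encode_sm(6, 4))    # 0110
print(encode_sm(-6, 4))   # 1110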

Complement Arithmetic
Complements are used in the digital computers in order to simplify the subtraction
operation and for the logical manipulations.
Binary system complements
As the binary system has base r = 2, the two types of complements for the binary
system are the 2's complement and the 1's complement.

b. 1's complement
The 1's complement of a number is found by changing all 1's to 0's and all 0's to 1's.
This is called taking the complement, or the 1's complement. For example, the 1's
complement of 101101 is 010010.
c. 2's complement
The 2's complement of a binary number is obtained by adding 1 to the Least
Significant Bit (LSB) of the 1's complement of the number:
2's complement = 1's complement + 1
For example, the 2's complement of 101101 is 010010 + 1 = 010011.
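
Both complements can be sketched in a few lines of Python on fixed-width bit
strings (the sample value is the one used above):

# 1's complement: flip every bit. 2's complement: 1's complement + 1,
# kept to the same width.
def ones_complement(bits):
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return format(value % (1 << width), f"0{width}b")

print(ones_complement("101101"))  # 010010
print(twos_complement("101101"))  # 010011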
2.3 Binary Arithmetic
Binary arithmetic is an essential part of all digital computers and many other
digital systems.
Binary Addition
Binary addition is the key to binary subtraction, multiplication and division. There
are four rules of binary addition:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10

In the fourth case, the addition produces a sum of 10: a 0 is written in the given
column and a carry of 1 goes over to the next column.

Example (Addition): 1011 + 0011 = 1110
Binary Subtraction
Subtraction and borrow are the two words used very frequently in binary
subtraction. There are four rules of binary subtraction:

0 - 0 = 0
1 - 0 = 1
1 - 1 = 0
0 - 1 = 1 with a borrow of 1 from the next column

Example (Subtraction): 1011 - 0011 = 1000

Binary Multiplication
Binary multiplication is similar to decimal multiplication. It is simpler than decimal
multiplication because only 0s and 1s are involved. There are four rules of binary
multiplication:

0 x 0 = 0
0 x 1 = 0
1 x 0 = 0
1 x 1 = 1

Example (Multiplication): 1011 x 11 = 100001
Binary Division
Binary division is similar to decimal division and follows the long division
procedure.

Example (Division): 1011 ÷ 11 = 11 remainder 10
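
Since the worked examples above appeared as figures in the original notes, here is
a hedged Python sketch that reproduces the four operations on small sample values
by converting the bit strings to integers and back:

# Binary arithmetic via integer conversion; sample values are arbitrary.
a = int("1011", 2)     # 11
b = int("0011", 2)     # 3

print(bin(a + b))      # 0b1110   (addition)
print(bin(a - b))      # 0b1000   (subtraction)
print(bin(a * b))      # 0b100001 (multiplication)
print(bin(a // b))     # 0b11     (division: quotient)
print(bin(a % b))      # 0b10     (division: remainder)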

Subtraction using 1’s Complement


The steps involved are:
i. Write down the 1's complement of the subtrahend (the number being
subtracted).
ii. Add it to the minuend (the number you are subtracting from).
iii. If the result of the addition has a carry over, the carry is dropped and a 1 is
added to the result (end-around carry).
iv. If there is no carry over, take the 1's complement of the result of the addition
to obtain the final answer, which is negative.

Evaluate:
i. 110101 – 100101
Solution:
a. The 1's complement of 100101 is 011010.
b. Add this to the minuend: 110101 + 011010 = 001111 with a carry of 1.
c. Drop the carry and add 1 to get the final answer: 001111 + 1 = 010000.
d. The answer is 010000.

ii. 101011 – 111001


Solution:
a. The 1's complement of 111001 is 000110.
b. Add it to the minuend, 101011.
c. The result is 110001, with no carry.
d. Take the 1's complement of the result in (c) to get the final answer.
e. The result is 001110 and it is negative, i.e. -1110.
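
The end-around-carry procedure can be expressed directly as a short Python sketch
(subtract_ones_complement is a hypothetical helper; the 6-bit width follows the
examples above):

def subtract_ones_complement(minuend, subtrahend):
    width = len(minuend)
    # Step i: 1's complement of the subtrahend.
    comp = "".join("1" if b == "0" else "0" for b in subtrahend)
    # Step ii: add it to the minuend.
    carry, total = divmod(int(minuend, 2) + int(comp, 2), 1 << width)
    if carry:
        # Step iii: drop the carry and add 1 (end-around carry).
        return format(total + 1, f"0{width}b")
    # Step iv: no carry -> complement the sum; the answer is negative.
    result = format(total, f"0{width}b")
    return "-" + "".join("1" if b == "0" else "0" for b in result)

print(subtract_ones_complement("110101", "100101"))  # 010000
print(subtract_ones_complement("101011", "111001"))  # -001110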

3.0 ASSEMBLY REGISTERS
Processor operations mostly involve processing data. This data can be stored in
memory and accessed from there. However, reading data from and storing data into
memory slows down the processor, as it involves complicated processes of sending
the data request across the control bus and into the memory storage unit and getting
the data through the same channel. To speed up the processor operations, the
processor includes some internal memory storage locations, called REGISTERS.
The registers store data elements for processing without having to access the
memory. A limited number of registers are built into the processor chip.

Processor Registers
There are ten 32-bit and six 16-bit processor registers in IA-32 architecture. The IA-
32, also known as x86 or i386, is a 32-bit instruction set architecture designed by
Intel. The registers are grouped into three categories:
1. General registers
2. Control registers, and
3. Segment registers

1. The general registers are further divided into the following groups -
a. Data registers
b. Pointer registers
c. Index registers

a. Data Registers
Four 32-bit data registers are used for arithmetic, logical, and other operations.
These 32-bit registers can be used in three ways-
 As complete 32-bit data registers: EAX, EBX, ECX, EDX.
 Lower halves of the 32-bit registers can be used as four 16-bit data
registers: AX, BX, CX, and DX.
 Lower and higher halves of the above-mentioned four 16-bit registers can
be used as eight 8-bit data registers: AH, AL, BH, BL, CH, CL, DH, and
DL.
Some of these data registers have specific use in arithmetic operations
AX is the primary accumulator; it is used in input/output and most arithmetic
instructions. For example, in multiplication operation, one operand is stored in EAX
or AX or AL register according to the size of the operand.
BX is known as the base register; used in indexed addressing.
CX is known as the count register, as the ECX, CX registers store the loop count in
iterative operations.
DX is known as the data register. It is also used in input/output operations. DX is
used together with AX for multiply and divide operations involving large values.

b. Pointer Registers
These are 32-bit EIP, ESP, and EBP registers and corresponding 16-bit right portions
IP, SP, and BP. There are three categories of pointer registers:
Instruction pointer (IP)
Stack pointer (SP)
Base pointer (BP)
IP - The 16-bit IP register stores the offset address of the next instruction to be
executed. IP in association with the CS register (as CS:IP) gives the complete
address of the current instruction in the code segment.
SP - The 16-bit SP register provides the offset value within the program stack. SP in
association with the SS register (SS:SP) refers to the current position of data or
addresses within the program stack.
BP - The 16-bit BP register mainly helps in referencing the parameter variables
passed to a subroutine. The address in SS register is combined with the offset in BP
to get the location of the parameter. BP can also be combined with DI and SI as base
register for special addressing.
c. Index Registers
The 32-bit index registers, ESI and EDI, and their 16-bit rightmost portions, SI and
DI, are used for indexed addressing and sometimes in addition and subtraction.
There are two sets of index pointers:
Source Index (SI) - It is used as the source index for string operations.
Destination Index (DI) - It is used as the destination index for string operations.

2. Control Registers
The 32-bit instruction pointer register and the 32-bit flags register combined are
considered as the control registers.
Many instructions involve comparisons and mathematical calculations which may
cause change of status of the flags. Some other conditional instructions may test the
value of these status flags to take the control flow to other locations.
The common flag bits are:
Overflow flag (OF) - It indicates the overflow of a high-order bit (leftmost bit) of
data after a signed arithmetic operation.
Direction flag (DF) - It determines left or right direction for moving or comparing
string data. When the DF value is 0, the string operation takes left-to-right direction
and when the value is set to 1, the string operation takes right-to-left direction.
Interrupt flag (IF) - It determines whether the external interrupts like keyboard entry,
etc., are to be ignored or processed. It disables the external interrupt when the value
is 0 and enables interrupts when set to 1.
Trap flag (TF) - It allows setting the operation of the processor in single-step mode.
The DEBUG program we used sets the trap flag, so we could step through the
execution one instruction at a time.
Sign flag (SF) - It shows the sign of the result of an arithmetic operation. This flag
is set according to the sign of the data item produced by the operation; the sign is
indicated by the high-order (leftmost) bit. A positive result clears SF to 0 and a
negative result sets it to 1.
Zero flag (ZF) - It indicates the result of an arithmetic or comparison operation. A
nonzero result clears the zero flag to 0, and a zero result sets it to 1.
Auxiliary Carry Flag (AF) - It contains the carry from bit 3 to bit 4 following an
arithmetic operation; used for specialized arithmetic. The AF is set when a 1-byte
arithmetic operation causes a carry from bit 3 into bit 4.
Parity Flag (PF) - It indicates the total number of 1-bits in the result obtained from
an arithmetic operation. An even number of 1-bits clears the parity flag to 0 and an
odd number of 1-bits sets the parity flag to 1.
Carry Flag (CF) - It contains the carry of 0 or 1 from the high-order (leftmost) bit
after an arithmetic operation. It also stores the last bit shifted out by a shift or rotate
operation.

3. Segment Registers
Segments are specific areas defined in a program for containing data, code and stack.
There are three main segments:
Code segment (CS): It contains all the instructions to be executed. A 16-bit CS
register stores the starting address of the code segment.
Data segment (DS): It contains data, constants and work areas. A 16-bit DS register
stores the starting address of the data segment.
Stack segment (SS): It contains data and return addresses of procedures or
subroutines. It is implemented as a 'stack' data structure. The SS register stores the
starting address of the stack.
The segment registers store the starting address of a segment. To get the exact
location of data or an instruction within a segment, an offset value (or displacement)
is required. To reference any memory location in a segment, the processor combines
the segment address in the segment register with the offset value of the location.
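
As an illustration of segment:offset addressing, in real-mode x86 (an assumption
here; these notes do not fix the processor mode) the physical address is formed by
shifting the segment value left 4 bits (multiplying by 16) and adding the offset. A
minimal Python sketch with invented register values:

# Real-mode x86 address calculation: physical = segment * 16 + offset.
def physical_address(segment, offset):
    return (segment << 4) + offset        # 20-bit real-mode address

cs, ip = 0x1234, 0x0010                   # hypothetical CS:IP values
print(hex(physical_address(cs, ip)))      # 0x12350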

4.0 INSTRUCTION FORMAT


Information involved in any operation performed by the CPU needs to be addressed.
In computer terminology, such information is called the operand. Therefore, any
instruction issued by the processor must carry at least two types of information.
These are
 the operation to be performed, encoded in what is called the op-code field,
and
 the address information of the operand on which the operation is to be
performed, encoded in what is called the address field.

An instruction is a command given to the computer to perform a specified operation
on given data. Instructions can be classified based on the number of operands as:
zero-address, one-address, two-address and three-address. It should be noted that
the convention "operation, source, destination" will be used to express any
instruction. In that convention,
 operation represents the operation to be performed, for example, add,
subtract, write, or read.
 The source field represents the source operand(s), which can be a constant, a
value stored in a register, or a value stored in the memory.
 The destination field represents the place where the result of the operation is
to be stored, for example, a register or a memory location.

Zero Address Instructions

These instructions do not specify any operands or addresses. Instead, they
operate on data implicitly defined by the instruction, typically the values on top of
a stack. For example, a zero-address ADD simply adds the top two elements of the
stack without naming any operands.
A stack-based computer does not use the address field in the instruction. To
evaluate an expression, it is first converted to reverse Polish notation, i.e. postfix
notation.

Expression: X = (A+B)*(C+D)
Postfix: X = AB+CD+*
TOP means top of stack
M[X] is any memory location

PUSH A TOP = A

PUSH B TOP = B

ADD TOP = A+B

PUSH C TOP = C

PUSH D TOP = D

ADD TOP = C+D

MUL TOP = (C+D)*(A+B)

POP X M[X] = TOP
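
The trace above can be reproduced with a tiny stack-machine sketch in Python (the
memory values A=2, B=3, C=4, D=5 are arbitrary):

# Evaluating X = (A+B)*(C+D) on a zero-address (stack) machine.
memory = {"A": 2, "B": 3, "C": 4, "D": 5}
stack = []

program = [("PUSH", "A"), ("PUSH", "B"), ("ADD", None),
           ("PUSH", "C"), ("PUSH", "D"), ("ADD", None),
           ("MUL", None), ("POP", "X")]

for op, arg in program:
    if op == "PUSH":
        stack.append(memory[arg])                 # TOP = M[arg]
    elif op == "ADD":
        stack.append(stack.pop() + stack.pop())   # TOP = sum of top two
    elif op == "MUL":
        stack.append(stack.pop() * stack.pop())   # TOP = product of top two
    elif op == "POP":
        memory[arg] = stack.pop()                 # M[X] = TOP

print(memory["X"])   # (2+3)*(4+5) = 45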

One Address Instructions


These instructions specify one operand or address, which typically refers to a
memory location or register. The instruction operates on the contents of that
operand, and the result may be stored in the same or a different location. For
example, a one-address instruction might load the contents of a memory location
into a register.
This uses an implied ACCUMULATOR register for data manipulation. One
operand is in the accumulator and the other is in the register or memory location.
Implied means that the CPU already knows that one operand is in the accumulator
so there is no need to specify it.

One Address Instruction

Expression: X = (A+B)*(C+D)
AC is accumulator
M[] is any memory location
M[T] is temporary location

LOAD A AC = M[A]

ADD B AC = AC + M[B]

STORE T M[T] = AC

LOAD C AC = M[C]

ADD D AC = AC + M[D]

MUL T AC = AC * M[T]

STORE X M[X] = AC

Two Address Instructions


These instructions specify two operands or addresses, which may be memory
locations or registers. The instruction operates on the contents of both operands,
and the result may be stored in the same or a different location. For example, a two-
address instruction might add the contents of two registers together and store the
result in one of the registers.
This format is common in commercial computers, since two addresses can be
specified in the instruction. Unlike one-address instructions, where the result is
stored in the accumulator, here the result can be stored in different locations, but
this requires more bits to represent the addresses.

Two Address Instruction

Here the destination address can also contain an operand.


Expression: X = (A+B)*(C+D)
R1, R2 are registers
M[] is any memory location

MOV R1, A R1 = M[A]

ADD R1, B R1 = R1 + M[B]

MOV R2, C R2 = M[C]

ADD R2, D R2 = R2 + M[D]

MUL R1, R2 R1 = R1 * R2

MOV X, R1 M[X] = R1

Three Address Instructions


These instructions specify three operands or addresses, which may be memory
locations or registers. The instruction operates on the contents of all three operands,
and the result may be stored in the same or a different location. For example, a
three-address instruction might multiply the contents of two registers together and
add the contents of a third register, storing the result in a fourth register.
This format has three address fields to specify registers or memory locations.
Programs are much shorter, but the number of bits per instruction increases. These
instructions make program creation easier, but this does not mean programs run
much faster: the instructions merely carry more information, and each micro-
operation (changing the contents of a register, loading an address onto the address
bus, etc.) still takes its own cycle.

Three Address Instruction

Expression: X = (A+B)*(C+D)
R1, R2 are registers
M[] is any memory location

ADD R1, A, B R1 = M[A] + M[B]

ADD R2, C, D R2 = M[C] + M[D]

MUL X, R1, R2 M[X] = R1 * R2

Advantages
Zero-address instructions
 They are simple and can be executed quickly since they do not require any
operand fetching or addressing. They also take up less memory space.
One-address instructions
 They allow for a wide range of addressing modes, making them more flexible
than zero-address instructions. They also require less memory space than two
or three-address instructions.
Two-address instructions
 They allow for more complex operations and can be more efficient than one-
address instructions since they allow for two operands to be processed in a
single instruction. They also allow for a wide range of addressing modes.
Three-address instructions
 They allow for even more complex operations and can be more efficient than
two-address instructions since they allow for three operands to be processed in
a single instruction. They also allow for a wide range of addressing modes.

Disadvantages
Zero-address instructions
 They can be limited in their functionality and do not allow for much flexibility
in terms of addressing modes or operand types.
One-address instructions
 They can be slower to execute since they require operand fetching and
addressing.
Two-address instructions
 They require more memory space than one-address instructions and can be
slower to execute since they require operand fetching and addressing.
Three-address instructions
 They require even more memory space than two-address instructions and can
be slower to execute since they require operand fetching and addressing.

Overall, the choice of instruction format depends on the specific requirements
of the computer architecture and the trade-offs between code size, execution
time, and flexibility.

5.0 ADDRESSING MODES


In computer architecture, addressing modes are different ways to specify the location
of an operand in an instruction. They determine how the CPU retrieves the data
needed to perform an instruction. Common addressing modes include immediate,
direct, indirect, register, register indirect, and indexed addressing, each with its own
characteristics and use cases.
1. Immediate Addressing:
 Definition: The operand (data) is directly included in the instruction.
 Example: MOV R1, 100 (Move the value 100 into register R1).
 Use: Used for constants, initial values, or when the operand needs to be
known at instruction time.

2. Direct Addressing:
 Definition: The instruction contains the actual memory address of the
operand.
 Example: MOV R1, [1000] (Move the contents of memory location 1000
into register R1).
 Use: Used for accessing data directly from memory when the address is
known.

3. Indirect Addressing:
 Definition: The instruction contains a memory address that holds the
actual address of the operand.
 Example: MOV R1, [1000] (memory location 1000 contains the address
of the operand, so fetching it requires an extra memory access; the
written form resembles direct addressing, and the instruction's mode
bits distinguish the two).
 Use: Used for accessing data through a table of addresses, often to
avoid hard-coding addresses.

4. Register Addressing:
 Definition: The instruction specifies a register as the operand.
 Example: MOV R1, R2 (Move the contents of register R2 into register
R1).
 Use: Used for fast data transfer within the CPU's registers.

5. Register Indirect Addressing:


 Definition: The instruction specifies a register that contains the address
of the operand in memory.
 Example: MOV R1, [R2] (Move the contents of the memory location
whose address is in register R2 into register R1).
 Use: Used for accessing data in memory where the address is calculated
based on a register value.

6. Indexed Addressing:
 Definition: The effective address of the operand is calculated by adding
the contents of an index register to a base address or constant offset.
 Example: MOV R1, [BASE+INDEX] (Move the contents of the memory
location whose address is the sum of the BASE register and the
INDEX value into register R1).
 Use: Used for accessing array elements, where the index register
supplies the offset into the array.

7. Relative Addressing:
 Definition: The effective address is calculated by adding the contents
of the program counter (PC) to a displacement value in the instruction.
 Example: JMP +10 (Jump to the instruction 10 bytes forward from the
current PC location).
 Use: Used for jumping to instructions within a limited range relative to
the current instruction's location.

8. Stack Addressing:
 Definition: Operands are accessed from a stack, which is a last-in, first-
out (LIFO) data structure.
 Example: PUSH R1 (Push the contents of register R1 onto the stack).
 Use: Used for function calls, subroutine calls, and managing data during
program execution.

9. Auto-Increment/Decrement Addressing:
 Definition: The register containing the address of the operand is
incremented or decremented before or after the operand is accessed.
 Example: MOV R1, [R2+] (Move the contents of the memory location
pointed to by register R2 into R1, then increment R2).
 Use: Used for accessing sequential elements in an array or table, often
used with data structures like stacks.
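
To tie the modes together, the hedged Python sketch below computes the operand
delivered by several of the modes above against a toy memory and register file (all
addresses and values are invented for illustration):

# Fetching an operand under different addressing modes.
memory = {1000: 42, 42: 7, 2000: 99}
registers = {"R2": 1000, "BASE": 1990, "INDEX": 10}

def operand(mode, spec):
    if mode == "immediate":            # operand is inside the instruction
        return spec
    if mode == "direct":               # instruction holds the address
        return memory[spec]
    if mode == "indirect":             # instruction holds the address of the address
        return memory[memory[spec]]
    if mode == "register":             # operand is in a register
        return registers[spec]
    if mode == "register_indirect":    # register holds the address
        return memory[registers[spec]]
    if mode == "indexed":              # base register + index forms the address
        base, index = spec
        return memory[registers[base] + registers[index]]
    raise ValueError(mode)

print(operand("immediate", 100))              # 100
print(operand("direct", 1000))                # 42
print(operand("indirect", 1000))              # 7
print(operand("register_indirect", "R2"))     # 42
print(operand("indexed", ("BASE", "INDEX")))  # 99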
