
CPE 102

INTRODUCTION TO COMPUTER ENGINEERING

HISTORICAL DEVELOPMENT OF MODERN COMPUTING AND THE


COMPUTER ENGINEERING PROFESSION

The evolution of modern computing and the computer engineering profession is a fascinating
journey that spans centuries. From ancient tools for calculation to the sophisticated digital
systems we use today, this development has profoundly impacted society. The key milestones
in computing history and the emergence of the computer engineering profession are outlined below.
Early Beginnings
Ancient Tools: Early civilizations developed tools for arithmetic, such as the abacus used in
Mesopotamia and China. These devices, though primitive, laid the groundwork for future
computational methods.
Mechanical Calculators: In the 17th century, inventors like Blaise Pascal and Gottfried
Wilhelm Leibniz created mechanical calculators. Pascal's Pascaline could add and subtract,
while Leibniz's Step Reckoner could perform multiplication and division.

The Birth of Modern Computing


Charles Babbage: Often called the "father of the computer," Babbage designed the Difference
Engine in the early 19th century, a machine intended to automate polynomial calculations. He
later conceptualized the Analytical Engine, a more advanced device that introduced the idea of
programmable computing.
Ada Lovelace: Recognized as the first computer programmer, Ada Lovelace worked with
Babbage and wrote algorithms for the Analytical Engine. Her notes included what is now
considered the first algorithm intended for a machine.

The Advent of Electronic Computing


Alan Turing: In the 1930s, Turing developed the concept of a theoretical computing machine,
now known as the Turing Machine. This abstract device could simulate the logic of any
computer algorithm, forming the foundation of computer science.
World War II: The war accelerated computing advancements. The British built the Colossus
to decrypt German codes, while Americans developed the ENIAC (Electronic Numerical
Integrator and Computer) to calculate artillery trajectories. ENIAC is often regarded as the first
general-purpose electronic computer.

The Rise of Commercial Computing


UNIVAC: In the 1950s, the UNIVAC I (Universal Automatic Computer) became the first
commercially available computer. It was used for business and government applications,
marking the beginning of the computer industry.
Transistors and Integrated Circuits: The invention of the transistor in 1947 revolutionized
computing by replacing bulky vacuum tubes. Later, integrated circuits (ICs) allowed for the
miniaturization of computers, leading to more powerful and compact machines.

The Personal Computer Revolution


Microprocessors: The development of microprocessors in the 1970s paved the way for
personal computers. Intel's 4004, released in 1971, was the first commercially available
microprocessor.
Apple and IBM: Apple’s Apple II (1977) and IBM’s Personal Computer (1981) brought
computing to the masses. These machines were affordable, accessible, and user-friendly,
igniting the personal computer revolution.

The Internet and the Digital Age


ARPANET: The precursor to the internet, ARPANET was developed in the late 1960s by the
U.S. Department of Defense. It introduced packet switching and laid the groundwork for global
networking.
World Wide Web: In 1989, Tim Berners-Lee invented the World Wide Web, making the
internet accessible to everyone. This development revolutionized communication, commerce,
and information sharing.

Evolution of the Computer Engineering Profession


Early Computer Engineers: The first computer engineers were often self-taught or had
backgrounds in electrical engineering, mathematics, or physics. They worked on building and
maintaining early computers.
Formal Education: By the 1960s and 1970s, universities began offering computer science and
engineering programs. These programs provided structured education in programming,
hardware design, and systems engineering.
Professional Organizations: Groups like the Association for Computing Machinery (ACM),
the Institute of Electrical and Electronics Engineers (IEEE), Nigeria Society of Engineers
(NSE) and Council for the Regulation of Engineering in Nigeria (COREN) played significant
roles in defining the profession, setting standards, and promoting research.
Modern Computer Engineering: Today, computer engineers work in diverse fields, including
software development, hardware design, artificial intelligence, cybersecurity, and more. They
are critical in driving technological innovation and addressing complex societal challenges.

Key Figures in Computing History


John von Neumann: Developed the architecture that underpins most modern computers,
known as the von Neumann architecture.
Grace Hopper: Pioneered computer programming and developed the first compiler, a program
that translates high-level code into machine language.
Bill Gates and Steve Jobs: Co-founders of Microsoft and Apple, respectively, they played
pivotal roles in popularizing personal computing and advancing software and hardware
technologies.

Roles and Responsibilities of the Computer Engineer


Computer engineers play a crucial role in shaping the technological landscape. They bridge the
gap between hardware and software, designing and developing systems that power everything
from personal devices to large-scale infrastructure.
Overview of Computer Engineering
Computer engineering is a discipline that combines elements of electrical engineering and
computer science. It focuses on the design, development, and maintenance of computer systems
and components.
Core Areas: Key areas include hardware engineering, software engineering, systems
architecture, and network engineering.
A. Designing and Developing Hardware
Computer engineers design circuits that form the backbone of computer hardware. This
includes microprocessors, memory devices, and other critical components. They create
embedded systems, which are specialized computing systems that perform dedicated functions
within larger systems. Examples include automotive control systems, medical devices, and
home appliances. Engineers develop prototypes to test new designs and ensure they meet
performance and reliability standards before mass production.
B. Software Development
Writing efficient and reliable code is a core responsibility. Computer engineers
often work with various programming languages, such as C, C++, Java, and Python, to
create software applications.
They design software architectures that define the structure and behavior of
software systems. This involves creating algorithms, data structures, and user
interfaces.
Engineers rigorously test software to identify and fix bugs. They use debugging
tools and techniques to ensure software operates correctly and efficiently.

C. Systems Integration
Computer engineers ensure that hardware and software components work seamlessly
together. This involves interfacing hardware with software applications and optimizing system
performance. Engineers configure and manage computer systems, including operating systems,
networks, and databases, to ensure they meet organizational needs.

D. Network Engineering
Designing and implementing networks that enable communication between computers
and other devices. This includes local area networks (LANs), wide area networks (WANs), and
wireless networks. Ensuring the security of data transmitted across networks. Computer
engineers develop and implement security protocols to protect against cyber threats.
Identifying and resolving network issues to maintain smooth and reliable network operations.

E. Cybersecurity
Conducting security assessments to identify vulnerabilities in systems and networks.
This includes penetration testing and risk analysis.
Creating and implementing security measures, such as firewalls, encryption, and
intrusion detection systems, to protect data and systems.
Responding to security breaches and cyber-attacks. Engineers develop strategies to
mitigate damage and prevent future incidents.

Project Management
Planning and Coordination: Computer engineers often lead or participate in project teams.
They plan and coordinate activities to ensure projects are completed on time and within budget.
Resource Management: Managing resources, including personnel, hardware, and software, to
achieve project goals.
Documentation: Creating detailed documentation for projects, including design
specifications, user manuals, and maintenance guides.

Research and Development


Innovating New Technologies: Engaging in research to develop new technologies and
improve existing ones. This includes exploring emerging fields such as artificial intelligence,
quantum computing, and the Internet of Things (IoT).
Prototyping and Testing: Developing prototypes to test new concepts and technologies. This
involves experimental design, data analysis, and iterative improvement.
Collaboration: Working with other engineers, scientists, and researchers to share knowledge
and drive innovation.

Professional Development and Ethics


Continuing Education: Staying current with technological advancements and industry trends
through ongoing education and professional development.
Ethical Responsibility: Adhering to ethical standards and practices. Computer engineers are
responsible for ensuring their work does not harm individuals or society and that it respects
privacy and intellectual property rights.
Mentorship and Leadership: Providing guidance and mentorship to junior engineers and
aspiring students. Experienced engineers often take on leadership roles, influencing the
direction of projects and organizations.

Impact on Society
Economic Growth: Computer engineers contribute to economic development by driving
technological innovation, creating new products, and improving productivity.
Quality of Life: Their work enhances the quality of life by developing technologies that
improve healthcare, education, communication, and entertainment.

Environmental Sustainability: Engineers design energy-efficient systems and develop
technologies that promote environmental sustainability, helping to address global challenges
such as climate change.

CAREER PATHS AND DEVELOPMENT
A. Public Sector
Roles and Opportunities
Government Agencies: Computer engineers can work in various government departments
such as defense, homeland security, public health, and transportation. They may develop secure
communication systems, manage IT infrastructure, or work on large-scale public projects.
Public Institutions: Working in public universities, research institutions, and hospitals.
Engineers can contribute to research projects, develop public health systems, or improve
educational technologies.
Non-Profit Organizations: Developing technologies to address social issues, improve
education in underserved areas, or enhance disaster response systems.
Skills and Development
Regulatory Knowledge: Understanding government regulations and compliance
requirements.
Public Policy: Knowledge of how technology impacts public policy and vice versa.
Project Management: Managing large-scale public projects with multiple stakeholders.
B. Private Sector
Roles and Opportunities
Tech Companies: Working for tech giants like Google, Apple, Microsoft, and Amazon in
roles such as software development, system architecture, network engineering, and
cybersecurity.
Startups: Developing innovative products and services in a fast-paced environment. Engineers
may work on emerging technologies such as AI, blockchain, and IoT.
Consulting Firms: Providing expert advice on IT infrastructure, cybersecurity, software
development, and digital transformation to various clients.
Skills and Development
Innovation: Staying current with the latest technological advancements and trends.
Business Acumen: Understanding business goals and aligning technical solutions to meet those
goals.
Agility: Adapting to rapid changes in technology and market demands.
C. Academic and Research
Roles and Opportunities
Professors and Lecturers: Teaching computer engineering courses at universities and
colleges. They also mentor students and guide their research projects.
Researchers: Conducting cutting-edge research in areas like artificial intelligence, quantum
computing, cybersecurity, and more. Researchers may work in academic institutions, research
labs, or industry R&D departments.
Academic Administration: Taking on leadership roles such as department head, dean, or
provost, and shaping the academic programs and policies.
Skills and Development
Teaching: Developing effective teaching methodologies and materials.
Research: Conducting rigorous research, publishing papers, and securing research funding.
Collaboration: Working with other researchers, institutions, and industry partners on joint
projects.
D. Industry
Roles and Opportunities
Healthcare: Developing medical software, electronic health records systems, telemedicine
solutions, and medical devices.
Finance: Working on financial technology (fintech) solutions, cybersecurity for financial
institutions, and high-frequency trading systems.
Telecommunications: Designing and managing communication networks, developing
protocols, and working on wireless technologies.
Automotive: Developing software for autonomous vehicles, infotainment systems, and
vehicle-to-everything (V2X) communication.
Entertainment: Creating video games, virtual reality (VR) experiences, and multimedia
systems.
Skills and Development
Domain Knowledge: Gaining expertise in the specific industry you work in, such as healthcare
regulations or financial compliance.
Interdisciplinary Skills: Combining knowledge from computer engineering with other fields
like biology, finance, or telecommunications.
Innovation: Developing new products and services that push the boundaries of the industry.

E. Professional Development
Certifications and Continuing Education

Certifications: Obtaining certifications such as Certified Information Systems Security
Professional (CISSP), Cisco Certified Network Associate (CCNA), or Microsoft Certified:
Azure Solutions Architect Expert.
Workshops and Conferences: Attending industry conferences, workshops, and seminars to
stay updated on the latest trends and technologies.
Advanced Degrees: Pursuing advanced degrees such as a Master’s or Ph.D. in computer
engineering or related fields.
Networking and Professional Organizations
Professional Organizations: Joining organizations such as the Institute of Electrical and
Electronics Engineers (IEEE), Association for Computing Machinery (ACM), COREN, NSE
and relevant industry-specific groups.
Networking: Building a professional network through conferences, online forums, and local
tech meetups.
Mentorship: Finding mentors in the field and becoming a mentor to junior engineers.

Overview of Computer Engineering Design


Computer engineering design involves creating and developing hardware and software systems
that are efficient, reliable, and meet specified requirements. It encompasses various stages,
including problem identification, requirements analysis, system design, implementation,
testing, and maintenance.
Design Process
Problem Identification: Understanding the problem to be solved, the context, and the end-
users' needs.
Requirements Analysis: Gathering and analyzing requirements through stakeholder
interviews, surveys, and research. Requirements are categorized as functional (what the system
should do) and non-functional (performance, security, usability, etc.).
System Design
Architectural Design: Defining the system's overall structure, including hardware and
software components and their interactions. Common architectures include client-server, peer-
to-peer, and microservices.
Detailed Design: Specifying the design of individual components, including algorithms, data
structures, interfaces, and communication protocols.
Modelling Tools: Using tools like UML (Unified Modelling Language) to create design
diagrams such as class diagrams, sequence diagrams, and state diagrams.
Hardware Design
Circuit Design: Designing electronic circuits using components like transistors, capacitors,
and resistors. Tools like SPICE (Simulation Program with Integrated Circuit Emphasis) are
used for simulation.
PCB Design: Designing printed circuit boards (PCBs) to physically connect electronic
components. Tools like Eagle and KiCad are commonly used.
Embedded Systems: Developing systems where software is integrated into hardware. This
includes selecting microcontrollers, designing firmware, and optimizing power consumption.
Software Design
Algorithm Design: Creating algorithms that solve specific problems. This includes choosing
appropriate data structures and ensuring efficiency.
Programming Languages: Selecting suitable programming languages based on the project
requirements. Common languages include C, C++, Java, Python, and assembly languages for
low-level programming.
Software Tools: Using integrated development environments (IDEs), version control systems
(e.g., Git), and continuous integration tools to streamline the development process.
Implementation
Coding: Writing code for the hardware and software components based on the detailed design
specifications.
Integration: Combining hardware and software components to create a functioning system.
This includes interfacing microcontrollers with sensors, actuators, and communication
modules.
Prototyping: Building prototypes to test and validate the design. Prototyping tools include
breadboards for hardware and simulation software for both hardware and software.
Testing and Verification
Unit Testing: Testing individual components to ensure they work as expected. This includes
writing test cases and using automated testing tools.
Integration Testing: Testing the interactions between integrated components to identify and
resolve issues.
System Testing: Conducting comprehensive tests on the complete system to verify that it meets
the requirements. This includes performance testing, security testing, and user acceptance
testing.
Debugging: Identifying and fixing bugs using debugging tools and techniques. This is an
iterative process that often involves multiple rounds of testing and correction.
Maintenance and Evolution
Maintenance: Ongoing support and updates to fix bugs, improve performance, and add new
features. Maintenance can be corrective, adaptive, perfective, or preventive.
Evolution: Adapting the system to changing requirements and technologies. This includes
upgrading hardware components, refactoring software, and incorporating new features.
Tools and Technologies
Computer-Aided Design (CAD) Tools: For designing and simulating hardware. Examples
include AutoCAD, SolidWorks, and OrCAD.
Software Development Tools: For writing, testing, and managing code. Examples include
Visual Studio, Eclipse, and PyCharm.
Hardware Prototyping Tools: For building and testing hardware prototypes. Examples
include Arduino, Raspberry Pi, and FPGA development boards.
Collaboration Tools: For managing projects and collaborating with team members. Examples
include Jira, Trello, Slack, and Confluence.

COMPUTER DEVICES/HARDWARE IN THE AGE OF SMARTNESS AND
INTERNET OF THINGS AND PEOPLE (IOTS AND P)
Understanding Smart Devices and IoTs and P
Smart Devices: Electronic devices connected to other devices or networks via different
protocols (such as Bluetooth, Wi-Fi, or cellular), capable of interacting with users and other
devices.
Internet of Things (IoT): A network of physical objects embedded with sensors, software, and
other technologies to connect and exchange data with other devices and systems over the
internet.
Internet of People (IoP): Extends IoT to include not just things but also people, focusing on
human interactions and the data generated from these interactions.
Key Concepts
Connectivity: The ability of devices to connect to the internet and communicate with each
other.
Interoperability: The capacity for different devices and systems to work together seamlessly.
Automation: The use of technology to perform tasks with minimal human intervention.
Data Analytics: The process of examining data to uncover patterns and insights, critical for
IoT applications.
Components of Smart Devices and IoT Hardware
Sensors
Types of Sensors: Temperature, humidity, motion, light, pressure, and more.
Function: Sensors collect data from the environment and send it to other devices or systems
for processing.
Microcontrollers and Microprocessors
Microcontrollers: Small computers on a single integrated circuit containing a processor,
memory, and programmable input/output peripherals.
Microprocessors: The central processing unit (CPU) of a computer, performing computations
and controlling devices.
Communication Modules
Wi-Fi: Allows devices to connect to local networks and the internet wirelessly.
Bluetooth: Short-range wireless communication technology for exchanging data over short
distances.
Zigbee: A specification for a suite of high-level communication protocols using low-power
digital radios.
Cellular: Uses mobile networks (like 4G, 5G) to provide connectivity over wide areas.
Power Management
Battery Technologies: Various types of batteries (e.g., lithium-ion, nickel-metal hydride) used
to power smart devices.
Energy Harvesting: Techniques for collecting energy from external sources (such as solar
power, thermal energy) to power devices.
Design and Architecture of IoT Systems
Edge Computing: Processing data closer to the data source rather than in a centralized data-
processing warehouse. This type of computing reduces latency, saves bandwidth, and enhances
data security.
Cloud Computing: Using remote servers hosted on the internet to store, manage, and process
data. It offers scalability, flexibility, and extensive storage and processing capabilities.
Network Topologies
Star Topology: All devices are connected to a central hub.
Mesh Topology: Devices are interconnected, and each node relays data for the network.
Hybrid Topology: Combines elements of both star and mesh topologies.

Applications of Smart Devices and IoTs and P


Smart Homes: These consist of devices such as smart thermostats, smart lighting, smart locks, and home security systems, which improve energy efficiency, convenience, and security.
Smart Cities: These consist of components such as smart traffic lights, waste management systems, public safety systems, and smart utilities, which enhance urban living, reduce environmental impact, and improve resource management.
Healthcare: This makes use of devices such as wearable health monitors, remote patient monitoring systems, and smart medical devices, giving better patient care, real-time health monitoring, and early detection of health issues.
Industrial IoT (IIoT): This makes use of components such as smart manufacturing equipment, predictive maintenance systems, and connected supply chains, which increase operational efficiency, reduce downtime, and enhance productivity.
Future Trends in Smart Devices and IoTs and P
Artificial Intelligence (AI) Integration: Enhancing smart devices with AI to enable more
intelligent and autonomous decision-making.
5G Connectivity: Providing faster, more reliable internet connections, enabling more
advanced IoT applications.
Advanced Sensors: Creating more sensitive and accurate sensors to enhance data collection
and improve system performance.
Blockchain: Using blockchain for secure and transparent data management in IoT systems.

IDENTIFICATION OF COMPUTER SOFTWARE AND HARDWARE


COMPONENTS AND OPERATIONAL RELATIONSHIPS
Understanding the various components of a computer system and how they interact is fundamental to comprehending how computers work. This section covers the identification of both hardware and software components and explains their operational relationships, including central processing units (CPUs), input/output (I/O) devices, operating systems (OS), and programming languages.
Hardware Components
Central Processing Unit (CPU): The CPU is the primary component of a computer that
performs most of the processing inside a computer.
Components:
➢ Control Unit (CU): Directs the operation of the processor.
➢ Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations.
➢ Registers: Small, fast storage locations inside the CPU.
Functions:
➢ Fetch: Retrieves instructions from memory.
➢ Decode: Interprets the instructions.
➢ Execute: Performs the specified operations.
➢ Store: Writes results back to memory.
Memory
Primary Memory (RAM): Volatile memory used to store data and instructions that are currently
in use.
Secondary Memory: Non-volatile storage for data and programs. Includes hard drives, SSDs,
CDs, and USB drives.
Input/Output (I/O) Devices
Input Devices: Devices that send data to the computer. E.g. Keyboard, mouse, scanner,
microphone.
Output Devices: Devices that receive data from the computer. E.g. Monitor, printer, speakers.
Storage Devices
Hard Disk Drives (HDD): Magnetic storage device.
Solid State Drives (SSD): Flash memory storage device.
Optical Drives: Devices that read/write data from optical disks (CDs, DVDs).
Motherboard
It is the main circuit board that houses the CPU, memory, and other essential components. It contains the chipset, which manages data flow between the CPU, memory, and peripheral devices, as well as slots and ports, which serve as connectors for expansion cards, memory modules, and peripherals.
Software Components
Operating System (OS): Software that manages computer hardware and software resources and provides common services for computer programs.
Functions:
➢ Manages the execution of multiple processes.
➢ Allocates and deallocates memory space as needed by programs.
➢ Organizes and controls access to data on storage devices.
➢ Manages communication with I/O devices.
➢ Provides a way for users to interact with the computer (e.g., GUI, command line).

System Software: Software designed to provide a platform for other software. Examples are firmware (embedded software in hardware devices) and the Basic Input/Output System or Unified Extensible Firmware Interface (BIOS/UEFI), which initializes and tests hardware at startup.
Application Software: Programs designed to perform specific tasks for users. Examples are word processors, web browsers, games, and databases.
Programming Languages: A programming language is a formal language comprising a set of instructions used to produce various kinds of output.
Types:
➢ Low-Level Languages: Machine code and assembly language.
➢ High-Level Languages: More abstract, closer to human language. Examples include
Python, Java, C++, and JavaScript.
Functions:
✓ Compilation: Translating high-level code into machine code before execution, so the whole program is executed at once.
✓ Interpretation: Translating and executing high-level code line by line at run time.
Operational Relationships
CPU and Memory: The CPU fetches instructions from memory, decodes them, executes them,
and stores the results back in memory.
Bus System: Data transfer between the CPU and memory occurs via the system bus, including
the data bus, address bus, and control bus.
CPU and I/O Devices: The CPU uses I/O ports and buses to communicate with input and output devices. I/O devices raise interrupts, the mechanism by which they signal the CPU to handle an event (e.g., data ready to be read).
Operating System and Hardware: The OS provides a layer of abstraction between hardware
and application software, allowing programs to interact with hardware in a standardized way.
OS uses device drivers to communicate with hardware devices.
Operating System and Application Software: OS allocates resources such as CPU time,
memory, and I/O devices to applications. OS provides an environment for applications to run,
handling execution, memory management, and I/O operations.
Programming Languages and OS: OS provides tools and environments for software
development, including compilers, interpreters, and integrated development environments
(IDEs). High-level programs use system calls provided by the OS to perform low-level
operations such as file handling, process management, and networking.

The Organization and Operation of a Basic Digital Computer


A digital computer is an electronic device that processes data using binary (0s and 1s) to
perform various tasks according to programmed instructions.
Key Components
1. Central Processing Unit (CPU)
2. Memory (RAM and ROM)
3. Input/Output (I/O) Devices
4. Storage Devices
5. Motherboard
1. Central Processing Unit (CPU)
Structure and Components
Control Unit (CU): Directs the operation of the processor by fetching, decoding, and executing
instructions.
Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations.
Registers: Small, fast storage locations within the CPU used to hold data and instructions
temporarily.
Operation Cycle
Fetch: Retrieves an instruction from memory.
Decode: Interprets the instruction.
Execute: Performs the operation specified by the instruction.
Store: Writes the result back to memory.
2. Memory
Types of Memory
(a) Primary Memory (RAM): Volatile memory used to store data and instructions that the
CPU needs immediately.
✓ Dynamic RAM (DRAM): Needs to be refreshed periodically.
✓ Static RAM (SRAM): Faster and more expensive than DRAM, does not need
to be refreshed.
(b) Read-Only Memory (ROM): Non-volatile memory used to store firmware and
bootstrap programs.
(c) Cache Memory: A small, fast type of volatile memory located close to the CPU to speed
up access to frequently used data.
Memory Hierarchy
➢ Registers: Fastest, smallest capacity.
➢ Cache: Very fast, small capacity.
➢ RAM: Fast, moderate capacity.
➢ Secondary Storage: Slower, large capacity.
3. Input/Output (I/O) Devices
Input Devices
➢ Keyboard: Converts keystrokes into electrical signals.
➢ Mouse: Translates motion and clicks into signals the computer can process.
➢ Scanner: Converts physical documents into digital format.
➢ Microphone: Captures sound and converts it into digital data.
Output Devices
➢ Monitor: Displays visual output from the computer.
➢ Printer: Produces physical copies of digital documents.
➢ Speakers: Output sound from the computer.
➢ Projector: Projects computer display onto a larger screen.
4. Storage Devices
Types of Storage
➢ Hard Disk Drive (HDD): Uses magnetic storage to store and retrieve digital
information.
➢ Solid State Drive (SSD): Uses flash memory to store data, faster than HDDs.
➢ Optical Discs: CDs, DVDs, Blu-ray discs that use laser technology to read and
write data.
➢ USB Flash Drives: Portable storage devices that use flash memory.
Storage Operations
➢ Read: Retrieve data from storage.
➢ Write: Save data to storage.
5. Motherboard
Components
✓ Chipset: Manages data flow between the CPU, memory, and peripherals.
✓ BIOS/UEFI: Firmware interface between the operating system and hardware.
✓ Expansion Slots: Allow additional cards (e.g., graphics, sound) to be connected.
✓ Ports and Connectors: Interface for connecting external devices.
System Bus
Types of Buses
✓ Data Bus: Transfers data between components.
✓ Address Bus: Carries the addresses of data (not the data itself).
✓ Control Bus: Carries control signals from the CPU to other components.
The Fetch-Decode-Execute Cycle
Detailed Steps
✓ Fetch: The CPU retrieves the next instruction from memory.
✓ Decode: The control unit interprets the instruction and determines the necessary
actions.
✓ Execute: The ALU performs the required operations.
✓ Store: The result of the operation is written back to a register or memory (a minimal simulation of this cycle is sketched below).
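The cycle above can be imitated with a few lines of Python. This is only an illustrative sketch: the instruction set (LOAD, ADD, STORE, HALT), the memory layout, and all names are invented for teaching purposes and do not correspond to any real processor.

# Toy fetch-decode-execute loop: each instruction is a tuple
# (operation, operand). "program" holds the instructions, "data" is a
# one-cell data memory, "acc" is an accumulator register, "pc" the program counter.
program = [("LOAD", 5), ("ADD", 3), ("STORE", 0), ("HALT", None)]
data = [0]
acc = 0
pc = 0

while True:
    op, arg = program[pc]      # FETCH the next instruction
    pc += 1
    if op == "LOAD":           # DECODE and EXECUTE
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "STORE":        # STORE the result back to memory
        data[arg] = acc
    elif op == "HALT":
        break

print(acc, data)               # prints: 8 [8]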
Interaction Between Components
Data Flow
✓ Data and instructions are fetched from RAM and stored in the CPU registers.
✓ The CPU processes the data using the ALU and CU.
✓ Processed data is either sent to an output device, written back to RAM, or stored
in secondary storage.
Control Flow
✓ The control unit directs the operations of the CPU and coordinates interactions
between the CPU, memory, and I/O devices.
✓ Interrupts signal the CPU to handle urgent tasks.
Software and Hardware Integration
Operating System (OS)
✓ Manages hardware resources and provides a user interface.
✓ Process management, memory management, file system management, device
management.
Drivers
✓ Interface between the OS and hardware devices, enabling communication and
functionality.

Application Software
✓ Performs specific tasks for the user, utilizing the hardware and OS.

REPRESENTING AND MANIPULATING INFORMATION IN BINARY FORM
Introduction
In computer science, binary representation is fundamental. Computers operate using the binary (base-2) system, which consists of only two digits: 0 and 1. Understanding how to represent and manipulate information in binary form is crucial for various applications, including data storage, processing, and communication.
Often, binary numbers are used to describe the contents of computer memory; at other
times, decimal and hexadecimal numbers are used. You must develop a certain fluency with
number formats, so you can quickly translate numbers from one format to another.
Each numbering format, or system, has a base, or maximum number of symbols that
can be assigned to a single digit. Table 1.1 shows the possible digits for the numbering systems
used most commonly in hardware and software manuals. In the last row of the table,
hexadecimal numbers use the digits 0 through 9 and continue with the letters A through F to
represent decimal values 10 through 15. It is quite common to use hexadecimal numbers when
showing the contents of computer memory and machine-level instructions.
Table 1.1: Binary, Octal, Decimal, and Hexadecimal Digits
  Binary:       0 1
  Octal:        0 1 2 3 4 5 6 7
  Decimal:      0 1 2 3 4 5 6 7 8 9
  Hexadecimal:  0 1 2 3 4 5 6 7 8 9 A B C D E F

1. Binary Integers
A computer stores instructions and data in memory as collections of electronic charges.
Representing these entities with numbers requires a system geared to the concepts of on and
off or true and false. Binary numbers are base 2 numbers, in which each binary digit (called a
bit) is either 0 or 1. Bits are numbered sequentially starting at zero on the right side and
increasing toward the left. The bit on the left is called the most significant bit (MSB), and the
bit on the right is the least significant bit (LSB). The MSB and LSB bit numbers of a 16-bit
binary number are shown in Figure 1.2:

Figure 1.2: Bit numbering of a 16-bit binary number; bit 15 (leftmost) is the MSB and bit 0 (rightmost) is the LSB.
Binary integers can be signed or unsigned. A signed integer is positive or negative. An unsigned
integer is by default positive. Zero is considered positive. When writing down large binary
numbers, many people like to insert a dot every 4 bits or 8 bits to make the numbers easier to
read. Examples are 1101.1110.0011.1000.0000 and 11001010.10101100.
(a) Unsigned Binary Integers

Starting with the LSB, each bit in an unsigned binary integer represents an increasing power
of 2. Table 1.2 contains an 8-bit binary number, showing how powers of two increase from
right to left:

Table 1.2: Decimal values of 2^0 through 2^15 (binary bit position values)
  2^0 = 1      2^4 = 16     2^8  = 256     2^12 = 4096
  2^1 = 2      2^5 = 32     2^9  = 512     2^13 = 8192
  2^2 = 4      2^6 = 64     2^10 = 1024    2^14 = 16384
  2^3 = 8      2^7 = 128    2^11 = 2048    2^15 = 32768

Translating Unsigned Binary Integers to Decimal


Weighted positional notation represents a convenient way to calculate the decimal value of an
unsigned binary integer having n digits:

dec = (Dn−1 × 2^(n−1)) + (Dn−2 × 2^(n−2)) + ... + (D1 × 2^1) + (D0 × 2^0)

D indicates a binary digit. For example, binary 00001001 is equal to 9. We calculate this value by leaving out terms equal to zero:

(1 × 2^3) + (1 × 2^0) = 8 + 1 = 9
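As an illustration, the weighted positional formula above can be expressed directly in Python. This is only a sketch; the function name bin_to_dec is arbitrary, and Python's built-in int(bits, 2) would do the same job.

# Convert an unsigned binary string to its decimal value using
# weighted positional notation: each bit Di contributes Di * 2**i.
def bin_to_dec(bits: str) -> int:
    total = 0
    for position, digit in enumerate(reversed(bits)):  # position 0 is the LSB
        total += int(digit) * (2 ** position)
    return total

print(bin_to_dec("00001001"))  # prints 9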

Translating Unsigned Decimal Integers to Binary


To translate an unsigned decimal integer into binary, repeatedly divide the integer by 2, saving
each remainder as a binary digit. The following table shows the steps required to translate
decimal 37 to binary. The remainder digits, starting from the top row, are the binary digits D0,
D1, D2, D3, D4, and D5:

  Division     Quotient   Remainder
  37 / 2       18         1   (D0)
  18 / 2       9          0   (D1)
  9 / 2        4          1   (D2)
  4 / 2        2          0   (D3)
  2 / 2        1          0   (D4)
  1 / 2        0          1   (D5)
We can just concatenate the binary bits from the remainder column of the table in reverse order
(D5, D4, . . .) to produce binary 100101. Because x86 computer storage always consists of
binary numbers whose lengths are multiples of 8, we fill the remaining two digit positions on
the left with zeros, producing 00100101.
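A minimal Python sketch of the repeated-division method described above (the helper name dec_to_bin is only illustrative):

# Convert an unsigned decimal integer to binary by repeatedly dividing
# by 2 and collecting the remainders (least significant bit first).
def dec_to_bin(value: int) -> str:
    if value == 0:
        return "0"
    bits = []
    while value > 0:
        bits.append(str(value % 2))   # remainder becomes the next bit
        value //= 2
    return "".join(reversed(bits))    # reverse: last remainder is the MSB

print(dec_to_bin(37))            # prints 100101
print(dec_to_bin(37).zfill(8))   # padded to 8 bits: 00100101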
Binary Addition
When adding two binary integers, proceed bit by bit, starting with the low-order pair of bits
(on the right) and add each subsequent pair of bits. There are four ways to add two binary digits,
as shown here:

  0 + 0 = 0     0 + 1 = 1     1 + 0 = 1     1 + 1 = 10
When adding 1 to 1, the result is 10 binary (think of it as the decimal value 2). The extra digit
generates a carry to the next-highest bit position. In the following figure, we add binary
00000100 to 00000111:

    00000100   (decimal 4)
  + 00000111   (decimal 7)
  ----------
    00001011   (decimal 11)
Beginning with the lowest bit in each number (bit position 0), we add 0+1, producing
a 1 in the bottom row. The same happens in the next highest bit (position 1). In bit position 2,
we add 1 + 1, generating a sum of zero and a carry of 1. In bit position 3, we add the carry bit to 0 + 0, producing 1. The rest of the bits are zeros. You can verify the addition by adding the decimal equivalents shown on the right side of the figure (4 + 7 = 11).

Sometimes a carry is generated out of the highest bit position. When that happens, the
size of the storage area set aside becomes important. If we add 11111111 to 00000001, for
example, a 1 carries out of the highest bit position, and the lowest 8 bits of the sum equal all
zeros. If the storage location for the sum is at least 9 bits long, we can represent the sum as 100000000. But if the sum can only store 8 bits, it will equal 00000000, the lowest 8 bits of the calculated value.
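To make the carry behaviour concrete, here is a short Python sketch (names are illustrative) that adds two 8-bit values bit by bit, exactly as described above, and reports any carry out of the highest bit:

# Add two 8-bit binary strings bit by bit, propagating the carry.
# Returns the 8-bit sum and the final carry out of the highest bit.
def add_8bit(a: str, b: str):
    carry = 0
    result = []
    for bit_a, bit_b in zip(reversed(a), reversed(b)):  # start at bit position 0
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))   # sum bit
        carry = total // 2              # carry into the next position
    return "".join(reversed(result)), carry

print(add_8bit("00000100", "00000111"))  # ('00001011', 0)  -> 4 + 7 = 11
print(add_8bit("11111111", "00000001"))  # ('00000000', 1)  -> carry out of bit 7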
Hexadecimal Integers
Large binary numbers are cumbersome to read, so hexadecimal digits offer a convenient way
to represent binary data. Each digit in a hexadecimal integer represents four binary bits, and
two hexadecimal digits together represent a byte. A single hexadecimal digit represents decimal
0 to 15, so letters A to F represent decimal values in the range 10 through 15. Table 1.3 shows
how each sequence of four binary bits translates into a decimal or hexadecimal value.
Table 1.3: Binary, Decimal, and Hexadecimal Equivalents
  Binary   Decimal   Hexadecimal       Binary   Decimal   Hexadecimal
  0000     0         0                 1000     8         8
  0001     1         1                 1001     9         9
  0010     2         2                 1010     10        A
  0011     3         3                 1011     11        B
  0100     4         4                 1100     12        C
  0101     5         5                 1101     13        D
  0110     6         6                 1110     14        E
  0111     7         7                 1111     15        F

Example: binary 0001.0110.1010.0111.1001.0100 is equivalent to hexadecimal 16A794

Converting Unsigned Hexadecimal to Decimal


In hexadecimal, each digit position represents a power of 16. This is helpful when calculating
the decimal value of a hexadecimal integer. Suppose we number the digits in a four-digit
hexadecimal integer with subscripts as D3D2D1D0. The following formula calculates the
integer’s decimal value:

dec = (D3 × 16^3) + (D2 × 16^2) + (D1 × 16^1) + (D0 × 16^0)

The formula can be generalized for any n-digit hexadecimal integer:

dec = (Dn−1 × 16^(n−1)) + (Dn−2 × 16^(n−2)) + ... + (D1 × 16^1) + (D0 × 16^0)

For example, hexadecimal 1234 is equal to (1 × 16^3) + (2 × 16^2) + (3 × 16^1) + (4 × 16^0), or decimal 4660. Similarly, hexadecimal 3BA4 is equal to (3 × 16^3) + (11 × 16^2) + (10 × 16^1) + (4 × 16^0), or decimal 15,268. The following table shows this last calculation:

  Digit   Contribution
  3       3 × 16^3  = 12,288
  B       11 × 16^2 = 2,816
  A       10 × 16^1 = 160
  4       4 × 16^0  = 4
  Total             = 15,268

Table 1.4: Powers of 16 from 16^0 to 16^7 in Decimal
  16^0 = 1          16^4 = 65,536
  16^1 = 16         16^5 = 1,048,576
  16^2 = 256        16^6 = 16,777,216
  16^3 = 4,096      16^7 = 268,435,456
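The same power-of-16 expansion can be sketched in Python. The function name hex_to_dec is illustrative, and Python's built-in int(s, 16) performs the same conversion.

# Convert an unsigned hexadecimal string to decimal using powers of 16.
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_dec(hex_str: str) -> int:
    total = 0
    for ch in hex_str.upper():
        total = total * 16 + HEX_DIGITS.index(ch)  # shift left one hex place, add digit
    return total

print(hex_to_dec("1234"))  # prints 4660
print(hex_to_dec("3BA4"))  # prints 15268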

Converting Unsigned Decimal to Hexadecimal


To convert an unsigned decimal integer to hexadecimal, repeatedly divide the decimal value
by 16 and retain each remainder as a hexadecimal digit. For example, the following table lists
the steps when converting decimal 422 to hexadecimal:

  Division      Quotient   Remainder
  422 / 16      26         6
  26 / 16       1          A (10)
  1 / 16        0          1

The resulting hexadecimal number is assembled from the digits in the remainder column, starting from the last row and working upward to the top row. In this example, the hexadecimal representation is 1A6. The same algorithm was used for binary integers in Section 1.3.1. To
convert from decimal into some other number base other than hexadecimal, replace the divisor
(16) in each calculation with the desired number base.
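Following that note about replacing the divisor, here is a sketch of a general decimal-to-base converter in Python (the name to_base is illustrative):

# Convert an unsigned decimal integer to any base from 2 to 16 by
# repeated division, collecting remainders as digits.
DIGITS = "0123456789ABCDEF"

def to_base(value: int, base: int) -> str:
    if value == 0:
        return "0"
    digits = []
    while value > 0:
        digits.append(DIGITS[value % base])  # remainder is the next digit
        value //= base
    return "".join(reversed(digits))

print(to_base(422, 16))  # 1A6
print(to_base(37, 2))    # 100101
print(to_base(422, 8))   # 646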
Signed Integers

Signed binary integers are positive or negative. For x86 processors, the MSB indicates the sign:
0 is positive and 1 is negative. For example, in 8 bits, 10001100 is negative and 01111110 is positive because of the value of their highest bits.

Two’s-Complement Notation
Negative integers use two’s-complement representation, using the mathematical
principle that the two’s complement of an integer is its additive inverse. (If you add a number
to its additive inverse, the sum is zero.)
Two’s-complement representation is useful to processor designers because it removes
the need for separate digital circuits to handle both addition and subtraction. For example, if
presented with the expression A − B, the processor can simply convert it to an addition expression: A + (−B).
The two’s complement of a binary integer is formed by inverting (complementing) its
bits and adding 1. Using the 8-bit binary value 00000001, for example, its two’s complement
turns out to be 11111111, as can be seen as follows:

  Starting value:                 00000001
  Step 1: reverse the bits:       11111110
  Step 2: add 1 to the value:     11111110 + 00000001
  Sum (two’s complement):         11111111

11111111 is the two’s-complement representation of −1. The two’s-complement operation is


reversible, so the two’s complement of 11111111 is 00000001.
Two’s Complement of Hexadecimal
To create the two’s complement of a hexadecimal integer, reverse all bits and add 1. An easy
way to reverse the bits of a hexadecimal digit is to subtract the digit from 15. Here are examples
of hexadecimal integers converted to their two’s complements:

  6A3D  →  95C2  +  1  →  95C3
  95C3  →  6A3C  +  1  →  6A3D
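A minimal Python sketch of the invert-and-add-1 rule for fixed-width values (the function name twos_complement is illustrative):

# Form the two's complement of a value: invert the bits, add 1,
# and keep only the lowest "bits" bits.
def twos_complement(value: int, bits: int = 8) -> int:
    mask = (1 << bits) - 1                 # 0xFF for 8 bits
    return ((~value) + 1) & mask           # invert, add 1, truncate

print(format(twos_complement(0b00000001), "08b"))  # 11111111
print(format(twos_complement(0b11111111), "08b"))  # 00000001 (operation is reversible)
print(hex(twos_complement(0x6A3D, 16)))            # 0x95c3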

Converting Signed Binary to Decimal: Use the following algorithm to calculate the decimal
equivalent of a signed binary integer:

✓ If the highest bit is a 1, the number is stored in two’s-complement notation. Create its
two’s complement a second time to get its positive equivalent. Then convert this new
number to decimal as if it were an unsigned binary integer.
✓ If the highest bit is a 0, you can convert it to decimal as if it were an unsigned binary
integer.
For example, signed binary 11110000 has a 1 in the highest bit, indicating that it is a
negative integer. First, we create its two’s complement, and then convert the result to
decimal. Here are the steps in the process:

  Starting value:                 11110000
  Step 1: reverse the bits:       00001111
  Step 2: add 1 to the value:     00001111 + 00000001
  Sum (two’s complement):         00010000
  Convert to decimal:             16

Because the original integer (11110000) was negative, we know that its decimal value is
−16.
Converting Signed Decimal to Binary: To create the binary representation of a signed
decimal integer, do the following:
i. Convert the absolute value of the decimal integer to binary.
ii. If the original decimal integer was negative, create the two’s complement of the
binary number from the previous step.
For example, −43 decimal is translated to binary as follows:
i. The binary representation of unsigned 43 is 00101011.
ii. Because the original value was negative, we create the two’s complement of
00101011, which is 11010101. This is the representation of −43 decimal. A short sketch of both signed conversions follows.
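The sketch below illustrates the two signed conversions described above for 8-bit two’s-complement values; the function names are only illustrative.

# Interpret an 8-bit two's-complement pattern as a signed decimal value,
# and encode a signed decimal value as an 8-bit pattern.
def signed_bin_to_dec(bits: str) -> int:
    value = int(bits, 2)
    if bits[0] == "1":                 # highest bit set: number is negative
        value -= 1 << len(bits)        # subtract 2**n to get the negative value
    return value

def signed_dec_to_bin(value: int, bits: int = 8) -> str:
    return format(value & ((1 << bits) - 1), f"0{bits}b")  # two's complement via masking

print(signed_bin_to_dec("11110000"))   # -16
print(signed_dec_to_bin(-43))          # 11010101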
Converting Signed Decimal to Hexadecimal: To convert a signed decimal integer to
hexadecimal, do the following:
i. Convert the absolute value of the decimal integer to hexadecimal.
ii. If the decimal integer was negative, create the two’s complement of the hexadecimal
number from the previous step.
Converting Signed Hexadecimal to Decimal: To convert a signed hexadecimal integer to
decimal, do the following:

i. If the hexadecimal integer is negative, create its two’s complement; otherwise,
retain the integer as is.
ii. Using the integer from the previous step, convert it to decimal. If the original value
was negative, attach a minus sign to the beginning of the decimal integer
NOTE:
If computers only store binary data, how do they represent characters? They use a character
set, which is a mapping of characters to integers. Until a few years ago, character sets used
only 8 bits. Even now, when running in character mode (such as MS-DOS), IBM-compatible
microcomputers use the ASCII (pronounced “askey”) character set. ASCII is an acronym for
American Standard Code for Information Interchange. In ASCII, a unique 7-bit integer is
assigned to each character. Because ASCII codes use only the lower 7 bits of every byte, the
extra bit is used on various computers to create a proprietary character set. On IBM-compatible
microcomputers, for example, values 128 through 255 represent graphics symbols and Greek
characters.

ASCII Strings: A sequence of one or more characters is called a string. More specifically, an
ASCII string is stored in memory as a succession of bytes containing ASCII codes. For
example, the numeric codes for the string “ABC123” are 41h, 42h, 43h, 31h, 32h, and 33h. A
null-terminated string is a string of characters followed by a single byte containing zero. The
C and C++ languages use null-terminated strings, and many DOS and Windows functions
require strings to be in this format.
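The byte-level view of an ASCII string can be checked with a few lines of Python (illustrative only):

# Show the ASCII codes (in hexadecimal) of each character in a string,
# and build a null-terminated version as used by C and C++.
text = "ABC123"
codes = [format(ord(ch), "02X") for ch in text]
print(codes)                                       # ['41', '42', '43', '31', '32', '33']

null_terminated = text.encode("ascii") + b"\x00"   # append a zero byte
print(null_terminated)                             # b'ABC123\x00'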

Using the ASCII Table: To find the hexadecimal ASCII code of a character, look along the top
row of the table and find the column containing the character you want to translate. The most
significant digit of the hexadecimal value is in the second row at the top of the table; the least
significant digit is in the second column from the left. For example, to find the ASCII code of
the letter a, find the column containing the a and look in the second row: The first hexadecimal
digit is 6. Next, look to the left along the row containing a and note that the second column
contains the digit 1. Therefore, the ASCII code of a is 61 hexadecimal.

ASCII Control Characters: Character codes in the range 0 through 31 are called ASCII control
characters. If a program writes these codes to standard output (as in C++), the control characters
will carry out predefined actions. Table 1.5 lists the most commonly used characters in this
range.
Table 1.5: ASCII Control Characters
  ASCII Code (Decimal)   Description
  8                      Backspace
  9                      Horizontal tab
  10                     Line feed (move to the next output line)
  12                     Form feed (move to the next printer page)
  13                     Carriage return (move to the leftmost output column)
  27                     Escape character

Terminology for Numeric Data Representation: It is important to use precise terminology


when describing the way numbers and characters are represented in memory and on the display
screen. Decimal 65, for example, is stored in memory as a single binary byte as 01000001. A
debugging program would probably display the byte as “41,” which is the number’s
hexadecimal representation. If the byte were copied to video memory, the letter “A” would
appear on the screen because 01000001 is the ASCII code for the letter A. Because a number’s
interpretation can depend on the context in which it appears, we assign a specific name to each
type of data representation to clarify future discussions:
➢ A binary integer is an integer stored in memory in its raw format, ready to be used in a
calculation. Binary integers are stored in multiples of 8 bits (8, 16, 32, 48, or 64).
➢ An ASCII digit string is a string of ASCII characters, such as “123” or “65,” which is
made to look like a number. This is simply a representation of the number and can be
in any of the formats shown for the decimal number 65 in Table 1.6.
Table 1.6: Types of Numeric Strings
  Format              Value
  ASCII binary        "01000001"
  ASCII decimal       "65"
  ASCII hexadecimal   "41"
  ASCII octal         "101"

4. Logical Operations
AND: Outputs 1 if both inputs are 1.
OR: Outputs 1 if at least one input is 1.
NOT: Inverts the input (0 becomes 1, 1 becomes 0).
XOR: Outputs 1 if inputs are different.

5. Bitwise Operations
Bitwise AND: Performs AND operation on corresponding bits.
Bitwise OR: Performs OR operation on corresponding bits.
Bitwise NOT: Inverts each bit.
Bitwise XOR: Performs XOR operation on corresponding bits.
Bitwise Shifts:
✓ Left Shift (<<): Shifts bits to the left, adding zeros on the right.
✓ Right Shift (>>): Shifts bits to the right, discarding bits on the right (a short demonstration follows this list).
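The logical and bitwise operations listed above map directly onto Python operators; the following demonstration uses arbitrarily chosen 8-bit values:

# Demonstrate bitwise operators on 8-bit values.
a = 0b11001010
b = 0b10101100

print(format(a & b, "08b"))            # AND : 10001000
print(format(a | b, "08b"))            # OR  : 11101110
print(format(a ^ b, "08b"))            # XOR : 01100110
print(format(~a & 0xFF, "08b"))        # NOT : 00110101 (masked to 8 bits)
print(format((a << 1) & 0xFF, "08b"))  # left shift : 10010100
print(format(a >> 1, "08b"))           # right shift: 01100101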
6. Binary Data Structures
i. Bit Fields: Used to store multiple logical values in a single binary word. Useful in
situations where memory is a constraint.
ii. Binary Trees: Data structure in which each node has at most two children. Used in
searching and sorting algorithms.
iii. Heaps: Specialized binary trees used in priority queues.

7. Applications of Binary Representation


i. Digital Electronics: Binary is the basis of all digital electronics and computer systems.
ii. Networking: IP addresses and subnet masks are represented in binary.
iii. Cryptography: Binary operations are essential in encryption and decryption algorithms.
iv. Error Detection and Correction: Techniques like parity bits and Hamming codes use
binary operations to detect and correct errors in data transmission.
