CPE 102 Introduction To Computer Engineering NOTE
The evolution of modern computing and the computer engineering profession is a fascinating
journey that spans centuries. From ancient tools for calculation to the sophisticated digital
systems we use today, this development has profoundly impacted society. The key milestones
in computing history and the emergence of the computer engineering profession are explained as
follows.
Early Beginnings
Ancient Tools: Early civilizations developed tools for arithmetic, such as the abacus used in
Mesopotamia and China. These devices, though primitive, laid the groundwork for future
computational methods.
Mechanical Calculators: In the 17th century, inventors like Blaise Pascal and Gottfried
Wilhelm Leibniz created mechanical calculators. Pascal's Pascaline could add and subtract,
while Leibniz's Step Reckoner could perform multiplication and division.
A. Hardware Design
Computer engineers design and build hardware components, from processors and circuit boards
to consumer electronics and home appliances. Engineers develop prototypes to test new designs
and ensure they meet performance and reliability standards before mass production.
B. Software Development
Writing efficient and reliable code is a core responsibility. Computer engineers
often work with various programming languages, such as C, C++, Java, and Python, to
create software applications.
They design software architectures that define the structure and behavior of
software systems. This involves creating algorithms, data structures, and user
interfaces.
Engineers rigorously test software to identify and fix bugs. They use debugging
tools and techniques to ensure software operates correctly and efficiently.
C. Systems Integration
Computer engineers ensure that hardware and software components work seamlessly
together. This involves interfacing hardware with software applications and optimizing system
performance. Engineers configure and manage computer systems, including operating systems,
networks, and databases, to ensure they meet organizational needs.
D. Network Engineering
Designing and implementing networks that enable communication between computers
and other devices. This includes local area networks (LANs), wide area networks (WANs), and
wireless networks. Ensuring the security of data transmitted across networks. Computer
engineers develop and implement security protocols to protect against cyber threats.
Identifying and resolving network issues to maintain smooth and reliable network operations.
E. Cybersecurity
Conducting security assessments to identify vulnerabilities in systems and networks.
This includes penetration testing and risk analysis.
Creating and implementing security measures, such as firewalls, encryption, and
intrusion detection systems, to protect data and systems.
Responding to security breaches and cyber-attacks. Engineers develop strategies to
mitigate damage and prevent future incidents.
F. Project Management
Planning and Coordination: Computer engineers often lead or participate in project teams.
They plan and coordinate activities to ensure projects are completed on time and within budget.
Resource Management: Managing resources, including personnel, hardware, and software, to
achieve project goals.
Documentation: Creating detailed documentation for projects, including design
specifications, user manuals, and maintenance guides.
Impact on Society
Economic Growth: Computer engineers contribute to economic development by driving
technological innovation, creating new products, and improving productivity.
Quality of Life: Their work enhances the quality of life by developing technologies that
improve healthcare, education, communication, and entertainment.
Environmental Sustainability: Engineers design energy-efficient systems and develop
technologies that promote environmental sustainability, helping to address global challenges
such as climate change.
CAREER PATHS AND DEVELOPMENT
A. Public Sector
Roles and Opportunities
Government Agencies: Computer engineers can work in various government departments
such as defense, homeland security, public health, and transportation. They may develop secure
communication systems, manage IT infrastructure, or work on large-scale public projects.
Public Institutions: Working in public universities, research institutions, and hospitals.
Engineers can contribute to research projects, develop public health systems, or improve
educational technologies.
Non-Profit Organizations: Developing technologies to address social issues, improve
education in underserved areas, or enhance disaster response systems.
Skills and Development
Regulatory Knowledge: Understanding government regulations and compliance
requirements.
Public Policy: Knowledge of how technology impacts public policy and vice versa.
Project Management: Managing large-scale public projects with multiple stakeholders.
B. Private Sector
Roles and Opportunities
Tech Companies: Working for tech giants like Google, Apple, Microsoft, and Amazon in
roles such as software development, system architecture, network engineering, and
cybersecurity.
Startups: Developing innovative products and services in a fast-paced environment. Engineers
may work on emerging technologies such as AI, blockchain, and IoT.
Consulting Firms: Providing expert advice on IT infrastructure, cybersecurity, software
development, and digital transformation to various clients.
Skills and Development
Innovation: Staying current with the latest technological advancements and trends.
Business Acumen: Understanding business goals and aligning technical solutions to meet those
goals.
Agility: Adapting to rapid changes in technology and market demands.
C. Academic and Research
Roles and Opportunities
Professors and Lecturers: Teaching computer engineering courses at universities and
colleges. They also mentor students and guide their research projects.
Researchers: Conducting cutting-edge research in areas like artificial intelligence, quantum
computing, cybersecurity, and more. Researchers may work in academic institutions, research
labs, or industry R&D departments.
Academic Administration: Taking on leadership roles such as department head, dean, or
provost, and shaping the academic programs and policies.
Skills and Development
Teaching: Developing effective teaching methodologies and materials.
Research: Conducting rigorous research, publishing papers, and securing research funding.
Collaboration: Working with other researchers, institutions, and industry partners on joint
projects.
D. Industry
Roles and Opportunities
Healthcare: Developing medical software, electronic health records systems, telemedicine
solutions, and medical devices.
Finance: Working on financial technology (fintech) solutions, cybersecurity for financial
institutions, and high-frequency trading systems.
Telecommunications: Designing and managing communication networks, developing
protocols, and working on wireless technologies.
Automotive: Developing software for autonomous vehicles, infotainment systems, and
vehicle-to-everything (V2X) communication.
Entertainment: Creating video games, virtual reality (VR) experiences, and multimedia
systems.
Skills and Development
Domain Knowledge: Gaining expertise in the specific industry you work in, such as healthcare
regulations or financial compliance.
Interdisciplinary Skills: Combining knowledge from computer engineering with other fields
like biology, finance, or telecommunications.
Innovation: Developing new products and services that push the boundaries of the industry.
E. Professional Development
Certifications and Continuing Education
Certifications: Obtaining certifications such as Certified Information Systems Security
Professional (CISSP), Cisco Certified Network Associate (CCNA), or Microsoft Certified:
Azure Solutions Architect Expert.
Workshops and Conferences: Attending industry conferences, workshops, and seminars to
stay updated on the latest trends and technologies.
Advanced Degrees: Pursuing advanced degrees such as a Master’s or Ph.D. in computer
engineering or related fields.
Networking and Professional Organizations
Professional Organizations: Joining organizations such as the Institute of Electrical and
Electronics Engineers (IEEE), Association for Computing Machinery (ACM), COREN, NSE
and relevant industry-specific groups.
Networking: Building a professional network through conferences, online forums, and local
tech meetups.
Mentorship: Finding mentors in the field and becoming a mentor to junior engineers.
COMPUTER DEVICES/HARDWARE IN THE AGE OF SMARTNESS AND
INTERNET OF THINGS AND PEOPLE (IOTS AND P)
Understanding Smart Devices and IoTs and P
Smart Devices: Electronic devices connected to other devices or networks via different
protocols (such as Bluetooth, Wi-Fi, or cellular), capable of interacting with users and other
devices.
Internet of Things (IoT): A network of physical objects embedded with sensors, software, and
other technologies to connect and exchange data with other devices and systems over the
internet.
Internet of People (IoP): Extends IoT to include not just things but also people, focusing on
human interactions and the data generated from these interactions.
Key Concepts
Connectivity: The ability of devices to connect to the internet and communicate with each
other.
Interoperability: The capacity for different devices and systems to work together seamlessly.
Automation: The use of technology to perform tasks with minimal human intervention.
Data Analytics: The process of examining data to uncover patterns and insights, critical for
IoT applications.
Components of Smart Devices and IoT Hardware
Sensors
Types of Sensors: Temperature, humidity, motion, light, pressure, and more.
Function: Sensors collect data from the environment and send it to other devices or systems
for processing.
Microcontrollers and Microprocessors
Microcontrollers: Small computers on a single integrated circuit containing a processor,
memory, and programmable input/output peripherals.
Microprocessors: The central processing unit (CPU) of a computer, performing computations
and controlling devices.
Communication Modules
Wi-Fi: Allows devices to connect to local networks and the internet wirelessly.
Bluetooth: Short-range wireless communication technology for exchanging data over short
distances.
Zigbee: A specification for a suite of high-level communication protocols using low-power
digital radios.
Cellular: Uses mobile networks (like 4G, 5G) to provide connectivity over wide areas.
Power Management
Battery Technologies: Various types of batteries (e.g., lithium-ion, nickel-metal hydride) used
to power smart devices.
Energy Harvesting: Techniques for collecting energy from external sources (such as solar
power, thermal energy) to power devices.
Design and Architecture of IoT Systems
Edge Computing: Processing data closer to the data source rather than in a centralized data-
processing warehouse. This type of computing reduces latency, saves bandwidth, and enhances
data security.
Cloud Computing: Using remote servers hosted on the internet to store, manage, and process
data. It offers scalability, flexibility, and extensive storage and processing capabilities.
Network Topologies
Star Topology: All devices are connected to a central hub.
Mesh Topology: Devices are interconnected; each node relays data for the network.
Hybrid Topology: Combines elements of both star and mesh topologies.
System Software: Software designed to provide a platform for other software. Examples are
firmware (that is, software embedded in hardware devices) and the Basic Input/Output System
or Unified Extensible Firmware Interface (BIOS/UEFI), which initializes and tests hardware
at startup.
Application Software: Programs designed to perform specific tasks for users. Examples are
word processors, web browsers, games, and databases.
Programming Languages: A programming language is a formal language comprising a set of
instructions used to produce various kinds of output.
Types:
➢ Low-Level Languages: Machine code and assembly language.
➢ High-Level Languages: More abstract, closer to human language. Examples include
Python, Java, C++, and JavaScript.
Functions:
✓ Compilation: Translating an entire high-level program into machine code, which is then
executed all at once.
✓ Interpretation: Translating and executing high-level code line by line (a small sketch of
the contrast follows).
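The contrast can be sketched in a few lines of Python. This is only a loose analogy (Python itself compiles source to bytecode and then interprets that bytecode), and the source string is an invented example:

    # Compilation: translate the whole program first, then execute it all at once.
    source = "x = 2 + 3\nprint(x)"
    program = compile(source, "<example>", "exec")  # translate everything up front
    exec(program)                                   # run the translated program

    # Interpretation: translate and execute one statement at a time.
    for statement in source.splitlines():
        exec(compile(statement, "<example>", "single"))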
Operational Relationships
CPU and Memory: The CPU fetches instructions from memory, decodes them, executes them,
and stores the results back in memory.
Bus System: Data transfer between the CPU and memory occurs via the system bus, including
the data bus, address bus, and control bus.
CPU and I/O Devices: The CPU uses I/O ports and buses to communicate with input and
output devices. Interrupts provide the mechanism for I/O devices to signal the CPU to handle
an event (e.g., data ready to be read).
Operating System and Hardware: The OS provides a layer of abstraction between hardware
and application software, allowing programs to interact with hardware in a standardized way.
OS uses device drivers to communicate with hardware devices.
Operating System and Application Software: OS allocates resources such as CPU time,
memory, and I/O devices to applications. OS provides an environment for applications to run,
handling execution, memory management, and I/O operations.
Programming Languages and OS: OS provides tools and environments for software
development, including compilers, interpreters, and integrated development environments
(IDEs). High-level programs use system calls provided by the OS to perform low-level
operations such as file handling, process management, and networking.
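As a small illustration of that last point, the sketch below uses Python's os module, whose functions are thin wrappers over the operating system's file-handling system calls (the file name is an invented example):

    import os

    # os.open is a thin wrapper over the OS's open/CreateFile system call.
    fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    os.write(fd, b"written via OS system calls\n")  # write() system call
    os.close(fd)                                    # close() system call

    fd = os.open("demo.txt", os.O_RDONLY)
    print(os.read(fd, 64))                          # read() system call
    os.close(fd)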
Application Software
✓ Performs specific tasks for the user, utilizing the hardware and OS.
REPRESENTING AND MANIPULATING INFORMATION IN BINARY FORM
Introduction
In computer science, binary representation is fundamental. Computers operate using the binary
(base-2) system, which consists of only two digits: 0 and 1. Understanding how to represent
and manipulate information in binary form is crucial for various applications, including data
storage, processing, and communication.
Often, binary numbers are used to describe the contents of computer memory; at other
times, decimal and hexadecimal numbers are used. You must develop a certain fluency with
number formats, so you can quickly translate numbers from one format to another.
Each numbering format, or system, has a base, or maximum number of symbols that
can be assigned to a single digit. Table 1.1 shows the possible digits for the numbering systems
used most commonly in hardware and software manuals. In the last row of the table,
hexadecimal numbers use the digits 0 through 9 and continue with the letters A through F to
represent decimal values 10 through 15. It is quite common to use hexadecimal numbers when
showing the contents of computer memory and machine-level instructions.
Table 1.1: Binary, Octal, Decimal, and Hexadecimal Digits.

System        Base   Possible Digits
Binary          2    0 1
Octal           8    0 1 2 3 4 5 6 7
Decimal        10    0 1 2 3 4 5 6 7 8 9
Hexadecimal    16    0 1 2 3 4 5 6 7 8 9 A B C D E F
1. Binary Integers
A computer stores instructions and data in memory as collections of electronic charges.
Representing these entities with numbers requires a system geared to the concepts of on and
off or true and false. Binary numbers are base 2 numbers, in which each binary digit (called a
bit) is either 0 or 1. Bits are numbered sequentially starting at zero on the right side and
increasing toward the left. The bit on the left is called the most significant bit (MSB), and the
bit on the right is the least significant bit (LSB). The MSB and LSB bit numbers of a 16-bit
binary number are shown in Figure 1.2:
Figure 1.2
Binary integers can be signed or unsigned. A signed integer is positive or negative. An unsigned
integer is by default positive. Zero is considered positive. When writing down large binary
numbers, many people like to insert a dot every 4 bits or 8 bits to make the numbers easier to
read. Examples are 1101.1110.0011.1000.0000 and 11001010.10101100.
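Many programming languages offer the same readability aid. In Python, for instance, underscores may be inserted into numeric literals and requested in formatted output (a minimal sketch):

    value = 0b1100_1010_1010_1100  # underscores group the bits, like the dots above
    print(value)                   # 51884
    print(f"{value:_b}")           # 1100_1010_1010_1100, regrouped on output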
(a) Unsigned Binary Integers
Starting with the LSB, each bit in an unsigned binary integer represents an increasing power
of 2. Table 1.2 contains an 8-bit binary number, showing how powers of two increase from
right to left:
Table 1.2: Binary Bit Position Values (decimal values of 2^0 through 2^15).
D indicates a binary digit. For example, binary 00001001 is equal to 9. We calculate this value
by leaving out terms equal to zero:

00001001 = 2^3 + 2^0 = 8 + 1 = 9

To convert a decimal integer to binary, repeatedly divide it by 2, recording each remainder;
dividing decimal 37 this way produces the remainders 1, 0, 1, 0, 0, 1. We can just concatenate
the binary bits from the remainder column of the table in reverse order
(D5, D4, . . .) to produce binary 100101. Because x86 computer storage always consists of
binary numbers whose lengths are multiples of 8, we fill the remaining two digit positions on
the left with zeros, producing 00100101.
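Both procedures translate directly into code. A minimal Python sketch (the built-ins int(bits, 2) and bin() would do the same work):

    def binary_to_decimal(bits: str) -> int:
        """Sum the powers of 2 at every bit position holding a 1."""
        total = 0
        for position, digit in enumerate(reversed(bits)):
            if digit == "1":
                total += 2 ** position
        return total

    def decimal_to_binary(value: int, width: int = 8) -> str:
        """Repeatedly divide by 2, collecting remainders in reverse order."""
        digits = []
        while value > 0:
            value, remainder = divmod(value, 2)
            digits.append(str(remainder))
        return "".join(reversed(digits)).zfill(width)  # pad with leading zeros

    print(binary_to_decimal("00001001"))  # 9
    print(decimal_to_binary(37))          # 00100101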
Binary Addition
When adding two binary integers, proceed bit by bit, starting with the low-order pair of bits
(on the right) and add each subsequent pair of bits. There are four ways to add two binary digits,
as shown here:

0 + 0 = 0        0 + 1 = 1
1 + 0 = 1        1 + 1 = 10

When adding 1 to 1, the result is 10 binary (think of it as the decimal value 2). The extra digit
generates a carry to the next-highest bit position. In the following example, we add binary
00000100 to 00000111:

    00000100    (decimal 4)
  + 00000111    (decimal 7)
  ----------
    00001011    (decimal 11)
Beginning with the lowest bit in each number (bit position 0), we add 0 + 1, producing
a 1 in the bottom row. The same happens in the next highest bit (position 1). In bit position 2,
we add 1 + 1, generating a sum of zero and a carry of 1. In bit position 3, we add the carry bit to
0 + 0, producing 1. The rest of the bits are zeros. You can verify the addition by adding the
decimal equivalents shown on the right side (4 + 7 = 11).
Sometimes a carry is generated out of the highest bit position. When that happens, the
size of the storage area set aside becomes important. If we add 11111111 to 00000001, for
example, a 1 carries out of the highest bit position, and the lowest 8 bits of the sum equal all
zeros. If the storage location for the sum is at least 9 bits long, we can represent the sum as
100000000. But if the sum can only store 8 bits, it will equal 00000000, the lowest 8 bits of
the calculated value.
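The bit-by-bit procedure, including the truncation to 8 bits just described, can be sketched as follows in Python:

    def add_binary(a: str, b: str, width: int = 8) -> str:
        """Add two equal-length binary strings right to left, propagating a carry."""
        result, carry = [], 0
        for bit_a, bit_b in zip(reversed(a), reversed(b)):
            total = int(bit_a) + int(bit_b) + carry
            result.append(str(total % 2))  # the sum bit for this position
            carry = total // 2             # the carry into the next position
        return "".join(reversed(result))[-width:]  # keep only the lowest bits

    print(add_binary("00000100", "00000111"))  # 00001011  (4 + 7 = 11)
    print(add_binary("11111111", "00000001"))  # 00000000  (the carry is lost)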
Hexadecimal Integers
Large binary numbers are cumbersome to read, so hexadecimal digits offer a convenient way
to represent binary data. Each digit in a hexadecimal integer represents four binary bits, and
two hexadecimal digits together represent a byte. A single hexadecimal digit represents decimal
0 to 15, so letters A to F represent decimal values in the range 10 through 15. Table 1.3 shows
how each sequence of four binary bits translates into a decimal or hexadecimal value.
Table 1.3: Binary, Decimal, and Hexadecimal Equivalents

Binary  Decimal  Hex     Binary  Decimal  Hex
0000       0      0      1000       8      8
0001       1      1      1001       9      9
0010       2      2      1010      10      A
0011       3      3      1011      11      B
0100       4      4      1100      12      C
0101       5      5      1101      13      D
0110       6      6      1110      14      E
0111       7      7      1111      15      F
NOTE:
For example, hexadecimal 1234 is equal to (1 × 16^3) + (2 × 16^2) + (3 × 16^1) + (4 × 16^0), or decimal
4660. Similarly, hexadecimal 3BA4 is equal to (3 × 16^3) + (11 × 16^2) + (10 × 16^1) + (4 × 16^0), or
decimal 15,268. The following Table 1.4 shows this last calculation:
Table 1.4: Powers of 16 in Decimal (16^0 through 16^7).
The resulting hexadecimal number is assembled from the digits in the remainder column,
starting from the last row and working upward to the top row. To convert a decimal integer to
hexadecimal, repeatedly divide it by 16 and record the remainders; for decimal 422, this
produces the hexadecimal representation 1A6. The same algorithm was used for binary integers
in Section 1.3.1. To convert from decimal into a number base other than hexadecimal, replace
the divisor (16) in each calculation with the desired number base.
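Both directions can be sketched in a few lines of Python (the built-ins int(text, 16) and hex() already do this; the explicit loops mirror the algorithm described above):

    HEX_DIGITS = "0123456789ABCDEF"

    def hex_to_decimal(text: str) -> int:
        """Sum digit * 16^position, as in the 3BA4 example."""
        total = 0
        for position, digit in enumerate(reversed(text.upper())):
            total += HEX_DIGITS.index(digit) * 16 ** position
        return total

    def decimal_to_hex(value: int) -> str:
        """Repeatedly divide by 16; the remainders, read in reverse, are the digits."""
        digits = []
        while value > 0:
            value, remainder = divmod(value, 16)
            digits.append(HEX_DIGITS[remainder])
        return "".join(reversed(digits)) or "0"

    print(hex_to_decimal("3BA4"))  # 15268
    print(decimal_to_hex(422))     # 1A6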
Signed Integers
Signed binary integers are positive or negative. For x86 processors, the MSB indicates the sign:
0 is positive and 1 is negative. For example, in 8 bits, 00000001 (+1) has an MSB of 0, while
11111111 (−1) has an MSB of 1.
Two’s-Complement Notation
Negative integers use two’s-complement representation, using the mathematical
principle that the two’s complement of an integer is its additive inverse. (If you add a number
to its additive inverse, the sum is zero.)
Two’s-complement representation is useful to processor designers because it removes
the need for separate digital circuits to handle both addition and subtraction. For example, if
presented with the expression A − B, the processor can simply convert it to an addition
expression: A + (−B).
The two’s complement of a binary integer is formed by inverting (complementing) its
bits and adding 1. Using the 8-bit binary value 00000001, for example, its two’s complement
turns out to be 11111111, as can be seen as follows:

Starting value:                00000001
Step 1: invert the bits:       11111110
Step 2: add 1 to the result:   11111111
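A minimal Python sketch of the invert-and-add-1 rule on fixed-width bit strings:

    def twos_complement(bits: str) -> str:
        """Invert every bit, then add 1, wrapping around at the original width."""
        inverted = "".join("1" if b == "0" else "0" for b in bits)
        value = (int(inverted, 2) + 1) % (2 ** len(bits))
        return format(value, f"0{len(bits)}b")

    print(twos_complement("00000001"))  # 11111111
    print(twos_complement("11111111"))  # 00000001  (applying it twice restores the value)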
Converting Signed Binary to Decimal: Use the following algorithm to calculate the decimal
equivalent of a signed binary integer:
✓ If the highest bit is a 1, the number is stored in two’s-complement notation. Create its
two’s complement a second time to get its positive equivalent. Then convert this new
number to decimal as if it were an unsigned binary integer.
✓ If the highest bit is a 0, you can convert it to decimal as if it were an unsigned binary
integer.
For example, signed binary 11110000 has a 1 in the highest bit, indicating that it is a
negative integer. First, we create its two’s complement, and then convert the result to
decimal. Here are the steps in the process:

Starting value:                11110000
Step 1: invert the bits:       00001111
Step 2: add 1 to the result:   00010000
Step 3: convert to decimal:    16

Because the original integer (11110000) was negative, we know that its decimal value is
−16.
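The algorithm above, sketched in Python:

    def signed_binary_to_decimal(bits: str) -> int:
        """If the MSB is 1, take the two's complement and negate the result."""
        if bits[0] == "0":
            return int(bits, 2)  # non-negative: convert as unsigned
        inverted = "".join("1" if b == "0" else "0" for b in bits)
        magnitude = (int(inverted, 2) + 1) % (2 ** len(bits))
        return -magnitude

    print(signed_binary_to_decimal("11110000"))  # -16
    print(signed_binary_to_decimal("00010000"))  # 16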
Converting Signed Decimal to Binary: To create the binary representation of a signed
decimal integer, do the following:
i. Convert the absolute value of the decimal integer to binary.
ii. If the original decimal integer was negative, create the two’s complement of the
binary number from the previous step.
For example, −43 decimal is translated to binary as follows:
i. The binary representation of unsigned 43 is 00101011.
ii. Because the original value was negative, we create the two’s complement of
00101011, which is 11010101. This is the representation of −43 decimal.
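And the reverse direction, sketched the same way:

    def signed_decimal_to_binary(value: int, width: int = 8) -> str:
        """Convert |value| to binary, then two's-complement it if value is negative."""
        bits = format(abs(value), f"0{width}b")
        if value < 0:
            inverted = "".join("1" if b == "0" else "0" for b in bits)
            bits = format((int(inverted, 2) + 1) % (2 ** width), f"0{width}b")
        return bits

    print(signed_decimal_to_binary(-43))  # 11010101
    print(signed_decimal_to_binary(43))   # 00101011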
Converting Signed Decimal to Hexadecimal: To convert a signed decimal integer to
hexadecimal, do the following:
i. Convert the absolute value of the decimal integer to hexadecimal.
ii. If the decimal integer was negative, create the two’s complement of the hexadecimal
number from the previous step.
Converting Signed Hexadecimal to Decimal: To convert a signed hexadecimal integer to
decimal, do the following:
i. If the hexadecimal integer is negative, create its two’s complement; otherwise,
retain the integer as is.
ii. Using the integer from the previous step, convert it to decimal. If the original value
was negative, attach a minus sign to the beginning of the decimal integer.
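A short Python sketch of both hexadecimal procedures, assuming 8-bit (two hex digit) storage; masking a Python integer to the storage width produces its two's complement automatically:

    WIDTH = 8  # assumed storage width in bits

    def signed_decimal_to_hex(value: int) -> str:
        """Mask to WIDTH bits; negative values become their two's complement."""
        return format(value & (2 ** WIDTH - 1), "02X")

    def signed_hex_to_decimal(text: str) -> int:
        """If the high bit is set, subtract 2^WIDTH to recover the negative value."""
        value = int(text, 16)
        return value - 2 ** WIDTH if value >= 2 ** (WIDTH - 1) else value

    print(signed_decimal_to_hex(-43))   # D5
    print(signed_hex_to_decimal("D5"))  # -43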
NOTE:
If computers only store binary data, how do they represent characters? They use a character
set, which is a mapping of characters to integers. Until a few years ago, character sets used
only 8 bits. Even now, when running in character mode (such as MS-DOS), IBM-compatible
microcomputers use the ASCII (pronounced “askey”) character set. ASCII is an acronym for
American Standard Code for Information Interchange. In ASCII, a unique 7-bit integer is
assigned to each character. Because ASCII codes use only the lower 7 bits of every byte, the
extra bit is used on various computers to create a proprietary character set. On IBM-compatible
microcomputers, for example, values 128 through 255 represent graphics symbols and Greek
characters.
ASCII Strings: A sequence of one or more characters is called a string. More specifically, an
ASCII string is stored in memory as a succession of bytes containing ASCII codes. For
example, the numeric codes for the string “ABC123” are 41h, 42h, 43h, 31h, 32h, and 33h. A
null-terminated string is a string of characters followed by a single byte containing zero. The
C and C++ languages use null-terminated strings, and many DOS and Windows functions
require strings to be in this format.
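The "ABC123" example can be checked directly in Python, along with the null-terminated form that C would store:

    text = "ABC123"
    print([format(ord(ch), "02X") for ch in text])
    # ['41', '42', '43', '31', '32', '33'] -- the bytes stored in memory

    # A null-terminated string: the character bytes followed by a zero byte.
    c_string = text.encode("ascii") + b"\x00"
    print(list(c_string))  # [65, 66, 67, 49, 50, 51, 0]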
Using the ASCII Table: To find the hexadecimal ASCII code of a character, look along the top
row of the table and find the column containing the character you want to translate. The most
significant digit of the hexadecimal value is in the second row at the top of the table; the least
significant digit is in the second column from the left. For example, to find the ASCII code of
the letter a, find the column containing the a and look in the second row: The first hexadecimal
digit is 6. Next, look to the left along the row containing a and note that the second column
contains the digit 1. Therefore, the ASCII code of a is 61 hexadecimal.
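The same lookup can be confirmed by splitting a character's code into its two hexadecimal digits, the high and low 4-bit halves (a minimal sketch):

    code = ord("a")          # 97 decimal, 61 hexadecimal
    high_digit = code >> 4   # 6 -- the digit from the table's top row
    low_digit = code & 0x0F  # 1 -- the digit from the table's left column
    print(format(code, "02X"), high_digit, low_digit)  # 61 6 1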
ASCII Control Characters: Character codes in the range 0 through 31 are called ASCII control
characters. If a program writes these codes to standard output (as in C++), the control characters
will carry out predefined actions. Table 1.5 lists the most commonly used characters in this
range.
Table 1.5: ASCII Control Characters

ASCII Code (Decimal)   Description
8                      Backspace (moves one column to the left)
9                      Horizontal tab (skips forward to the next tab stop)
10                     Line feed (moves to the next output line)
12                     Form feed (moves to the next printer page)
13                     Carriage return (moves to the leftmost output column)
27                     Escape character
4. Logical Operations
AND: Outputs 1 if both inputs are 1.
OR: Outputs 1 if at least one input is 1.
NOT: Inverts the input (0 becomes 1, 1 becomes 0).
XOR: Outputs 1 if inputs are different.
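The four operations can be tabulated on single-bit values with a short Python sketch:

    # a b | AND OR XOR NOT(a)
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "|", a & b, a | b, a ^ b, 1 - a)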
5. Bitwise Operations
Bitwise AND: Performs AND operation on corresponding bits.
Bitwise OR: Performs OR operation on corresponding bits.
Bitwise NOT: Inverts each bit.
Bitwise XOR: Performs XOR operation on corresponding bits.
Bitwise Shifts:
✓ Left Shift (<<): Shifts bits to the left, adding zeros on the right.
✓ Right Shift (>>): Shifts bits to the right, discarding the bits on the right; zeros (or, in an
arithmetic shift, copies of the sign bit) enter on the left.
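The same operators applied across all bits of a byte, plus both shifts, sketched in Python (Python integers are unbounded, so NOT and the left shift are masked to 8 bits):

    x, y = 0b11001010, 0b10110100
    mask = 0xFF  # keep results within 8 bits

    print(format(x & y, "08b"))            # 10000000  AND of corresponding bits
    print(format(x | y, "08b"))            # 11111110  OR of corresponding bits
    print(format(x ^ y, "08b"))            # 01111110  XOR of corresponding bits
    print(format(~x & mask, "08b"))        # 00110101  NOT, masked to 8 bits
    print(format((x << 1) & mask, "08b"))  # 10010100  left shift, zero enters right
    print(format(x >> 1, "08b"))           # 01100101  right shift, low bit discarded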
6. Binary Data Structures
i. Bit Fields: Used to store multiple logical values in a single binary word. Useful in
situations where memory is a constraint (a packing sketch follows this list).
ii. Binary Trees: Data structure in which each node has at most two children. Used in
searching and sorting algorithms.
iii. Heaps: Specialized binary trees used in priority queues.
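A minimal bit-field sketch in Python; the field layout below (a 1-bit flag, a 3-bit priority, and a 4-bit channel) is an invented example:

    # Pack three fields into one byte using shifts and masks:
    # bit 7 = enabled, bits 4-6 = priority (0-7), bits 0-3 = channel (0-15).
    def pack(enabled: int, priority: int, channel: int) -> int:
        return (enabled << 7) | (priority << 4) | channel

    def unpack(word: int):
        return (word >> 7) & 0x1, (word >> 4) & 0x7, word & 0xF

    word = pack(1, 5, 9)
    print(format(word, "08b"))  # 11011001 -- three values in a single byte
    print(unpack(word))         # (1, 5, 9)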