COS 101 Manual

The COS 101 Manual provides a comprehensive overview of computers, including their definition, history, components, and types. It covers fundamental concepts such as number systems, data representation, problem-solving in computing, and ethical considerations. Additionally, it discusses the Internet's role and applications in various fields, highlighting key components and historical milestones.

Uploaded by

Sam osas
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
28 views12 pages

? COS 101 Manual

The COS 101 Manual provides a comprehensive overview of computers, including their definition, history, components, and types. It covers fundamental concepts such as number systems, data representation, problem-solving in computing, and ethical considerations. Additionally, it discusses the Internet's role and applications in various fields, highlighting key components and historical milestones.

Uploaded by

Sam osas
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 12

📘 COS 101 Manual

Chapter 1: Introduction to Computers

1.0 What Is a Computer?

A computer is an electronic device that accepts input, processes data, stores data, and produces output according to a set of instructions (software).

Basic Functional Units:

1. Input Unit – e.g., keyboard, mouse

2. Processing Unit (CPU) – Central Processing Unit (control + arithmetic units)

3. Memory/Storage Unit – RAM, ROM, hard disk

4. Output Unit – e.g., monitor, printer

1.1 Brief History of Computers

Pre-Computer Era (Before 1800s):

• Abacus (c. 3000 BC): First known calculating device, used in Mesopotamia.

• Napier’s Bones (1617): Manual device for calculation using rods – by John Napier.

• Slide Rule (1622): Early analog computing device for multiplication and division – William Oughtred.

Mechanical Era (17th–19th Century):

• Pascaline (1642): First mechanical adding machine – Blaise Pascal.

• Leibniz Wheel (1671): Stepped Reckoner – performed all four arithmetic operations.

• Analytical Engine (1837): First programmable mechanical computer – Charles Babbage.

o Ada Lovelace is regarded as the first computer programmer.

Electromechanical Era (1930s–1940s):

• Z3 (1941): First electromechanical programmable computer – Konrad Zuse.

• Harvard Mark I (1944): General-purpose electromechanical computer.


Electronic Era (1940s–Present):

| Generation | Period | Technology | Example |
|---|---|---|---|
| 1st Gen | 1940–1956 | Vacuum tubes | ENIAC, UNIVAC |
| 2nd Gen | 1956–1963 | Transistors | IBM 7090 |
| 3rd Gen | 1964–1971 | Integrated circuits | IBM 360 |
| 4th Gen | 1971–Present | Microprocessors | Intel 4004, PCs |
| 5th Gen | Present onward | Artificial intelligence | Quantum & AI machines |

1.2 Types of Computers

A. Based on Purpose

1. General Purpose: For diverse tasks (e.g., PCs, laptops).

2. Special Purpose: Designed for specific tasks (e.g., ATMs, weather forecasting systems).

B. Based on Size and Power

| Type | Description | Example |
|---|---|---|
| Supercomputer | Most powerful; high-speed; used for complex scientific work | IBM Summit |
| Mainframe | Large-scale business processing | IBM Z series |
| Minicomputer | Mid-sized, multi-user systems | PDP-11 |
| Microcomputer | Personal computers for general use | Desktop, laptop |
| Embedded system | Built into devices for specific control | Microwave, car ECU |

C. Based on Data Handling

1. Analog Computers: Work with continuous data (e.g., speedometers).

2. Digital Computers: Use binary (0s and 1s); most common today.

3. Hybrid Computers: Combine analog and digital (e.g., hospital machines).

1.3 Characteristics of a Computer

• Speed: Performs billions of operations per second.


• Accuracy: High precision; errors usually stem from faulty input or instructions ("garbage in, garbage out").

• Automation: Once instructed, it can execute tasks without further human input.

• Storage: Stores large amounts of data.

• Versatility: Can switch between multiple tasks.

• Diligence: Unlike humans, does not suffer fatigue or loss of concentration.

1.4 Limitations of Computers

• Cannot think or reason independently.

• Lack of emotion or judgment.

• Dependence on electricity and instructions.

1.5 Components of a Computer System

1. Hardware: Physical parts – CPU, keyboard, monitor.

2. Software: Set of instructions.

o System Software: OS, drivers.

o Application Software: Word processors, browsers.

3. Humanware: Users operating and managing systems.

✅ Flashcards: Chapter 1 Review

Q1: Who is known as the father of the computer?


A1: Charles Babbage.

Q2: What is the main difference between analog and digital computers?
A2: Analog computers process continuous data; digital computers use binary (0s and 1s).

Q3: Name the 3 main components of a computer system.


A3: Hardware, software, and humanware.

Q4: What generation of computers introduced microprocessors?


A4: Fourth generation.

Q5: Which computer is the most powerful and used for scientific computations?
A5: Supercomputer.

Q6: Define a general-purpose computer.


A6: A computer used for multiple tasks, like word processing, browsing, or gaming.

Q7: What’s the main role of the CPU?


A7: It processes data and controls other components.

Q8: What are embedded systems?


A8: Special-purpose systems integrated into other devices (e.g., washing machines).
Chapter 2: Number Systems and Data Representation

2.0 Introduction to Number Systems

In computing, data is represented using number systems, which are sets of symbols (digits) used to represent
quantities. The most commonly used number systems are:

| Number System | Base | Digits Used | Example |
|---|---|---|---|
| Binary | 2 | 0, 1 | 1011₂ |
| Octal | 8 | 0–7 | 75₈ |
| Decimal | 10 | 0–9 | 137₁₀ |
| Hexadecimal | 16 | 0–9, A–F | 2F₁₆ |
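The table's example values can be checked with Python's built-in `int()`, which parses a digit string in any base from 2 to 36:

```python
# Parse each example from the table into its decimal value.
binary_val = int("1011", 2)   # binary
octal_val = int("75", 8)      # octal
hex_val = int("2F", 16)       # hexadecimal

print(binary_val)  # 11
print(octal_val)   # 61
print(hex_val)     # 47
```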

2.1 The Binary Number System (Base 2)

Used by all digital computers. Each digit (bit) represents a power of 2, with place values increasing from right to left.

Binary to Decimal Conversion

Example 1:
Convert 1011₂ to decimal:

= (1 × 2³) + (0 × 2²) + (1 × 2¹) + (1 × 2⁰)


= (8) + (0) + (2) + (1)
= **11₁₀**

Example 2:
Convert 11010₂ to decimal:

= (1 × 2⁴) + (1 × 2³) + (0 × 2²) + (1 × 2¹) + (0 × 2⁰)


= 16 + 8 + 0 + 2 + 0
= **26₁₀**
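The positional expansion used in both worked examples can be sketched as a short Python function (the function name is just illustrative):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each bit times its power of 2, rightmost bit first."""
    total = 0
    for i, bit in enumerate(reversed(bits)):
        total += int(bit) * (2 ** i)
    return total

print(binary_to_decimal("1011"))   # 11
print(binary_to_decimal("11010"))  # 26
```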

2.2 Decimal to Binary Conversion

Use repeated division by 2, recording remainders (bottom-up).

Example: Convert 25₁₀ to binary:

25 ÷ 2 = 12 remainder 1
12 ÷ 2 = 6 remainder 0
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1

→ Binary: **11001₂**
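The repeated-division procedure above translates directly into code; collecting the remainders and reversing them gives the bottom-up reading (a sketch, with an illustrative function name):

```python
def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2, recording remainders bottom-up."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))
        n //= 2
    return "".join(reversed(remainders))

print(decimal_to_binary(25))  # 11001
```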

2.3 Octal and Hexadecimal

• Octal (Base 8) is useful for compact representation of binary.


• Hexadecimal (Base 16) is widely used in memory addresses, MAC addresses, and color coding.

Hex Digits: A=10, B=11, C=12, D=13, E=14, F=15


Example:
2F₁₆ = (2 × 16¹) + (15 × 16⁰) = 32 + 15 = 47₁₀
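The same place-value expansion works for hexadecimal; this sketch looks each digit up in the table of hex digits given above:

```python
def hex_to_decimal(s: str) -> int:
    """Expand each hex digit by its power of 16 (A=10 ... F=15)."""
    digits = "0123456789ABCDEF"
    total = 0
    for ch in s.upper():
        total = total * 16 + digits.index(ch)
    return total

print(hex_to_decimal("2F"))  # 47
```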

2.4 Data Representation in Computers


1. Bits and Bytes

• Bit: Smallest unit (binary digit: 0 or 1)


• Nibble: 4 bits
• Byte: 8 bits
• Word: 16, 32, or 64 bits (depends on processor)

2. Character Representation

• ASCII: American Standard Code for Information Interchange


o A = 65, a = 97
• Unicode: Extended set to support all global languages.
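Python exposes these encodings directly: `ord()` returns a character's code point and `chr()` does the reverse, for both ASCII and the wider Unicode range:

```python
print(ord("A"))  # 65  (ASCII)
print(ord("a"))  # 97  (ASCII)
print(chr(65))   # A

# Unicode extends the same idea to all languages and symbols:
print(ord("€"))  # 8364
```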

3. Images and Sounds

• Images: Stored using pixels (e.g., RGB values).


• Audio: Represented via sampling of sound waves.
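A rough storage estimate makes the pixel representation concrete: an uncompressed RGB image uses one byte per colour channel, three channels per pixel (the resolution below is just an example):

```python
# Uncompressed RGB image: 3 bytes (R, G, B) per pixel.
width, height = 1920, 1080
bytes_per_pixel = 3
size_bytes = width * height * bytes_per_pixel
print(size_bytes)  # 6220800 bytes, roughly 6.2 MB
```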

✅ Flashcards: Chapter 2 Review

Q1: What base is used in the binary number system?


A1: Base 2.

Q2: Convert 1010₂ to decimal.


A2: 8 + 0 + 2 + 0 = 10₁₀

Q3: Convert 13₁₀ to binary.


A3: 1101₂

Q4: What is a byte?


A4: A group of 8 bits.

Q5: What are the hexadecimal digits after 9?


A5: A, B, C, D, E, F (representing 10 to 15)

Q6: What is the decimal value of 2F₁₆?


A6: 47

Q7: Define ASCII.


A7: A character encoding system where each character is represented by a unique 7- or 8-bit binary number.

Q8: How is a pixel used in digital imaging?


A8: It represents the smallest unit of a digital image, often stored using RGB color values.
Chapter 3: Computer Components and Memory

3.0 Overview of Computer System Components

A computer system consists of hardware, software, and humanware.

A. Hardware

These are the physical parts of a computer that you can see and touch.

1. Central Processing Unit (CPU)

• Known as the "brain" of the computer.


• Has three key parts:
o Control Unit (CU): Directs the operations of the processor.
o Arithmetic Logic Unit (ALU): Performs calculations and logic operations.
o Registers: Small, high-speed storage within the CPU.

2. Memory Unit

Stores data and instructions needed for processing.

3.1 Types of Memory

Memory in computing is divided into primary and secondary.

A. Primary Memory (Main Memory)

| Type | Description | Volatile? | Example |
|---|---|---|---|
| RAM | Temporary memory for active programs | Yes | 8 GB DDR4 |
| ROM | Permanent memory containing boot-up instructions | No | BIOS chip |
| Cache | Fast memory close to the CPU for frequently accessed data | Yes | L1, L2 cache |
| Registers | Very small memory inside the CPU | Yes | Accumulator |

• Volatile: Data is lost when power is off (RAM, cache).


• Non-volatile: Data is retained (ROM).

B. Secondary Memory (Storage)

Used for long-term storage of data and software.

| Type | Description | Example |
|---|---|---|
| Hard Disk Drive (HDD) | Magnetic, large-capacity storage | 1 TB HDD |
| Solid-State Drive (SSD) | Faster; no moving parts | 512 GB SSD |
| Optical Disk | Uses a laser to read/write data | DVD, CD |
| Flash Drive | Portable and fast | USB drive |
| Cloud Storage | Online-based storage | Google Drive |

3.2 Input and Output Devices

A. Input Devices

Used to enter data and instructions into the computer.

| Device | Use |
|---|---|
| Keyboard | Typing text/data |
| Mouse | Pointing and clicking |
| Scanner | Converts documents to digital form |
| Microphone | Audio input |
| Webcam | Captures video |

B. Output Devices

Used to display or communicate the result of data processing.

| Device | Use |
|---|---|
| Monitor | Displays visual output |
| Printer | Produces hard copy |
| Speakers | Output sound/audio |
| Projector | Enlarged visual display |

3.3 Software and Humanware

A. Software

Set of instructions that direct the computer.

• System Software: Controls hardware (e.g., Windows, Linux, macOS)


• Application Software: Performs specific tasks (e.g., MS Word, web browsers)
• Utility Software: Supports system tasks (e.g., antivirus, file management)

B. Humanware

Humans who design, program, manage, and use the computer systems:

• End Users
• System Administrators
• Programmers

✅ Flashcards: Chapter 3 Review

Q1: What is the full meaning of CPU?


A1: Central Processing Unit.

Q2: What are the two main types of computer memory?


A2: Primary memory and secondary memory.

Q3: Is ROM volatile or non-volatile?


A3: Non-volatile.

Q4: Give two examples of secondary storage.


A4: Hard disk drive (HDD), Solid-state drive (SSD).

Q5: What does ALU stand for, and what does it do?
A5: Arithmetic Logic Unit; it performs calculations and logical operations.

Q6: Name three input devices.


A6: Keyboard, mouse, scanner.

Q7: What is an example of utility software?


A7: Antivirus software.

Q8: What is cache memory used for?


A8: Temporarily stores frequently accessed data for fast retrieval by the CPU.
Chapter 4: Problem Solving, Errors, and Ethics in Computing

4.0 Introduction to Problem Solving in Computing

Problem-solving is a core part of computer science. It involves designing algorithms and writing code that solve
real-world or theoretical problems efficiently and accurately.

4.1 Problem-Solving Methods

1. Understanding the Problem

• Clearly define the input, output, and expected behavior.


• Example: Create a program that adds two numbers.

2. Designing a Solution

• Use tools like:


o Flowcharts: Visual diagrams of steps
o Pseudocode: Plain English instructions that resemble code

3. Developing the Algorithm

An algorithm is a finite set of instructions to solve a problem.

Example Pseudocode (Adding two numbers):

START
Input A
Input B
Sum = A + B
Display Sum
END
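The pseudocode above translates almost line for line into Python; here the inputs are passed as parameters instead of read interactively, so the logic is easy to test (the function name is illustrative):

```python
def add_two_numbers(a: float, b: float) -> float:
    """Sum = A + B, as in the pseudocode."""
    total = a + b
    return total  # Display Sum is done by the caller

print(add_two_numbers(4, 7))  # 11
```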

4. Coding

• Translate the algorithm into a programming language (e.g., Python, Java, C++)

5. Testing and Debugging

• Run the program with test data


• Identify and correct errors (debugging)

6. Documentation and Maintenance

• Comment your code


• Write user manuals
• Update software when necessary

4.2 Types of Errors in Computing

| Error Type | Description | Example |
|---|---|---|
| Syntax Error | Violation of the language's grammar rules | Missing semicolon in C++ |
| Runtime Error | Occurs during execution | Division by zero (10 / 0) |
| Logic Error | Program runs but gives wrong output due to incorrect logic | Adding instead of multiplying |
| Semantic Error | Syntactically correct but does not reflect the programmer's intention | Using the wrong variable name |
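Two of these error types can be demonstrated in a few lines of Python (the rectangle function is a deliberately buggy example invented for illustration):

```python
# Runtime error: division by zero raises an exception during execution.
try:
    result = 10 / 0
except ZeroDivisionError as e:
    print("Runtime error:", e)

# Logic error: the program runs, but the formula is wrong.
def area_of_rectangle(w, h):
    return w + h  # bug: should be w * h

print(area_of_rectangle(3, 4))  # prints 7, but the correct area is 12
```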

4.3 Ethics in Computing

Computer Ethics are principles that regulate proper behavior in the use of computer systems.
Key Issues in Computer Ethics:

| Topic | Description |
|---|---|
| Privacy | Respecting users' personal data |
| Intellectual Property | Avoiding plagiarism and piracy |
| Cybersecurity | Preventing unauthorized access and data theft |
| Digital Divide | Promoting equal access to technology |
| AI and Bias | Ensuring fairness in automated systems |
| Environmental Impact | Reducing e-waste and energy consumption |

Laws & Regulations:

• Cybercrime Act
• Data Protection Laws
• Copyright Laws

4.4 Good Practices in Ethical Computing

• Use licensed software


• Respect privacy and personal data
• Avoid spreading malware or fake information
• Credit original creators
• Report unethical activities

✅ Flashcards: Chapter 4 Review

Q1: What is an algorithm?


A1: A step-by-step procedure to solve a problem.

Q2: Name three types of errors in programming.


A2: Syntax error, runtime error, logic error.

Q3: What is pseudocode?


A3: A way of writing algorithms using structured English resembling code.

Q4: What is a flowchart used for?


A4: To visually represent the steps of an algorithm.

Q5: Give an example of a runtime error.


A5: Division by zero.

Q6: What is the digital divide?


A6: The gap between those who have access to digital technology and those who do not.

Q7: Mention one law related to ethical computing.


A7: Cybercrime Act.

Q8: How can programmers ensure fairness in AI systems?


A8: By removing bias from training data and testing for discrimination.
Chapter 5: The Internet, Applications of Computing, and Career Paths

5.0 Introduction to the Internet

The Internet is a global network of interconnected computers that communicate using standard protocols. It enables information sharing, communication, and services on a global scale.

Key Dates in Internet History

| Year | Event |
|---|---|
| 1969 | ARPANET (precursor to the Internet) launched in the USA |
| 1983 | TCP/IP became the standard Internet protocol |
| 1990 | Tim Berners-Lee invented the World Wide Web (WWW) |
| 1993 | Mosaic, the first widely used graphical web browser, released |
| 2004–Present | Rise of social media and cloud computing |

5.1 Components of the Internet

| Component | Description |
|---|---|
| ISP (Internet Service Provider) | Company that provides Internet access (e.g., MTN, Spectranet) |
| IP Address | A unique address assigned to each device online |
| Web Browser | Software used to access websites (e.g., Chrome) |
| Web Server | Stores and delivers web pages to browsers |
| DNS (Domain Name System) | Translates website names to IP addresses |
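The IP-address entry in the table can be explored with Python's standard-library `ipaddress` module; the addresses below are chosen purely for illustration:

```python
import ipaddress

# Every device online is identified by an IP address; the standard
# library can parse and classify them.
addr = ipaddress.ip_address("192.168.0.1")
print(addr.is_private)  # True: 192.168.0.0/16 is a private range

public = ipaddress.ip_address("8.8.8.8")
print(public.is_private)  # False: this is a public address
```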

5.2 Applications of the Internet

Area Applications
Education eLearning platforms (e.g., Coursera, Moodle)
Communication Email, instant messaging, video calls
Commerce Online shopping, banking, and digital payments
Entertainment Streaming music/videos, gaming (e.g., YouTube, Netflix)
Social Media Facebook, Twitter, Instagram
Healthcare Telemedicine, electronic health records
Government e-Governance, online portals
5.3 The Computing Disciplines

| Field | Description | Examples |
|---|---|---|
| Computer Science | Theory and algorithms of computing | Software design, AI |
| Information Technology (IT) | Use of computers for storing and transmitting data | Networking, database admin |
| Software Engineering | Building reliable and scalable software | App development |
| Information Systems | Managing business and organizational IT | ERP systems, IT management |
| Cybersecurity | Protecting systems from attacks | Ethical hacking, security auditing |
| Data Science | Analyzing large sets of data | Machine learning, BI |

5.4 Careers in Computing

| Role | Description | Tools/Skills Needed |
|---|---|---|
| Software Developer | Writes and tests code | Python, Java, Git |
| Network Administrator | Manages computer networks | Cisco, TCP/IP |
| Database Administrator | Maintains and secures databases | SQL, Oracle |
| Cybersecurity Analyst | Detects and prevents threats | Firewalls, penetration tools |
| AI Engineer | Builds intelligent systems | Python, TensorFlow |
| Web Developer | Designs websites and web apps | HTML, CSS, JavaScript |
| Data Analyst/Scientist | Extracts insight from data | Excel, Python, R, SQL |
| IT Support Specialist | Helps users solve technical problems | Hardware/software knowledge |

5.5 The Future of Computing

| Trend | Impact |
|---|---|
| Artificial Intelligence (AI) | Automating tasks, personalization, robotics |
| Quantum Computing | Solving complex problems faster |
| Cloud Computing | Scalable storage and services |
| 5G Networks | Faster, more reliable Internet |
| Internet of Things (IoT) | Smart homes, wearable devices, connected cars |
| Blockchain Technology | Secure, decentralized transactions |
| Edge Computing | Data processing closer to the source (e.g., sensors) |

✅ Flashcards: Chapter 5 Review

Q1: Who invented the World Wide Web and in what year?
A1: Tim Berners-Lee in 1990.

Q2: What does DNS stand for?


A2: Domain Name System.

Q3: Give two examples of Internet applications in education.


A3: Coursera, Google Classroom.

Q4: What is an ISP?


A4: Internet Service Provider — provides access to the Internet.
Q5: Mention three disciplines in computing.
A5: Computer Science, Software Engineering, Cybersecurity.

Q6: What is the role of a data scientist?


A6: Analyze and interpret complex data for decision-making.

Q7: What is the Internet of Things (IoT)?


A7: A network of physical devices connected to the Internet.

Q8: What is the use of cloud computing?


A8: Storing and accessing data over the Internet instead of local storage.
