
ICT131 Introduction to Computer Science

Mulungushi University
Centre for Information Communication Technology and Education
Copyright
© 2009 by Mulungushi University. All rights reserved

Mulungushi University
Centre for Information Communication Technology and Education
P.O. Box 80415,
Great North Road Campus,
Kabwe,
Zambia.
Fax: +260 215 224637
E-mail: academic@mu.ac.zm
Website: www.mu.ac.zm
Acknowledgements
The Institute of Distance Education of Mulungushi University wishes to thank the persons whose
names are listed below for their contribution to the preparation of this ICT131 Module.

Dr. Douglas Kunda Director, CICTE


Ms. Mutinta O Mweembe HOD, IDD
Ms. Leena Jaganathan Lecturer, CICTE
Mr. Christopher Chembe SDF, CICTE
Mr. Mutinta Mwananimbwe Clerical Officer
Contents

About ICT131 Module
    Structure of the Module

Module overview
    Welcome to ICT131 Introduction to Computer Science
    ICT131 Introduction to Computer Science —is this module for you?
    Module outcomes
    Timeframe
    Study skills
    Need help?
    Assignments
    Assessment

Getting around ICT131 Module
    Margin icons

Unit 1: Introduction to Computers
    Introduction
    Unit summary
    Assignment
    Assessment

Unit 2: Input and Output Devices
    Introduction
    Unit summary
    Assignment
    Assessment

Unit 3: Memory
    Introduction
    Unit summary
    Assignment
    Assessment

Unit 4: Software
    Introduction
    Unit summary
    Assignment
    Assessment

Unit 5: Hardware and Networking
    Introduction
    Octal to Decimal conversion
    Octal to Binary Conversion
    Binary to Octal conversion
    The three elements of a basic telecommunication system
    Unit summary
    Assignment
    Assessment

References

About ICT131 Module


ICT 131: Introduction to Computer Science has been produced by
Mulungushi University. All modules produced by the University are
structured in the same way, as outlined below.

Structure of ICT131 Module

Module overview
This module is meant for all Mulungushi University students. It offers a
bottom-up introduction to the computer, beginning with bits and moving up
the conceptual hierarchy to higher-level languages. The aim of the module
is to equip you with background knowledge of computing.

We strongly recommend that you read the overview carefully before
starting your study.

Module content
The module is broken down into five units as follows:
 Introduction to computer science;
 Input and Output Devices;
 Memory;
 Software; and
 Hardware and Networking.
Each unit contains some assignment and/or assessment exercises, as
applicable.

Resources
As you might be interested in learning more on the topics covered in this
module, we have provided a list of additional resources at the end of the
module in the form of books, articles and web sites.

Your comments
After you have completed the module we would highly appreciate it if
you could take a few moments to give us your feedback on any aspect of
this course. Your feedback might include comments on:


 Content and structure;
 Reading materials and resources;
 Assignments;
 Assessment exercises;
 Duration; and
 Support (assigned tutors, technical help, etc.).
Your constructive feedback will help us to improve and enhance this
module.


Module overview

Welcome to ICT131 Introduction to Computer Science

ICT131 Introduction to Computer Science —is this module for you?

This module is meant for all Mulungushi University students. It offers a
bottom-up introduction to the computer, beginning with bits and moving up
the conceptual hierarchy to higher-level languages. The aim of the module
is to equip you with background knowledge of computing.


Module outcomes
By the end of the module, you should be able to:

Outcomes

 Demonstrate familiarity with the history of computing;
 Demonstrate understanding of the basic principles of computing;
 Demonstrate understanding of programming languages;
 Demonstrate understanding of system development methodology;
 Describe a variety of data representation systems;
 Manipulate numbers in the various data representations;
 Relate the data representations to the storage of numbers and other data;
 Explain the role of the principal functional components of a computer;
 Demonstrate awareness of computer network and telecommunication technology; and
 Demonstrate knowledge of the positive and negative impact of computers on people.

Timeframe
This is a one-semester module. You are required to spend twenty (20) hours
on each unit, for a total of 100 hours to cover all five units.

How long?

Study skills

As an adult learner your approach to learning will be different from that
which you applied during your school days. You will choose what you
want to study. You will have professional and/or personal motivation for
doing so and you will most likely be fitting your study activities around
other professional or domestic responsibilities.
Essentially you will be taking control of your learning environment. As a
consequence, you will need to consider performance issues related to
time management, goal setting, stress management and others. Perhaps
you will also need to reacquaint yourself in areas such as essay planning,
coping with exams and using the web as a learning resource.
Your most significant considerations will be time and space i.e. the time
you dedicate to your learning and the environment in which you engage
in that learning.
We recommend that you take time now—before starting your self-
study—to familiarize yourself with these issues. There are a number of
excellent resources on the web. A few suggested links are:

 http://www.how-to-study.com/
The “How to study” web site is dedicated to study skills resources.
You will find links to study preparation (a list of nine essentials for a
good study place), taking notes, strategies for reading text books,
using reference sources, test anxiety.

 http://www.ucc.vt.edu/stdysk/stdyhlp.html
This is the web site of the Virginia Tech, Division of Student Affairs.
You will find links to time scheduling (including a “where does time
go?” link), a study skill checklist, basic concentration techniques,
control of the study environment, note taking, how to read essays for
analysis, memory skills (“remembering”).

 http://www.howtostudy.org/resources.php
Another “How to study” web site with useful links to time
management, efficient reading, questioning/listening/observing skills,
getting the most out of doing (“hands-on” learning), memory building,
tips for staying motivated, developing a learning plan.
The above links are our suggestions to start you on your way. At the time
of writing this module these web links were active. If you want to look
for more go to www.google.com and type “self-study basics”, “self-study
tips”, “self-study skills” or similar.


Need help?
For help you can contact the module lecturer, Mrs. Leena Jaganathan, at
Mulungushi University. E-mail: neyleena@yahoo.co.in or
lkumar@mu.ac.zm.
Help

Assignments
There are three assignments for this module. Once completed, the
assignments should be submitted by either e-mail or post.

Assignments

Assessment
There are five assessment exercises in this module comprising:
(i) Assignment assessment made by the lecturer; and
(ii) Self-assessment which should be done by you at the
completion of each unit.


Getting around ICT131 Module

Margin icons
While working through ICT131 Module you will notice the frequent use
of margin icons. These icons serve to “signpost” a particular piece of text,
a new task or change in activity. They have been included to help you to
find your way around the module.
A complete icon set is shown below. We suggest that you familiarize
yourself with the icons and their meaning before starting your study.

Activity Assessment Assignment Case study

Discussion Group activity Help Note it!

Outcomes Reading Reflection Study skills

Summary Terminology Time Tip


Unit 1

Introduction to Computers
Introduction
Welcome to Unit 1 Introduction to Computers. This unit provides a
definition of the term 'computer' and presents information on a brief
history of computers, types of computers and components of a computer
system.
Upon completion of this unit, you should be able to:

Outcomes

 Define the term ‘computer’;
 Describe the five generations of computers;
 List the main types of computers; and
 Identify the components of a computer system.

Terminology

IC: Integrated Circuit
VLSI: Very Large Scale Integrated circuit
CPU: Central Processing Unit
NC: Network Computer
WBT: Windows-Based Terminal
ALU: Arithmetic Logic Unit
CU: Control Unit
GaAs: Gallium Arsenide


Introduction to Computers
A computer is a device that accepts data in one form and processes it to produce data in
another form. The forms in which data is accepted or produced by the computer vary
enormously from simple words or numbers to signals sent from or received by other items
of technology. So, when the computer processes data it actually performs a number of
separate functions as follows:

 Input: The computer accepts data from outside for processing within.
 Storage: The computer holds data internally before, during and after
processing.
 Processing: The computer performs operations on the data it holds within.
 Output: The computer produces data from within for external use.
Figure 1 below illustrates the basic functions of a computer system.

[Diagram: Data → Input → Process → Output → Data, with Storage linked to Process]

Figure 1: The basic functions of a computer system
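The four basic functions can be sketched as a short program. This is a purely illustrative sketch (the function name and the doubling step are invented for the example, not part of the module):

```python
# A minimal model of the four basic functions of a computer:
# input, storage, processing, and output.

def run(raw_data):
    storage = []                       # Storage: hold data internally

    storage.extend(raw_data)           # Input: accept data from outside

    results = [x * 2 for x in storage] # Processing: operate on held data

    return results                     # Output: produce data for external use

print(run([1, 2, 3]))  # → [2, 4, 6]
```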

Brief History of Computing


The first electronic computers were produced in the 1940s. Since then, a series of radical
breakthroughs in electronics has occurred. With each major breakthrough, computers based upon the
older form of electronics have been replaced by a new “generation” of computers based upon the
newer form. These “generations” are classified as follows:

(i) First generation;

(ii) Second generation;

(iii) Third generation;


(iv) Fourth generation; and

(v) Fifth generation.

First Generation Computers


The first generation computers, produced between 1946 and 1956, used vacuum tubes to store and
process information. These tubes consumed large amounts of power and generated a lot of heat. In
addition, they had limited memory and processing capability. In view of these limitations, first
generation computers were short-lived. Examples of first generation computers are EDSAC,
EDVAC, LEO and UNIVAC I.

Second Generation Computers


The second generation computers, produced between 1957 and 1963, used transistors for storing
and processing information. Transistors consumed less power than vacuum tubes, produced less
heat, and were cheaper, more stable and more reliable. Second generation computers, with increased
processing and storage capabilities, began to be more widely used for scientific and business
purposes. Examples of second generation computers are LEO mark III, ATLAS and the IBM 7000
series.

Third Generation Computers


Third generation computers, produced between 1964 and 1979, used integrated circuits
(IC) for storing and processing information. Integrated circuits are made by printing
numerous small transistors on silicon chips. These devices are called semiconductors. Third
generation computers employed software that could be used by non-technical people, thus
enlarging the role of computers in business. Examples of third generation computers are the
ICL 1900 series and the IBM 360 series.

Fourth Generation Computers


Fourth generation computers, produced from 1980 to the present, use very large scale
integrated circuits (VLSI) to store and process information. The VLSI technique allows the
installation of hundreds of thousands of circuits (transistors and other components) on a
small chip. With ultra-large-scale integration, 10 million transistors could be placed on a
chip. These computers are inexpensive and widely used in business and everyday life.


Fifth Generation Computers


The first four generations of computer hardware are based on the Von Neumann
architecture, which processed information sequentially, one instruction at a time. The fifth
generation of computers uses massively parallel processing to process multiple instructions
simultaneously. Massively parallel computers use flexibly connected networks linking
thousands of inexpensive, commonly used chips to address large computing problems,
attaining supercomputer speeds. With enough chips networked together, massively parallel
machines can perform more than a trillion floating point operations per second, a teraflop. A
floating point operation (flop) is a basic computer arithmetic operation, such as addition or
subtraction, on numbers that include a decimal point.
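A machine's flop rate tells you roughly how long a numerical workload will take. The figures below are illustrative, not benchmarks from the module:

```python
# Estimate the runtime of a floating point workload from a machine's
# sustained speed (hypothetical figures, for illustration only).

operations = 2.5e12     # 2.5 trillion floating point operations
speed_flops = 1e12      # a 1-teraflop machine: 10**12 flops per second

seconds = operations / speed_flops
print(seconds)          # → 2.5
```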

Types of Computers
Computers are distinguished on the basis of their processing capabilities. Computers with
the most processing power are also the largest and most expensive.

Super Computers

Super computers are those with the most processing power. The primary application of
supercomputers has been in scientific and military work, but their use is growing rapidly in
business as their prices decrease. Super computers are especially valuable for large
simulation models of real-world phenomena, where complex mathematical representations
and calculations are required, or for image creation and processing. Super computers are
used to model the weather for better weather prediction, to test weapons non-destructively,
to design aircraft, for more efficient and less costly production, and to make sequences in
motion pictures. Super computers generally operate 4 to 10 times faster than the next
most powerful computer class, the mainframe.

Super computers use the technology of parallel processing. However, in contrast with
neural computing, which uses massively parallel processing, super computers use
non-interconnected CPUs.


Mainframes

Mainframes are not as powerful and generally not as expensive as supercomputers. Large
corporations, where data processing is centralized and large databases are maintained, most
often use mainframe computers. Applications that run on a mainframe can be large and
complex, allowing for data and information to be shared throughout the organization. In
1998, a mainframe system had anywhere from 50 megabytes to several gigabytes of
primary storage. Online secondary storage may use high-capacity magnetic and optical
storage media with capacities in the gigabyte to terabyte range. Several hundreds or even
thousands of online computers can be linked to a mainframe.

Minicomputers

Minicomputers, which are also called midrange computers, are smaller and less expensive
than mainframe computers. Minicomputers are usually designed to accomplish specific
tasks such as process control, scientific research and engineering applications.

Larger companies gain greater corporate flexibility by distributing data processing
with minicomputers in organizational units instead of centralizing computing at one
location. These minicomputers are connected to each other and often to a mainframe
through telecommunication links. The minicomputer is also able to meet the needs of
smaller organizations that would rather not utilize scarce corporate resources by purchasing
larger computer systems.

Personal Computers

Personal computers are the smallest and least expensive category of general purpose
computers. In general, modern personal computers have between 1 and 2 gigabytes of
primary storage, one 3.5 inch floppy drive, a CD-ROM drive, and 10 gigabytes or more of
secondary storage. They may be subdivided into four classifications based on size:
desktops, laptops, notebooks, and palmtops.

The desktop personal computer is the typical, familiar microcomputer system. It is
usually modular in design, with separate but connected monitor, keyboard, and CPU.
Laptop computers are small, easily transportable, light-weight microcomputers that fit into
the briefcase. Notebooks are smaller laptops (also called mini-laptops), but sometimes are
used interchangeably with laptops. Laptops and notebooks are designed for maximum


convenience and transportability, allowing users to have access to processing power and
data without being bound to an office environment.

Palmtop computers are hand-held microcomputers, small enough to carry in one
hand. Although still capable of general-purpose computing, palmtops are usually
configured for specific applications and limited in the number of ways they can accept user
input and provide output.

Network computers and Terminals

The computers described so far are considered “smart” computers. Mainframe
and midrange computers, however, use “dumb” terminals, which are basically input/output
devices. In the past, these terminals, called X terminals, were also used for limited
processing. Two extensions of these terminals are discussed here: the network computer
and the windows-based terminal.

Network computer

A network computer (NC) is a desktop terminal that does not store software
programs or data permanently. Similar to a “dumb” terminal, the NC is simpler and cheaper
than a PC and easy to maintain. Users can download software or data they need from a
server or a mainframe over an intranet or the Internet. There is no need for hard disks, floppy
disks, CD-ROMs and their drives. The central computer can save any material for the user.
The NCs provide security as well. However, users are limited in what they can do with the
terminals.

Windows-based terminal

Windows-based terminals (WBTs) are a subset of NCs. Although they offer less
functionality than PCs, WBTs reduce maintenance and support costs and maintain
compatibility with Windows operating systems. WBT users access Windows applications on
central servers as if those applications were running locally. As with the NC, the savings are
not only in the cost of the terminals, but mainly from the reduced support and maintenance
cost.


Components of Computers
Computer hardware is composed of the following components: central processing unit
(CPU), input devices, output devices, primary storage, secondary storage, and
communication devices. Each of the hardware components plays an important role in
computing. The input devices accept data and instructions and convert them to a form
which the computer can understand. The output devices present data in a form people can
understand. The CPU manipulates the data and controls the tasks done by the other
components. The primary storage (internal storage) temporarily stores data and program
instructions during processing. It also stores intermediate results of the processing. The
secondary storage facility (external storage) stores data and programs for future use. These
components are illustrated in Figure 2 below.

Figure 2: Components of a computer system

Input/output Devices

The input/output (I/O) devices of a computer are not part of the CPU, but are channels for
communicating between the external environment and the CPU. Data and instructions are
entered into the computer through input devices, and processing results are provided


through output devices. Widely used I/O devices are the cathode-ray tube (CRT) or visual
display unit (VDU), magnetic storage media, printers, keyboards, “mice,” and image-
scanning devices. I/O devices are controlled directly by the CPU or indirectly through
special processors dedicated to input and output processing. Generally speaking, I/O
devices are subclassified into secondary storage devices (primarily disk and tape drives)
and peripheral devices (any input/output device that is attached to the computer).

Central processing Unit


The central processing unit (CPU) is also referred to as a microprocessor because of its
small size. The CPU is the center of all computer-processing activities, where all
processing is controlled, data are manipulated, arithmetic computations are performed, and
logical comparisons are made. The CPU consists of the control unit, the arithmetic-logic
unit (ALU), and the primary storage (or main memory). The arithmetic-logic unit
performs required arithmetic and comparisons, or logic, operations. The ALU adds,
subtracts, multiplies, divides, compares, and determines whether a number is positive,
negative or zero. All computer applications are achieved through these six operations. The
ALU operations are performed sequentially, based on instructions from the control unit. For
these operations to be performed, the data must first be moved from the storage to the
arithmetic registers in the ALU. Registers are specialized, high-speed memory areas for
storing temporary results of ALU operations as well as for storing certain control
information.
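The six ALU operations named above can be mimicked in a few lines. This is an illustrative sketch using Python's arithmetic and comparison operators, not a model of real ALU circuitry:

```python
# The six basic ALU operations on two sample numbers.
a, b = 7, 3

print(a + b)        # add                → 10
print(a - b)        # subtract           → 4
print(a * b)        # multiply           → 21
print(a // b)       # divide (integer)   → 2
print(a > b)        # compare            → True
print(a - b > 0)    # sign determination: is the result positive? → True
```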

Primary Storage
Primary storage, or main memory, stores data and program statements for the CPU. It has
four basic purposes:

1. To store data that have been input until they are transferred to the ALU for
processing;

2. To store data and results during intermediate stages of processing;

3. To hold data after processing until they are transferred to an output device; and

4. To hold program statements or instructions received from input devices and from
secondary storage.


Primary storage in today’s microcomputers utilizes integrated circuits. These
circuits are interconnected layers of etched semiconductor materials forming electrical
transistor memory units with “on-off” positions that direct the electrical current passing
through them. The on-off states of the transistors are used to establish a binary 1 or 0 for
storing one binary digit, or bit.
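The on-off states map directly onto the binary representation of numbers, which you can inspect with Python's built-in `bin` and `int` functions:

```python
# Each "on" (1) or "off" (0) transistor state stores one bit;
# a group of bits encodes a number in base 2.

value = 13
bits = bin(value)        # binary representation as a string
print(bits)              # → 0b1101  (8 + 4 + 0 + 1 = 13)

# Reconstruct the value from its bits to confirm the encoding:
print(int("1101", 2))    # → 13
```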

Buses
Instructions and data move between computer subsystems and the processor via
communications channels called buses. A bus is a channel (or shared data path) through
which data are passed in electronic form. Three types of buses link the CPU, primary
storage, and the other devices in the computer system. The data bus moves data to and
from primary storage. The address bus transmits signals for locating a given address in
primary storage. The control bus transmits signals specifying whether to “read” or “write”
data to or from a given primary storage address, input device, or output device.

The capacity of a bus, called bus width, is defined by the number of bits it can carry
at one time. Bus speed is also important, currently averaging about 133 megahertz (MHz).
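Bus width and bus speed together bound how much data can move per second. As a back-of-the-envelope illustration (the figures are examples, not specifications of any real machine):

```python
# Peak transfer rate of a bus = width (bits per cycle) x clock (cycles per second).

bus_width_bits = 64           # bits carried per cycle
bus_speed_hz = 133_000_000    # 133 MHz, the average speed mentioned above

bytes_per_second = bus_width_bits // 8 * bus_speed_hz
print(bytes_per_second)       # → 1064000000 (about 1.06 GB per second)
```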

Control Unit
The control unit reads instructions and directs the other components of the computer
system to perform the functions required by the program. It interprets and carries out
instructions contained in computer programs, selecting program statements from the
primary storage, moving them to the instruction registers in the control unit, and then
carrying them out. It controls input and output devices and data-transfer processes to and
from memory. The control unit does not actually change or create data. It merely directs the
data flow within the CPU. The control unit can process only one instruction at a time, but it
can execute instructions so quickly (millions per second) that it can appear to do many
different things simultaneously.

The series of operations required to process a single machine instruction is called a
machine cycle. Each machine cycle consists of the instruction cycle, which sets up circuitry
to perform a required operation, and the execution cycle, during which the operation is
actually carried out.


Arithmetic-Logic Unit
Instructions are obeyed and the necessary arithmetic operations are carried out on the data.
The part of the processor that does this is sometimes called the Arithmetic-Logic Unit
(ALU), although in reality, as with the “control unit”, there is often a physically separate
component that performs this function. In addition to arithmetic, the processor also performs
so-called “logical” operations. These operations take place at incredibly high speeds. For
example, 10 million numbers may be totalled in one second.

Processor Speed
The speed of a chip depends on four factors: the clock speed, the word length, the data bus
width, and the design of the chip. The clock, located within the control unit, is the
component that provides the timing for all processor operations. The beat frequency of the
clock (measured in megahertz [MHz] or millions of cycles per second) determines how
many times per second the processor performs operations. In 2008, PCs with Intel Core Duo,
Pentium IV, Pentium 4 HT or Alpha chips were running at 1.5 GHz–3.0 GHz. All things
being equal, a processor that uses an 800 MHz clock operates at twice the speed of one that
uses a 400 MHz clock. A more accurate measure of processor speed is millions of
instructions per second (MIPS). Word length is the number of bits (covered in detail in
Unit 5) that can be processed at one time by a chip. Chips are commonly
labelled as 8-bit, 16-bit, 32-bit, 64-bit, and 128-bit devices. A 64-bit chip, for example, can
process 64 bits of data in a single cycle. The larger the word length, the faster the chip
speed. The width of the buses determines how much data can be moved at one time. The
wider the data bus (e.g., 64 bits), the faster the chip. Matching the CPU to its buses can
affect performance significantly. In some personal computers, the CPU is capable of
handling 64 bits at a time, but the buses are only 32 bits wide. In this case, the CPU must
send and receive each 64-bit word in two 32-bit chunks, one at a time. This process makes
data transmission times twice as long.
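The mismatch between a 64-bit CPU and a 32-bit bus can be checked with simple arithmetic (an illustrative sketch of the scenario described above):

```python
import math

# How many bus transfers are needed to move one CPU word?
word_bits = 64   # the CPU handles 64 bits at a time
bus_bits = 32    # but the bus is only 32 bits wide

transfers = math.ceil(word_bits / bus_bits)
print(transfers)  # → 2: each 64-bit word moves as two 32-bit chunks,
                  # doubling the transmission time versus a 64-bit bus
```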

Microprocessor
Over the last thirty-seven years, microprocessors have become dramatically faster,
more complex, and denser with increasing numbers of transistors embedded in the silicon
wafer. As the transistors are packed closer together and the physical limits of silicon are


approached, scientists are developing new technologies that increase the processing power
of chips. Chips are now being manufactured from gallium arsenide (GaAs), a
semiconductor material inherently much faster than silicon because electrons can move
through GaAs five times faster than they can move through silicon. GaAs chips are more
difficult to produce than silicon chips, resulting in higher prices. However, chip producers
are perfecting manufacturing techniques that will result in a decrease in the cost of GaAs
chips. Intel has incorporated MMX (multimedia extension) technology in its Pentium
microprocessors. MMX technology improves video compression/decompression, image
manipulation, encryption, and input/output processing, all of which are used in modern
office software suites and advanced business media, communications and Internet
capabilities. The Intel duo core processor introduces a new generation of processing power
with Intel NetBurst microarchitecture. It maximizes the performance of cutting-edge
technologies such as digital video and online 3-D gaming. In addition, it has an innovative
design capable of taking full advantage of emerging Web technologies. The chip’s all-new
internal design includes a rapid execution engine and a 400 MHz system bus in order to
deliver a higher level of performance.

A prototype plasma-wave chip, which transmits signals as waves, not as packets
of electrons as current chips do, has been developed. Developers use an analogy with
sound. Sound travels through the air as waves, not as batches of air molecules that leave
one person’s mouth and enter another’s ear. If sound worked in that fashion, there would be
a long delay while the sound-carrying molecules negotiated their way through the other air
molecules. The new chips have the potential of operating speeds in the gigahertz range, or
billions of cycles per second.


Unit summary
In this unit you learned about the computer, the history of computers, types of
computers and the components of a computer system.

Assignment
Describe each of the components of a computer system.

Assessment
(i) What is a computer?
(ii) Distinguish between the third and the fourth generation of computers.
(iii) Describe the composition of a network computer.
(iv) Write short notes on the central processing unit.


Unit 2

Input and Output Devices


Introduction
Welcome to Unit 2. This unit gives the details of input devices and output
devices.
Upon completion of this unit, you should be able to:

 Describe computer system input devices; and


 Describe computer system output devices.


Terminology

POS: Point of Sale Terminal
OMR: Optical Mark Reader
MICR: Magnetic Ink Character Reader
ATM: Automated Teller Machine
OCR: Optical Character Reader
CRT: Cathode Ray Tube
LCD: Liquid Crystal Display

Input/output Devices
The input/output (I/O) devices of a computer are not part of the CPU, but are channels for
communicating between the external environment and the CPU. Data and instructions are
entered into the computer through input devices, and processing results are provided
through output devices. Widely used I/O devices are the cathode-ray tube (CRT) or visual
display unit (VDU), magnetic storage media, printers, keyboards, “mice,” and image-
scanning devices. I/O devices are controlled directly by the CPU or indirectly through


special processors dedicated to input and output processing. Generally speaking, I/O
devices are sub classified into secondary storage devices (primarily disk and tape drives)
and peripheral devices (any input/output device that is attached to the computer).

Input Devices
Users can command the computer and communicate with it by using one or more input
devices. Each input device accepts a specific form of data. For example, keyboards transmit
typed characters and handwriting recognizers “read” handwritten characters. Users want
communication with computers to be simple, fast and error free. Therefore, a variety of
input devices fits the needs of different individuals and applications.

Categories and examples of input devices:

Keying devices: punched card reader, keyboard, point-of-sale terminal.

Pointing devices (devices that point to objects on the computer screen): mouse,
touch screen, touchpad (or track pad), light pen, joystick.

Optical character recognition (devices that scan characters): bar code scanner,
optical character reader, optical mark reader, wand reader.

Handwriting recognizers: pen.

Voice recognizers (data are entered by voice): microphone.

Other devices: MICR, ATM, digitizers.


Direct Data Entry

The captured data may be stored in some intermediate form for later entry into the main
computer in the required form. The method of entering data directly at a terminal or
workstation is known as Direct Data Entry.

Remote Job Entry

Remote job entry refers to batch processing in which jobs are entered at a remote
terminal and transmitted to the main computer.

Keyboard

The most common input device is the keyboard. The keyboard is designed like a typewriter
but with many additional special keys.

Mouse

The computer mouse is a handheld device used to point a cursor at a desired place on the
screen, such as an icon, a cell in a table, an item in a menu, or any other object. Once the
arrow is placed on an object, the user clicks a button on the mouse, instructing the computer
to take some action. The use of the mouse reduces the need to type in information or use the
slower arrow keys. Special types of mice, such as rollerballs and trackballs, are used in
many portable computers. A new technology, called glide-and-tap, allows fingertip cursor control
in laptop computers.

Key-to-diskette systems

The name “key-to-diskette” comes from the days when these were special purpose systems.
Today the term may be used for PC applications where data is entered onto magnetic
media. This most commonly occurs in organizations where data is generated at a number of
different places.


Punched card reader

A card reader such as the IBM 3505 is an electromechanical input device used to
read Hollerith cards. Sometimes combined with card punches, as in the IBM 2540 card
reader-punch, such devices were almost always attached to a computer, but in earlier days
could be found as stand-alone duplication or serialization devices.

[Figure: punched cards and a punched card reader]

Joystick

Joysticks are used primarily at workstations that can display dynamic graphics. They are
also used in playing video games. The joystick moves and positions the cursor at the
desired object on the screen.

Automated Teller Machine

Automated teller machines (ATMs) are interactive input/output devices that enable people to
obtain cash, make deposits, transfer funds, and update their bank accounts instantly from
many locations. ATMs can handle a variety of banking transactions, including the transfer
of funds to specified accounts. One drawback of ATMs is their vulnerability to computer
crimes and to attacks made on customers as they use outdoor ATMs.

Point-of-Sale Terminals

Many retail organizations utilize point of sale (POS) terminals. The POS terminal has a
specialized keyboard. For example, the POS terminals at fast-food restaurants include all
the items on the menu, sometimes labeled with the picture of the item. POS terminals in a
retail store are equipped with a scanner that reads the bar-coded sales tag. POS devices
increase the speed of data entry and reduce the chance of errors. POS terminals may include
many features such as scanner, printer, voice synthesis (which pronounces the price by
voice) and accounting software.

Bar Code Scanner

Bar code scanners scan the black-and-white bars written in the Universal Product Code
(UPC). This code specifies the name of the product and its manufacturer (product ID). Then
a computer finds in the database the price equivalent to the product’s ID. Bar codes are
especially valuable in high-volume processing where keyboard entry is too slow and/or
inaccurate. Applications include supermarket checkout, airline baggage stickers, and
transport company’s packages (Federal Express, United Parcel Service, and the U.S. Postal
Service). The wand reader is a special handheld bar code reader that can read codes that are
also readable by people.
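The step described above, from scanned product ID to price, is essentially a table lookup. A minimal sketch in Python, with invented UPC codes, product names and prices:

```python
# Hypothetical product database keyed by UPC (all entries invented).
price_db = {
    "036000291452": ("Breakfast cereal", 3.49),
    "012345678905": ("Bottled water", 0.99),
}

def checkout(upc):
    """Return the product name and price for a scanned bar code."""
    name, price = price_db[upc]
    return name, price

print(checkout("036000291452"))  # ('Breakfast cereal', 3.49)
```

A real POS system would query a store database rather than an in-memory table, but the lookup idea is the same.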


Optical Character Reader

With an optical character reader (OCR), source documents such as reports, typed
manuscripts and books can be entered directly into a computer without the need for keying.
An OCR converts text and images on paper into digital form and stores the data on disk or
other storage media. OCRs are available in different sizes and for different types of
applications. The publishing industry is the leading user of optical scanning equipment.
Publishers scan printed documents and convert them to electronic databases that can be
referenced as needed. Similarly, they may scan manuscripts instead of retyping them in
preparation for the process that converts them into books and magazines. Considerable time
and money are saved and the risk of introduction of typographical errors is reduced.

Magnetic Ink Character Reader

Magnetic ink character readers (MICRs) read information printed on checks in magnetic
ink. This information identifies the bank and the account number. On a cancelled check, the
amount is also readable after it is added in magnetic ink.

Turnaround Document

A turnaround document is one that has been output from a computer, had some extra
information added to it, and then been returned to become an input document. For
example, meter cards are produced for collecting readings from gas meters, photocopiers
and water meters. These are filled in by the customer and then returned to the company for
scanning using ICR (Intelligent Character Recognition) so that the system can produce the
bills for the customer.


Output devices
The output generated by a computer can be transmitted to the user via several devices and
media. The presentation of information is extremely important in encouraging users to
embrace computers.

Monitors

The data entered into a computer can be visible on the computer monitor, which is basically
a video screen that displays both input and output. Monitors come in different sizes, ranging
from a few inches to several feet, and in different colors. The major benefit is the interactive
nature of the device. Monitors employ the cathode ray tube (CRT) technology, in which an
electronic “gun” shoots a beam of electrons to illuminate the pixels on the screen. CRTs
typically are 21-inch or smaller. The more pixels on the screen, the higher the resolution.
Portable computers use a flat screen consisting of a liquid crystal display (LCD). Gas
plasma monitors offer larger screen sizes (more than 36 inches) and higher display quality
than LCD monitors but are much more expensive. By 2001, more and more thin, flat
monitors had appeared on the market, but their cost was still high ($650 to $5,000).

Impact Printers

Like typewriters, impact printers use some form of striking action to press a carbon or
fabric ribbon against paper to create a character. The most common impact printers are the
dot matrix, daisy wheel and line. Line printers print one line at a time. Therefore, they are
faster than one-character type printers. Impact printers are slow and noisy, cannot do high-
resolution graphics and are often subject to mechanical breakdowns. They have largely
been replaced by non-impact printers.


Non-impact Printers

[Figure: an inkjet printer and a laser printer]

Non-impact printers overcome the deficiencies of impact printers. Laser printers are
high-speed, high-quality devices that use laser beams to write information on
photosensitive drums, whole pages at a time. Then the paper passes over the drum and picks
up the image with toner. Because they produce print-quality text and graphics, laser printers
are used in desktop publishing and in reproduction of artwork. Thermal printers create
whole characters on specially treated paper that responds to patterns of heat produced by
the printer. Ink-jet printers shoot tiny dots of ink onto paper. Sometimes called bubble jet,
they are relatively inexpensive and are especially suited for low-volume graphical
applications when different colors of ink are required.

Plotters

Plotters are printing devices using computer-driven pens for creating high-quality
black-and-white or color graphic images—charts, graphs, and drawings. They are used in complex,
low-volume situations such as engineering and architectural drawing and they come in
different types and sizes.

Voice Output

Some devices provide output via voice—synthesized voice. This term refers to the
technology by which computers “speak.” The synthesis of voice by computer differs from a
simple playback of a pre-recorded voice by either analog or digital means. As the term
“synthesis” implies, the sounds that make up words and phrases are electronically


constructed from basic sound components and can be made to form any desired voice
pattern. The quality of synthesized voice is currently very good, and relatively inexpensive.

Unit summary
In this unit you learned about the various types of input and output
devices of a computer system.

Assignment
Distinguish between the input and output devices of a computer system.

Assessment
(i) Describe the input devices which are used for document reading methods;
(ii) Explain the difference between impact and non-impact printers; and
(iii) Explain how the voice output device differs from a simple playback
of pre-recorded voice.


Unit 3

Memory
Introduction
This unit gives you more details about the bit, byte, word, chip, RAM,
ROM and secondary storage devices such as magnetic tapes, magnetic drums,
magnetic diskettes, magnetic disks, optical disks, DVD and flash memory.
Upon completion of this unit you should be able to:

 Demonstrate understanding of the components of the Main Memory of a computer system; and
 Demonstrate understanding of the components of the Secondary
Storage Memory of a computer system.

Terminology

RAM: Random Access Memory
DRAM: Dynamic Random Access Memory
SDRAM: Synchronous Dynamic Random Access Memory
SRAM: Static Random Access Memory
ROM: Read Only Memory
PROM: Programmable Read Only Memory
EPROM: Erasable Programmable Read Only Memory
RAID: Redundant Array of Inexpensive Disks


Main Memory
Bit
In computing and telecommunications, a bit is a basic unit of information storage and
communication. It is the maximum amount of information that can be stored by a device or
other physical system that can normally exist in only two distinct states. These states are
often interpreted (especially in the storage of numerical data) as the binary digits 0 and 1.
They may be interpreted also as logical values, either "true" or "false"; or two settings of a
flag or switch, either "on" or "off".
In information theory, "one bit" is typically defined as the uncertainty of a binary
random variable that is 0 or 1 with equal probability, or the information that is gained when
the value of such a variable becomes known.
Byte
A byte is a basic unit of measurement of information storage in computer science. In many
computer architectures it is the smallest addressable unit of memory. There is no standard,
but a byte most often consists of eight bits.
A byte is an ordered collection of bits, with each bit denoting a single binary value
of 1 or 0. The byte most often consists of 8 bits in modern systems. However, the size of a
byte can vary and is generally determined by the underlying computer operating system or
hardware. Historically, byte size was determined by the number of bits required to represent
a single character from a Western character set. Its size was generally determined by the
number of possible characters in the supported character set and was chosen to be a divisor
of the computer's word size.

Words

A group of bytes is called a word. Word size is dependent on the architecture platform
(e.g., 32-bit machine, 64-bit machine). A 32-bit word contains 4 bytes; a 64-bit word
contains 8 bytes. In a 32-bit machine, everything will be 32 bits wide (all registers, buses,
numbers). The wider the word, the higher the processing power.
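These bit, byte and word relationships are easy to verify in code; the constant and function names below are our own:

```python
BITS_PER_BYTE = 8

def bytes_in_word(word_bits):
    """How many bytes make up a word of the given width."""
    return word_bits // BITS_PER_BYTE

print(bytes_in_word(32))   # 4: a 32-bit word holds 4 bytes
print(bytes_in_word(64))   # 8: a 64-bit word holds 8 bytes

# A byte interpreted as a number: 0b1000001 is 65, the ASCII code for 'A'.
print(bin(65), chr(65))    # 0b1000001 A
```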


Memory

There are two categories of memory: the register, which is part of the CPU and is
very fast, and the internal memory chips, which reside outside the CPU and are slower. A
register is circuitry in the CPU that allows for the fast storage and retrieval of data and
instructions during the processing. The control unit, the CPU, and the primary storage all
have registers. Small amounts of data reside in the register for very short periods, prior to
their use. The internal memory comprises two types of storage space: RAM and ROM and
is used to store data just before they are processed by the CPU.

Random Access Memory (RAM)

RAM is the place in which the CPU stores the instructions and data it is processing. The
larger the memory area, the larger the programs that can be stored and executed. With the
newer computer operating system software, more than one program may be operating at a
time, each occupying a portion of RAM. Most personal computers as of 2001 needed 64 to
128 megabytes of RAM to process “multimedia” applications, which combine sound,
graphics, animation, and video, thus requiring more memory.

The advantage of RAM is that it is very fast in storing and retrieving any type of data,
whether textual, graphical, sound, or animation-based. Its disadvantages are that it is
relatively expensive and volatile. This volatility means that all data and programs stored in
RAM are lost when the power is turned off. To lessen this potential loss of data, many of
the newer application programs perform periodic automatic “saves” of the data. Many
software programs are larger than the internal, primary storage (RAM) available to store
them. To get around this limitation, some programs are divided into smaller blocks, with
each block loaded into RAM only when necessary. However, depending on the program,
continuously loading and unloading blocks can slow down performance considerably,
especially since secondary storage is so much slower than RAM. As a compromise, some
architectures use high-speed cache memory as a temporary storage for the most frequently
used blocks. Then the RAM is used to store the next most frequently used blocks, and
secondary storage (described later) for the least used blocks. Since cache memory operates
at a much higher speed than conventional memory (i.e., RAM), this technique greatly
increases the speed of processing because it reduces the number of times the program has to
fetch instructions and data from RAM and secondary storage.
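The cache idea above can be sketched as a small, fast lookup placed in front of a slower store. All names and data in this sketch are invented, and a real cache is hardware, not a Python dictionary:

```python
cache = {}                                     # small, fast memory
ram = {"block1": "data1", "block2": "data2"}   # slower backing store

def fetch(block):
    """Return a block, filling the cache on a miss."""
    if block in cache:
        return cache[block], "cache hit"       # fast path
    data = ram[block]                          # slow path: go to RAM
    cache[block] = data                        # keep a copy for next time
    return data, "cache miss"

print(fetch("block1"))  # ('data1', 'cache miss')
print(fetch("block1"))  # ('data1', 'cache hit')
```

The second fetch is served from the cache, which is exactly how repeated use of the same blocks avoids slow trips to RAM or secondary storage.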


Dynamic random access memories (DRAM) are the most widely used RAM chips. These
are known to be volatile since they need to be recharged and refreshed hundreds of times
per second in order to retain the information stored in them.

Synchronous DRAM (SDRAM) is a relatively new and different kind of RAM. SDRAM
is rapidly becoming the new memory standard for modern PCs. The reason is that its
synchronized design permits support for the much higher bus speeds that have started to
enter the market.

Static Random Access Memory (SRAM)

This is random-access memory in which each bit of storage is a bistable flip-flop, commonly
consisting of cross-coupled inverters. It is called "static" because it will retain a value as long as
power is supplied, unlike dynamic random-access memory (DRAM) which must be regularly
refreshed. It is however, still volatile. It will lose its contents when the power is switched off, in
contrast to ROM.

SRAM is usually faster than DRAM, but since each bit requires several transistors (about six),
fewer bits of SRAM fit in the same area. It usually costs more per bit than DRAM and so is used
for the most speed-critical parts of a computer (e.g. cache memory) or other circuits.

Read Only Memory (ROM)

ROM is that portion of primary storage that cannot be changed or erased. ROM is
nonvolatile; that is, the program instructions are continually retained within the ROM,
whether power is supplied to the computer or not. ROM is necessary to users who need to
be able to restore a program or data after the computer has been turned off or, as a
safeguard, to prevent a program or data from being changed. For example, the instructions
needed to start, or “boot,” a computer must not be lost when it is turned off.

Programmable read-only memory (PROM) is a memory chip on which a program can be
stored. But once the PROM has been used, you cannot wipe it clean and use it to store
something else. Like ROMs, PROMs are nonvolatile.

Erasable programmable read-only memory (EPROM) is a special type of PROM that
can be erased by exposing it to ultraviolet light.


Stored Program concept


The von Neumann architecture is a design model for a stored-program digital computer
that uses a processing unit and a single separate storage structure to hold both instructions
and data. It is named after the mathematician and early computer scientist John von
Neumann. Such computers implement a universal Turing machine and have a sequential
architecture.
A stored-program digital computer is one that keeps its programmed instructions, as well
as its data, in read-write, random-access memory (RAM). Stored-program computers were
an advancement over the program-controlled computers of the 1940s, such as the Colossus and
the ENIAC, which were programmed by setting switches and inserting patch leads to route
data and to control signals between various functional units. In the vast majority of modern
computers, the same memory is used for both data and program instructions.

Bus System

Instructions and data move between computer subsystems and the processor
via communications channels called buses. A bus is a channel (or shared data path) through
which data are passed in electronic form. Three types of buses link the CPU, primary
storage, and the other devices in the computer system. The data bus moves data to and
from primary storage. The address bus transmits signals for locating a given address in
primary storage. The control bus transmits signals specifying whether to “read” or “write”
data to or from a given primary storage address, input device, or output device.

The capacity of a bus, called bus width, is defined by the number of bits it can carry at one
time. (The most common PC bus width in 2001 was 64 bits.) Bus speeds are also important, currently
averaging about 133 megahertz (MHz).
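Width and speed together set a bus's peak throughput. A back-of-the-envelope sketch (the function name is invented, and real buses lose some of this to overhead):

```python
def bus_bandwidth_mb_per_s(width_bits, clock_mhz):
    """Peak throughput: bits per transfer times transfers per second."""
    bits_per_second = width_bits * clock_mhz * 1_000_000
    return bits_per_second / 8 / 1_000_000  # to bytes, then megabytes

print(bus_bandwidth_mb_per_s(64, 133))  # 1064.0 MB/s for a 64-bit, 133 MHz bus
```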

Secondary Storage Memory


Secondary storage is separate from primary storage and the CPU, but directly connected to
it. An example would be the 3.5-inch disk you place in your PC’s A drive. It stores the data
in a format that is compatible with data stored in primary storage, but secondary storage
provides the computer with vastly increased space for storing and processing large
quantities of software and data. Primary storage is volatile, contained in memory chips and
very fast in storing and retrieving data. In contrast, secondary storage is non-volatile, uses
many different forms of media that are less expensive than primary storage, and is relatively


slower than primary storage. Secondary storage media include magnetic tape, magnetic
disk, magnetic diskette, optical storage, and digital videodisk.

Magnetic tape

Magnetic Tape is kept on a large reel or in a small cartridge or cassette. Today, cartridges
and cassettes are replacing reels because they are easier to use and access. The principal
advantages of magnetic tape are that it is inexpensive, relatively stable, and long lasting,
and that it can store very large volumes of data. A magnetic tape is excellent for backup or
archival storage of data and can be reused. The main disadvantage of magnetic tape is that it
must be searched from the beginning to find the desired data. The magnetic tape itself is
fragile and must be handled with care. Magnetic tape is also labor intensive to mount and
dismount in a mainframe computer.

Magnetic Drums

Drum memory is a magnetic data storage device which constituted an early form of computer
memory widely used in the 1950s and into the 1960s. For many machines, a drum formed the main
working memory of the machine, with data and programs being loaded on to or off the drum using
media such as paper tape or punch cards. Drums were so commonly used for the main working
memory that these computers were often referred to as drum machines. Drums were later replaced
as the main working memory by memory such as core memory and a variety of other systems which
were faster as they had no moving parts, and which lasted until semiconductor memory entered the
scene.


A drum is a large metal cylinder that is coated on the outside surface with a ferromagnetic
recording material. It could be considered the precursor to the hard disk platter, but in the
form of a drum rather than a flat disk. A row of read-write heads runs along the long axis of
the drum, one for each track.

Magnetic disks, also called hard disks, alleviate some of the problems associated with
magnetic tape by assigning specific address locations for data, so that users can go directly
to the address without having to go through intervening locations looking for the right data
to retrieve. This process is called direct access. Most computers today rely on hard disks for
retrieving and storing large amounts of instructions and data in a nonvolatile and rapid
manner. The hard drives of 2008 microcomputers provided from 80 to 250 gigabytes of
data storage.
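The contrast between tape's start-to-finish search and the disk's direct access can be illustrated with a list versus a dictionary (record names are invented):

```python
# Tape: records must be read in order from the beginning.
tape = [("rec1", "a"), ("rec2", "b"), ("rec3", "c")]
# Disk: an address takes you straight to the data.
disk = {"rec1": "a", "rec2": "b", "rec3": "c"}

def read_tape(key):
    """Scan the tape from the start, counting records passed over."""
    steps = 0
    for k, v in tape:
        steps += 1
        if k == key:
            return v, steps
    return None, steps

print(read_tape("rec3"))  # ('c', 3): two records read just to reach the third
print(disk["rec3"])       # direct access: no intervening records touched
```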

A hard disk is like a phonograph containing a stack of metal-coated platters (usually
permanently mounted) that rotate rapidly. Magnetic read/write heads, attached to arms,
hover over the platters. To locate an address for storing or retrieving data, the head moves
hover over the platters. To locate an address for storing or retrieving data, the head moves
inward or outward to the correct position, then waits for the correct location to spin
underneath.

The speed of access to data on hard-disk drives is a function of the rotational speed
of the disk and the speed of the read/write heads. The read/write heads must position
themselves, and the disk pack must rotate until the proper information is located. Advanced
disk drives have access speeds of 8 to 12 milliseconds.
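One component of that access time, rotational latency, is easy to estimate: on average the disk must spin half a revolution before the wanted sector arrives under the head. A sketch, assuming a hypothetical 7,200 RPM drive:

```python
def avg_rotational_latency_ms(rpm):
    """Average wait for the right sector: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm   # 60,000 ms in one minute
    return ms_per_revolution / 2

print(avg_rotational_latency_ms(7200))  # about 4.17 ms per access
```

Seek time (moving the head to the right track) adds to this, which is why total access times land in the 8 to 12 millisecond range quoted above.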

Magnetic disks provide storage for large amounts of data and instructions that can
be rapidly accessed. Another advantage of disks over reel is that a robot can change them.
This can drastically reduce the expenses of a data center. Storage Technology is the major
vendor of such robots. The disks’ disadvantages are that they are more expensive than
magnetic tape and they are susceptible to “disk crashes.”

In contrast to large, fixed disk drives, one current approach is to combine a large
number of small disk drives each with 10- to 50-gigabyte capacity, developed originally for


microcomputers. These devices are called redundant arrays of inexpensive disks
(RAID). Because data are stored redundantly across many drives, the overall impact on
system performance is lessened when one drive malfunctions. Also, multiple drives provide
multiple data paths, improving performance. Finally, because of manufacturing efficiencies
of small drives, the cost of RAID devices is significantly lower than the cost of large disk
drives of the same capacity.
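The layout of data across a RAID set can be sketched as a round-robin split of blocks over drives. This is a simplified, RAID 0-style striping only; the redundancy mentioned above would add parity or mirror copies on top:

```python
def stripe(blocks, n_drives):
    """Distribute data blocks round-robin across n drives."""
    drives = [[] for _ in range(n_drives)]
    for i, block in enumerate(blocks):
        drives[i % n_drives].append(block)
    return drives

data = ["b0", "b1", "b2", "b3", "b4", "b5"]
print(stripe(data, 3))  # [['b0', 'b3'], ['b1', 'b4'], ['b2', 'b5']]
```

Because consecutive blocks land on different drives, several drives can be read at once, which is one source of the multiple-data-path performance gain described above.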

Magnetic Diskette

Hard disks are not practical for transporting data or instructions from one personal
computer to another. To accomplish this task effectively, developers created the magnetic
diskette.

(These diskettes are also called “floppy disks,” a name first given to the very
flexible 5.25-inch disks used in the 1980s and early 1990s.) The magnetic diskette used
today is a 3.5-inch, removable, somewhat flexible magnetic platter encased in a plastic
housing. Unlike the hard disk drive, the read/write head of the diskette drive actually
touches the surface of the disk. As a result, the speed of the drive is much slower, with an
accompanying reduction in data transfer rate. However, the diskettes themselves are very
inexpensive, thin enough to be mailed, and able to store relatively large amounts of data. A
standard high-density disk contains 1.44 megabytes. Zip disks are larger than conventional
floppy disks, and about twice as thick. Disks formatted for zip drives contain 250
megabytes.

Optical storage devices have extremely high storage density. Typically, much more
information can be stored on a standard 5.25-inch optical disk than on a comparably sized
floppy (about 400 times more). Since a highly focused laser beam is used to read/write
information encoded on an optical disk, the information can be highly condensed. In


addition, the amount of physical disk space needed to record an optical bit is much smaller
than that usually required by magnetic media.

Another advantage of optical storage is that the medium itself is less susceptible to
contamination or deterioration. First, the recording surfaces (on both sides of the disk) are
protected by two plastic plates, which keep dust and dirt from contaminating the surface.
Second, only a laser beam of light, not a flying head, comes into contact with the recording
surface. The head of an optical disk drive comes no closer than 1 mm from the disk surface.

Optical drives are also less fragile, and the disks themselves may easily be loaded
and removed. Three common types of optical drive technologies are CD-ROM, WORM,
and rewritable optical.

Compact disk read-only memory (CD-ROM)

CD-ROM disks have high capacity and low cost. CD-ROM technology is very effective
and efficient for mass producing many copies of large amounts of information that does not
need to be changed, for example, encyclopedias, directories, and online databases.

CD-ROMs are generally too expensive for mastering and producing unique, one-of-a-kind
applications. For these situations, write once, read many (WORM) technology is more
practical. The storage capacity and access for WORMs is virtually the same as for CD-ROMs.
Companies use CD-ROM/WORM for the following applications: publishing their corporate
manuals, disseminating training information and sending samples of products or their
description to customers. Once you write on a WORM it becomes a regular CD-ROM.
You cannot change anything on the disk.

When information needs to be changed or updated, and companies do not want to
incur the expense of mastering and producing a new CD-ROM, rewritable optical disks
are needed (also called floptical, or magneto-optic). The surface of the disk is coated with a
magnetic material that can change magnetic polarity only when heated. To record data, a high-
powered laser beam heats microscopic areas in the magnetic medium that allows it to accept
magnetic patterns. Data can be read by shining a lower-powered laser beam at the magnetic
layer and reading the reflected light.


Digital Video Disk (DVD)

DVD is a new storage disk that offers higher quality and denser storage capabilities. In
2001, the disk’s maximum storage capacity was 17 Gbytes, which was sufficient for storing
about five movies. It includes superb audio (six-track vs. the two-track stereo). Like CDs,
DVD comes as DVD-ROM (read-only) and DVD-RAM. Rewritable DVD-RAM systems
are already on the market, offering a capacity of 4.7 GB.

Flash memory

Flash memory is a non-volatile computer memory that can be electrically erased and
reprogrammed. It is a technology that is primarily used in memory cards and USB flash
drives for general storage and transfer of data between computers and other digital
products. It is a specific type of Electrically Erasable Programmable Read-Only Memory
(EEPROM) that is erased and
programmed in large blocks. In the earlier models of the flash, the entire chip had to be
erased at once. Flash memory costs far less than byte-programmable EEPROM and
therefore has become the dominant technology wherever a significant amount of non-
volatile, solid state storage is needed. Example of applications include PDAs (personal
digital assistants), laptop computers, digital audio players, digital cameras and mobile
phones. It has also gained popularity in the game console market, where it is often used
instead of EEPROMs or battery-powered SRAM for game save data.

ICT131 Introduction to Computer Science

Unit summary

In this unit you learned about the main memory and the secondary storage memory.

Assignment

Describe the media used for secondary storage devices.

Assessment

(i) Distinguish between the terms 'bit' and 'byte'.
(ii) Describe the types of RAM available in a computer system.
(iii) Distinguish between magnetic tape and magnetic drums.
(iv) Explain how the optical disk storage facility operates.

Hardware and Networking

Unit 4

Software
Introduction
This unit describes the types of software (system and application),
licensing and upgrading of software, operating systems,
programming languages and system development methodology.
Outcomes

Upon completion of this unit you should be able to:

 Identify types of software.
 Distinguish between application software and system software.
 Identify the types and functions of operating systems.
 Demonstrate understanding of programming languages.

Terminology

DBMS: Database Management System

FORTRAN: Formula Translator

COBOL: Common Business-Oriented Language

OOP: Object-Oriented Programming

HTML: Hypertext Mark-up Language

XML: eXtensible Mark-up Language

ASP: Active Server Page


Software
Software is a term used (in contrast to hardware) to describe all the programs that are used
in a particular computer installation. The term is often used to mean not only the programs
themselves but also their associated documentation.

Types of software
Computer hardware cannot perform a single act without instructions. These instructions are
known as software or computer programs. Software is at the heart of all computer
applications. Computer hardware is, by design, general purpose. Software, on the other
hand, enables the user to tailor a computer to provide specific business value. There are two
major types of software: application software and systems software.

Systems software acts primarily as an intermediary between computer hardware and


application programs, and knowledgeable users may also directly manipulate it. Systems
software provides important self-regulatory functions for computer systems, such as loading
itself when the computer is first turned on, as in Windows Professional; managing hardware
resources such as secondary storage for all applications and providing commonly used sets
of instructions for all applications to use. Systems programming is either the creation or
modification of systems software.

Application software is a set of computer instructions, written in a programming language.


The instructions direct computer hardware to perform specific data or information
processing activities that provide functionality to the user. This functionality may be broad,
such as general word processing, or narrow, such as an organization’s payroll program. An
application program applies a computer to a need, such as increasing productivity of
accountants or improved decisions regarding an inventory level. Application
programming is either the creation or the modification and improvement of application
software.


System Software
Operating Systems and control programs

Computers are required to give efficient and reliable service without the need for continual
intervention by the user. Control programs and operating systems are the means by which
this is achieved. Such programs control the way in which the hardware is used.

On smaller computer systems, such as microcomputers, the control program may also
accept commands typed in by the user. Such a control program is often called a monitor.

On larger computer systems it is common for only part of the monitor to remain in main
storage. Other parts of the monitor are brought into memory when required, i.e., there are
"resident" and "transient" parts of the monitor.

The resident part of the monitor on large systems is often called the kernel, executive or
supervisor program.

An operating system is a suite of programs that takes over the operation of the computer to
the extent of being able to allow a number of programs to be run on the computer without
human intervention by an operator.

Translators

The earliest computer programs were written in the actual language of the computer.
Nowadays, however, programmers write their programs in programming languages which
are relatively easy to learn. Each program is translated into the language of the machine
before being used for operational purposes. This translation is done by computer programs
called translators.

Utilities and Service programs

Utilities, also called service programs, are system programs that provide a useful service to
the user of the computer by providing facilities for performing tasks of a routine nature.

Common types of utility programs are:

 Sort: This is a program designed to arrange records into a predetermined


sequence. A good example of the requirement for this service program is the need


for sorting transaction files into the sequence of the master file before carrying out
updating. Sorting is done by reference to a record key.
 Editors: These are used at a terminal and provide facilities for the creation or
amendment of programs. The editing may be done by the use of a series of
commands or special edit keys on the keyboard. If, for example, a source program
needs correction because it failed to compile properly, an editor may be used to
make the necessary changes.
 File copying: This is a program that simply copies a file, either on the same medium
to make a duplicate or from one medium to another, for example from one disk to
another (also called media conversion).
 Dump: The term "dump" means copying the contents of main storage onto an
output device. This program is useful when an error occurs during the running of
application programs. "Dump" is sometimes also used to mean copying the contents
of online storage onto an offline medium, such as dumping a magnetic disk onto
magnetic tape for backup purposes.
 File maintenance: A program designed to carry out the insertion and deletion of
records in a file. It can also make amendments to the standing data contained in the
records. File maintenance may also include tasks such as the reorganization of
indexed sequential files on disk.
 Tracing and debugging: Used in conjunction with the debugging and testing of
application programs on the computer. Tracing involves dumping details of
internal storage, such as the value of a variable, after specified instructions are
obeyed, so that the cycle of operations can be traced and errors located.
Debugging is the term given to the process of locating and eliminating errors from
a program.

Database Management System (DBMS)

The database management system is a complex software system that constructs, expands
and maintains the database. It also provides the controlled interface between the user and
data in the database.

The DBMS allocates storage to data. It maintains indices so that any required data can be
retrieved, and so that separate items of data in the database can be cross-referenced. The


structure of a database is dynamic and can be changed as needed. The DBMS provides an
interface with user programs. These may be written in a number of different programming
languages.

Application Software
User application programs

User application programs are written by the user in order to perform specific jobs for the
user. Such programs are written in a variety of programming languages according to
circumstances, but all should be written in a systematic way as prescribed. For many
applications, it is necessary to produce sets of programs that are used in conjunction with
one another and that may also be used in conjunction with service programs such as sort
utilities. For some applications it may be necessary to compare available software or
hardware before writing the applications software.

Application programs

Because there are so many different uses for computers, there is a correspondingly large
number of different application programs, some of which are special-purpose programs or
"packages" tailored for a specific purpose such as inventory control or payroll. A package is
a commonly used term for a computer program (or group of programs) that has been
developed by a vendor and is available for purchase in pre-packaged form.

If a package is not available for a certain situation, it is necessary to build the


application using programming languages or software development tools. There are also
general-purpose application programs that are not linked to any specific business task, but
instead support general types of information processing. The most widely used general-
purpose application packages are spreadsheet, data management, word processing, desktop
publishing, graphics, multimedia and communications.

Some of these general-purpose tools are actually development tools. That is, you use
them for building applications. For example, you can use Excel to build decision support
applications such as resource allocation, scheduling, or inventory control. You can use these
and similar packages for doing statistical analysis, for conducting financial analysis, and for
supporting marketing research.


Many decision support and business applications are built with programming
languages rather than with general-purpose application programs. This is especially true for
complex, unstructured problems. Information systems applications can also be built with a
mix of general-purpose programs and/or with a large number of development tools ranging
from editors to random number generators. Of special interest are the software suites, such
as Microsoft Office. These are integrated sets of tools that can expedite application
development. Also of special interest are CASE tools and integrated enterprise software,
which are described later in the module.

Licensing and upgrading of software

The importance of software in computer systems has led to the emergence of several
issues and trends. These issues include software licensing, software upgrades, shareware
and freeware, and software selection.

Software Licensing

Vendors spend time and money in software development, so they must protect their
software from being copied and distributed by individuals and other software companies.
Vendors can copyright software, but this protection is limited. Another approach is for
firms to use existing patent laws, but this approach has not been fully accepted by the
courts.

The Software & Information Industry Association (formerly the Software Publishers
Association) enforces software copyright laws in corporations through a set of guidelines.
These guidelines state that when information systems (IS) managers cannot find proof of
purchase for software, they should get rid of the software or purchase new licenses. IS
managers need to take inventory of their software assets to ensure that they have the
appropriate number of software licenses.

As the number of desktop computers continues to increase and businesses continue to


decentralize, it becomes more and more difficult for IS managers to manage their software
assets. As a result, many firms specialize in tracking software licenses for a fee.

Software Upgrading

Software firms constantly revise their programs and sell new versions. The revised software
may offer valuable enhancements, but, on the other hand, may offer little in terms of


additional capabilities. In addition, the revised software may contain bugs. Deciding
whether or when to purchase the newest software can be a major challenge for companies.

Shareware and Freeware

Shareware is software where the user is expected to pay the author a modest amount
for the privilege of using it. Freeware is software that is free. Both help to keep software
costs down. Shareware and freeware are often not as powerful (do not have the full
complement of features) as the professional versions, but some users get what they need at
a good price. These are available now on the Internet in large quantities.

Software Selection

There are dozens or even hundreds of software packages to choose from for almost
any topic. Software selection becomes a major issue in systems development.

Operating Systems
An operating system, or OS, is a software program that enables the computer hardware to
communicate and operate with the computer software. An OS is responsible for the
management and coordination of activities and the sharing of the resources of the computer.
The operating system acts as a host for computing applications run on the machine. As a
host, one of the purposes of an operating system is to handle the details of the operation of
the hardware. This relieves application programs from having to manage these details and
makes it easier to write applications.

Functions of the Operating System

Today most operating systems perform the following important functions:

1. Processor management - that is, assignment of processor to different tasks being


performed by the computer system;
2. Memory management - that is, allocation of main memory and other storage areas to
the system programmes as well as user programmes and data;
3. Input/output management - that is, co-ordination and assignment of the different
output and input devices while one or more programmes are being executed;


4. File management - that is, the storage and transfer of files from one storage device
   to another. It also allows files to be changed and modified easily through the use
   of text editors or other file manipulation routines;
5. Establishment and enforcement of a priority system - that is, it determines and
   maintains the order in which jobs are to be executed in the computer system;
6. Automatic transition from job to job as directed by special control statements;
7. Interpretation of commands and instructions;
8. Coordination and assignment of compilers, assemblers, utility programs, and other
   software to the various users of the computer system; and
9. Facilitating easy communication between the computer system and the computer
   operator (human). It also establishes data security and integrity.
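Function 1 above, processor management, is commonly realised with a scheduling policy such as round robin, in which each ready task receives a fixed time slice in turn. The sketch below is a simplified teaching model, not code from any real operating system; the task names and time requirements are invented.

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: list of (name, time_needed); returns names in completion order."""
    queue = deque(tasks)          # the ready queue of runnable tasks
    finished = []
    while queue:
        name, remaining = queue.popleft()   # dispatch the task at the head
        remaining -= quantum                # it runs for one time slice
        if remaining > 0:
            queue.append((name, remaining)) # not done: back of the queue
        else:
            finished.append(name)           # done: leaves the system
    return finished

print(round_robin([("editor", 3), ("compiler", 6), ("printer", 1)], quantum=2))
```

Short tasks finish early while long ones keep cycling through the queue, which is why time-sharing systems based on this idea feel responsive to interactive users.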

Types of Operating System

Single-program systems: The majority of small microcomputer-based systems have


operating systems which allow a single user to operate the machine in an interactive
conversational mode but normally only allows one user program to be in main storage and
processed at a time. This means that there is no multiprogramming of user programs.

Simple batch systems: These are systems that provide multiprogramming of batch
programs but have few facilities for interaction or multi-access. Many commercial
computer systems in use during the 1960s and early 1970s were of this type.

Multi-access and Time-sharing systems: The majority of operating systems fall into this
category but there is a wide range of complexity in systems.

Real-time systems: The operating system has to cater for the type of real-time system
being used. The three types are given here in order of increasingly fast response time.

 A more complex multi-access time-sharing system where each user has a largely
independent choice of system facilities, for example, each using a different
language.
 Commercial real-time systems in which there is essentially one job, such as
handling bookings, and the multi-access user has a clerical rather than a
programming function. These systems often make use of extensive databases.


 Process control system, such as one used to control the operations of a chemical-
factory plant. Response to changes must be as fast as possible and reliability is
essential. Real-time process control systems vary greatly in size.

Overview of Programming Languages


Programming languages are the basic building blocks for all systems and application
software. Programming languages allow people to tell computers what to do and are the
means by which systems are developed. Programming languages are basically a set of
symbols and rules used to write program code. Each language uses a different set of rules
and the syntax that dictates how the symbols are arranged so they have meaning.

The characteristics of the languages depend on their purpose. For example, if the programs
are intended to run batch processing, they will differ from those intended to run real-time
processing. Languages for Internet programs differ from those intended to run mainframe
applications. Languages also differ according to when they were developed. Today’s
languages are more sophisticated than those written in the 1950s and the 1960s.

Levels of Programming Language


We can choose any language for writing a program according to need. But a computer
executes programs only after they are represented internally in binary form (sequences of
1’s and 0's). Programs written in any other language must be translated to the binary
representation of the instructions before they can be executed by the computer. Programs
written for a computer may be in one of the following categories of languages: machine
language, assembly language and high level language.

Machine Language

This is a sequence of instructions written in the form of binary numbers, consisting of
1's and 0's, to which the computer responds directly. Machine language was initially referred
to as code, although now the term code is used more broadly to refer to any program text.

An instruction prepared in any machine language will have at least two parts. The
first part is the Command or Operation, which tells the computer the type of function to be
performed. Every computer has an operation code for each of its functions. The second part


of the instruction is the operand, which tells the computer where to find or store the data
that is to be manipulated.

Just as hardware is classified into generations based on technology, computer


languages also have a generation classification based on the level of interaction with the
machine. Machine language is considered to be the first generation language.

Advantage of Machine Language

It is faster in execution since the computer directly starts executing it.

Disadvantage of Machine Language

It is difficult to understand and develop a program using machine language. Anybody


going through this program for checking will have a difficult task understanding what will
be achieved when this program is executed. Nevertheless, the computer hardware
recognises only this type of instruction code.

The following program is an example of a machine language program for adding two
numbers.

0011 1110    Load A register with
0000 0111    value 7
0000 0110    Load B register with
0000 1010    value 10
1000 0000    A ← A + B
0011 1010    Store the result into the memory location
0110 0100    whose address is 100 (decimal)
0111 0110    Halt processing


Assembly Language

When we employ symbols (letter, digits or special characters) for the operation part, the
address part and other parts of the instruction code, this representation is called an assembly
language program. This is considered to be the second generation language.

Machine and Assembly languages are referred to as low level languages since the coding
for a problem is at the individual instruction level.

Each machine has its own assembly language, which depends on the internal architecture
of the processor. An assembler is a translator that takes as its input an assembly language
program and produces machine language code as its output.

The following program is an example of an assembly language program for adding two
numbers X and Y and storing the result in some memory location.

LD A, 7 ; Load register A with 7

LD B, 10 ; Load register B with 10

ADD A, B ; A ← A + B

LD (100), A ; Save the result in the location 100

HALT ; Halt process

From this program, it is clear that usage of mnemonics (in our example LD, ADD, HALT
are the mnemonics) has improved the readability of our program significantly.

An assembly language program cannot be executed by a machine directly as it is not in a


binary form. An assembler is needed in order to translate an assembly language program
into the object code executable by the machine.


Advantage of Assembly Language

Writing a program in assembly language is more convenient than in machine language.


Instead of binary sequence, as in machine language, it is written in the form of symbolic
instructions. Therefore, it gives a little more readability.

Disadvantages of Assembly Language

An assembly language program is specific to a particular machine architecture. Assembly
languages are designed for a specific make and model of microprocessor. This means that
assembly language programs written for one processor will not work on a different
processor if it is architecturally different; an assembly language program is therefore not
portable.

An assembly language program also does not run as fast as machine language: it must
first be translated into machine (binary) language code.

High-level Language

Programming languages such as COBOL, FORTRAN and BASIC are called high-level
programming languages. The program shown below is written in BASIC to obtain the sum
of two numbers.

10 LET X = 7

20 LET Y = 10

30 LET SUM = X+Y

40 PRINT SUM

50 END

The time and cost of creating programs in machine and assembly language were quite high,
and this was the prime motivation for the development of high-level languages.

A high-level source program must first be translated into a form the machine can
understand. This is done by software called a compiler, which takes the source code as
input and produces as output the machine language code of the machine on which it is to
be executed.

During the process of translation, the compiler reads the source program statement by
statement and checks for syntax errors. If there is an error, the computer generates a
print-out of the errors it has detected. This action is known as diagnostics.

There is another type of software which also does the translation. This is called an
Interpreter. The Compiler and Interpreter have different approaches to translation. The
following table lists the differences between a Compiler and an Interpreter.

Compiler:
1) Scans the entire program first and then translates it into machine code.
2) Converts the entire program to machine code; execution takes place only when all
   the syntax errors are removed.
3) Slow for debugging.
4) Execution time is less.

Interpreter:
1) Translates the program line by line.
2) Each time the program is executed, every line is checked for syntax errors and
   then converted to equivalent machine code.
3) Good for fast debugging.
4) Execution time is more.
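The interpreter's line-by-line behaviour can be made concrete with a toy interpreter for the BASIC fragment shown earlier. This is only a sketch: it handles just LET, PRINT and END, and uses Python's eval() in place of a real expression parser. A compiler would instead translate the whole program before any of it ran.

```python
def interpret(program):
    variables = {}
    output = []
    for line in program:
        _, stmt = line.split(" ", 1)        # drop the BASIC line number
        if stmt.startswith("LET"):
            name, expr = stmt[4:].split("=")
            # eval() stands in for real expression translation in this sketch
            variables[name.strip()] = eval(expr, {}, variables)
        elif stmt.startswith("PRINT"):
            output.append(variables[stmt[6:].strip()])
        elif stmt == "END":
            break
        else:                               # the "check every line" step
            raise SyntaxError("line not understood: " + line)
    return output

program = [
    "10 LET X = 7",
    "20 LET Y = 10",
    "30 LET SUM = X+Y",
    "40 PRINT SUM",
    "50 END",
]
print(interpret(program))  # [17]
```

Notice that each statement is checked and executed as it is reached, so an error on line 40 would only be discovered after lines 10 to 30 had already run; a compiler would have reported it before execution began.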

Advantages of High-level Programming Language:

There are four main advantages of high-level programming languages. These are:

 Readability: Programs written in these languages are more readable than assembly
and machine language.
 Portability: Programs could be run on different machines with little or no change.
We can, therefore, exchange software leading to creation of program libraries.
 Easy debugging: Errors could easily be removed (debugged).


 Easy software development: Software can easily be developed, since the commands
of a high-level programming language are similar to natural language (English).

High-level languages are also called third generation languages.

The Evolution of Programming Languages

The different stages of programming languages over time are called "generations." The
term generation may be misleading here: with hardware, older generations become
obsolete and fall out of use, but all software generations are still in use.

1st generation - Machine language: 0-1; long, difficult programming
2nd generation - Assembly language: Assemble repetitive instructions; shorter code
3rd generation - Procedural languages: Include commands; shorter code
4th generation - Nonprocedural languages: Application generators; commands specify results
5th generation - Intelligent languages: Natural language processing

The evolution of programming languages. With each generation, progress is made toward
human-like natural language.

Machine Language

Machine language is the lowest-level computer language, consisting of the internal


representation of instructions and data. This machine code—the actual instructions
understood and directly executable by the CPU— is composed of binary digits. A program


using this lowest level of coding is called a machine language program and represents the
first generation of programming languages. A computer’s CPU is capable of executing only
machine language programs, which are machine dependent. That is, the machine language
for one type of central processor may not run on other types. Machine language is
extremely difficult to understand and use by programmers. As a result, increasingly more
user-oriented languages have been developed. These languages make it much easier for
people to program, but they are impossible for the computer to execute without first
translating the program into machine language. The set of instructions written in a user-
oriented language is called a source program. The set of instructions produced after
translation into machine language is called the object program.

Assembly Language

An assembly language is a more user-oriented language that represents instructions and


data locations by using mnemonics, or memory aids, which people can more easily use.
Assembly languages are considered the second generation of computer languages.
Compared to machine language, assembly language eases the job of the programmer
considerably. However, one statement in an assembly language is still translated into one
statement in machine language. Because machine language is hardware dependent and
assembly language programs are translated mostly on a one-to-one statement basis,
assembly languages are also hardware dependent. A systems software program called an
assembler accomplishes the translation of an assembly language program into machine
language. An assembler accepts a source program as input and produces an object program
as output.

High-Level Languages

High-level languages are the next step in the evolution of user-oriented programming
languages. High-level languages are much closer to natural language and therefore easier to
write, read, and alter. Moreover, one statement in a high-level language is translated into a
number of machine language instructions, thereby making programming more productive.

Procedural Languages: Third Generation: Procedural languages require the programmer to


specify—step by step—exactly how the computer will accomplish a task. A procedural
language is oriented toward how a result is to be produced. Because computers understand
only machine language (i.e., 0’s and 1’s), higher-level languages must be translated into


machine language prior to execution. This translation is accomplished by systems software


called language translators. A language translator converts the high-level program, called
source code, into machine language code, called object code. There are two types of
language translators—compilers and interpreters.

Compiler: The translation of a high-level language program to object code is accomplished


by a software program called a compiler. The translation process is called compilation.

Interpreters: An interpreter is a translator that translates and executes one source program
statement at a time. Therefore, interpreters tend to be simpler than compilers. This
simplicity allows for more extensive debugging and diagnostic aids to be available on
interpreters.

Examples of Procedural Languages: FORTRAN (Formula Translator) is an algebraic,


formula-type procedural language. FORTRAN was developed to meet scientific processing
requirements.

COBOL (Common Business-Oriented Language) was developed as a programming


language for the business community. The original intent was to make COBOL instructions
approximate the way they would be expressed in English. As a result, the programs would
be “self-documenting.” There are more COBOL programs currently in use than any other
computer language.

The C programming language experienced the greatest growth of any language in the
1990s. C is considered more transportable than other languages, meaning that a C program
written for one type of computer can generally be run on another type of computer with
little or no modification. Also, the C language is easily modified. Other procedural
languages are Pascal, BASIC, APL, RPG, PL/1, Ada, LISP, and PROLOG.

Nonprocedural Languages: Fourth Generation

Another type of high-level language, called nonprocedural or fourth-generation language
(4GL), allows the user to specify the desired results without having to specify the detailed
procedures needed to achieve those results. A nonprocedural language is oriented toward what
is required. An advantage of nonprocedural languages is that they may be manipulated by
nontechnical users to carry out specific functional tasks. 4GLs, also referred to as command

languages, greatly simplify and accelerate the programming process as well as reduce the
number of coding errors.

The term fourth-generation language is used to differentiate these languages from machine
languages (first generation), assembly languages (second generation), and procedural
languages (third generation). For example, application (or program) generators are
considered to be 4GLs, as are the query languages (e.g., MPG's RAMIS), report generators
(e.g., IBM's RPG), and data manipulation languages (e.g., ADABAS's Natural) provided by
most database management systems (DBMSs). DBMSs allow users and programmers to
interrogate and access computer databases using statements that resemble natural language.
Many graphics languages (PowerPoint, Harvard Graphics, Corel Draw, and Lotus Freelance
Graphics) are considered 4GLs. Other 4GLs are FOCUS, Powerhouse, Uniface, Centura,
Cactus, and Developer/2000.

Fifth-Generation Languages: Natural language programming languages (NLPs) are the next
evolutionary step and are sometimes known as fifth-generation languages. The translation
programs that translate natural languages into a structured, machine-readable form are
extremely complex and require a large amount of computer resources. Examples are
INTELLECT and ELF. These are usually front-ends to 4GLs (such as FOCUS), improving
the user interface of the 4GLs. Several procedural artificial intelligence languages (such
as LISP) are labelled by some as 5GLs. Initial efforts in artificial intelligence in Japan were
called the Fifth Generation Project.

New Programming Languages


Several new languages have been developed in the last 10 to 15 years. These languages
were designed to fit new technologies such as multimedia, hypermedia, document
management and the Internet.

Object-oriented programming (OOP) models a system as a set of cooperating objects.


Like structured programming, object-oriented programming tries to manage the behavioral
complexity of a system, but it goes beyond structured programming, also trying to manage
the information complexity of a system. The object-oriented approach involves
programming, operating systems environment, object-oriented databases, and a new way of
approaching business applications.


C++

C++ is a direct extension of the C language, with 80 to 90 percent of C++ remaining pure
C.

Visual Programming Languages

Programming languages that are used within a graphical environment are often referred to
as visual programming languages. Visual programming allows developers to create
applications by manipulating graphical images directly, instead of specifying the visual
features in code. These languages use a mouse, icons, symbols on the screen, or pull-down
menus to make programming easier and more intuitive. Visual Basic, DELPHI, CA Visual,
Power Object, and Visual C++ are examples of visual programming languages.

Hypertext Mark-up Language

The standard language the Web uses for creating and recognizing hypermedia documents is
the Hypertext Mark-up Language (HTML).

XML:

XML (eXtensible Mark-up Language) is optimized for document delivery across the Net. It
is built on the foundation of Standard Generalized Markup Language (SGML). XML is a
language for defining, validating, and sharing document formats. It permits authors to
create, manage, and access dynamic, personalized, and customized content on the Web—
without introducing proprietary HTML extensions. XML is especially suitable for
electronic commerce applications.

Java

Java is an object-oriented programming language developed by Sun Microsystems. The


language gives programmers the ability to develop applications that work across the
Internet. Java is used to develop small applications, called applets, which can be included in
an HTML page on the Internet. When the user uses a Java-compatible browser to view a
page that contains a Java applet, the applet’s code is transferred to the user’s system and
executed by the browser.

JavaScript

JavaScript is an object-oriented scripting language developed by Netscape


Communications for client/server applications. It allows users to add some interactivity to
their Web pages. Many people confuse JavaScript with the programming language known
as Java. There is no relationship between these two programming languages. JavaScript is a


very basic programming language and bears no relationship to the sophisticated and
complex language of Java.

JavaBeans

This is the platform-neutral component architecture for Java. It is used for developing or
assembling network-aware solutions for heterogeneous hardware and operating system
environments, within the enterprise or across the Internet. JavaBeans extends Java’s “write
once, run anywhere” capability to reusable component development. JavaBeans runs on any
operating system and within any application environment.

ActiveX

ActiveX is a set of technologies from Microsoft that combines different programming


languages into a single, integrated Web site. Before ActiveX, Web content was static, two-
dimensional text and graphics. With ActiveX, Web sites come alive using multimedia
effects, interactive objects, and sophisticated applications that create a user experience
comparable to that of high-quality CDROM titles. ActiveX is not a programming language
as such, but rather a set of rules for how applications should share information.

ASP (Active Server Pages)

ASP is a Microsoft CGI-like (common gateway interface) technology that allows you to
create dynamically generated Web pages from the server side using a scripting language.
Because ASP can talk to ActiveX controls and other OLE programs, users can take
advantage of many report writers, graphic controls, and all the ActiveX controls that they
may be used to. ASP can also be programmed in VBScript or JavaScript, enabling users to
work in the language that they are most comfortable with.

The role of language translation in the programming process


A computer can only understand machine language like 0s and 1s. In this regard, we need
translators to translate our programming language into machine language. A translator is a
program that converts statements written in one language to statements in another language,
for example converting assembly language into machine code. The assembly language
program would be called the source program and the machine-code program would be
called the object program. There are three types of translator:


 Assemblers
 Compilers
 Interpreters

Assembler

 Translates mnemonic operation codes into machine code, and symbolic


addresses into machine addresses.
 Includes the necessary linkages for closed subroutines and inserts appropriate
machine code for macros
 Allocates areas for storage
 Detects and indicates invalid source-language instructions.
 Produces the object program on tape or disk as required.
 Produces a printed listing of the source and object program with comments. The
listing may also include error codes if appropriate.

Compiler

 Translates the source-program statements into machine code.


 Includes linkages for subroutines.
 Allocates areas of main storage
 Generates the object program on cards, tape or disc as required.
 Produces a printed listing of the source and object programs when required.
 Tabulates a list of errors found during compilation, such as the use of “words” or
statements not included in the language vocabulary, or violating the rules of
syntax.

Interpreter

 Interpreters are more easily understood by comparing them with compilers


 Both compilers and interpreters are commonly used for the translation of high-
level language programs, but they perform the translation in two completely
different ways.
 A compiler translates the whole source program into a machine-code program
prior to the object program being loaded into main memory and executed.
Contrast this with an interpreter, which deals with the program one instruction
at a time, completely translating and executing each instruction before it goes
on to the next.
 If a compiler is used, the same program need only be translated once. Thereafter
the object program can be loaded directly into main storage and executed,
whereas with an interpreter the program is translated afresh every time it is
executed.
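This difference can be illustrated with a short Python sketch (Python is used here only for demonstration): eval on a source string re-translates it on every run, as an interpreter does, while compile translates the source once into an object form that can then be executed repeatedly, as a compiler does.

```python
source = "2 * 3 + 4"

# Interpreter-style: the source text is translated and executed together,
# afresh on every run.
for _ in range(2):
    print(eval(source))       # re-translates "2 * 3 + 4" each time

# Compiler-style: translate once into an object form (here Python bytecode),
# then execute the translated form as many times as needed.
code_object = compile(source, "<expr>", "eval")
for _ in range(2):
    print(eval(code_object))  # runs the already-translated code
```

Each print shows 10; the difference is in when translation happens, not in the result.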

System Development Methodology


New computer systems frequently replace existing manual systems, and the new systems
may themselves be replaced after some time. The process of replacing the old system by the
new is often organized into a series of stages. The whole process is called “system life
cycle”. The system life cycle may be regarded as a “framework” upon which the work can
be organized and methods applied. The system life cycle is the traditional framework for
developing new systems. This process is diagrammatically presented in Figure 3 below.

PRELIMINARY SURVEY/STUDY

FEASIBILITY STUDY

INVESTIGATION AND FACT RECORDING

ANALYSIS

DESIGN

IMPLEMENTATION

MAINTENANCE AND REVIEW

Figure 3: System life cycle


Preliminary survey or study


The purpose of this survey is to establish whether there is need for a new system and if so to
specify the objectives of the system.
Feasibility study
The purpose of the feasibility study is to investigate the project in sufficient depth to be able
to provide information that either justifies the development of the new system or shows
why the project should not continue.
Investigation and fact recording
At this stage in the life cycle a detailed study is conducted. This study is far more detailed
and comprehensive than the feasibility study. The purpose of this study is to fully
understand the existing system and to identify the basic information requirements. This
requires a contribution from the users of both the existing system and the proposed
system.
Analysis
Analysis of the full description of the existing system and of the objectives of the proposed
system should lead to the full specification of users’ requirements. This requirements
specification can be examined and approved before system design.
Design
The analysis may lead to a number of possible alternative designs. For example, different
combinations of manual and computerized elements may be considered. Once one
alternative has been selected the purpose of the design stage is to work from the
requirements specification to produce a system specification. The system specification will
be a detailed set of documents that provides details of all features of the system.
Implementation
Implementation involves following the details set out in the system specification. Three
particularly important tasks are hardware provision, programming and staff training. It is
worth observing in passing that the programming task has its own “life cycle” in the form
of the various stages in programming, such that analysis and design occur at many different
levels.
Maintenance and review
Once a system is implemented and in full operation it is examined to see if it has met the
objectives set out in the original specification. From time to time the requirements of the
organization will change and the system will have to be examined to see if it can cope with
the changes. At some stage the system life cycle will be repeated.

Unit summary
In this unit you learned about the types of software, operating
systems, functions and types of operating systems, programming
languages, levels and history of programming languages and
system development methodology.

Assignment
Trace the history of programming languages.


Assessment
(i) Explain in detail what you know about system software
(ii) Explain the functions of operating systems
(iii) Describe each of the three levels of programming languages
(iv) Describe system development methodology


Unit 5

Hardware and Networking


Introduction
Welcome to Hardware and Networking. This unit covers
machine representation of data, the organization of the von
Neumann machine, and the control unit. The second part of the
unit is about networks and telecommunication. In this part we
cover the advantages and types of networks, telecommunication
concepts, and communication media.
Upon completion of this unit you should be able to:

 Demonstrate familiarity with machine representation of data.


 Demonstrate familiarity with assembly level of organization.
 Demonstrate familiarity with different types of network.
Outcomes
 Demonstrate familiarity with communication concepts and
communication media.

MDR: Memory Data Register

MAR: Memory Address Register

IR: Instruction Register

SCR: Sequence Control Register

CIR: Current Instruction Register

LAN: Local Area Network

MAN: Metropolitan Area Network

WAN: Wide Area Network


Machine level representation of data


Bit
In computing and telecommunications, a bit is a basic unit of information storage and
communication. It is the maximum amount of information that can be stored by a device or
other physical system that can normally exist in only two distinct states. These states are
often interpreted (especially in the storage of numerical data) as the binary digits 0 and 1.
They may be interpreted also as logical values, either "true" or "false"; or two settings of a
flag or switch, either "on" or "off".
In information theory, "one bit" is typically defined as the uncertainty of a binary random
variable that is 0 or 1 with equal probability, or the information that is gained when the
value of such a variable becomes known.
Byte
A byte is a basic unit of measurement of information storage in computer science. In many
computer architectures it is a unit of memory addressing. There is no standard, but a byte
most often consists of eight bits.
A byte is an ordered collection of bits, with each bit denoting a single binary value of 1 or
0. The byte most often consists of 8 bits in modern systems. However, the size of a byte can
vary and is generally determined by the underlying computer operating system or hardware.
Historically, byte size was determined by the number of bits required to represent a single
character from a Western character set. Its size was generally determined by the number of
possible characters in the supported character set and was chosen to be a divisor of the
computer's word size.
Words
Computer words consist of two or more adjacent bytes that are sometimes addressed and
almost always are manipulated collectively. The word size represents the data size that is
handled most efficiently by a particular architecture.
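The ideas above can be illustrated with a short Python sketch (Python is used here only for demonstration):

```python
# 109 written as a binary literal; Python exposes the bit-level view directly.
value = 0b1101101

print(value)                     # the decimal value of the bit pattern: 109
print(value.bit_length())        # the number of bits needed to store it: 7
print(value.to_bytes(2, "big"))  # the same value packed into a two-byte word
```

The two-byte form shows how a word groups adjacent bytes: here the value occupies the low byte and the high byte is zero.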
Number Bases
Numbers in base two are called binary numbers, which are used in digital computers when
they store and process data. There are several other number bases also used in computing,
and so the general idea of number bases is knit together with the methods we will need for
converting from one base to another.


Decimal Numbers are the numbers in everyday use, and are also known as denary
numbers, or numbers to base 10, because ten is the basis of the number system. To write a
number in decimal we make use of the ten digit symbols 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 and
also use the method of place value.
Binary Numbers
Binary Numbers are numbers to base 2. The binary number system uses just two
symbols, 0 and 1, and place values increasing in powers of two.
Examples: (You should memorize these first 8 values for future use)
Place values: 2³ = 8, 2² = 4, 2¹ = 2, 2⁰ = 1

Binary Number   Expansion                                     Decimal Equivalent
0 0 0 1         = (0 x 2³) + (0 x 2²) + (0 x 2¹) + (1 x 2⁰)   = 1
0 0 1 0         = (0 x 2³) + (0 x 2²) + (1 x 2¹) + (0 x 2⁰)   = 2
0 0 1 1         = (0 x 2³) + (0 x 2²) + (1 x 2¹) + (1 x 2⁰)   = 3
0 1 0 0         = (0 x 2³) + (1 x 2²) + (0 x 2¹) + (0 x 2⁰)   = 4
0 1 0 1         = (0 x 2³) + (1 x 2²) + (0 x 2¹) + (1 x 2⁰)   = 5
0 1 1 0         = (0 x 2³) + (1 x 2²) + (1 x 2¹) + (0 x 2⁰)   = 6
0 1 1 1         = (0 x 2³) + (1 x 2²) + (1 x 2¹) + (1 x 2⁰)   = 7
1 0 0 0         = (1 x 2³) + (0 x 2²) + (0 x 2¹) + (0 x 2⁰)   = 8

Conversion from binary to decimal

Example: Convert 1101101₂ to decimal

Place Values     2⁶     2⁵     2⁴     2³    2²    2¹    2⁰
                 (64)   (32)   (16)   (8)   (4)   (2)   (1)
Binary Digits     1      1      0      1     1     0     1

Conversion: (1 x 64) + (1 x 32) + (0 x 16) + (1 x 8) + (1 x 4) + (0 x 2) + (1 x 1) = 109
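The place-value method above can be sketched in Python (used here only for illustration):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each binary digit times its place value (a power of two)."""
    total = 0
    for digit in bits:
        total = total * 2 + int(digit)  # shift left one place, add the new digit
    return total

print(binary_to_decimal("1101101"))  # 109, as in the worked example
```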

Octal Numbers

The octal numeral system, or oct for short, is the base-8 number system and uses the
digits 0 to 7. Numerals can be made from binary numerals by grouping consecutive binary
digits into groups of three (starting from the right). For example, the binary representation
for decimal 74 is 1001010, which can be grouped into (00)1 001 010 — so the octal
representation is 112.

In the decimal system each place is a power with base 10. For example:

74₁₀ = 7 x 10¹ + 4 x 10⁰ = 70 + 4

In octal numerals each place is a power with base 8. For example:

112₈ = 1 x 8² + 1 x 8¹ + 2 x 8⁰

By performing the calculation above in the familiar decimal system we see why 112 in octal
is equal to 64 + 8 + 2 = 74 in decimal.

Octal to Decimal conversion


To convert an octal number to decimal, evaluate the base-8 place values of its digits.

Example: Convert 764₈ to decimal:

764₈ = 7 x 8² + 6 x 8¹ + 4 x 8⁰ = 448 + 48 + 4 = 500₁₀

For double-digit octal numbers this method amounts to multiplying the lead digit by 8 and
adding the second digit to get the total.

Example: 65₈ = 6 x 8 + 5 = 53₁₀
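The same base-8 place-value rule can be sketched in Python (for illustration only):

```python
def octal_to_decimal(digits: str) -> int:
    """Evaluate base-8 place values, e.g. 764 -> 7 x 64 + 6 x 8 + 4."""
    total = 0
    for d in digits:
        total = total * 8 + int(d)  # shift left one octal place, add the digit
    return total

print(octal_to_decimal("764"))  # 500
print(octal_to_decimal("65"))   # 53
```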

Octal to Binary Conversion


To convert octal to binary, replace each octal digit by its binary representation.

Example: Convert 51₈ to binary:

5₈ = 101₂
1₈ = 001₂

Thus 51₈ = 101 001₂

Binary to Octal conversion

The process is the reverse of the previous algorithm. The binary digits are grouped by
threes, starting from the decimal point and proceeding to the left and to the right. Add
leading 0s (or trailing zeros to the right of decimal point) to fill out the last group of three if
necessary. Then replace each trio with the equivalent octal digit.


Conversion Table:

Octal               0    1    2    3    4    5    6    7
Binary Equivalents  000  001  010  011  100  101  110  111

For instance, convert binary 1010111100 to octal:

001 010 111 100
 1   2   7   4

Thus 1010111100₂ = 1274₈

Convert binary 11100.01001 to octal:

011 100 . 010 010
 3   4  .  2   2

Thus 11100.01001₂ = 34.22₈
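The grouping rules in both directions can be sketched in Python (for illustration; the fractional case is omitted for brevity):

```python
def binary_to_octal(bits: str) -> str:
    """Group the binary digits in threes from the right, then map each trio."""
    padded = bits.zfill((len(bits) + 2) // 3 * 3)  # pad with leading zeros
    trios = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    return "".join(str(int(trio, 2)) for trio in trios)

def octal_to_binary(digits: str) -> str:
    """Replace each octal digit by its three-bit binary representation."""
    return " ".join(format(int(d), "03b") for d in digits)

print(binary_to_octal("1010111100"))  # '1274'
print(octal_to_binary("51"))          # '101 001'
```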

Hexadecimal Numbers
Hexadecimal numbers, usually abbreviated to Hex, are numbers to base 16. The sixteen
symbols used in the Hex system are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, and place
values increase in powers of sixteen. We need to remember that A, B, C, D, E, F are
equivalent to 10, 11, 12, 13, 14, 15 decimal. Hex numbers are used as a shorthand for
binary.

Decimal to Hexadecimal conversion

Example: Convert 109₁₀ to Hex (and back again):

Working                      Details
109 / 16 = 6 remainder 13    13 = D₁₆
6 / 16 = 0 remainder 6       6 = 6₁₆

Reading the remainders from last to first gives the hex digits.

Thus 109₁₀ = 6D₁₆

Hexadecimal to Decimal Conversion

Place Values   16² (256)   16¹ (16)   16⁰ (1)
Hex Number         0           6         D

Conversion: (0 x 256) + (6 x 16) + (13 x 1) = 109

Thus 6D₁₆ = 109₁₀
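The repeated-division working shown above can be sketched in Python (for illustration only):

```python
HEX_DIGITS = "0123456789ABCDEF"

def decimal_to_hex(n: int) -> str:
    """Repeatedly divide by 16, collecting remainders (least significant first)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, 16)
        digits.append(HEX_DIGITS[remainder])
    return "".join(reversed(digits))  # remainders come out in reverse order

print(decimal_to_hex(109))  # '6D'
print(int("6D", 16))        # 109, converting back again
```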


Hexadecimal to Binary Conversion

Conversion Table

Hexadecimal  0     1     2     3     4     5     6     7
Binary       0000  0001  0010  0011  0100  0101  0110  0111

Hexadecimal  8     9     A     B     C     D     E     F
Binary       1000  1001  1010  1011  1100  1101  1110  1111

Convert 6D₁₆ to binary: from the table, 6₁₆ = 0110₂ and D₁₆ = 1101₂, so 6D₁₆ = 0110 1101₂.

Convert 1110111000010₂ to Hex:

We group binary digits into fours from the right, and then use the conversion table.

Binary Digits            0001  1101  1100  0010
Hexadecimal Equivalent    1     D     C     2

Thus 1110111000010₂ = 1DC2₁₆
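The table lookup can be sketched in Python (for illustration only), using the fact that each hex digit corresponds to exactly four binary digits:

```python
def hex_to_binary(hex_digits: str) -> str:
    """Replace each hex digit with its four-bit binary pattern."""
    return " ".join(format(int(d, 16), "04b") for d in hex_digits)

print(hex_to_binary("6D"))    # '0110 1101'
print(hex_to_binary("1DC2"))  # '0001 1101 1100 0010'
```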


Representation of Non-character Data

Not all data is represented by characters. Two non-character based forms of data deserving
a particular mention are images (e.g. pictures and diagrams) and sounds. These non-character
forms of data can nevertheless be represented by means of binary codes.

Bit-mapped images

An image such as photograph can be given a binary coded representation suitable for
storage as “data” in a computer. A special input device called an image scanner can perform
this task. The image is said to be digitized. The digitized image is formed from a grid of tiny
dots. The fineness of the grid’s mesh determines how well the image’s details are
represented and therefore places a limit on the quality of image. The grid squares are called
pixels. Each pixel is black or white and may therefore be represented by a bit.

The bits used to represent an image have to be organized into some well-defined order
corresponding to the positions they take in the grid. If an image is created or displayed from
such a grid of bits, it is said to be a bit-mapped image.
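A tiny bit-mapped image can be sketched in Python (for illustration; the 4 x 4 grid and the row-packing scheme are invented for this example):

```python
# A 4 x 4 bit-mapped image: each pixel is one bit (1 = black, 0 = white).
image = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# Pack each row of bits into a single integer value for compact storage.
rows_as_values = [int("".join(map(str, row)), 2) for row in image]
print(rows_as_values)  # [6, 9, 9, 6]
```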

Digitized sound

Sounds can also be given a binary coded representation suitable for storage as “data” in a
computer. In simple cases the input device is a combination of a microphone and a
“digitizing sound sampler”. It is the latter that produces a binary coded representation of
the sound picked up by the microphone.

Sounds can be depicted in diagrammatic form as waveforms. The height of waveform is


measured at regularly spaced intervals. Each height measurement is represented as a binary
code corresponding to its position.
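The sampling idea can be sketched in Python (for illustration; the sine waveform, the eight samples and the 4-bit resolution are all invented for this example):

```python
import math

# Measure the height of a waveform at regular intervals and represent each
# height as a small integer code (here 4 bits, i.e. values 0 to 15).
samples = []
for i in range(8):
    height = math.sin(2 * math.pi * i / 8)  # waveform height at step i
    code = round((height + 1) / 2 * 15)     # map the range [-1, 1] onto 0..15
    samples.append(code)

print(samples)
```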

Logic Gates

Data and instructions are transmitted between various parts of the processor or between the
processor and peripherals by means of pulse trains. Various tasks are performed by passing
pulse trains through “electronic switches” called gates. Each gate is an electronic circuit
that may have provision for receiving or sending several pulses at once.
Each gate normally performs some simple function, e.g., AND, OR, NOT.


A logic gate performs a logical operation on one or more logic inputs and produces a single
logic output. The logic normally performed is Boolean logic which is most commonly
found in digital circuits. Logic gates are primarily implemented electronically using diodes
or transistors, but can also be constructed using electromagnetic relays, fluidics, optics,
molecules, or even mechanical elements.
In electronic logic, a logic level is represented by a voltage or current, (which depends on
the type of electronic logic in use). Each logic gate requires power so that it can source and
sink currents to achieve the correct output voltage. In logic circuit diagrams the power is
not shown, but in a full electronic schematic, power connections are required.

Gate   Boolean Algebra   Truth Table

AND    A.B               A  B  A.B
                         0  0   0
                         0  1   0
                         1  0   0
                         1  1   1

OR     A+B               A  B  A+B
                         0  0   0
                         0  1   1
                         1  0   1
                         1  1   1

NOT    NOT A             A  NOT A
                         0    1
                         1    0

Truth Table

A truth table is a table that describes the behavior of a logic gate. It lists the value of the
output for every possible combination of the inputs and can be used to simplify the number
of logic gates and level of nesting in an electronic circuit.

“1” means a high signal and “0” means a low signal. The AND gate will only give “high”
when two inputs are “high”. The OR gate will give “high” when any one of the inputs is
“high”. The convention “1” for high and “0” for low is called positive logic. Negative logic
may also be used, i.e., “0” for high and “1” for low.
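The three basic gates and their truth tables can be modelled with a short Python sketch (for illustration only):

```python
# Model the basic gates on single-bit inputs (1 = high, 0 = low).
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

# Print the truth table of each two-input gate.
for name, gate in (("AND", AND), ("OR", OR)):
    for a in (0, 1):
        for b in (0, 1):
            print(name, a, b, "->", gate(a, b))
```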

Assembly level machine organization


Basic organization of Von Neumann machine
The style of construction and organization of the many parts of a computer are its
“architecture”. This architecture is often referred to as the “von Neumann architecture”.
Although the basic elements of the computers are essentially the same for almost all digital
computers, there are variations in construction that reflect the differing ways in which
computers are used.
Levels within computer architecture
The simplest distinction between levels is that between hardware and software. We may
view hardware as the lowest and most basic level of the computer onto which a “layer” of
software is added. The software sits above the hardware, using it and controlling it. The
hardware supports the software by providing the operations the software requires.
The idea is easily extended by viewing the whole computer system as a “multi-layered
machine” consisting of several layers of software on top of several layers of hardware.
Software levels:
    7   Applications Layer
    6   Higher Order Software Layer
    5   Operating System Layer
Hardware levels:
    4   Machine Layer
    3   Microprogrammed Layer
    2   Digital Logic Layer
    1   Physical Device Layer


The Physical Device Layer which is in practice an electrical and electronic component
layer is very important. We need to be aware that even the most sophisticated modern
computer devices are built from simple electronic components such as transistors,
capacitors and resistors which rely on suitable power supplies and operating environments.

At the Digital Logic Layer the most basic operations of the machine are provided. The
basic elements at this level can store, manipulate and transmit data in the form of
simple binary representations. These digital logic elements are called gates. A gate is
normally constructed from a small number of transistors and other electronic components.
The standard digital logic devices are combined together to form computer processors,
computer memories and major components of the units used for input and output.

The Microprogrammed Layer interprets the machine language instructions from the
machine layer and directly causes the digital logic elements to perform required operations.
It is, in effect, a very basic inner processor and is driven by its own primitive control
program instructions held in its own private inner ROM. These program instructions are
called microcode and the control program is called microprogram.

The Machine Layer is the lowest level at which a program can be written and indeed it is
only machine language instructions which can be directly interpreted by the hardware.

The Operating System Layer controls the way in which all software uses the underlying
hardware. It also hides the complexities of the hardware from other software by providing
its own facilities which enable software to use the hardware more simply. It also prevents
other software from bypassing its facilities so that the hardware can only be accessed
directly by the operating system. It therefore provides an orderly environment in which
machine language instructions can be executed safely and effectively.

The Higher Order Software Layer covers all programs in languages other than machine
language which require translation into machine code before they can be executed. Such
programs, when translated, rely upon the underlying operating system facilities as well as
their own machine instructions.


The Application Layer is the language of the computer as seen by the end-user.
The underlying computer, as viewed from each layer, is sometimes called a “virtual
machine”. For example the operating system is a virtual machine to the software above it
because, for practical purpose, it is the “machine” the software uses.

Physical organization of the computer


Designing and building a new computer from scratch is an expensive process. In addition,
the unit costs of individual components are high unless the components are mass-produced.
These factors cause most computer manufacturers to construct their computers from
varying combinations of standard components. For example, many different
microcomputers contain the same microprocessors.
This principle of modular construction applies to different levels of design. At one level it
might be a matter of plugging in one peripheral device instead of another. At a lower level
it might be a matter of using one type of memory chip instead of another.
Standard components are much easier to interconnect if the means of interconnection is also
standardized. One important method for doing this is using buses.
A bus is a collection of parallel electrical conductors called lines on to which a number of
components may be connected. Connections are made at points along the length of the bus
by means of connectors with multiple electrical contacts.

There are two basic type of bus:


 Internal buses: These are used within the processor and constitute an integral part
of its construction.
 External buses: These are used to connect separate hardware elements together, for
example connecting the processor to the main memory.

Buses are used to convey data signals, data address signals, control signals and power.

In terms of size, there are three types of construction ranging from the smallest to the largest as
follows:

 Single – chip computers


 Single – board computers
 Multiple – board bus – based computers


Single – chip computers are those found in such devices as watches and cameras. The processors
are specialized and programmed to do a specific task. Apart from the remarkable operations some
of these devices do, they are not immediately recognizable as computers.

Single – board computers are usually much bigger than single – chip computers but still relatively
small. They are constructed on thin flat sheets of electrical insulator on to which the components
can be fixed and interconnected.

Multiple – board bus – based computers are usually general purpose computers. They are
normally too large to fit on to a single board. Instead, each board has a particular function and all
the boards are interconnected by plugging them into individual slots on one or more general
purpose buses. One board may contain the processor, another may contain the main storage, and so
on. Minicomputers and mainframes are based upon this type of construction. Sometimes there is a
primary board, called a mother board, for the processor and the other main components into which
other boards may be slotted.

Control unit
Function: The Control unit is the nerve center of the computer. It co-ordinates and controls
all hardware operations, i.e., those of the peripheral units, main memory and the processor
itself. It deals with each instruction in turn in a two-stage operation called the fetch-execute
cycle.
How it Operates
 The control unit causes the requisite instruction to be fetched from the main storage
via the Memory Data Register (MDR) and placed into the Instruction Register (IR).
When the main storage receives an appropriate signal from the control unit it
transfers the instruction, whose address is specified in the Memory Address Register
(MAR), into the processor’s MDR via the data bus.
 The control unit interprets the instruction in the IR and causes the instruction to be
executed by sending command signals to the appropriate hardware devices. For
example, it might cause main storage to transfer data to the MDR or it might cause
the ALU to perform some operation on the data in the data registers. The cycle is
then repeated with the next instruction being fetched.

Instruction fetch


Fetching the required instruction takes three steps

 The MAR is loaded with the address of the instruction to be performed. This address
is copied from the Sequence Control Register (SCR).
 The contents of the SCR are then incremented by 1 so it is ready to be referenced for
the next fetch.
 The fetch is completed by loading the instruction into the Current Instruction
Register (CIR) via the MDR.

Decoding and execution

The opcode (operation code) or function part of the instruction is decoded by the control
unit. What happens next depends on the type of instruction.
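The fetch-execute cycle can be simulated with a toy Python sketch. The register names follow the text (SCR, MAR, MDR, CIR), but the two-field instruction format and the LOAD/ADD/HALT opcodes are invented for this example:

```python
# A toy program in "memory": each instruction is (opcode, operand).
memory = [("LOAD", 7), ("ADD", 5), ("HALT", 0)]
acc = 0  # accumulator data register
scr = 0  # sequence control register

while True:
    mar = scr          # 1. copy the next instruction's address into the MAR
    scr += 1           # 2. increment the SCR, ready for the next fetch
    mdr = memory[mar]  # 3. fetch the instruction into the MDR ...
    cir = mdr          #    ... and load it into the CIR
    opcode, operand = cir
    if opcode == "LOAD":   # decode the opcode and execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break

print(acc)  # 12
```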

Instruction Sets

The set of machine instructions that a computer can perform is called its instruction set. Any
operation not provided by the hardware, and therefore not in the instruction set, can only be
provided by using more than one instruction.

The size of the instruction set will affect all the following items:

 The cost of the machine


 Speed and efficiency.
 Choice of word size and instruction format.

Instruction Format
A machine instruction has several components. The instruction format is the size and arrangement
of these components. Two major components are the function code (opcode), which specifies the
function or operation performed, and the operand addresses, which specify the locations of the
operand used.

Types of Instructions
There are many ways of grouping or classifying the members of an instruction set and the choice
of method will depend on the machine.

 Arithmetic and logic operations (instructions).


 Transfer of control or branch instruction.


 Load (fetch) and store instructions.


 Input/output instructions.
 Memory reference instructions.
 Processor reference instructions.

Network
A computer network is any computer system that links two or more computers together.
This can be a direct connection or remote access through dial-up. For computers to
communicate, a Network Interface Card (NIC) must be embedded in the computer or
externally provided. For distance communication, computers must have a modem (which
converts digital signals to analog and vice versa); modem speed is measured in bits per second (bps).

Only computers that are connected can share what they have. Computer A that is connected
to computer B can access what computer B has. When two or more computers are
connected, the idea is to let them share and exchange what they contain. If the computers
are small, like regular desktop computers, they may become overwhelmed and they may not
have enough to share. The next step is to have a "bigger" central computer that holds even
more things that other small computers would need (the word ‘big’ here does not
necessarily mean that this computer is physically big. It implies that this computer can
perform more functions. For example it can perform more and faster calculations than the
other small computers). Such a central computer is called a server, because its job is to
serve other computers (these other small computers are then called workstations).


Advantages of a Network
Following are some of the advantages of computer networks.

 File Sharing: The major advantage of a computer network is that it allows
file sharing and remote file access. A person sitting at one workstation of a
network can easily see the files present on the other workstation, provided
he is authorized to do so. This arrangement saves the time which is wasted
in copying a file from one system to another, by using a storage device. In
addition to that, many people can access or update the information stored
in a database, making it up-to-date and accurate.
 Resource Sharing: Resource sharing is also an important benefit of a
computer network. For example, if there are four people in a family, each
having their own computer, they will require four modems (for the Internet
connection) and four printers, if they want to use the resources at the same
time. A computer network, on the other hand, provides a cheaper
alternative by the provision of resource sharing. In this way, all the four
computers can be interconnected using a network and just one modem and
printer can efficiently provide the services to all four members. The
facility of shared folders can also be availed by family members.
 Increased Storage Capacity: As there is more than one computer on a
network which can easily share files, the issue of storage capacity gets
resolved to a great extent. A stand-alone computer might fall short of
storage memory. However, when many computers are on a network,
memory of the different computers can be used in such a case. One can also
design a storage server on the network in order to have a huge storage
capacity.
 Increased Cost Efficiency: There are many types of software available in
the market which are costly and take time for installation. Computer
networks resolve this challenge as the software can be stored or installed
on a system or a server and can be used by the different workstations.


Types of Networks

There are basically three types of networks commonly used in the
telecommunication world. These are: the Local Area Network (LAN), the Metropolitan
Area Network (MAN) and the Wide Area Network (WAN).

Local Area Network

A network is said to be a Local Area Network if it is confined to a relatively small
area. It is generally limited to a building or a small geographical area, extending no
more than about a mile from the server to the other computers.

LAN configuration usually consists of:

 A File Server: This stores all the software that controls the network, as well as
the software that can be shared by the computers that are attached to the network.
 Workstation-computers: These are attached to the server, usually not as
powerful as the server.
 Cables: These connect the network interface card in each computer to the network.

Metropolitan Area Network (MAN)

Metropolitan Area Network (MAN) usually covers a large geographical area such as a city.
It is usually used by institutions to connect libraries or by government agencies to connect
cities.


Wide Area Network

Wide Area Network (WAN) connects wide geographical areas, such as countries and
continents. With WAN, dedicated transoceanic cabling or satellite uplinks are used.

Telecommunication
A telecommunication system is a collection of compatible hardware and software facilities
arranged to communicate information from one location to another. These systems can
transmit text, graphics, voice, documents, or full motion video information.


Communication Concepts
A number of key concepts recur throughout the literature on modern telecommunication
systems. Some of these concepts are discussed below.

A basic telecommunication system consists of three elements:

 a transmitter that takes information and converts it to a signal;

 a transmission medium that carries the signal; and

 a receiver that receives the signal and converts it back into usable information.

A typical telecommunication system is shown in Figure 4 below. Such systems have two
sides: the transmitter of information and the receiver of information.

[Figure 4 depicts a PC or terminal connected through a multiplexer and modem, across
the telecommunication media, to a modem, multiplexer and front-end processor attached
to the host computer.]

Figure 4: Typical telecommunication system

Modems
Modems convert digital signals coming from a computer into analog signals that can be
transmitted over ordinary phone lines. A modem at the other end of the communication line
converts the analog signal back to digital. Modems also support a variety of
telecommunications functions, such as transmission error control, automatic dialing and
answering, and faxing capabilities.
Multiplexers
These are communication processors that allow a single communication channel to carry
simultaneous data transmissions from many terminals.


 Frequency Division Multiplexing (FDM): This facility divides a high-speed channel
into multiple slow-speed channels.
 Time Division Multiplexing (TDM): Divides the time each terminal can use the high-
speed line into time slots. Statistical Time Division Multiplexing allows for dynamic
allocation of time slots to active terminals only.
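The difference between plain TDM and statistical TDM can be sketched as follows; the terminal names and data values are made up for the example:

```python
# Illustrative sketch of TDM versus statistical TDM.
# Terminal names and data values are invented for the example.

def tdm_frame(terminals):
    """Plain TDM: every terminal owns a slot in each frame, idle or not."""
    return [t.get("data", "-") for t in terminals]

def stat_tdm_frame(terminals):
    """Statistical TDM: only active terminals get slots, tagged with the sender."""
    return [(t["name"], t["data"]) for t in terminals if "data" in t]

terminals = [
    {"name": "T1", "data": "A"},
    {"name": "T2"},                # idle terminal
    {"name": "T3", "data": "C"},
]

print(tdm_frame(terminals))       # ['A', '-', 'C']  (idle slot is wasted)
print(stat_tdm_frame(terminals))  # [('T1', 'A'), ('T3', 'C')]
```

Note that plain TDM wastes the idle terminal's slot, while statistical TDM reclaims it at the cost of having to label each slot with its sender.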

Front-end Processor

With most computers, the CPU has to communicate with several devices or terminals at the
same time. Routine communication tasks can absorb a large proportion of the CPU’s
processing time, leading to degraded performance on more important jobs. In order to
enhance performance, many computer systems have a small secondary computer dedicated
solely to communication, known as the front-end processor. This specialized computer
manages all routine communications with peripheral devices.

The functions of a front-end processor include coding and decoding data, error detection,
recovery, recording, interpreting, and processing the control information that is transmitted.

Concentrator

A concentrator is a telecommunications computer that collects and temporarily stores
messages from terminals until enough messages are ready to be sent economically. The
concentrator then transmits the messages to a host computer in a burst.
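The store-and-burst behaviour of a concentrator can be sketched as a simple buffer that releases messages only once a threshold is reached; the threshold and message values below are invented for the example:

```python
# Sketch of a concentrator: buffer messages from terminals and release
# them to the host in a single burst once enough have accumulated.

class Concentrator:
    def __init__(self, burst_size):
        self.burst_size = burst_size
        self.buffer = []

    def receive(self, message):
        """Store a message; return a full burst when ready, else None."""
        self.buffer.append(message)
        if len(self.buffer) >= self.burst_size:
            burst, self.buffer = self.buffer, []
            return burst
        return None

c = Concentrator(burst_size=3)
print(c.receive("m1"))  # None - still buffering
print(c.receive("m2"))  # None
print(c.receive("m3"))  # ['m1', 'm2', 'm3'] sent to the host in one burst
```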

Communication Media

For data to be communicated from one location to another, some form of pathway or
medium must be used. These pathways are called communications media (channels). The
essentials of these communications media are as follows:

Twisted-Pair Wire
This is the traditional phone line used throughout the world. It is the most widely
distributed telecommunications media but is limited in the amount of data that can be
transmitted and the speed of transmission.
Coaxial Cable
This is a sturdy copper or aluminum wire wrapped in spacers to insulate and protect it.
Coaxial cable can carry more information and transmit it at higher speeds than twisted-pair
wires. It is also a higher-quality carrier, with little interference.


Fiber Optics
These are hair-thin glass filaments which are spun into wires and wrapped in a protective
jacket. Fiber optics transmit light pulses as carriers of information and so are extremely fast
and produce no electromagnetic radiation. This makes them extremely reliable channels,
although splicing cables for connections is difficult.
Terrestrial Microwave
Earthbound microwave radiation transmits high-speed radio signals in line-of-sight paths
between relay stations.

Communications Satellites
Satellites in geosynchronous orbit are used to transmit microwave signals to any place on
earth using dish antennas for sending and receiving.
Cellular and PCS Systems
Low-power transmitters in each cell of the system allow users to take advantage of several
frequencies for communications. Newer Personal Communication Services (PCS) use
digital technologies that provide greater capacity, security and additional services like voice
mail and paging. The growth in web-enabled information appliances like PDAs, smart
phones and pagers has sparked interest in developing a Wireless Application Protocol
(WAP). This standard allows these devices to access the Web.
Wireless LANs
Using radio or infrared transmission, some LANs are completely wireless, thus eliminating
the cost of installing wire in existing structures.

Unit summary
In this unit you learned about the machine-level representation of
data, the basic organization of the von Neumann architecture, the control unit,
networks, the types and advantages of networks, telecommunication,
communication concepts and communication media.

Assignment
A telecommunication system “is a collection of compatible hardware and
software facilities arranged to communicate information from one
location to another”. Discuss the validity of this statement, giving clear
and relevant examples.

Assessment
(i) Describe the representation of non-character data;
(ii) Identify the levels in the organization of a computer system;
(iii) Identify and illustrate the advantages of networks;
(iv) List and illustrate the types of communication media.

References
Reference Books:
 C. S. French, “Computer Science”, 5th Edition.
 Turban, E., McLean, E., & Wetherbe, J., “Information Technology for Management –
Transforming the Business in the Digital Economy”.
 Brookshear, J. G., “Computer Science”, 10th Edition.
