Computer Science Text

The document discusses the representation of text and sound in digital formats, focusing on ASCII and its evolution to Unicode for diverse language support. It explains the process of analog-to-digital conversion, including sampling, quantization, and encoding, as well as the implications of bit depth and sampling rate on audio quality. Additionally, it covers data storage basics, including bits, bytes, and the impact of file size on storage and transfer times.

Representation of Text

Introduction to Character Sets

● What is ASCII?
● Why was it invented, and by whom?
● Extended ASCII
● ASCII vs. Unicode and the move from ASCII to Unicode
● Real-world impact


Representing data in binary (0 and 1)

There are 26 characters in the English alphabet. We would need a minimum of 5 bits to
represent each letter, since 2^5 = 32 is at least 26.

But will that be enough for representing both lower case and upper case? With 52 letters
in total, 5 bits (32 values) are no longer sufficient; we would need at least 6 bits,
since 2^6 = 64.
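
A quick way to check this reasoning is to compute the minimum number of bits needed for a given number of symbols. The sketch below is plain Python (the function name is just illustrative); it uses the fact that n bits can distinguish 2^n symbols:

    import math

    def min_bits(num_symbols: int) -> int:
        """Smallest number of bits such that 2**bits >= num_symbols."""
        return math.ceil(math.log2(num_symbols))

    print(min_bits(26))   # 5  -> the lower-case alphabet fits in 5 bits
    print(min_bits(52))   # 6  -> lower + upper case needs 6 bits
    print(min_bits(128))  # 7  -> the full ASCII character set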

The scope

Character sets are standard representations that are universally agreed upon.
ASCII
The ASCII (American Standard Code for Information Interchange) system was invented to
create a standardized way to represent text and control characters in computers and
communication systems. Before ASCII, different manufacturers used their own unique
character encoding systems, leading to compatibility issues between different machines.
Who Invented ASCII?

ASCII was developed in the early 1960s by a committee of the American Standards
Association (ASA), the body that later became the American National Standards Institute
(ANSI). Robert W. Bemer, an American computer scientist then working at IBM, was a key
contributor and was instrumental in promoting the adoption of ASCII as a universal
standard. The first version of the standard was officially published in 1963.
Why Was ASCII Invented?
1. Standardization: It provided a common way to encode text across different devices
and systems.

2. Simplicity: It used a 7-bit binary system, making it efficient for early computers with
limited memory.

3. Compatibility: ASCII allowed computers, teleprinters, and other communication
devices to exchange information easily.

4. Expansion: It included control characters for text formatting (e.g., newline, tab) and
communication protocols.
ASCII vs. Unicode

ASCII was a crucial first step in digital text representation, but Unicode was needed to
support the diversity of human languages and symbols. Today, UTF-8 is the most widely
used encoding format, as it efficiently stores text while maintaining compatibility with
ASCII.
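
This backward compatibility is easy to see in practice. A small sketch in plain Python (the characters chosen are just examples): every ASCII character keeps its 7-bit code as a single byte under UTF-8, while non-ASCII characters take more bytes.

    # ASCII characters occupy one byte in UTF-8, with the same code value.
    print(ord('A'))              # 65 -> the ASCII code for 'A'
    print('A'.encode('utf-8'))   # b'A' -> a single byte, 0x41

    # Characters outside ASCII need more than one byte in UTF-8.
    print('é'.encode('utf-8'))   # b'\xc3\xa9' -> two bytes
    print('€'.encode('utf-8'))   # b'\xe2\x82\xac' -> three bytes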
Representation of Sound

Analog vs Digital
Analog to Digital Converter

An Analog to Digital Converter (ADC) is an electronic device that changes real-world
signals (like sound, temperature, or light) into digital numbers that a computer can
understand.

Think of an ADC like a translator between the real world and a computer. Real-world signals
are usually analog, meaning they change smoothly, like the volume of your voice or the
temperature outside. But computers only understand digital signals, which are numbers (0s
and 1s). The ADC helps convert these smooth changes into step-by-step numbers.
Sample and Hold Circuit

● Takes small "snapshots" of the analog signal at regular time intervals.
● Imagine taking a video but only keeping certain frames to create a step-by-step picture.

Quantizer

● Divides the analog signal into levels (steps).
● Like a staircase: each step represents a possible value of the signal.

Encoder

● Converts the stepped values into binary numbers (0s and 1s).
● Example: If the temperature is measured as 25°C, the ADC might turn it into something like 11001 in binary.

Clock

● Keeps timing consistent so the ADC samples at regular intervals.
● Like a metronome in music, keeping a steady beat.

Comparator (for some types of ADCs)

● Compares the input signal to reference voltages to help find the right digital value.
Real-World Example
When you speak into a microphone connected to a computer, the microphone
picks up your voice as an analog wave. The ADC inside the computer
samples your voice, turns it into numbers, and allows the computer to record
or process it digitally.
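
To make the pipeline concrete, here is a minimal sketch of the sample -> quantize -> encode steps in plain Python. The sine-wave input, the 8 Hz sampling rate, and the 4-bit depth are illustrative assumptions, not values from the text:

    import math

    SAMPLE_RATE = 8   # samples per second (illustrative)
    BIT_DEPTH = 4     # bits per sample -> 2**4 = 16 quantization levels
    LEVELS = 2 ** BIT_DEPTH

    def analog_signal(t):
        """A stand-in for the microphone's analog wave: values in [-1, 1]."""
        return math.sin(2 * math.pi * t)

    # 1. Sampling: measure the signal at regular time intervals.
    samples = [analog_signal(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]

    # 2. Quantization: map each amplitude onto one of LEVELS discrete steps.
    quantized = [min(LEVELS - 1, int((s + 1) / 2 * LEVELS)) for s in samples]

    # 3. Encoding: store each level as a fixed-width binary number.
    encoded = [format(q, f'0{BIT_DEPTH}b') for q in quantized]

    print(quantized)  # [8, 13, 15, 13, 8, 2, 0, 2]
    print(encoded)    # ['1000', '1101', '1111', ...]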
1. Sampling - Turning continuous sound into discrete
data
Sound waves are continuous in nature, meaning they exist at all points in time. But
computers can only store discrete data, so we need to take samples at specific time
intervals.

Sampling Rate (Hz or kHz) -

● This determines how many times per second the sound is measured.
● A higher sampling rate means more data points, which captures the sound more
accurately.
● However, increasing the sample rate also increases the file size.
[Graph: a sound wave being sampled; x-axis is time, y-axis is amplitude]

● Amplitude -> loudness
● The approximate amplitude ranges from 0-10, so it can be represented with 4 bits
● Increasing the number of possible values for the sound amplitude increases the
accuracy of the sampled sound
2. Quantization - Assigning numeric values to sound
levels
Each sample's amplitude (loudness) must be stored as a number, but computers can
only use a limited number of values.
Bit Depth -
● The number of bits used to represent each sample in a sound file
● This determines how many possible values can be used to store each sample.
● More bits mean a greater range of values and better sound quality.
Ex:
● 8-bit audio: 256 levels (low quality, noticeable noise)
● 16-bit audio: 65,536 levels (CD quality)
● 24-bit audio: 16.7 million levels (studio quality)
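
The level counts above follow directly from the bit depth, since n bits give 2^n distinct values (a quick check in plain Python):

    for bit_depth in (8, 16, 24):
        levels = 2 ** bit_depth
        print(f'{bit_depth}-bit audio: {levels:,} levels')
    # 8-bit audio: 256 levels
    # 16-bit audio: 65,536 levels
    # 24-bit audio: 16,777,216 levels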
Sampling Resolution - the number of bits per sample (also called bit depth)

Ex: storing the amplitude value 9 as 1001 uses 4 bits, so the sampling resolution is 4

Sampling Rate - the number of sound samples taken per second.

Measured in hertz, where 1 Hz = one sample per second
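
The worked example above can be reproduced directly (plain Python; the value 9 and the 4-bit width come from the example):

    sample_value = 9
    sampling_resolution = 4  # bits per sample
    print(format(sample_value, f'0{sampling_resolution}b'))  # 1001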
How is sampling used to record a sound clip?

1. The amplitude of the sound wave is first determined at set time intervals (the
sampling rate)

2. This gives an approximate representation of the sound wave

3. Each sample of the sound wave is then encoded as a series of bits

The higher the sampling rate or the larger the sampling resolution, the more faithful the
representation of the original sound, but the greater the file size.
3. Encoding and Storage - How digital sound is saved

Once sampled and quantized, the data is stored in digital audio formats.

● Uncompressed formats (e.g., WAV):
○ Exact recordings, large file sizes, best quality
● Lossless compressed formats (e.g., FLAC):
○ Reduce the file size without losing quality
● Lossy compressed formats (e.g., MP3):
○ Remove some data to reduce file size, but can affect quality
Representation of Images

The number of bits used to represent each colour is called the colour depth.

An 8-bit colour depth means that each pixel can be one of 256 colours (as 2^8 = 256).

Modern computers have a 24-bit colour depth, which means over 16 million colours can be
represented.

As a generalisation, with x bits per pixel, 2^x colours can be represented. Increasing the
colour depth also increases the size of the file when storing an image.
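
A sketch of how colour depth feeds into file size, in plain Python (the 1920 x 1080 image dimensions are an illustrative assumption):

    width, height = 1920, 1080   # pixels (illustrative)
    colour_depth = 24            # bits per pixel

    colours = 2 ** colour_depth
    size_bits = width * height * colour_depth
    size_mb = size_bits / 8 / 1_000_000

    print(f'{colours:,} possible colours')   # 16,777,216
    print(f'{size_mb:.1f} MB uncompressed')  # about 6.2 MB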
Drawbacks of using high resolution images

● The size of the file increases - it consumes more space on the hard drive
● It takes longer to download an image from the internet
● It takes longer to transfer images from one device to another


Data storage

Bit - the basic unit of all computer storage, either 1 or 0. The word comes from Binary
Digit.

Byte - the smallest addressable unit of memory in a computer.

1 byte = 8 bits

A 4-bit number is called a nibble, or half a byte.


Example - How a CD stores Audio

A standard audio CD uses:

- 44.1 kHz sampling rate = 44,100 samples per second.
- 16-bit depth = each sample can be one of 65,536 values.
- Stereo (2 channels) = separate data for left and right audio.

File size calculation:

44,100 x 16 x 2 = 1,411,200 bits per second

Over 180 seconds that is 1,411,200 x 180 = 254,016,000 bits, or about 31.75 MB. So a
3-minute song in uncompressed CD quality is about 32 MB.
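
The same calculation as a small sketch (plain Python, following the figures above):

    sample_rate = 44_100   # samples per second
    bit_depth = 16         # bits per sample
    channels = 2           # stereo
    duration_s = 3 * 60    # a 3-minute song

    bits_per_second = sample_rate * bit_depth * channels   # 1,411,200
    total_bytes = bits_per_second * duration_s / 8
    print(f'{total_bytes / 1_000_000:.1f} MB')             # about 31.8 MB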
