www.rejinpaul.com
EC6501
DIGITAL COMMUNICATION
Get useful study materials from www.rejinpaul.com
OBJECTIVES:
To know the principles of sampling &
quantization
To study the various waveform coding
schemes
To learn the various baseband transmission
schemes
To understand the various Band pass signaling
schemes
To know the fundamentals of channel coding
SYLLABUS
UNIT I SAMPLING & QUANTIZATION 9
Low pass sampling - Aliasing - Signal Reconstruction - Quantization - Uniform & non-uniform quantization - Quantization noise - Logarithmic Companding of speech signal - PCM - TDM
UNIT II WAVEFORM CODING 9
Prediction filtering and DPCM - Delta Modulation - ADPCM & ADM principles-Linear Predictive
Coding
UNIT III BASEBAND TRANSMISSION 9
Properties of Line codes - Power Spectral Density of Unipolar / Polar RZ & NRZ - Bipolar NRZ - Manchester - ISI - Nyquist criterion for distortionless transmission - Pulse shaping - Correlative
coding - M-ary schemes - Eye pattern - Equalization
UNIT IV DIGITAL MODULATION SCHEME 9
Geometric Representation of signals - Generation, detection, PSD & BER of Coherent BPSK,
BFSK & QPSK - QAM - Carrier Synchronization - structure of Non-coherent Receivers - Principle
of DPSK.
UNIT V ERROR CONTROL CODING 9
Channel coding theorem - Linear Block codes - Hamming codes - Cyclic codes - Convolutional
codes - Viterbi Decoder
TOTAL: 45 PERIODS
OUTCOMES
Upon completion of the course, students will be
able to
Design PCM systems
Design and implement base band transmission
schemes
Design and implement band pass signaling
schemes
Analyze the spectral characteristics of band
pass signaling schemes and their noise
performance
Design error control coding schemes
EC6501
DIGITAL COMMUNICATION
UNIT - 1
INTRODUCTION
UNIT I
SAMPLING & QUANTIZATION (9)
Low pass sampling
Aliasing
Signal Reconstruction
Quantization
Uniform & non-uniform quantization
Quantization Noise
Logarithmic Companding of speech signal
PCM
TDM
Digital communication system (block diagram, flattened from the figure)
Transmitter chain: Input signal (analog/digital) → Low-pass filter → Sampler → Quantizer → Source encoder → Channel encoder → Multiplexer → Line encoder → Pulse-shaping filters → Modulator (carrier) → To channel
Receiver chain: From channel → Receiver filter → Demodulator (carrier ref.) → Detector → Demultiplexer → Channel decoder → Digital-to-analog converter → Signal at the user end
Key Questions
How can a continuous waveform be
converted into discrete samples?
How can discrete samples be converted back
into a continuous form?
Low Pass Sampling
Sampling (in time): measure the amplitude at regular intervals
How many times should we sample?
Nyquist Theorem
For lossless digitization, the sampling rate
should be at least twice the maximum
frequency of the signal to be sampled.
In mathematical terms:
fs ≥ 2·fm
where fs is the sampling frequency and fm is the
maximum frequency in the signal
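As a quick sanity check of the theorem, the short Python sketch below (illustrative frequencies only) shows that a 7 kHz tone sampled at 10 kHz produces exactly the same samples as a 3 kHz tone — because fs < 2·fm, the 7 kHz component aliases to |fs − f| = 3 kHz:

```python
import math

fs = 10_000                       # sampling rate (Hz)
f_true, f_alias = 7_000, 3_000    # 7 kHz > fs/2, so it aliases to |fs - 7000| = 3 kHz

# Sample both tones at the same instants t = n/fs
x_true  = [math.cos(2 * math.pi * f_true  * n / fs) for n in range(8)]
x_alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(8)]

# The two sample sequences are numerically indistinguishable -> aliasing
max_diff = max(abs(a - b) for a, b in zip(x_true, x_alias))
```

Since the sample sequences are identical, no reconstruction filter can tell the two tones apart.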
Limited Sampling
But what if one cannot sample fast
enough?
Limited Sampling
Limit the signal bandwidth to half of the
sampling frequency
A low-pass filter removes the higher frequencies
E.g., if the maximum sampling frequency is 22 kHz,
the signal must be low-pass filtered down to
11 kHz
Aliasing effect
(Figure: signal spectrum showing the LP filter response, the Nyquist rate, and the aliasing region - not reproduced)
Three different sampling methods
The practical sampling methods are Natural Sampling
and Flat-top Sampling
Natural Sampling
Pulse-Amplitude Modulation
Pulse-Amplitude Modulation (PAM)
The amplitude of regularly spaced pulses is
varied in proportion to the corresponding sample
values of a continuous message signal.
Two operations are involved in the generation of the
PAM signal:
Instantaneous sampling of the message signal m(t)
every Ts seconds
Lengthening the duration of each sample, so that it
occupies some finite duration T
Fig. 5
(Figs. 5-7 not reproduced)
The advantages offered by digital pulse modulation
Performance
Digital pulse modulation permits the use of regenerative repeaters
which, when placed along the transmission path at short enough distances,
can practically eliminate the degrading effects of channel noise and
signal distortion.
Ruggedness
A digital communication system can be designed to withstand the
effects of channel noise and signal distortion
Reliability
Can be made highly reliable by exploiting powerful error-control
coding techniques.
Security
Can be made highly secure by exploiting powerful encryption
algorithms
Efficiency
Inherently more efficient than analog communication system in the
tradeoff between transmission bandwidth and signal-to-noise ratio
System integration
To integrate digitized analog signals with digital computer data
Quantization Process
Amplitude quantization
The process of transforming the sample amplitude m(nTs) of a
baseband signal m(t) at time t = nTs into a discrete amplitude
v(nTs) taken from a finite set of possible levels.
I_k : {m_k < m ≤ m_(k+1)},  k = 1, 2, ..., L    (17)    (Fig. 9)
Representation level (or Reconstruction level)
The amplitudes v_k, k = 1, 2, ..., L
Quantum (or step size)
The spacing between two adjacent representation levels
v = g(m)    (18)    (Fig. 10)
(Figs. 9 and 10 not reproduced)
Two types of quantization are
a) Mid-tread
b) Mid-rise
Linear Quantization
Applicable when the signal is in a
finite range (fmin, fmax)
The entire data range is divided
into L equal intervals of length Q
(known as quantization interval or
quantization step-size)
Q = (f_max - f_min) / L
Interval i is mapped to the middle value of this
interval.
We store/send only the index of the
quantized value.
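The rule above can be sketched in Python; the signal range and level count here are illustrative assumptions, not values from the slides:

```python
f_min, f_max = -1.0, 1.0   # assumed signal range
L = 8                      # number of levels (3 bits)
Q = (f_max - f_min) / L    # quantization step size

def quantize_index(x):
    # clamp to the range, then map to an interval index 0..L-1
    i = int((min(max(x, f_min), f_max) - f_min) / Q)
    return min(i, L - 1)   # f_max itself falls in the top interval

def reconstruct(i):
    # midpoint of interval i
    return f_min + (i + 0.5) * Q

x = 0.3
i = quantize_index(x)      # only this index is stored/sent
xq = reconstruct(i)        # error never exceeds Q/2
```

Because each interval maps to its midpoint, the reconstruction error is at most half a step.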
Signal Range is Symmetric
Quantization Noise
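For a uniform quantizer with step size Q, the quantization-noise power is Q²/12, which for an N-bit quantizer driven by a full-scale sine gives the familiar SQNR ≈ 6.02·N + 1.76 dB. A quick numeric check (assumed N = 8, amplitude 1):

```python
import math

N = 8                        # bits per sample
Q = 2.0 / 2**N               # step size over the range [-1, 1]
noise_power = Q**2 / 12      # variance of quantization noise
signal_power = 0.5           # power of a full-scale sine of amplitude 1

sqnr_db = 10 * math.log10(signal_power / noise_power)
rule_of_thumb = 6.02 * N + 1.76   # should agree closely
```

Each extra bit halves Q and therefore buys about 6 dB of SQNR.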
Non-Uniform Quantization
Many signals such as speech have a nonuniform distribution.
The amplitude is more likely to be close to zero than to be at higher levels.
Nonuniform quantizers have unequally spaced levels
The spacing can be chosen to optimize the SNR for a particular type of signal.
(Figure: example of a nonuniform 3-bit quantizer - output sample XQ vs. input sample X - not reproduced)
Non-Linear Quantization
The quantizing intervals are not of equal size
Small quantizing intervals are allocated to small
signal values (samples) and large quantization
intervals to large samples so that the signal-to-quantization distortion ratio is nearly independent of
the signal level
S/N ratios for weak signals are much better but are
slightly less for the stronger signals
Companding is used to quantize signals
Function representation
Uniform and Non-uniform Quantization
Companding
Formed from the words compressing and
expanding.
A PCM compression technique where analogue
signal values are rounded on a non-linear scale.
The data is compressed before being sent and then
expanded at the receiving end using the same
non-linear scale.
Companding reduces the noise and crosstalk
levels at the receiver.
u-LAW and A-LAW definitions
A-law and μ-law are companding schemes used in
telephone networks to get more dynamic range into
the 8-bit samples than linear coding allows.
Typically 12..14 bit samples (linear scale) sampled
at 8 kHz sample are companded to 8 bit
(logarithmic scale) for transmission over 64 kbit/s
data channel.
In the receiving end the data is then converted
back to linear scale (12..14 bit) and played back.
Compressor
A particular form of compression law: the μ-law
|v| = log(1 + μ|m|) / log(1 + μ)    (5.23)
d|m|/d|v| = [log(1 + μ) / μ] (1 + μ|m|)    (5.24)
The μ-law is neither strictly linear nor strictly logarithmic.
A-law:
|v| = A|m| / (1 + log A),               0 ≤ |m| ≤ 1/A    (5.25)
    = (1 + log(A|m|)) / (1 + log A),    1/A ≤ |m| ≤ 1
d|m|/d|v| = (1 + log A) / A,            0 ≤ |m| ≤ 1/A    (5.26)
          = (1 + log A) |m|,            1/A ≤ |m| ≤ 1
Fig. 5.11
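A minimal Python sketch of the μ-law (equation 5.23) and its inverse, using μ = 255, the value standardized for North American telephony:

```python
import math

MU = 255  # standard value in North American / Japanese telephony

def mu_compress(m):
    # |v| = log(1 + mu*|m|) / log(1 + mu), sign preserved; assumes |m| <= 1
    return math.copysign(math.log1p(MU * abs(m)) / math.log1p(MU), m)

def mu_expand(v):
    # exact inverse of the compression law
    return math.copysign((math.exp(abs(v) * math.log1p(MU)) - 1) / MU, v)

m = 0.01                 # a small amplitude...
v = mu_compress(m)       # ...is boosted toward mid-range before uniform quantization
m_back = mu_expand(v)    # expander restores the original value
```

Note how a sample at 1% of full scale is mapped to roughly 23% of full scale, so the subsequent uniform quantizer spends far more of its levels on the small amplitudes that dominate speech.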
(Fig. 11 not reproduced)
Example: μ-law Companding
(Figure: x[n] = speech/song signal, the companded signal y[n] = C(x[n]), and close views of segments of x[n] and y[n] - not reproduced)
A-law and μ-law Companding
These two are standard companding methods.
μ-law is used in North America and Japan.
A-law is used elsewhere to compress digital telephone signals.
Quantization - why do we need such a classification?
Comparison: Uniform vs. Non-Uniform Usage
Speech signals do not require high quantization resolution for
high amplitudes (50% vs. 15%).
It is wasteful to use a uniform quantizer.
The goal is to increase the SQNR: more levels for low amplitudes, fewer levels for
high ones.
Maybe use a non-uniform quantizer?
Quantization
More About Non-Uniform Quantizers (Companding)
Non-uniform quantizer = use more levels where you need them.
The human ear follows a logarithmic process in which high-amplitude sounds do not
require the same resolution as low-amplitude sounds.
One way to achieve non-uniform quantization is to use what is called Companding:
Companding = COMpressing + exPANDing
Compressor function → Uniform quantization → Expander function (the inverse of the compressor)
Pulse-Code Modulation
PCM (Pulse-Code Modulation)
A message signal is represented by a sequence of coded pulses, which is
accomplished by representing the signal in discrete form in both time and
amplitude
The basic operation
Transmitter : sampling, quantization, encoding
Receiver : regeneration, decoding, reconstruction
Operation in the Transmitter
1. Sampling
1. The incoming message signal is sampled with a train of rectangular pulses
2. The reduction of the continuously varying message signal to a limited
number of discrete values per second
2. Nonuniform Quantization
1. The step size increases as the separation from the origin of the
input-output amplitude characteristic increases; the large end-steps of the
quantizer can take care of possible excursions of the voice signal into the
large amplitude ranges that occur relatively infrequently.
www.rejinpaul.com
3. Encoding
1. To translate the discrete set of sample values to a
more appropriate form of signal
Fig. 11
2.A binary code
The maximum advantage over the effects of noise in a
transmission medium is obtained by using a binary
code, because a binary symbol withstands a relatively
high level of noise.
The binary code is easy to generate and regenerate
Table. 2
Regeneration Along the Transmission Path
The ability to control the effects of distortion and noise produced by
transmitting a PCM signal over a channel
Equalizer
Shapes the received pulses so as to compensate for the effects of
amplitude and phase distortions produced by the transmission
Timing circuitry
Provides a periodic pulse train, derived from the received pulses
Renewed sampling of the equalized pulses
Fig. 13
Decision-making device
The sample so extracted is compared to a predetermined threshold
ideally, except for delay, the regenerated signal is exactly the same as the
information-bearing signal
1. The unavoidable presence of channel noise and interference causes
the repeater to make wrong decisions occasionally, thereby
introducing bit errors into the regenerated signal
2. If the spacing between received pulses deviates from its assigned
value, a jitter is introduced into the regenerated pulse position,
thereby causing distortion.
(Fig. 13 not reproduced)
Operations in the Receivers
1. Decoding and expanding
1.Decoding : regenerating a pulse whose amplitude is
the linear sum of all the pulses in the code word
2.Expander : a subsystem in the receiver with a
characteristic complementary to the compressor
1. The combination of a compressor and an expander is a
compander
2. Reconstruction
1.Recover the message signal : passing the expander
output through a low-pass reconstruction filter
Categories of multiplexing
Time Division Multiplexing (TDM)
TDM is a technique used for
transmitting several message signals
over a single communication channel
by dividing the time frame into slots,
one slot for each message signal
Time Division Multiplexing
Entire spectrum is allocated for a channel (user) for a limited time.
The user must not transmit until its
next turn.
Used in 2nd-generation cellular systems.
(Figure: channels k1...k6 each occupying the full frequency band in successive time slots - not reproduced)
Advantages:
Only one carrier in the medium at any given time
High throughput even for many users
Common TX component design, only one power amplifier
Flexible allocation of resources (multiple time slots).
Time Division Multiplexing
Disadvantages
Synchronization
Requires the terminal to support a much higher data
rate than the user information rate; therefore
possible problems with intersymbol interference.
Application: GSM
GSM handsets transmit data at a rate of 270 kbit/s in
a 200 kHz channel using GMSK modulation.
Each frequency channel is assigned 8 users, each
having a basic data rate of around 13 kbit/s
Time Division Multiplexing
At the Transmitter
Simultaneous transmission of several signals on a time-sharing basis.
Each signal occupies its own distinct time slot, using all frequencies, for
the duration of the transmission.
Slots may be permanently assigned or allocated on demand.
At the Receiver
Decommutator (sampler) has to be synchronized with the incoming
waveform - Frame Synchronization
Low-pass filter
ISI - due to poor channel filtering
Feedthrough of one channel's signal into another channel - Crosstalk
Applications of TDM: Digital Telephony, Data communications, Satellite
Access, Cellular radio.
Time Division Multiplexing
Conceptual diagram of multiplexing-demultiplexing.
PAM TDM System
TDM-PAM: Transmitter
TDM-PAM : Receiver
(Figures: samples of g1(t) and g2(t) taken at 0, Ts, 2Ts, ...; the interleaved multiplexing of the two sample streams; and the TDM-PAM slot pattern for 4 signals - not reproduced)
Problem
Two low-pass signals of equal bandwidth
are sampled and time division
multiplexed using PAM. The TDM signal is
passed through a Low-pass filter & then
transmitted over a channel with a
bandwidth of 10 kHz.
Continued.
Problem (continued)
a) What is maximum Sampling rate for each
Channel?
b) What is the maximum frequency content
allowable for each signal?
Problem: Solution
Channel bandwidth = 10 kHz
Number of samples that can be transmitted
through the channel = 20,000 per second
Maximum sampling rate for each channel =
10,000 samples/sec
Maximum frequency for each signal = 5 kHz
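The arithmetic can be checked directly, using the fact that an ideal channel of bandwidth B carries at most 2B independent pulses per second:

```python
channel_bw = 10_000                         # Hz
total_rate = 2 * channel_bw                 # max pulses/s through the channel
n_channels = 2                              # two time-multiplexed signals
fs_per_channel = total_rate // n_channels   # samples/s available to each signal
f_max = fs_per_channel // 2                 # Nyquist limit per signal
```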
End of Unit-1
Unit II
Waveform Coding
Syllabus
Prediction filtering and DPCM - Delta
Modulation - ADPCM & ADM principles-Linear
Predictive Coding
Introduction to Waveform Coding
Waveform coding is approximately lossless coding:
it treats the speech signal like any ordinary data.
The resulting signal is as close as possible to the
original one.
Codecs using these techniques generally have low
complexity and give high quality at rates around 16 kbps.
The simplest form of waveform coding is Pulse
Code Modulation (PCM).
Pulse Code Modulation (PCM)
It involves sampling and quantizing the input
waveform.
PCM consists of three steps to digitize an
analog signal:
1. Sampling
2. Quantization
3. Binary encoding
Prediction Filtering
Linear prediction is a mathematical operation
where future values of a discrete-time signal
are estimated as a linear function of previous
samples.
In digital signal processing, linear prediction is
often called linear predictive coding (LPC).
Linear prediction can be viewed as a part of
mathematical modelling or optimization.
The Prediction Model
The most common representation is
x̂(n) = Σ_{i=1..p} a_i x(n-i)
where x̂(n) is the predicted signal value, x(n-i)
the previous observed values, and a_i the
predictor coefficients.
The error generated by this estimate is
e(n) = x(n) - x̂(n)
where x(n) is the true value.
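A tiny numeric illustration of the predictor and its error. Both the coefficients a = (1.5, -0.7) and the samples are made up for illustration, not derived from real speech:

```python
a = [1.5, -0.7]   # hypothetical predictor coefficients, p = 2

def predict(x, n):
    # x_hat(n) = sum_i a_i * x(n - i)
    return sum(a[i] * x[n - 1 - i] for i in range(len(a)))

x = [1.0, 1.3, 1.5, 1.55, 1.5]   # made-up, slowly varying samples
errors = [x[n] - predict(x, n) for n in range(len(a), len(x))]
# e(n) = x(n) - x_hat(n): much smaller than the samples themselves
```

For a correlated signal the errors carry far less energy than the samples, which is exactly what DPCM exploits.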
Differential pulse-code modulation
(DPCM)
Differential pulse-code modulation (DPCM) is
a signal encoder that uses the baseline of
pulse-code modulation (PCM) but adds some
functionalities based on the prediction of the
samples of the signal.
The input can be an analog signal or a digital
signal.
Block Diagram of DPCM
DPCM code words represent differences between samples
unlike PCM where code words represented a sample value.
The basic concept of DPCM - coding a difference - is based on
the fact that most source signals show significant
correlation between successive samples, so the encoding exploits
redundancy in sample values, which implies a lower bit rate.
Realization of basic concept (described above) is based on a
technique in which we have to predict current sample value
based upon previous samples (or sample) and we have to
encode the difference between actual value of sample and
predicted value.
Principle of DPCM
Delta Modulation
A Delta modulation (DM or Δ-modulation) is an analog-to-digital
and digital-to-analog signal conversion technique used for
transmission of voice information where quality is not of primary
importance.
To achieve high signal-to-noise ratio, delta modulation must use
oversampling techniques, that is, the analog signal is sampled at a
rate several times higher than the Nyquist rate.
Derived forms of delta modulation are continuously variable slope
delta modulation, delta-sigma modulation, and differential
modulation.
Differential pulse-code modulation is the super-set of DM.
Features
the analog signal is approximated with a series of
segments
each segment of the approximated signal is compared to
the original analog wave to determine the increase or
decrease in relative amplitude
the decision process for establishing the state of
successive bits is determined by this comparison
only the change of information is sent, that is, only an
increase or decrease of the signal amplitude from the
previous sample is sent whereas a no-change condition
causes the modulated signal to remain at the same 0 or 1
state of the previous sample.
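The bullet points above amount to a one-bit encoder. A minimal sketch, assuming a fixed step size:

```python
def delta_modulate(samples, step):
    # One bit per sample: +1 if the input is above the staircase
    # approximation, else -1; the staircase then moves by +/- step.
    approx, bits, staircase = 0.0, [], []
    for s in samples:
        bit = 1 if s >= approx else -1
        bits.append(bit)
        approx += bit * step
        staircase.append(approx)
    return bits, staircase

# A slow ramp is tracked by the staircase to within one step
x = [0.1 * n for n in range(10)]
bits, staircase = delta_modulate(x, step=0.1)
```

With a slope steeper than step/Ts the staircase falls behind (slope overload); with a flat input the bits simply alternate (granular noise) — the two failure modes that motivate adaptive step sizes.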
Principle of delta modulation
Differential Pulse Code Modulation
(DPCM)
What if we look at sample differences, not the
samples themselves?
d_t = x_t - x_(t-1)
Differences tend to be smaller
Use 4 bits instead of 12, maybe?
Differential Pulse Code Modulation
(DPCM)
Changes between adjacent samples are small
Send the first value, then the relative changes
the value uses full bits, the changes use fewer bits
E.g., 220, 218, 221, 219, 220, 221, 222, 218, ... (all values between 218
and 222)
Difference sequence sent: 220, -2, +3, -2, +1, +1, +1, -4, ...
Result: encoding the original 0..255 sequence needs 8 bits per sample;
difference coding needs only 3 bits
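The example above in code, using the sample values from the slide:

```python
def dpcm_encode(samples):
    # first value in full, then successive differences
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def dpcm_decode(stream):
    # running sum rebuilds the original samples exactly
    out = [stream[0]]
    for d in stream[1:]:
        out.append(out[-1] + d)
    return out

x = [220, 218, 221, 219, 220, 221, 222, 218]
encoded = dpcm_encode(x)       # [220, -2, 3, -2, 1, 1, 1, -4]
decoded = dpcm_decode(encoded)
```

With unquantized differences the scheme is lossless; practical DPCM also quantizes the differences, trading a little distortion for bit rate.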
Adaptive Differential Pulse Code
Modulation (ADPCM)
Adaptive - similar to DPCM, but adjusts the width of the
quantization steps
Encode the difference in 4 bits, but vary the mapping of bits to
difference dynamically
If rapid change, use large differences
If slow change, use small differences
Adaptive Delta Modulation (ADM)
A large step size is required when sampling those parts
of the input waveform with a steep slope. But a large
step size worsens the granularity of the sampled
signal when the waveform being sampled is changing
slowly.
A small step size is preferred in regions where the
message has a small slope. This suggests the
need for a controllable step size, the control
being sensitive to the slope of the sampled signal.
Hence ADM is preferred.
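One common step-size control (a sketch of the idea, not the only rule used in practice) grows the step while successive output bits agree — a sign of steep slope — and shrinks it when they alternate:

```python
def adm_encode(samples, step=0.1, k=1.5):
    # Adaptive delta modulation sketch: step *= k on two equal bits
    # (steep slope), step /= k when the bit flips (granular region).
    approx, prev_bit, bits = 0.0, 0, []
    for s in samples:
        bit = 1 if s >= approx else -1
        step = step * k if bit == prev_bit else step / k
        approx += bit * step
        bits.append(bit)
        prev_bit = bit
    return bits

bits = adm_encode([0.0, 0.5, 1.5, 3.0, 3.0, 3.0])
```

The step multiplier k and initial step here are illustrative; real designs bound the step between a minimum and maximum.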
Adaptive Delta Modulation
(Figure: ADM modulator built around a VCA - not reproduced)
Linear Predictive Coding (LPC)
Basic Concepts of LPC
It is a parametric de-convolution algorithm.
x(n) is generated by an unknown sequence e(n)
exciting an unknown system V(Z), which is assumed to
be a linear time-invariant system:
V(Z) = G(Z)/A(Z),
E(Z)·V(Z) = X(Z)
G(Z) = Σ_{j=0..Q} g_j Z^(-j),   A(Z) = Σ_{i=0..P} a_i Z^(-i)
where a_i and g_j are real parameters and a_0 = 1.
If an algorithm could estimate all these parameters,
then V(Z) could be found, and E(Z) could be found
also. This finishes the de-convolution.
There are some limitations on the model:
(1) If G(Z) = 1 then V(Z) = 1/A(Z); this is the so-called
all-pole model, and the parametric de-convolution
becomes a coefficient (a_i) estimation problem.
(2) The excitation sequence is of the form G·e(n), where e(n) is a
periodic pulse train or a Gaussian white-noise sequence.
For the first case e(n) = Σ_r δ(n - r·N_p), and for the second
case R(k) = E[e(n)e(n+k)] = δ(k) and the values of
e(n) follow a normal distribution. G is a non-negative real number controlling the amplitude.
The procedure is: x(n) → V(Z) (P, a_i) → e(n), G → type of e(n)
Suppose x(n) and the type of e(n) are known; what is
the optimal estimate of P and a_i, e(n) and G? It is
found by the least-mean-square (LMS) criterion.
Let x̂(n) be the predicted value of x(n), the
linear sum of the previous P known values of x:
x̂(n) = Σ_{i=1..P} a_i x(n-i)
The prediction error
ε(n) = x(n) - x̂(n) = x(n) - Σ_{i=1..P} a_i x(n-i)
is a stochastic sequence. Its variance can
be used to evaluate the quality of the prediction.
σ² = Σ_n ε²(n)   (time average replaces the ensemble mean)
It can be proved that if x(n) is generated by the
all-pole model x(n) = -Σ_{i=1..P} a_i x(n-i) + G·e(n), then with the
optimal order P and optimal coefficients a_i, σ² is minimal.
σ² = Σ_n [x(n) - Σ_{i=1..P} a_i x(n-i)]²
   = Σ_n x²(n) - 2 Σ_{k=1..P} a_k {Σ_n x(n-k)x(n)} +
     Σ_{k=1..P} Σ_{i=1..P} a_k a_i {Σ_n x(n-k)x(n-i)}
By setting ∂(σ²)/∂a_k = 0 we get
-2 {Σ_n x(n-k)x(n)} + 2 Σ_{i=1..P} a_i {Σ_n x(n-k)x(n-i)} = 0
or  Σ_{i=1..P} a_i φ(k,i) = φ(k,0)
where φ(k,i) = Σ_n x(n-k)x(n-i), 1 ≤ i ≤ P and 1 ≤ k ≤ P
Σ_{i=1..P} a_i φ(k,i) = φ(k,0), k = 1~P, are called the LPC
canonical equations. There are several different
algorithms for their solution.
[σ²]_min = Σ_{k=0..P} a_k φ(k,0) with a_0 = 1
So if we have x(n), φ(k,i) can be calculated,
the equations can be solved to get the a_i, and
[σ²]_min can also be obtained. For short-time
speech signals, different lower and
upper limits of the summation give
different types of equations. We will
discuss these different algorithms later.
Auto-Correlation Solution of LPC
Suppose the windowed signal is x_w(n):
φ(k,i) = Σ_n x_w(n-k) x_w(n-i)
If the window length is N, the summation
range will be 0 ~ N+P-1.
φ(k,i) = Σ_m x_w(m + (i-k)) x_w(m) = R(i-k), letting n - i = m
φ(k,i) = R(i-k) = R(k-i) = R(|i-k|) ≤ R(0)
The equations become Σ_{i=1..P} a_i R(|i-k|) = R(k),
1 ≤ k ≤ P
These are Toeplitz equations and have a highly
efficient solution.
In matrix form:
| R(0)    R(1)    ...  R(P-1) |   | a1 |   | R(1) |
| R(1)    R(0)    ...  R(P-2) |   | a2 |   | R(2) |
| ...                         | · | ...| = | ...  |
| R(P-1)  R(P-2)  ...  R(0)   |   | aP |   | R(P) |
6.2.1 Durbin Algorithm
1. E(0) = R(0)
2. K_i = [ R(i) - Σ_{j=1..i-1} a_j^(i-1) R(i-j) ] / E(i-1),  1 ≤ i ≤ p
3. a_i^(i) = K_i
4. a_j^(i) = a_j^(i-1) - K_i a_{i-j}^(i-1),  1 ≤ j ≤ i-1
5. E(i) = (1 - K_i²) E(i-1)
Final solution: a_j = a_j^(p),  1 ≤ j ≤ p
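The Durbin (Levinson-Durbin) recursion translates almost line-for-line into Python. As an assumed test input, the autocorrelation of an AR(1) process x(n) = 0.9·x(n-1) + e(n) is R(i) ∝ 0.9^|i|, so the algorithm should recover a1 = 0.9, a2 = 0:

```python
def levinson_durbin(R, p):
    # Solves the Toeplitz system sum_i a_i R(|i-k|) = R(k), k = 1..p.
    a = [0.0] * (p + 1)     # a[j] holds a_j^(i); a[0] unused
    E = R[0]                # E(0) = R(0)
    for i in range(1, p + 1):
        k = (R[i] - sum(a[j] * R[i - j] for j in range(1, i))) / E
        new_a = a[:]
        new_a[i] = k                         # a_i^(i) = K_i
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]   # a_j^(i) = a_j^(i-1) - K_i a_{i-j}^(i-1)
        a = new_a
        E *= 1 - k * k                       # E(i) = (1 - K_i^2) E(i-1)
    return a[1:], E

R = [0.9 ** i for i in range(3)]   # R(0), R(1), R(2) of an AR(1) process
coeffs, err = levinson_durbin(R, 2)
```

The recursion costs O(p²) operations, versus O(p³) for solving the same system by general elimination.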
For iteration i we get a set of
coefficients for the predictor of i-th
order and the minimal prediction-error
energy E(i). We can also get it from {R(k)}:
E(i) = R(0) - Σ_{k=1..i} a_k^(i) R(k),  1 ≤ i ≤ p
K_i is the reflection coefficient: -1 ≤ K_i ≤ 1.
This is a sufficient and necessary condition
for a stable H(z) during the iteration.
6.2.2 Schur Algorithm
First an auxiliary sequence q_i(j) is defined. Its properties
are:
(1) q_i(j) = R(j) when i = 0
(2) q_i(j) = 0 when i > 0, j = 1~p
(3) q_p(0) = E(p) is the prediction-error energy
(4) |q_i(j)| ≤ R(0), with equality only if i = j = 0
The algorithm is as follows:
1. r(j) = R(j)/R(0), r(-j) = r(j), j = 0~p
2. a_0 = 1, E(0) = 1
3. q_0(j) = r(j),  -p < j < p
4. i = 1, k_1 = r(1)
5. For i-p ≤ j ≤ p:
   q_i(j) = q_{i-1}(j) + k_i · q_{i-1}(i-j)
   k_i = q_{i-1}(j) / q_i(0)
   a_j^(i) = q_{i-1}(i-j)
   E(i) = E(i-1)(1 - k_i²)
6. If i < p, go back to step 5
7. Stop
If we only need to calculate the k_i, then only the first two
expressions in step 5 are needed. This makes the algorithm
suitable for fixed-point calculation (r ≤ 1) or hardware
implementation.
Covariance Solution of LPC
If no windowing is used, but the range of
summation is limited, we get:
σ² = Σ_{n=0..N-1} ε²(n),  n = 0~N-1
φ(k,i) = Σ_{n=0..N-1} x(n-k) x(n-i),  k = 1~p, i = 0~p
       = Σ_{m=-i..N-i-1} x(m + (i-k)) x(m), letting n - i = m, so m = -i ~ N-i-1
The equations are:
| φ(1,1)  φ(1,2)  ...  φ(1,p) |   | a1 |   | φ(1,0) |
| φ(2,1)  φ(2,2)  ...  φ(2,p) |   | a2 |   | φ(2,0) |
| ...                         | · | ...| = | ...    |
| φ(p,1)  φ(p,2)  ...  φ(p,p) |   | ap |   | φ(p,0) |
Covariance Solution of LPC
The matrix is a covariance matrix; it is positive
definite but not Toeplitz. There is no highly efficient
algorithm to solve it; only the commonly used LU decomposition
can be applied. Its advantage is that it does not have a large
prediction error at the two ends of the window. So when
N ≈ P the estimated parameters are more accurate
than with the auto-correlation method. But in speech
processing very often N >> P, so the advantage is not
obvious.
LPC Parameters and Their Relationships
(1) Reflection Coefficients
Also known as PARCOR coefficients.
If the {a_j} are known, the k_i can be found as follows:
a_j^(p) = a_j,  1 ≤ j ≤ p
k_i = a_i^(i)
a_j^(i-1) = (a_j^(i) + a_i^(i) a_{i-j}^(i)) / (1 - k_i²),  1 ≤ j ≤ i-1
The inverse process:
a_i^(i) = k_i
a_j^(i) = a_j^(i-1) - k_i a_{i-j}^(i-1); at last a_j = a_j^(p), 1 ≤ j ≤ p
-1 ≤ k_i ≤ 1 is the sufficient and necessary condition
for a stable system function.
(2) Coefficients of Logarithmic Area Ratio
g_i = log(A_{i+1}/A_i) = log[(1 - k_i)/(1 + k_i)],  i = 1~p
where A_i is the cross-section area of the i-th
segment of the lossless tube.
k_i = (1 - exp(g_i)) / (1 + exp(g_i)),  i = 1~p
(3) Cepstrum Coefficients
c_n = a_n + Σ_{k=1..n-1} (k/n) c_k a_{n-k},  1 ≤ n ≤ p+1
c_n = Σ_{k=n-p..n-1} (k/n) c_k a_{n-k},  n > p+1
(4) The Roots of the Predictor
A(z) = 1 - Σ_{k=1..p} a_k z^(-k) = Π_{k=1..p} (1 - z_k z^(-1)) = 0
Transfer to the S-plane: z_i = exp(s_i T)
Suppose s_i = σ_i + jω_i and z_i = z_ir + j·z_ii; then
ω_i = arctan(z_ii / z_ir) / T  and  σ_i = log(z_ii² + z_ir²) / (2T)
(5) The impulse response of the all-pole system
h(n) = Σ_{k=1..p} a_k h(n-k) + δ(n),  n ≥ 0
     = 0,  n < 0
(6) Autocorrelation Coefficients of the Impulse Response of the All-Pole System
  H(z) = S(z)/U(z) = G / (1 - Σ_{k=1}^{p} a_k z^(-k))
The autocorrelation of h(n) is:
  R(i) = Σ_{n=0}^{∞} h(n) h(n-i) = R(-i)
It can be proved that:
  R(i) = Σ_{k=1}^{p} a_k R(|i-k|),  1 ≤ i ≤ p
and
  R(0) = Σ_{k=1}^{p} a_k R(k) + G²
Hence the mappings {a_k} → {R(i)} and {R(i)} → {a_k} are equivalent.
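These relations can be verified numerically: generate h(n) by the difference equation, truncate it (a stable model decays geometrically), and check the two identities. A hedged Python sketch with an arbitrarily chosen stable model:

```python
def allpole_impulse_response(a, G=1.0, n=500):
    """h(n) = sum_k a_k h(n-k) + G*delta(n), truncated to n samples."""
    h = []
    for t in range(n):
        v = G if t == 0 else 0.0
        for k, ak in enumerate(a, start=1):
            if t - k >= 0:
                v += ak * h[t - k]
        h.append(v)
    return h

def autocorr(h, i):
    """R(i) = sum_n h(n) h(n-i) over the truncated response."""
    return sum(h[n] * h[n - i] for n in range(i, len(h)))

# Example stable model (poles well inside the unit circle), p = 2:
a = [0.5, -0.2]
h = allpole_impulse_response(a)
R0, R1, R2 = autocorr(h, 0), autocorr(h, 1), autocorr(h, 2)
```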
(7) Autocorrelation Coefficients of the Impulse Response of the Prediction-Error Filter (Inverse Filter)
  A(z) = 1 - Σ_{k=1}^{p} a_k z^(-k)
Its impulse response is:
  a(n) = δ(n) - Σ_{k=1}^{p} a_k δ(n-k)
       = 1 for n = 0;  -a_n for 0 < n ≤ p;  0 otherwise
Its autocorrelation function is:
  R_a(i) = Σ_{k=0}^{p-i} a(k) a(k+i),  0 ≤ i ≤ p
(8) Line Spectrum Pair (LSP) or Line Spectrum Frequencies (LSF)
  A^(p)(z) = 1 - Σ_{k=1}^{p} a_k z^(-k)  (p even)
Define
  P(z) = A^(p)(z) + z^-(p+1) A^(p)(z^-1)
  Q(z) = A^(p)(z) - z^-(p+1) A^(p)(z^-1)
It can be proved that all roots of P(z) and Q(z) lie on the unit circle and alternate around it, provided the roots of A^(p)(z) are inside the unit circle.
Replace z with e^(jω):
  P(e^(jω)) = |A^(p)(e^(jω))| e^(jθ(ω)) [1 + e^(-j((p+1)ω + 2θ(ω)))]
  Q(e^(jω)) = |A^(p)(e^(jω))| e^(jθ(ω)) [1 + e^(-j((p+1)ω + 2θ(ω) + π))]
If the roots of A^(p)(z) are inside the unit circle, then as ω goes from 0 to π, θ(ω) starts at 0 and returns to 0, so the quantity (p+1)ω + 2θ(ω) sweeps from 0 to (p+1)π.
  P(e^(jω)) = 0 :  (p+1)ω + 2θ(ω) = kπ,  k = 1, 3, …, p+1
  Q(e^(jω)) = 0 :  (p+1)ω + 2θ(ω) = kπ,  k = 0, 2, …, p
The roots of P and Q: z_k = e^(jω_k) with (p+1)ω_k + 2θ(ω_k) = kπ, k = 0, 1, 2, …, p+1,
and ω_0 < ω_1 < ω_2 < … < ω_p < ω_{p+1}.
If a_1^(p) … a_p^(p) are known, the LSP frequencies can be found via A(z) → P(z), Q(z) → their roots → f_1, f_2, …, f_p.
If f_1 … f_p are known, the a_i^(p) can be recovered by reconstructing P(z) and Q(z) from the roots and forming A(z) = [P(z) + Q(z)] / 2 → a_1^(p) … a_p^(p).
Thank you
Unit 3
Baseband Transmission
Syllabus
Properties of Line codes - Power Spectral Density of Unipolar / Polar RZ & NRZ, Bipolar NRZ, Manchester - ISI - Nyquist criterion for distortionless transmission - Pulse shaping - Correlative coding - M-ary schemes - Eye pattern - Equalization
Baseband Transmission
The digital signal used in baseband
transmission occupies the entire bandwidth
of the network media to transmit a single data
signal.
Baseband communication is bidirectional,
allowing computers to both send and receive
data using a single cable.
Baseband Modulation
An information bearing-signal must conform to the limits of its channel
Generally modulation is a two-step process
baseband: shaping the spectrum of input bits to fit in a limited spectrum
passband: modulating the baseband signal to the system rf carrier
Most common baseband modulation is Pulse Amplitude Modulation
(PAM)
data amplitude modulates a sequence of time translates of basic pulse
PAM is a linear form of modulation: easy to equalize, BW is pulse BW
Typically, baseband data will modulate in-phase [cos] and quadrature [sin] data streams to the carrier passband
Special cases of modulated PAM include
phase shift keying (PSK)
quadrature amplitude modulation (QAM)
Need for Baseband Modulation
An analog signal has a finite bandwidth.
A digital stream or signal, with sharp transitions, has an infinite
bandwidth.
Due to the limited available system bandwidth, only the major portion of a digital signal's spectrum can be transmitted and restored. Even if there is no loss or noise in the communication system, the received signal will have distortion due to the limited channel bandwidth.
To avoid or reduce this signal distortion, we use baseband modulation techniques.
Line Codes
In telecommunication, a line code (also called digital baseband
modulation or digital baseband transmission method) is a code
chosen for use within a communications system for baseband
transmission purposes.
Line coding is often used for digital data transport.
Line coding consists of representing the digital signal to be
transported by an amplitude- and time-discrete signal that is
optimally tuned for the specific properties of the physical channel
(and of the receiving equipment).
The waveform pattern of voltage or current used to represent the 1s and 0s of digital data on a transmission link is called line encoding.
Common types of Line Codes
The common types of line encoding are
unipolar
polar
bipolar
Manchester encoding
Need for Line Codes
Properties of Line Codes
Unipolar coding
Unipolar encoding is a line code. A positive voltage represents a
binary 1, and zero volts indicates a binary 0. It is the simplest line
code, directly encoding the bitstream, and is analogous to on-off
keying in modulation.
Its drawbacks are that it is not self-clocking and it has a significant
DC component, which can be halved by using return-to-zero, where
the signal returns to zero in the middle of the bit period.
With a 50% duty cycle each rectangular pulse is only at a positive
voltage for half of the bit period.
This is ideal if one symbol is sent much more often than the other
and power considerations are necessary, and also makes the signal
self-clocking.
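The waveform properties above (the DC component in particular) are easy to see numerically. A minimal Python sketch, illustrative only, generating sampled unipolar and polar NRZ waveforms:

```python
def unipolar_nrz(bits, A=1.0, sps=8):
    """Unipolar NRZ: bit 1 -> +A for the whole bit period, bit 0 -> 0 V.
    sps = samples per bit."""
    return [A * b for b in bits for _ in range(sps)]

def polar_nrz(bits, A=1.0, sps=8):
    """Polar NRZ: bit 1 -> +A, bit 0 -> -A."""
    return [A if b else -A for b in bits for _ in range(sps)]
```

With equiprobable bits the mean (DC) level of unipolar NRZ is A/2, while polar NRZ is DC-free, which is why the normalized power of unipolar NRZ is the larger of the two.

```python
u = unipolar_nrz([1, 0, 1, 0])
p = polar_nrz([1, 0, 1, 0])
```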
NRZ (Non-Return-to-Zero) - Traditionally, a unipolar scheme was designed as a non-return-to-zero (NRZ) scheme, in which a positive voltage defines bit 1 and zero voltage defines bit 0.
It is called NRZ because the signal does not return to zero in the middle of the bit.
Compared with its polar counterpart, unipolar NRZ is costly: the normalized power (power required to send 1 bit per unit line resistance) is double that of polar NRZ.
For this reason, this scheme is not normally used in data communications today.
Return-to-zero
Return-to-zero (RZ) describes a line code used in
telecommunications signals in which the signal
drops (returns) to zero between each pulse.
This takes place even if a number of consecutive
0s or 1s occur in the signal.
The signal is self-clocking. This means that a
separate clock does not need to be sent
alongside the signal, but suffers from using twice
the bandwidth to achieve the same data-rate as
compared to non-return-to-zero format.
Polar RZ
BiPolar Signalling
Manchester Encoding
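Manchester encoding guarantees a transition in the middle of every bit (making the code self-clocking) and has zero DC content. A short Python sketch, assuming the IEEE 802.3 convention (bit 1 = low-then-high, bit 0 = high-then-low; the opposite convention also exists):

```python
def manchester(bits, A=1.0):
    """Manchester line code: each bit maps to two half-period levels
    with a guaranteed mid-bit transition.
    IEEE 802.3 convention assumed: 1 -> (-A, +A), 0 -> (+A, -A)."""
    out = []
    for b in bits:
        out += [-A, A] if b else [A, -A]
    return out
```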
Inter Symbol Interference (ISI)
In telecommunication, intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols.
This is an unwanted phenomenon, as the previous symbols have an effect similar to noise, making the communication less reliable.
ISI is usually caused by multipath propagation or the inherent non-linear frequency response of a channel, causing successive symbols to "blur" together.
Nyquist criterion for distortionless transmission
Band-limited transmission may be designed under one of the following two conditions:
(a) There is no ISI at the sampling instants (Nyquist criterion).
(b) A controlled amount of ISI is allowed (correlative coding).
Design of Band-limited Signals for Zero ISI - Nyquist Criterion
Recall that the output of the receiving filter, sampled at t = kT, is given by y(kT) = Σ_n a_n p((k-n)T) plus noise.
Thus, in the time domain, a sufficient condition on p(t) for it to be ISI-free is
  p(kT) = 1 for k = 0, and p(kT) = 0 for k ≠ 0.   (1)
Theorem (Nyquist): A necessary and sufficient condition for p(t) to satisfy (1) is that its Fourier transform P(f) satisfies
  Σ_m P(f + m/T) = T.
This is known as the Nyquist pulse-shaping criterion, or the Nyquist condition for zero ISI.
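The canonical pulse satisfying the criterion is the sinc pulse p(t) = sin(πt/T)/(πt/T): it equals 1 at t = 0 and 0 at every other sampling instant kT. A small Python sketch (illustrative) checking this property directly:

```python
import math

def sinc_pulse(t, T=1.0):
    """p(t) = sin(pi t / T) / (pi t / T): the ideal Nyquist (sinc) pulse."""
    x = math.pi * t / T
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

# Zero-ISI check: p(kT) should be 1 at k = 0 and (numerically) 0 elsewhere.
samples = [sinc_pulse(k * 1.0) for k in range(-5, 6)]
```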
Investigate possible pulses which satisfy the
Nyquist criterion
Disadvantages:
(a) An ideal LPF is not physically realizable.
(b) The corresponding pulse p(t) = sinc(t/T) has tails that decay only as 1/|t|, so the rate of convergence to zero is slow.
Hence, a small mistiming error in sampling the output of the matched filter at the demodulator results in an infinite series of ISI components.
Correlative Coding and Equalization
Correlative Coding
For zero ISI, the symbol rate R = 1/T < 2W, the
Nyquist rate.
We may relax the condition of zero ISI in order to
achieve R = 2W.
The schemes which allow a controlled amount
of ISI to achieve the symbol rate 2W are called
correlative coding or partial response signaling
schemes.
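The classic partial-response example is duobinary (class-1) signaling, where each transmitted level is the sum of two successive data levels. With precoding, the three-level received sequence can be decided symbol by symbol. A hedged Python sketch of this scheme (illustrative; precoder initial state assumed 0):

```python
def duobinary_encode(bits):
    """Duobinary (class-1 partial response) with precoding.
    Precoder: b_k = a_k XOR b_{k-1}; polar levels d_k = 2*b_k - 1;
    transmitted samples y_k = d_k + d_{k-1}, taking values in {-2, 0, +2}."""
    b_prev = 0                  # assumed precoder initial state
    d_prev = 2 * b_prev - 1
    y = []
    for a in bits:
        b = a ^ b_prev
        d = 2 * b - 1
        y.append(d + d_prev)
        b_prev, d_prev = b, d
    return y

def duobinary_decode(y):
    """Symbol-by-symbol decision: y_k == 0 -> bit 1, y_k == +/-2 -> bit 0.
    Thanks to the precoder, no error propagation occurs."""
    return [1 if v == 0 else 0 for v in y]
```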
2nd Nyquist Criterion
Values at the pulse edges are distortionless:
  p(t) = 0.5 when t = -T/2 or T/2;  p(t) = 0 when t = (2k-1)T/2 for the other odd half-integer instants
In the frequency domain, for -π/T ≤ ω ≤ π/T:
  P_R(f) = Re[ Σ_n (-1)^n P(f + n/T) ] = T cos(πfT/2)
  P_I(f) = Im[ Σ_n (-1)^n P(f + n/T) ] = 0
3rd Nyquist Criterion
Within each symbol period, the integral of the received pulse (its area) is proportional to the area of the transmitted pulse:
  P(ω) = (ωT/2) / sin(ωT/2)  for |ω| ≤ π/T;   P(ω) = 0 otherwise
  p(t) = (1/2π) ∫_{-π/T}^{π/T} P(ω) e^(jωt) dω
and the pulse areas satisfy
  ∫_{(2n-1)T/2}^{(2n+1)T/2} p(t) dt = 1 for n = 0;  0 for n ≠ 0
Eye Diagram
Eye diagram is a means of evaluating the quality of a received digital waveform
By quality is meant the ability to correctly recover symbols and timing
The received signal could be examined at the input to a digital receiver
or at some stage within the receiver before the decision stage
Eye diagrams reveal the impact of ISI and noise
Two major issues are 1) sample value variation, and 2) jitter
and sensitivity of sampling instant
Eye diagram reveals issues of both
Eye diagram can also give an estimate of achievable BER
Check eye diagrams at the end of class for participation
Interpretation of Eye Diagram
Raised Cosine Eye Diagram
The larger the roll-off factor α, the wider the eye opening, but the larger the bandwidth (1+α)/Tb.
A smaller α leads to larger errors if the signal is not sampled at the best sampling time, which occurs at the center of the eye.
Cosine rolloff filter: Eye pattern
(Figure: eye patterns of the cosine roll-off filter, with the sampling instants of the 1st and 2nd Nyquist criteria marked on each trace.)
Eye Diagram Setup
Eye diagram is a retrace display of
data waveform
Data waveform is applied to
input channel
Scope is triggered by data
clock
Horizontal span is set to cover
2-3 symbol intervals
Measurement of eye opening is
performed to estimate BER
Eye opening is reduced because of additive interference and noise
Sampling is also impacted by timing jitter
Partial Response Signals
Previous classes: Sy(w)=|P(w)|^2 Sx(w)
Control signal generation methods to reduce Sx(w)
Raise Cosine function for better |P(w)|^2
This class: improve the bandwidth efficiency
Widen the pulse, the smaller the bandwidth.
But this introduces ISI. For the binary case with two symbols, there are only a few possible interference patterns.
By adding ISI in a controlled manner, it is possible to achieve
a signaling rate equal to the Nyquist rate (2W symbols/sec)
in a channel of bandwidth W Hertz.
Thank you
UNIT IV
DIGITAL MODULATION SCHEME
Geometric Representation of
Signals
Objective: To represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M.
Given real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T seconds:
  si(t) = Σ_{j=1}^{N} sij φj(t),  0 ≤ t ≤ T,  i = 1, 2, …, M   (5.5)
where the φj(t) are the orthonormal basis functions and the sij are the coefficients of the expansion of the energy signal si(t).
Coefficients:
  sij = ∫_0^T si(t) φj(t) dt,  i = 1, 2, …, M;  j = 1, 2, …, N   (5.6)
Real-valued basis functions (orthonormality):
  ∫_0^T φi(t) φj(t) dt = δij = 1 if i = j;  0 if i ≠ j   (5.7)
The set of coefficients can be viewed as an N-dimensional vector, denoted by si, which bears a one-to-one relationship with the transmitted signal si(t).
(a) Synthesizer for generating the signal si(t). (b) Analyzer for
generating the set of signal vectors si.
So, each signal in the set {si(t)} is completely determined by the vector of its coefficients:
  si = [si1, si2, …, siN]^T,  i = 1, 2, …, M   (5.8)
Finally,
The signal vector si concept extends to 2D, 3D, …, N-dimensional Euclidean space
Provides the mathematical basis for the geometric representation of energy signals that is used in noise analysis
Allows definition of:
  Length of vectors (norm)
  Angles between vectors
  Squared length (inner product of si with itself):
    ||si||² = si^T si = Σ_{j=1}^{N} sij²,  i = 1, 2, …, M   (5.9)
where the superscript T denotes matrix transposition.
Figure 5.4: Illustrating the geometric representation of signals for the case when N = 2 and M = 3 (two-dimensional space, three signals).
Also, what is the relation between the vector representation of a signal and its energy value?
Start with the definition of the energy of a signal:
  Ei = ∫_0^T si²(t) dt   (5.10)
where si(t) = Σ_{j=1}^{N} sij φj(t)   (5.5)
After substitution:
  Ei = ∫_0^T [Σ_{j=1}^{N} sij φj(t)] [Σ_{k=1}^{N} sik φk(t)] dt
After regrouping:
  Ei = Σ_{j=1}^{N} Σ_{k=1}^{N} sij sik ∫_0^T φj(t) φk(t) dt   (5.11)
Since the φj(t) are orthonormal, we finally have:
  Ei = Σ_{j=1}^{N} sij² = ||si||²   (5.12)
The energy of a signal is equal to the squared length of its vector.
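Result (5.12) is easy to confirm numerically with a concrete orthonormal basis. A Python sketch (illustrative; the basis φ1 = √(2/T) cos(2πt/T), φ2 = √(2/T) sin(2πt/T) and the coefficients are arbitrary choices for the demonstration):

```python
import math

T = 1.0
N = 10000                  # integration grid points
dt = T / N

def phi1(t): return math.sqrt(2 / T) * math.cos(2 * math.pi * t / T)
def phi2(t): return math.sqrt(2 / T) * math.sin(2 * math.pi * t / T)

s_vec = [3.0, -4.0]        # coefficients s_i1, s_i2 (arbitrary example)
def s(t): return s_vec[0] * phi1(t) + s_vec[1] * phi2(t)

# E_i = integral of s^2 over [0, T], approximated by a Riemann sum
energy = sum(s(n * dt) ** 2 for n in range(N)) * dt
# ||s_i||^2 = sum of squared coefficients
norm_sq = sum(c * c for c in s_vec)
```

Both numbers should agree (here, 3² + 4² = 25), since the basis is orthonormal over one full period.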
Formulas for two signals
Assume we have a pair of signals si(t) and sk(t) on [0, T], each represented by its vector. Then:
  ∫_0^T si(t) sk(t) dt = si^T sk   (5.13)
The inner product of the signals is equal to the inner product of their vector representations, and the inner product is invariant to the selection of basis functions.
Euclidean Distance
The Euclidean distance between two points represented by signal vectors si and sk is ||si - sk||, and its squared value is given by:
  ||si - sk||² = Σ_{j=1}^{N} (sij - skj)²   (5.14)
             = ∫_0^T (si(t) - sk(t))² dt
BPSK - BINARY PHASE
SHIFT KEYING
Generation of BPSK:
Consider a sinusoidal carrier. If it is modulated
by a bi-polar bit stream, its polarity will be
reversed every time the bit stream changes
polarity.
This, for a sine wave, is equivalent to a phase reversal (shift). The multiplier output is a BPSK signal.
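The generation and coherent recovery described above can be sketched in a few lines of Python (illustrative only; an integer number of carrier cycles per bit is assumed so the correlator sums work out cleanly):

```python
import math

def bpsk_modulate(bits, fc=4, sps=32):
    """BPSK: the carrier's polarity follows the bipolar bit stream.
    fc = carrier cycles per bit (integer assumed), sps = samples per bit."""
    out = []
    for b in bits:
        pol = 1.0 if b else -1.0
        out += [pol * math.cos(2 * math.pi * fc * n / sps) for n in range(sps)]
    return out

def bpsk_demodulate(sig, fc=4, sps=32):
    """Coherent (synchronous) detection: correlate each bit interval
    with the local carrier and take the sign."""
    bits = []
    for i in range(0, len(sig), sps):
        corr = sum(sig[i + n] * math.cos(2 * math.pi * fc * n / sps)
                   for n in range(sps))
        bits.append(1 if corr > 0 else 0)
    return bits
```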
BPSK signal in time domain
Synchronous demodulation of BPSK
Frequency Shift Keying (FSK)
Binary FSK
Frequency of the constant amplitude carrier is changed
according to the message state high (1) or low (0)
  s1(t) = A cos(2π(fc + Δf)t),  0 ≤ t ≤ Tb  (bit 1)
  s2(t) = A cos(2π(fc - Δf)t),  0 ≤ t ≤ Tb  (bit 0)
Discontinuous / Continuous Phase
Discontinuous Phase FSK
Switching between two independent oscillators for binary 1 and 0 (the input data drives a switch between cos ω1t and cos ω2t) results in phase jumps:
  binary 1:  sBFSK(t) = vH(t) = √(2Eb/Tb) cos(2πfH t + θ1),  0 ≤ t ≤ Tb
  binary 0:  sBFSK(t) = vL(t) = √(2Eb/Tb) cos(2πfL t + θ2),  0 ≤ t ≤ Tb
This results in phase discontinuities; the discontinuities cause spectral spreading and spurious transmission, so the approach is not suited for tightly designed systems.
Continuous Phase FSK
A single carrier is frequency modulated using m(t):
  sBFSK(t) = √(2Eb/Tb) cos(2πfc t + θ(t))
           = √(2Eb/Tb) cos(2πfc t + 2π kFSK ∫_{-∞}^{t} m(η) dη)
where θ(t) = 2π kFSK ∫_{-∞}^{t} m(η) dη
  m(t) = discontinuous bit stream
  θ(t) = continuous phase function proportional to the integral of m(t)
FSK Example
(Figure: FSK example - the data stream {a0, a1, …} (bits 1, 0, 1) drives a VCO / multiplier with carrier cos ωct to produce the modulated composite FSK signal.)
Spectrum & Bandwidth of BFSK Signals
The complex envelope of BFSK is a nonlinear function of m(t), so spectrum evaluation is difficult and is usually performed using actual time-averaged measurements.
The PSD of BFSK consists of discrete frequency components at fc and at fc ± nΔf, where n is an integer.
PSD decay rate away from the carrier (f = frequency offset from fc):
  CP-BFSK: decays as 1/f⁴
  non-CP BFSK: decays as 1/f²
Spectrum & Bandwidth of BFSK Signals
Transmission bandwidth of BFSK signals (from Carson's rule), with B = bandwidth of the digital baseband signal and BT = transmission bandwidth of the BFSK signal:
  BT = 2Δf + 2B
If the first-null bandwidth is used for the digital signal, the bandwidth for rectangular pulses is B = Rb, and the bandwidth of BFSK using rectangular pulses becomes
  BT = 2(Δf + Rb)
If raised-cosine (RC) pulse shaping is used, the bandwidth is reduced to:
  BT = 2Δf + (1+α)Rb
General FSK signal and orthogonality
Two FSK signals VH(t) and VL(t) are orthogonal if
  ∫_0^T VH(t) VL(t) dt = 0
Then interference between VH(t) and VL(t) averages to 0 during demodulation and integration of the received symbol.
The received signal contains both VH(t) and VL(t); demodulating for VH(t) forms (VH(t) + VL(t))·VH(t), whose cross term satisfies
  ∫_0^T VH(t) VL(t) dt = 0  while  ∫_0^T VH(t) VH(t) dt ≠ 0
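Orthogonality is easy to check numerically. A Python sketch (illustrative; carrier and tone spacing chosen arbitrarily, with spacing fH - fL = 1/(2Tb), the minimum for coherently detected cosine tones):

```python
import math

Tb = 1.0
N = 20000                  # integration grid points
dt = Tb / N
fH, fL = 10.25, 9.75       # tone separation 1/(2*Tb)

# inner product of the two tones over one bit period (Riemann sum)
inner = sum(math.cos(2 * math.pi * fH * n * dt) *
            math.cos(2 * math.pi * fL * n * dt) for n in range(N)) * dt

# energy-type self term for comparison: integral of cos^2 = Tb/2
self_h = sum(math.cos(2 * math.pi * fH * n * dt) ** 2 for n in range(N)) * dt
```

The cross term is (numerically) zero while the self term is Tb/2, which is exactly why the interference between the two tones averages out in the correlator.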
An FSK signal for 0 ≤ t ≤ Tb:
  vH(t) = √(2Eb/Tb) cos(2π(fc + Δf)t)  and  vL(t) = √(2Eb/Tb) cos(2π(fc - Δf)t)
Then
  vH(t) vL(t) = (2Eb/Tb) cos(2π(fc + Δf)t) cos(2π(fc - Δf)t)
              = (Eb/Tb) [cos(2π(2fc)t) + cos(2π(2Δf)t)]
and
  ∫_0^{Tb} vH(t) vL(t) dt = (Eb/Tb) [ sin(4πfc t)/(4πfc) + sin(4πΔf t)/(4πΔf) ]_0^{Tb}
                          = (Eb/Tb) [ sin(4πfc Tb)/(4πfc) + sin(4πΔf Tb)/(4πΔf) ]
vH(t) and vL(t) are orthogonal if Δf sin(4πfc Tb) = -fc sin(4πΔf Tb).
QPSK
Quadrature Phase Shift Keying (QPSK)
can be interpreted as two independent
BPSK systems (one on the I-channel
and one on Q-channel), and thus the
same performance but twice the
bandwidth (spectrum) efficiency.
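The "two independent BPSK systems" view can be sketched directly: even-indexed bits ride on the cosine (I) rail and odd-indexed bits on the sine (Q) rail. Illustrative Python, with an integer number of carrier cycles per symbol assumed so the I/Q correlators separate cleanly:

```python
import math

def qpsk_modulate(bits, fc=4, sps=32):
    """QPSK as two BPSK streams: even-indexed bits -> I (cos),
    odd-indexed bits -> Q (sin). len(bits) assumed even."""
    out = []
    for i in range(0, len(bits), 2):
        ai = 1.0 if bits[i] else -1.0
        aq = 1.0 if bits[i + 1] else -1.0
        out += [ai * math.cos(2 * math.pi * fc * n / sps)
                - aq * math.sin(2 * math.pi * fc * n / sps)
                for n in range(sps)]
    return out

def qpsk_demodulate(sig, fc=4, sps=32):
    """Coherent detection: correlate each symbol with cos and -sin."""
    bits = []
    for s0 in range(0, len(sig), sps):
        ci = sum(sig[s0 + n] * math.cos(2 * math.pi * fc * n / sps)
                 for n in range(sps))
        cq = -sum(sig[s0 + n] * math.sin(2 * math.pi * fc * n / sps)
                  for n in range(sps))
        bits += [1 if ci > 0 else 0, 1 if cq > 0 else 0]
    return bits
```

Because the cos and sin correlators are orthogonal over the symbol, the two rails do not interfere, which is the source of the doubled bandwidth efficiency at unchanged BER.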
QPSK Constellation Diagram
Carrier phases {0, π/2, π, 3π/2}, or carrier phases {π/4, 3π/4, 5π/4, 7π/4}
Quadrature Phase Shift Keying has twice the bandwidth efficiency of BPSK, since 2 bits are transmitted in a single modulation symbol
Types of QPSK
Conventional QPSK, Offset QPSK, π/4 QPSK
Conventional QPSK has transitions through zero (i.e., 180° phase transitions). Highly linear amplifiers are required.
In Offset QPSK, the phase transitions are limited to 90°; the transitions on the I and Q channels are staggered.
In π/4 QPSK the set of constellation points is toggled each symbol, so transitions through zero cannot occur. This scheme produces the lowest envelope variations.
All QPSK schemes require linear power amplifiers.
Quadrature Phase Shift Keying (QPSK):
Also a type of linear modulation scheme
Quadrature Phase Shift Keying (QPSK) has twice the bandwidth efficiency of
BPSK, since 2 bits are transmitted in a single modulation symbol.
The phase of the carrier takes on one of four equally spaced values, such as π/4, 3π/4, 5π/4 and 7π/4, where each value of phase corresponds to a unique pair of message bits.
The QPSK signal for this set of symbol states may be defined as:
  s(t) = √(2Es/Ts) cos(2πfc t + (2i - 1)π/4),  i = 1, 2, 3, 4,  0 ≤ t ≤ Ts
QPSK
The striking result is that the bit error probability of QPSK is identical to
BPSK, but twice as much data can be sent in the same bandwidth. Thus,
when compared to BPSK, QPSK provides twice the spectral efficiency with
exactly the same energy efficiency.
Similar to BPSK, QPSK can also be differentially encoded to allow non-coherent detection.
Quadrature amplitude modulation
Quadrature amplitude modulation (QAM) is both an
analog and a digital modulation scheme.
It conveys two analog message signals, or two
digital bit streams, by changing (modulating)
the amplitudes of two carrier waves, using
the amplitude-shift keying(ASK) digital modulation
scheme or amplitude modulation (AM) analog
modulation scheme.
The two carrier waves, usually sinusoids, are out of phase with each other by 90° and are thus called quadrature carriers or quadrature components, hence the name of the scheme.
QAM Transmitter
First the flow of bits to be transmitted is split
into two equal parts: this process generates
two independent signals to be transmitted.
They are encoded separately just like they
were in an amplitude-shift keying (ASK)
modulator.
Then one channel (the one "in phase") is
multiplied by a cosine, while the other
channel (in "quadrature") is multiplied by a
sine.
This way there is a phase difference of 90° between them. They are simply added together and sent through the real channel.
QAM Receiver
The receiver simply performs the inverse
operation of the transmitter.
By multiplying by a cosine (or a sine) and low-pass filtering, it is possible to extract the in-phase (or quadrature) component.
Then there is only an ASK demodulator
and the two flows of data are merged
back.
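At baseband symbol level, the split/encode/merge chain above can be sketched for 16-QAM (illustrative; the Gray mapping of bit pairs to the levels {-3, -1, +1, +3} is one common choice, assumed here, not taken from the slides):

```python
GRAY = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # assumed Gray map
INV = {-3: [0, 0], -1: [0, 1], 1: [1, 1], 3: [1, 0]}

def qam16_modulate(bits):
    """Split the bit flow: 2 bits -> I level, next 2 bits -> Q level
    (one 4-level ASK per rail). len(bits) assumed a multiple of 4."""
    syms = []
    for i in range(0, len(bits), 4):
        I = GRAY[(bits[i], bits[i + 1])]
        Q = GRAY[(bits[i + 2], bits[i + 3])]
        syms.append((I, Q))
    return syms

def nearest_level(v):
    """Decide the closest of the four ASK levels."""
    return min((-3, -1, 1, 3), key=lambda L: abs(v - L))

def qam16_demodulate(syms):
    """Inverse: ASK decision on each rail, then merge the two bit flows."""
    bits = []
    for I, Q in syms:
        bits += INV[nearest_level(I)] + INV[nearest_level(Q)]
    return bits
```

The nearest-level decision also tolerates moderate noise, as the test below shows by perturbing the symbols before demodulation.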
Carrier Synchronization
Synchronization is one of the most critical
functions of a communication system with
coherent receiver. To some extent, it is the
basis of a synchronous communication
system.
Carrier synchronization
Symbol/Bit synchronization
Frame synchronization
The receiver needs to estimate and compensate for the frequency and phase differences between a received signal's carrier wave and the receiver's local oscillator for the purpose of coherent demodulation, whether it is an analog or a digital communication system.
To extract the carrier
1. Pilot-tone insertion method
Sending a carrier component at specific spectral-line along with
the signal component. Since the inserted carrier component has
high frequency stability, it is called pilot.
2. Direct extraction method
Directly extract the synchronization information from the
received signal component.
1. Pilot-tone insertion method
Insert a pilot into the modulated signal: the message x(t) is modulated by cos(ωct); the carrier is shifted by 90° (π/2 phase shift) and attenuated by several dB to form the pilot -a sin(ωct), which is added to the modulated signal, and a bandpass filter produces s(t).
Assuming the modulated signal has zero DC component, the transmitted signal with pilot is
  s(t) = f(t) cos(ωct) - a sin(ωct)
2. Direct extraction method
If the spectrum of the received signal
already contains carrier component, then
the carrier component can be extracted
simply by a narrowband filter or a PLL.
If the modulated signal suppresses the carrier component, then the carrier component may be extracted by performing a nonlinear transformation or by using a PLL with a specific design.
DPSK
DPSK is a kind of phase shift keying which
avoids the need for a coherent reference
signal at the receiver.
Differential BPSK
0 = same phase as the previous signal element
1 = 180° phase shift from the previous signal element
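The differential rule above is a few lines of Python (illustrative; phases are represented as ±1 and the reference symbol is assumed +1). The point of DPSK is that decoding only compares adjacent symbols, so a constant unknown phase offset cancels out:

```python
def dpsk_encode(bits):
    """Differential encoding: 0 -> same phase as previous symbol,
    1 -> 180-degree shift. Returns the symbol sequence incl. reference."""
    phases = [1]                       # assumed reference symbol
    for b in bits:
        phases.append(-phases[-1] if b else phases[-1])
    return phases

def dpsk_decode(phases):
    """Compare each symbol with the previous one:
    no absolute carrier phase reference is needed."""
    return [0 if phases[i] == phases[i - 1] else 1
            for i in range(1, len(phases))]
```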
DPSK modulation and
demodulation
Thank you
Unit - 5
Error Control Coding
ENTROPY
The entropy of a source is defined as the average information produced per individual message or symbol in a particular interval.
Then the number of messages is given as
The amount of information in message m1 is given as
The total amount of information
Cont…
Thus the total amount of information due to L
Cont…
PROPERTIES OF ENTROPY
The entropy of a discrete memoryless source
Property 1
1. Entropy is zero if the event is sure or if it is impossible
Cont…
Property 2
2. Entropy H = log2 K when the K symbols are equally likely, i.e., Pk = 1/K
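Both properties check out in a few lines of Python (illustrative):

```python
import math

def entropy(p):
    """H = -sum p_k log2 p_k; terms with p_k = 0 contribute nothing."""
    return -sum(pk * math.log2(pk) for pk in p if pk > 0)
```

A sure event gives H = 0, and K equally likely symbols give H = log2 K (e.g. 2 bits for K = 4).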
Cont…
Property 3
3. The maximum upper bound on entropy is Hmax = log2 K
Cont…
Cont…
Cont..
RATE OF INFORMATION
The rate of information is R = r·H bits/second, where r is the rate at which messages (symbols) are generated and H is the entropy.
SOURCE CODING
An important problem in communication system is the
efficient representation of data generated by a source, which
can be achieved by source encoding (or) source coding
process
The device which performs source encoding is called source
encoder
EFFICIENCY OF SOURCE ENCODER
What are the requirements that a source encoder should satisfy for it to be efficient? Binary codewords and a discrete memoryless source.
What are the advantages of source coding?
NOISELESS CODING THEOREM
Shannon's information theory for discrete memoryless sources and channels involves three theorems:
(i) Shannon's first theorem, or the source coding theorem
(ii) Shannon's second theorem, or the channel coding theorem (Shannon's theorem on channel capacity)
(iii) Shannon's third theorem, or the information capacity theorem (Shannon-Hartley theorem)
SHANNON'S FIRST THEOREM
This theorem is also called the source coding theorem. Consider a discrete memoryless source.
Encoding process: each source symbol is mapped to a binary codeword.
The average codeword length is L̄ = Σ_k p_k l_k, where l_k is the length of the codeword for symbol k.
The coding efficiency of the source encoder is η = Lmin / L̄.
STATEMENT
Shannon's first theorem states: given a discrete memoryless source of entropy H, the average codeword length L̄ for any distortionless source encoding is bounded as
  L̄ ≥ H
According to the source coding theorem, the entropy H represents the fundamental limit on the average number of bits per source symbol necessary to represent a discrete memoryless source.
L̄ can be made as small as, but no smaller than, the entropy H. Thus, with Lmin = H, the efficiency is represented as
  η = H / L̄
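The bound and the efficiency are easy to evaluate for a concrete code. An illustrative Python sketch, using dyadic probabilities with a matched prefix-free code (an assumed example, chosen so that L̄ = H exactly and η = 1):

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def avg_length(probs, lengths):
    """Average codeword length L = sum p_k * l_k."""
    return sum(p, * []) if False else sum(p * l for p, l in zip(probs, lengths))

probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]        # e.g. codewords 0, 10, 110, 111
L = avg_length(probs, lengths)
H = entropy(probs)
eta = H / L                   # coding efficiency
```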
SHANONS SECOND THEOREM
Why channel coding?
Goal of channel coding
Process
Cont…
Approach
The code rate is given as r = k/n.
Suppose a discrete memoryless source has alphabet S and entropy H(S) bits per source symbol, and emits and delivers one symbol every Ts seconds.
STATEMENT
The theorem has two parts:
(i) If H(S)/Ts ≤ C/Tc, there exists a coding scheme for which the source output can be transmitted over the channel with an arbitrarily small probability of error.
(ii) Conversely, if H(S)/Ts > C/Tc, it is not possible to transmit information over the channel with an arbitrarily small probability of error.
SHANNON'S THIRD THEOREM
The channel capacity of a band-limited white Gaussian channel is
  C = B log2(1 + S/N) bits/second
where B is the channel bandwidth, S is the signal power and N is the noise power.
MUTUAL INFORMATION
MUTUAL INFORMATION
The difference between these two values is
called mutual information
PROPERTIES OF MUTUAL INFORMATION
1. The mutual information of a channel is symmetric
2. The mutual information is always non negative
3. The mutual information of a channel is related to the joint
entropy of the channel input and the channel output by
CHANNEL CAPACITY
CHANNEL CAPACITY
What is channel capacity?
It must satisfy two constraints
Cont…
Transmission efficiency
Redundancy
Error Control
Coding
Introduction
Error Control Coding (ECC)
Extra bits are added to the data at the transmitter
(redundancy) to permit error detection or
correction at the receiver
Done to prevent the output of erroneous bits
despite noise and other imperfections in the
channel
The positions of the error control coding and
decoding are shown in the transmission model
Transmission Model
Transmitter: Digital Source → Source Encoder → Error Control Coding → Line Coding → Modulator X(ω) (transmit filter, etc.) → Channel Hc(ω)
Channel noise N(ω) is added (+) in the channel.
Receiver: Y(ω) (receive filter, etc.) → Demod → Line Decoding → Error Control Decoding → Source Decoder → Digital Sink
Error Models
Binary Symmetric Memoryless Channel
Assumes transmitted symbols are binary
Errors affect 0s and 1s with equal probability
(i.e., symmetric)
Errors occur randomly and are independent
from bit to bit (memoryless)
Channel transition probabilities:
P(0 out | 0 in) = P(1 out | 1 in) = 1 - p
P(1 out | 0 in) = P(0 out | 1 in) = p
p is the probability of bit error, i.e., the Bit Error Rate (BER) of the channel.
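The BSC model above can be simulated in a few lines of Python; this is a sketch (not from the slides), and the function name is our own:

```python
import random

def bsc(bits, p, rng=random.Random(0)):
    """Binary symmetric memoryless channel: each bit is flipped
    independently with probability p (the BER of the channel)."""
    return [b ^ (rng.random() < p) for b in bits]
```

With p = 0 the channel is transparent; with p = 0.5 the output carries no information about the input.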
Error Models
Many other types
Burst errors, i.e., contiguous bursts of bit
errors
output from DFE (error propagation)
common in radio channels
Insertion, deletion and transposition errors
We will consider mainly random errors
Error Control Techniques
Error detection in a block of data
Can then request a retransmission, known as
automatic repeat request (ARQ) for sensitive
data
Appropriate for
Low delay channels
Channels with a return path
Not appropriate for delay-sensitive data, e.g., real-time speech
Error Control Techniques
Forward Error Correction (FEC)
Coding designed so that errors can be corrected at
the receiver
Appropriate for delay sensitive and one-way
transmission (e.g., broadcast TV) of data
Two main types, namely block codes and
convolutional codes. We will only look at block
codes
Block Codes
We will consider only binary data
Data is grouped into blocks of length k bits
(dataword)
Each dataword is coded into blocks of length n
bits (codeword), where in general n>k
This is known as an (n,k) block code
Block Codes
A vector notation is used for the datawords
and codewords,
Dataword d = (d1 d2 ... dk)
Codeword c = (c1 c2 ... cn)
The redundancy introduced by the code is
quantified by the code rate,
Code rate = k/n
i.e., the higher the redundancy, the lower the
code rate
Block Code - Example
Dataword length k = 4
Codeword length n = 7
This is a (7,4) block code with code rate = 4/7
For example, d = (1101), c = (1101001)
Error Control Process
Source-coded data (e.g., 101101...) is chopped into datawords of k bits (e.g., 1000).
Each dataword (k bits) enters the channel coder, which produces a codeword (n bits).
The codeword passes through the channel and arrives as a codeword plus possible errors (n bits).
The channel decoder recovers the dataword (k bits) and may also raise error flags.
Error Control Process
Decoder gives corrected data
May also give error flags to
Indicate reliability of decoded data
Helps with schemes employing multiple layers of
error correction
Hamming Distance
Error control capability is determined by
the Hamming distance
The Hamming distance between two
codewords is equal to the number of
differences between them, e.g.,
10011011
11010010 have a Hamming distance = 3
Alternatively, compute by adding the codewords (mod 2):
10011011 + 11010010 = 01001001 (now count up the ones: 3)
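Both views of the computation (count the differences, or add mod 2 and count the ones) reduce to an XOR; a minimal sketch:

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length codewords differ,
    computed by adding them mod 2 (XOR) and counting the ones."""
    assert len(a) == len(b)
    return sum(x ^ y for x, y in zip(a, b))
```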
Hamming Distance
The Hamming distance of a code is equal to
the minimum Hamming distance between
two codewords
If Hamming distance is:
1: no error control capability; i.e., a single error in a received codeword yields another valid codeword
XXXXXXX
X is a valid codeword
Note that this representation is diagrammatic
only.
In reality each codeword is surrounded by n
codewords. That is, one for every bit that
could be changed
Hamming Distance
If Hamming distance is:
2: can detect single errors (SED); i.e., a single error will yield an invalid codeword
XOXOXO
X is a valid codeword
O is not a valid codeword
See that 2 errors will yield a valid (but
incorrect) codeword
Hamming Distance
If Hamming distance is:
3: can correct single errors (SEC) or can detect double errors (DED)
XOOXOOX
X is a valid codeword
O is not a valid codeword
See that 3 errors will yield a valid but incorrect
codeword
Hamming Distance - Example
Hamming distance 3 code, i.e., SEC/DED
Or can perform single error correction (SEC)
10011011 (X)
11011011 (O): corrected to 10011011
11010011 (O): corrected to 11010010
11010010 (X)
X is a valid codeword; O is an invalid codeword. Each invalid word is corrected to the valid codeword at Hamming distance 1 from it.
Hamming Distance
The maximum number of detectable errors is
d_min - 1
and the maximum number of correctable errors is given by
t = floor((d_min - 1)/2)
where d_min is the minimum Hamming distance between 2 codewords and floor(.) denotes rounding down to the nearest integer.
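The two bounds can be sketched directly (integer floor division implements the floor; the function name is our own):

```python
def error_control_capability(d_min):
    """Return (max detectable errors, max correctable errors t) for a code
    with minimum Hamming distance d_min."""
    return d_min - 1, (d_min - 1) // 2   # floor((d_min - 1) / 2)
```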
Linear Block Codes
As seen from the second Parity Code
example, it is possible to use a table to hold
all the codewords for a code and to look-up
the appropriate codeword based on the
supplied dataword
Alternatively, it is possible to create
codewords by addition of other codewords.
This has the advantage that there is no longer any need to hold every possible codeword in the table.
Linear Block Codes
If there are k data bits, all that is required is to hold k
linearly independent codewords, i.e., a set of k
codewords none of which can be produced by linear
combinations of 2 or more codewords in the set.
The easiest way to find k linearly independent codewords is to choose those which have a '1' in just one of the first k positions and '0' in the other k-1 of the first k positions.
Linear Block Codes
For example for a (7,4) code, only four
codewords are required, e.g.,
(the four basis codewords are not reproduced here)
So, to obtain the codeword for dataword 1011,
the first, third and fourth codewords in the list
are added together, giving 1011010
This process will now be described in more detail
Linear Block Codes
An (n,k) block code has code vectors
d = (d1 d2 ... dk) and
c = (c1 c2 ... cn)
The block coding process can be written as
c=dG
where G is the Generator Matrix
    [ a11 a12 ... a1n ]   [ a1 ]
G = [ a21 a22 ... a2n ] = [ a2 ]
    [  :   :        :  ]   [  : ]
    [ ak1 ak2 ... akn ]   [ ak ]
Linear Block Codes
Thus,
c = sum_{i=1..k} di ai
The ai must be linearly independent: since codewords are given by summations of the ai vectors, then to avoid 2 datawords having the same codeword the ai vectors must be linearly independent.
Linear Block Codes
The sum (mod 2) of any 2 codewords is also a codeword:
since for datawords d1 and d2 we have d3 = d1 + d2, then
c3 = sum_{i=1..k} d3i ai = sum_{i=1..k} (d1i + d2i) ai = sum_{i=1..k} d1i ai + sum_{i=1..k} d2i ai
so c3 = c1 + c2
Linear Block Codes
0 is always a codeword: since all-zeros is a dataword, then
c = sum_{i=1..k} 0 . ai = 0
Error Correcting Power of LBC
The Hamming distance of a linear block code (LBC) is simply the minimum Hamming weight (number of 1s, or equivalently the distance from the all-zeros codeword) of the non-zero codewords
Note d(c1,c2) = w(c1 + c2), as shown previously
For an LBC, c1 + c2 = c3
So min(d(c1,c2)) = min(w(c1 + c2)) = min(w(c3))
Therefore to find the minimum Hamming distance we just need to search among the 2^k codewords for the minimum Hamming weight, which is far simpler than a pairwise check over all possible codeword pairs.
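The weight-search argument can be sketched directly: enumerate all 2^k datawords, encode each, and take the minimum weight over the nonzero codewords (helper names are our own):

```python
from itertools import product

def min_distance(G):
    """Minimum Hamming distance of the LBC generated by G, found as the
    minimum weight over the 2^k - 1 nonzero codewords."""
    k, n = len(G), len(G[0])
    best = n
    for d in product([0, 1], repeat=k):
        if any(d):                                # skip the all-zeros dataword
            c = [sum(d[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
            best = min(best, sum(c))              # Hamming weight of c
    return best
```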
Linear Block Codes example 1
For example a (4,2) code; suppose
G = [ 1 0 1 1 ]
    [ 0 1 0 1 ]
a1 = [1011]
a2 = [0101]
For d = [1 1], then
c = a1 + a2 = [1 1 1 0]
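The encoding step c = dG over GF(2) can be sketched as follows, checked against the (4,2) example above (the function name is our own):

```python
def encode(d, G):
    """Codeword c = dG, with all sums taken mod 2."""
    k, n = len(G), len(G[0])
    return [sum(d[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
```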
Linear Block Codes example 2
A (6,5) code with
G = [ 1 0 0 0 0 1 ]
    [ 0 1 0 0 0 1 ]
    [ 0 0 1 0 0 1 ]
    [ 0 0 0 1 0 1 ]
    [ 0 0 0 0 1 1 ]
is an even single parity check code
Systematic Codes
For a systematic block code the dataword
appears unaltered in the codeword
usually at the start
The generator matrix has the structure
          k columns         R = n - k columns
G = [ 1 0 .. 0 | p11 p12 .. p1R ]
    [ 0 1 .. 0 | p21 p22 .. p2R ]  = [ I | P ]
    [ .. .. .. | ..  ..  .. ..   ]
    [ 0 0 .. 1 | pk1 pk2 .. pkR ]
P is often referred to as parity bits
Systematic Codes
I is the k x k identity matrix; it ensures that the dataword appears at the beginning of the codeword.
P is a k x R matrix.
Decoding Linear Codes
One possibility is a ROM look-up table
In this case the received codeword is used as an address
Example: even single parity check code
Address   Data
000000    0
000001    1
000010    1
000011    0
The data output is the error flag, i.e., 0 = codeword OK, 1 = error detected
If no error, the dataword is the first k bits of the codeword
For an error correcting code the ROM can also store datawords
Decoding Linear Codes
Another possibility is algebraic decoding, i.e.,
the error flag is computed from the received
codeword (as in the case of simple parity
codes)
How can this method be extended to more
complex error detection and correction
codes?
CYCLIC CODES
Motivation & Properties of cyclic code
Cyclic codes are a class of linear block codes; thus we can find a generator matrix (G) and parity check matrix (H).
The reason for studying them separately is that they can be implemented with extremely cost-effective electronic circuits.
Definition
An (n,k) linear code C is cyclic if every cyclic
shift of a codeword in C is also a codeword in
C.
If
c0 c1 c2 ... cn-2 cn-1
is a codeword, then
cn-1 c0 c1 ... cn-3 cn-2
cn-2 cn-1 c0 ... cn-4 cn-3
:
c1 c2 c3 ... cn-1 c0
are all codewords.
Example: (6,2) repetition code
C = {000000, 010101, 101010, 111111}
is a cyclic code.
Example2: (5,2) linear block code
G = [ 1 0 1 1 1 ]
    [ 0 1 1 0 1 ]
is a single error correcting code; the set of codewords is
C = { 00000, 10111, 01101, 11010 }
Thus it is not a cyclic code since, for example, the cyclic shift of [10111] is [11011], which is not a codeword.
Example 3
The (7,4) Hamming code discussed before is
cyclic:
1010001
1101000
0110100
0011010
0001101
1000110
0100011
1110010
0111001
1011100
0101110
0010111
1001011
1100101
0000000
1111111
Generator matrix of a non-systematic (n,k) cyclic
codes
The generator matrix will be in this form:
G = [ g0  g1 ... gn-k-1  gn-k   0     0    ...  0   ]
    [ 0   g0  g1  ...  gn-k-1  gn-k   0    ...  0   ]
    [ :                                          :   ]
    [ 0   ...  0    g0   g1   ...   gn-k-1  gn-k    ]
Notice that the rows are merely cyclic shifts of the basis vector
g = (g0 g1 ... gn-k-1 gn-k 0 0 ... 0)
The code vectors are
C = mG, where m = [m0 m1 ... mk-1]
c0 = m0 g0
c1 = m0 g1 + m1 g0
c2 = m0 g2 + m1 g1 + m2 g0
:
cn-1 = mk-1 gn-k
Notice that
ci = sum_{j=0..k-1} mj g_{i-j} (with gl = 0 for l < 0 or l > n-k)
This summation is a convolution between m and g.
It would be much easier if we dealt with multiplication; this transformation is done using the Polynomial Representation.
Code Polynomial
Let c = c0 c1 c2 ... cn-1. The code polynomial of c is
c(X) = c0 + c1 X + c2 X^2 + ... + cn-1 X^(n-1)
where the power of X corresponds to the bit position and the coefficients are 0s and 1s.
Example:
1010001 <-> 1 + X^2 + X^6
0101110 <-> X + X^3 + X^4 + X^5
Each codeword is represented by a polynomial of degree less than or equal to n-1: deg[c(X)] <= n-1
The addition and multiplication of terms are as follows:
a x^j + b x^j = (a + b) x^j
(a x^j)(b x^k) = (a.b) x^(j+k)
where (a + b) and (a.b) are taken under GF(2), but j + k is ordinary integer addition.
Example:
m(x) = m0 + m1 x + m2 x^2
g(x) = g0 + g1 x
Addition:
m(x) + g(x) = (m0 + g0) + (m1 + g1) x + (m2 + 0) x^2
Multiplication:
m(x) g(x) = m0 g0 + (m0 g1 + m1 g0) x + (m1 g1 + m2 g0) x^2 + m2 g1 x^3
Notice that in multiplication the coefficients are the same as a convolution sum.
Implementing the Shift
Let c = c0 c1 c2 ... cn-1
and c^(i) = cn-i cn-i+1 ... c0 ... cn-i-1 (i shifts to the right)
c(X) = c0 + c1 X + c2 X^2 + ... + cn-1 X^(n-1)
c^(i)(X) = cn-i + cn-i+1 X + ... + cn-1 X^(i-1) + c0 X^i + ... + cn-i-1 X^(n-1)
What is the relation between c(X) and c^(i)(X)?
Apparently, shifting a bit one place to the right is equivalent to multiplying its term by X:
X^i c(X) = c0 X^i + c1 X^(i+1) + ... + cn-i-1 X^(n-1) + cn-i X^n + ... + cn-1 X^(n+i-1)
Implementing the Shift (contd)
X^i c(X) = cn-i X^n + ... + cn-1 X^(n+i-1) + c0 X^i + c1 X^(i+1) + ... + cn-i-1 X^(n-1)
The first i terms have powers >= n and are not suitable for representing bit locations.
Add to the polynomial the zero-valued sequence:
(cn-i + cn-i) + (cn-i+1 + cn-i+1) X + ... + (cn-1 + cn-1) X^(i-1)
X^i c(X) = cn-i (X^n + 1) + cn-i+1 X (X^n + 1) + ... + cn-1 X^(i-1) (X^n + 1)
         + cn-i + cn-i+1 X + ... + cn-1 X^(i-1) + c0 X^i + c1 X^(i+1) + ... + cn-i-1 X^(n-1)
That is:
X^i c(X) = q(X)(X^n + 1) + c^(i)(X)
Implementing the Shift (contd)
c^(i)(X) is the remainder from dividing X^i c(X) by (X^n + 1):
c^(i)(X) = Rem[X^i c(X) / (X^n + 1)] = X^i c(X) mod (X^n + 1)
Example:
c = 0101110, c(X) = X + X^3 + X^4 + X^5
X^3 c(X) = X^4 + X^6 + X^7 + X^8
Rem[X^3 c(X) / (X^7 + 1)] = 1 + X + X^4 + X^6 [Show]
c^(3) = 1100101
Shortcut for the long division:
X^i c(X)|_{X^n = 1} = q(X)(X^n + 1)|_{X^n = 1} + c^(i)(X)|_{X^n = 1}
That is, c^(i)(X) = X^i c(X)|_{X^n = 1}
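The mod-(X^n + 1) view of the cyclic shift can be sketched with a small GF(2) polynomial routine (coefficient lists, lowest degree first; function names are our own):

```python
def poly_mod(a, m):
    """Remainder of GF(2) polynomial a divided by m (coefficient lists,
    lowest degree first; the leading coefficient of m must be 1)."""
    r = list(a)
    dm = len(m) - 1
    for i in range(len(r) - 1, dm - 1, -1):
        if r[i]:                        # cancel the X^i term of the dividend
            for j in range(dm + 1):
                r[i - dm + j] ^= m[j]
    return r[:dm]

def cyclic_shift(c, i):
    """i-fold right cyclic shift computed as X^i c(X) mod (X^n + 1)."""
    n = len(c)
    return poly_mod([0] * i + list(c), [1] + [0] * (n - 1) + [1])
```

For the example above, shifting c = 0101110 by i = 3 reproduces c^(3) = 1100101.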
More on Code Polynomials
The nonzero code polynomial of minimum degree in a
cyclic code C is unique.
(If not, the sum of the two polynomials will be a code polynomial of degree less than
the minimum. Contradiction)
Let g(X) = g0 + g1 X + ... + gr-1 X^(r-1) + X^r be the nonzero code
polynomial of minimum degree in an (n,k) cyclic code.
Then the constant term g0 must be equal to 1.
(If not, then one cyclic shift to the left will produce a code polynomial of degree less
than the minimum. Contradiction)
For the (7,4) code given in the Table, the nonzero code
polynomial of minimum degree is g(X) = 1 + X + X3
Generator Polynomial
Since the code is cyclic: X g(X), X^2 g(X), ..., X^(n-r-1) g(X) are code polynomials in C. (Note that deg[X^(n-r-1) g(X)] = n-1.)
Since the code is linear:
(a0 + a1 X + ... + an-r-1 X^(n-r-1)) g(X) is also a code polynomial, where ai = 0 or 1.
A binary polynomial of degree n-1 or less is a
code polynomial if and only if it is a multiple of
g(X).
(First part shown. Second part: if a code polynomial c(X) is not a
multiple of g(X), then Rem[c(X)/g(X)] must be a code polynomial of
degree less than the minimum. Contradiction)
Generator Polynomial (contd)
All code polynomials are generated from the
multiplication c(X) = a(X)g(X).
deg[c(X)] <= n-1 and deg[g(X)] = r ==> deg[a(X)] <= n-r-1
The number of different ways of forming a(X) is 2^(n-r), and this must equal the number of codewords, 2^k.
Therefore r = deg[g(X)] = n-k
Since deg[a(X)] <= k-1, the polynomial a(X) may be taken to be the information polynomial u(X) (a polynomial whose coefficients are the information bits). Encoding is performed by the multiplication c(X) = u(X)g(X).
g(X), generator polynomial, completely defines the
code.
(7,4) Code Generated by 1+X+X3
Infor.  Code      Code polynomial
0000    0000000   0 = 0 . g(X)
1000    1101000   1 + X + X^3 = 1 . g(X)
0100    0110100   X + X^2 + X^4 = X . g(X)
1100    1011100   1 + X^2 + X^3 + X^4 = (1 + X) . g(X)
0010    0011010   X^2 + X^3 + X^5 = X^2 . g(X)
1010    1110010   1 + X + X^2 + X^5 = (1 + X^2) . g(X)
0110    0101110   X + X^3 + X^4 + X^5 = (X + X^2) . g(X)
1110    1000110   1 + X^4 + X^5 = (1 + X + X^2) . g(X)
0001    0001101   X^3 + X^4 + X^6 = X^3 . g(X)
(7,4) Code Generated by 1+X+X3
(Contd)
Infor.  Code      Code polynomial
1001    1100101   1 + X + X^4 + X^6 = (1 + X^3) . g(X)
0101    0111001   X + X^2 + X^3 + X^6 = (X + X^3) . g(X)
1101    1010001   1 + X^2 + X^6 = (1 + X + X^3) . g(X)
0011    0010111   X^2 + X^4 + X^5 + X^6 = (X^2 + X^3) . g(X)
1011    1111111   1 + X + X^2 + X^3 + X^4 + X^5 + X^6 = (1 + X^2 + X^3) . g(X)
0111    0100011   X + X^5 + X^6 = (X + X^2 + X^3) . g(X)
1111    1001011   1 + X^3 + X^5 + X^6 = (1 + X + X^2 + X^3) . g(X)
Constructing g(X)
The generator polynomial g(X) of an (n,k) cyclic
code is a factor of Xn+1.
Proof: X^k g(X) is a polynomial of degree n, so dividing it by (X^n + 1) gives quotient 1 and remainder r(X): X^k g(X) = (X^n + 1) + r(X).
But r(X) = Rem[X^k g(X)/(X^n + 1)] = g^(k)(X) = a code polynomial = a(X)g(X).
Therefore X^n + 1 = X^k g(X) + a(X)g(X) = {X^k + a(X)} g(X). Q.E.D.
(1)To construct a cyclic code of length n, find the
factors of the polynomial Xn+1.
(2)The factor (or product of factors) of degree n-k
serves as the generator polynomial of an (n,k)
cyclic code. Clearly, a cyclic code of length n does
not exist for every k.
Constructing g(X) (contd)
(3)The code generated this way is guaranteed to be cyclic.
But we know nothing yet about its minimum distance.
The generated code may be good or bad.
Example: What cyclic codes of length 7 can be
constructed?
X7+1 = (1 + X)(1 + X + X3)(1 + X2 + X3)
g(X)                          Code
(1 + X)                       (7,6)
(1 + X + X^3)                 (7,4)
(1 + X^2 + X^3)               (7,4)
(1 + X)(1 + X + X^3)          (7,3)
(1 + X)(1 + X^2 + X^3)        (7,3)
(1 + X + X^3)(1 + X^2 + X^3)  (7,1)
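The factorization of X^7 + 1 can be checked with a small GF(2) polynomial multiplication, which is just a convolution with mod-2 coefficient sums (a sketch; the function name is our own):

```python
def poly_mul(a, b):
    """GF(2) polynomial product (coefficient lists, lowest degree first):
    a convolution of the coefficient sequences with XOR accumulation."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out
```

Multiplying (1 + X), (1 + X + X^3) and (1 + X^2 + X^3) together yields 1 + X^7.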
Circuit for Multiplying Polynomials (1)
u(X) = uk-1 X^(k-1) + ... + u1 X + u0
g(X) = gr X^r + gr-1 X^(r-1) + ... + g1 X + g0
u(X)g(X) = uk-1 gr X^(k+r-1)
         + (uk-2 gr + uk-1 gr-1) X^(k+r-2) + ...
         + (u0 g2 + u1 g1 + u2 g0) X^2 + (u0 g1 + u1 g0) X + u0 g0
(Multiplier circuit: a shift register whose taps, with coefficients gr, gr-1, gr-2, ..., g1, g0, connect the input to the output adder.)
Circuit for Multiplying Polynomials (2)
u(X)g(X) = uk-1 X^(k-1) (gr X^r + gr-1 X^(r-1) + ... + g1 X + g0)
         + ...
         + u1 X (gr X^r + gr-1 X^(r-1) + ... + g1 X + g0)
         + u0 (gr X^r + gr-1 X^(r-1) + ... + g1 X + g0)
(Alternative multiplier circuit: the same structure with taps ordered g0, g1, g2, ..., gr-1, gr between input and output.)
Systematic Cyclic Codes
Systematic: b0 b1 b2 ... bn-k-1 u0 u1 u2 ... uk-1
b(X) = b0 + b1 X + ... + bn-k-1 X^(n-k-1), u(X) = u0 + u1 X + ... + uk-1 X^(k-1)
then c(X) = b(X) + X^(n-k) u(X)
a(X)g(X) = b(X) + X^(n-k) u(X)
X^(n-k) u(X)/g(X) = a(X) + b(X)/g(X)
or
b(X) = Rem[X^(n-k) u(X)/g(X)]
Encoding Procedure:
1. Multiply u(X) by Xn-k
2. Divide Xn-k u(X) by g(X), obtaining the remainder b(X).
3. Add b(X) to Xn-k u(X), obtaining c(X) in systematic form.
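The three-step procedure can be sketched directly (coefficient lists, lowest degree first; function names are our own, and the result is checked against the worked example with g(X) = 1 + X + X^3):

```python
def gf2_rem(a, m):
    """Remainder of GF(2) polynomial division a / m (coefficient lists,
    lowest degree first; the leading coefficient of m must be 1)."""
    r = list(a)
    dm = len(m) - 1
    for i in range(len(r) - 1, dm - 1, -1):
        if r[i]:
            for j in range(dm + 1):
                r[i - dm + j] ^= m[j]
    return r[:dm]

def encode_systematic(u, g, n):
    """Systematic cyclic encoding: c(X) = b(X) + X^(n-k) u(X), with
    b(X) = Rem[X^(n-k) u(X) / g(X)]."""
    k = len(u)
    shifted = [0] * (n - k) + list(u)   # step 1: multiply u(X) by X^(n-k)
    b = gf2_rem(shifted, g)             # step 2: remainder b(X)
    return b + list(u)                  # step 3: parity bits then message bits
```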
Systematic Cyclic Codes (contd)
Example
Consider the (7,4) cyclic code generated by
g(X) = 1 + X + X3. Find the systematic codeword for
the message 1001.
u(X) = 1 + X^3
X^3 u(X) = X^3 + X^6
b(X) = Rem[X^3 u(X)/g(X)]; using the shortcut of evaluating with g(X) = 0, i.e., X^3 = X + 1:
X^3 + X^6 = X^3 (1 + X^3) -> (X + 1)(1 + X + 1) = (X + 1) X = X + X^2
Therefore c = 0111001
Circuit for Dividing Polynomials
(Divider circuit: a shift register with feedback taps g0, g1, g2, ..., gr-1, gr; the quotient appears at the output and the remainder is left in the register.)
Encoder Circuit
(Encoder circuit: a divide-by-g(X) shift register with feedback taps g1, g2, ..., gr-1 and a gate.)
Gate ON. k message bits are shifted into the channel.
The parity bits are formed in the register.
Gate OFF. Contents of register are shifted into the
channel.
(7,4) Encoder Based on 1 + X + X3
(Encoder: divide-by-(1 + X + X^3) register with a gate; the message 1011 is shifted in highest-order bit first.)
Shift      Input  Register contents
initial    -      000
1st shift  1      110
2nd shift  1      101
3rd shift  0      100
4th shift  1      100
Codeword: 1 0 0 1 0 1 1
Parity-Check Polynomial
Xn +1 = g(X)h(X)
deg[g(x)] = n-k, deg[h(x)] = k
g(x)h(X) mod (Xn +1) = 0.
h(X) is called the parity-check polynomial. It plays the role of the H matrix for linear codes.
h(X) is the generator polynomial of an (n,n-k)
cyclic code, which is the dual of the (n,k) code
generated by g(X).
Decoding of Cyclic Codes
STEPS:
(1) Syndrome computation
(2) Associating the syndrome to the error pattern
(3) Error correction
Syndrome Computation
Received word: r(X) = r0 + r1 X + ... + rn-1 X^(n-1)
If r(X) is a correct codeword, it is divisible by g(X). Otherwise:
r(X) = q(X)g(X) + s(X), with deg[s(X)] <= n-k-1
s(X) is called the syndrome polynomial.
s(X) = Rem[r(X)/g(X)] = Rem[(a(X)g(X) + e(X))/g(X)] = Rem[e(X)/g(X)]
The syndrome polynomial depends on the error pattern only.
s(X) is obtained by shifting r(X) into a divider-by-g(X) circuit. The register contents are the syndrome bits.
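In software, the divider circuit amounts to one GF(2) polynomial division; a sketch (coefficient lists, lowest degree first; the function name is our own):

```python
def syndrome(r, g):
    """s(X) = Rem[r(X)/g(X)]: divide the received word by the generator
    polynomial over GF(2) and keep the remainder (degree < n-k)."""
    rem = list(r)
    dg = len(g) - 1
    for i in range(len(rem) - 1, dg - 1, -1):
        if rem[i]:                      # cancel the X^i term of the dividend
            for j in range(dg + 1):
                rem[i - dg + j] ^= g[j]
    return rem[:dg]
```

An all-zero syndrome means the received word is a valid codeword.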
Example: Circuit for Syndrome Computation
(Divide-by-g(X) register with a gate; r = 0010110 is shifted in, highest-order bit first.)
Shift  Input  Register contents
-      -      000 (initial state)
1      0      000
2      1      100
3      1      110
4      0      011
5      1      011
6      0      111
7      0      101 (syndrome s)
Exercises: What is g(X)? Find the syndrome using long division. Find the syndrome using the shortcut for the remainder.
Association of Syndrome to Error
Pattern
Look-up table implemented via a combinational logic circuit
(CLC). The complexity of the CLC tends to grow exponentially
with the code length and the number of errors to correct.
Cyclic property helps in simplifying the decoding circuit.
The circuit is designed to correct the error in a certain location
only, say the last location. The received word is shifted cyclically
to trap the error, if it exists, in the last location and then correct
it. The CLC is simplified since it is only required to yield a single
output e telling whether the syndrome, calculated after every
cyclic shift of r(X), corresponds to an error at the highest-order
position.
The received digits are thus decoded one at a time.
Meggitt Decoder
Shift r(X) into the buffer B and the syndrome
register R simultaneously. Once r(X) is completely
shifted in B, R will contain s(X), the syndrome of
r(X).
1. Based on the contents of R, the detection circuit
yields the output e (0 or 1).
2. During the next clock cycle:
(a) Add e to the rightmost bit of B while shifting
the contents of B. (The rightmost bit of B may be
read out). Call the modified content of B r1(1)(X).
Meggitt Decoder (contd)
(b) Add e to the left of R while shifting the
contents of R. The modified content of R is
s1(1)(X), the syndrome of r1(1)(X) [will be shown
soon].
Repeat steps 1-2 n times.
General Decoding Circuit
More on Syndrome Computation
Let s(X) be the syndrome of a received polynomial r(X) = r0 + r1 X + ... + rn-1 X^(n-1). Then the remainder resulting from dividing X s(X) by g(X) is the syndrome of r(1)(X), which is a cyclic shift of r(X).
Proof: r(X) = r0 + r1 X + ... + rn-1 X^(n-1)
r(1)(X) = rn-1 + r0 X + ... + rn-2 X^(n-1) = rn-1 + X r(X) + rn-1 X^n = rn-1 (X^n + 1) + X r(X)
Writing r(1)(X) = c(X)g(X) + y(X), where y(X) is the syndrome of r(1)(X):
c(X)g(X) + y(X) = rn-1 g(X)h(X) + X{a(X)g(X) + s(X)}
X s(X) = {c(X) + X a(X) + rn-1 h(X)} g(X) + y(X)
Therefore, syndrome of r(1)(X) = y(X) = Rem[X s(X)/g(X)]. Q.E.D.
More on Syndrome Computation
(contd)
Note: for simplicity of notation, let Rem[X s(X)/g(X)] be denoted by s(1)(X). s(1)(X) is NOT a cyclic shift of s(X), but the syndrome of r(1)(X), which is a cyclic shift of r(X).
Example:
r(X) = X^2 + X^4 + X^5; g(X) = 1 + X + X^3
s(X) = Rem[r(X)/g(X)] = 1 + X^2
r(1)(X) = X^3 + X^5 + X^6
s(1)(X) = Rem[r(1)(X)/g(X)] = 1 (the constant polynomial)
Also, s(1)(X) = Rem[X s(X)/g(X)] = 1.
More on Syndrome Computation
(contd)
(Same divide-by-g(X) register; r = 0010110 is shifted in; after shift 7 the input gate is switched off and shifting continues.)
Shift               Input  Register contents
-                   -      000 (initial state)
1                   0      000
2                   1      100
3                   1      110
4                   0      011
5                   1      011
6                   0      111
7                   0      101 (syndrome s)
8 (input gate off)  -      100 (syndrome s(1))
9                   -      010 (syndrome s(2))
More on Syndrome Computation
(contd)
Let r(X) = r0 + r1 X + ... + rn-1 X^(n-1) have the syndrome s(X). Then
r(1)(X) = rn-1 + r0 X + ... + rn-2 X^(n-1) has the syndrome s(1)(X) = Rem[r(1)(X)/g(X)].
Define r1(X) = r(X) + X^(n-1) = r0 + r1 X + ... + (rn-1 + 1) X^(n-1)
The syndrome of r1(X), call it s1(X):
s1(X) = Rem[{r(X) + X^(n-1)}/g(X)] = s(X) + Rem[X^(n-1)/g(X)]
r1(1)(X), which is one cyclic shift of r1(X), has the syndrome
s1(1)(X) = Rem[X s1(X)/g(X)] = Rem[X s(X)/g(X)] + Rem[X^n/g(X)]
         = s(1)(X) + 1 (since X^n + 1 = g(X)h(X))
Worked Example
Consider the (7,4) Hamming code generated by 1+X+X3.
Error pattern  Syndrome polynomial  Syndrome
X^6            1 + X^2              101
X^5            1 + X + X^2          111
X^4            X + X^2              011
X^3            1 + X                110
X^2            X^2                  001
X^1            X                    010
X^0            1                    100
Let c = 1 0 0 1 0 1 1 and r = 1 0 1 1 0 1 1
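For this single-error-correcting code, decoding can be sketched as a syndrome look-up over the n single-bit error patterns (a sketch; function names are our own). With c = 1001011 and r = 1011011, the syndrome 001 matches the error pattern X^2, so flipping that bit recovers c:

```python
def gf2_rem(a, m):
    """Remainder of GF(2) polynomial division (lists, lowest degree first)."""
    r = list(a)
    dm = len(m) - 1
    for i in range(len(r) - 1, dm - 1, -1):
        if r[i]:
            for j in range(dm + 1):
                r[i - dm + j] ^= m[j]
    return r[:dm]

def correct_single_error(r, g):
    """Correct at most one bit error by matching the syndrome of the
    received word against the syndromes of all single-bit error patterns."""
    n = len(r)
    s = gf2_rem(r, g)
    if not any(s):
        return list(r)                  # already a valid codeword
    for i in range(n):
        e = [0] * n
        e[i] = 1
        if gf2_rem(e, g) == s:          # the syndrome depends only on e(X)
            out = list(r)
            out[i] ^= 1
            return out
    return list(r)                      # more than one error: not correctable here
```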
Cyclic Decoding of the (7,4) Code
Error Correction Capability
Error correction capability is inferred from the
roots of g(X).
Results from the algebra of finite fields:
X^n + 1 has n roots (in an extension field).
These roots can be expressed as powers of one element, a.
The roots are a^0, a^1, ..., a^(n-1).
The roots occur in conjugates: the elements a^(i.2^j mod n), j = 0, 1, 2, ..., constitute a conjugate set.
Designing a Cyclic Code
Theorem:
If g(X) has l roots (out of its n-k roots) that are consecutive powers of a, then the code it generates has a minimum distance d = l + 1.
To design a cyclic code with a guaranteed minimum
distance of d, form g(X) to have d-1 consecutive roots. The
parameter d is called the designed minimum distance of
the code.
Since roots occur in conjugates, the actual number of
consecutive roots, say l, may be greater than d-1. dmin = l +
1 is called the actual minimum distance of the code.
Design Example
X^15 + 1 has the roots 1 = a^0, a^1, ..., a^14.
Conjugate group            Corresponding polynomial
(a^0)                      f1(X) = 1 + X
(a, a^2, a^4, a^8)         f2(X) = 1 + X + X^4
(a^3, a^6, a^9, a^12)      f3(X) = 1 + X + X^2 + X^3 + X^4
(a^5, a^10)                f4(X) = 1 + X + X^2
(a^7, a^14, a^13, a^11)    f5(X) = 1 + X^3 + X^4
Design Example (contd)
Find g(X) that is guaranteed to be a double error
correcting code.
The code must have a, a2 , a3 and a4 as roots.
g(X) = f2(X) f3(X) = 1 + X^4 + X^6 + X^7 + X^8
This generator polynomial generates a (15,7) cyclic code of minimum distance at least 5.
Roots of g(X): a, a^2, a^3, a^4, a^6, a^8, a^9, a^12.
Number of consecutive roots = 4.
The actual minimum distance of the code is 5.
Some Standard Cyclic Block Codes
(The code families nest: linear codes contain cyclic codes, which contain the BCH codes, which in turn contain the Hamming codes.)
The Hamming Codes: single-error correcting codes which can be expressed in cyclic form.
BCH: the Bose-Chaudhuri-Hocquenghem are among
the most important of all cyclic block codes.
Extension of Hamming for t-error correcting codes.
Some Burst-Correcting Codes: good burst-correcting codes have been found mainly by computer search.
Cyclic Redundancy Check Codes: shortened cyclic
error-detecting codes used in automatic repeat
request (ARQ) systems.
BCH Codes
Definition of the codes:
For any positive integers m (m > 2) and t0 (t0 < n/2), there is a binary BCH code of length n = 2^m - 1 which corrects all combinations of t0 or fewer errors and has no more than m.t0 parity-check bits:
Block length: n = 2^m - 1
Number of parity-check bits: n - k <= m.t0
Minimum distance: d_min >= 2 t0 + 1
Table of Some BCH Codes
n    k    d (designed)  d (actual)  g(X)*
7    4    3             3           13
15   11   3             3           23
15   7    5             5           721
15   5    7             7           2463
31   26   3             3           45
31   16   5             7           107657
31   11   7             11          5423325
* Octal representation with highest order at the left.
721 is 111 010 001 representing 1+X4+X6+X7+X8
Burst Correcting Codes
good burst-correcting codes have been found mainly
by computer search.
The length of an error burst, b, is the total number of
bits in error from the first error to the last error,
inclusive.
The minimum possible number of parity-check bits required to correct a burst of length b or less is given by the Rieger bound: r >= 2b.
The best understood codes for correcting burst errors
are cyclic codes.
For correcting longer bursts, interleaving is used.
Table of Good Burst-Correcting Codes
n     k     b   g(X) (octal)
7     3     2   35 (try to find dmin!)
15    10    2   65
15    9     3   171
31    25    2   161
63    56    2   355
63    55    3   711
511   499   4   10451
1023  1010  4   22365
Cyclic Redundancy Check Codes
Shortened cyclic codes
Error-detecting codes
used in automatic repeat request (ARQ)
systems.
Usually concatenated with an error correcting code:
Transmitter: information source -> CRC encoder -> error correction encoder -> channel
Receiver: channel -> error correction decoder -> CRC syndrome checker -> information sink
Performance of CRC Codes
CRC are typically evaluated in terms of their
error pattern coverage
Burst error detection capability
Probability of undetected error
For an (n,k) CRC the coverage, lambda, is the ratio of the number of invalid blocks of length n to the total number of blocks of length n.
This ratio is a measure of the probability that a randomly chosen block is not a valid code block. By definition,
lambda = 1 - 2^(-r)
where r is the number of check bits.
For some near-optimal CRC codes (see Table 5.6.5):
Code       Coverage
CRC-12     0.999756
CRC-ANSI   0.999985
CRC-32A    0.99999999977
Basic Definitions
k = 1, n = 2: a (2,1) rate-1/2 convolutional code
Two-stage register (M = 2)
Each input bit influences the output for 3 intervals (K = 3)
K = constraint length of the code = M + 1
Generator Polynomial
A convolutional code may be defined by a set
of n generating polynomials for each input bit.
For the circuit under consideration:
g1(D) = 1 + D + D2
g2(D) = 1 + D2
The set {gi(D)} defines the code completely.
The length of the shift register is equal to the
highest-degree generator polynomial.
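The encoder defined by g1(D) and g2(D) can be sketched as follows; the register is flushed with M = 2 zeros so the trellis returns to state zero (a sketch; the function name is our own):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with g1(D) = 1 + D + D^2 and
    g2(D) = 1 + D^2 (K = 3).  Two output bits per input bit; the
    two-stage register is flushed with zeros at the end."""
    s1 = s2 = 0                            # register: previous two inputs
    out = []
    for b in list(bits) + [0, 0]:          # append M = 2 flush bits
        out += [b ^ s1 ^ s2, b ^ s2]       # v1 from g1, v2 from g2
        s1, s2 = b, s1
    return out
```

The impulse response (input 1 followed by flushing) is 11 10 11, whose weight 5 is the free distance of this code.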
State Diagram Representation
The output depends on the current input and
the state of the encoder ( i. e. the contents of
the shift register).
Trellis Diagram Representation
Expansion of state diagram in time.
Decoding
A message m is encoded into the code
sequence c.
Each code sequence represents a path in
the trellis diagram.
Minimum Distance Decoding
Upon receiving the received sequence r, search
for the path that is closest ( in Hamming
distance) to r .
The Viterbi Algorithm
Walk through the trellis and compute the Hamming distance
between each segment of r and the corresponding branches in the
trellis.
At each level, consider the two paths entering the same node;
these paths are identical from this node onwards.
Of these two paths, the one that is closer to r at this
stage will still be so at any time in the future. This path
is retained, and the other path is discarded.
Proceeding this way, at each stage one path will be
saved for each node. These paths are called the
survivors. The decoded sequence (based on MDD) is
guaranteed to be one of these survivors.
The Viterbi Algorithm (contd)
Each survivor is associated with a metric of
the accumulated Hamming distance (the
Hamming distance up to this stage).
Carry out this process until the received
sequence is considered completely. Choose
the survivor with the smallest metric.
The Viterbi Algorithm:
The Viterbi algorithm is used to decode convolutional codes
and any structure or system that can be described by a trellis.
It is a maximum likelihood decoding algorithm: it selects the
path that maximizes the likelihood function.
The algorithm is based on an add-compare-select operation:
at each state and each stage, add the branch metric, compare
the incoming paths, and select the best one.
Example: For the convolutional code example in the previous lecture,
starting from state zero, decode the following received sequence.
At each state, compute the two possible incoming paths and select
the one with the smaller cumulative Hamming weight; this is called
the survival path. Add the weight of the path at each state.
At the end of the trellis, select the path with the minimum
cumulative Hamming weight; this is the survival path in this example.
The decoded sequence is m = [1 0 1 1 1 0].
[Trellis diagram omitted.]
Distance Properties of Conv. Codes
Def: The free distance, dfree, is the minimum Hamming
distance between any two code sequences.
Criteria for good convolutional codes:
Large free distance, dfree.
Small Hamming distance dinf (i.e., as few differences as possible)
between the input information sequences that produce the
minimally separated code sequences.
There is no known constructive way of designing a
convolutional code with given distance properties. However, a
given code can be analyzed to find its distance properties.
Distance Prop. of Conv. Codes (contd)
Convolutional codes are linear. Therefore, the
Hamming distance between any pair of code
sequences corresponds to the Hamming distance
between the all-zero code sequence and some nonzero
code sequence. Thus for a study of the distance
properties it is possible to focus on the Hamming
distance between the all-zero code sequence and all
nonzero code sequences.
The nonzero sequence of minimum Hamming weight
diverges from the all-zero path at some point and
remerges with the all-zero path at some later point.
Distance Properties: Illustration
Sequence 2: Hamming weight = 5, dinf = 1
Sequence 3: Hamming weight = 7, dinf = 3
[Trellis illustration omitted.]
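The weights of the two illustrated sequences can be checked by direct encoding. A sketch (in Python, not from the slides) assuming the encoder g1 = 1 + D + D^2, g2 = 1 + D^2, with input sequences [1 0 0] and [1 1 0 1 0 0] reconstructed here to trace the two paths:

```python
def conv_encode(bits):
    """Rate-1/2, K = 3 encoder: g1(D) = 1 + D + D^2, g2(D) = 1 + D^2."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.extend([b ^ s1 ^ s2, b ^ s2])  # two output bits per input bit
        s1, s2 = b, s1
    return out

m2 = [1, 0, 0]           # one information 1 (dinf = 1)
m3 = [1, 1, 0, 1, 0, 0]  # three information 1s (dinf = 3)
print(sum(conv_encode(m2)))  # Hamming weight -> 5
print(sum(conv_encode(m3)))  # Hamming weight -> 7
```

The two code sequences produced have Hamming weights 5 and 7, matching the illustration.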
Modified State Diagram
The span of interest to us of a nonzero path starts
from the 00 state and ends when the path first
returns to the 00 state. Split the 00 state (state a)
to two states: a0 and a1.
The branches are labeled with the dummy
variables D, L and N, where:
The power of D is the Hamming weight (# of 1s) of the
output corresponding to that branch.
The power of N is the Hamming weight (# of 1s) of the
information bit(s) corresponding to that branch.
The power of L is the length of the branch (always = 1).
Modified State Diagram (contd)
[Modified state diagram figure omitted.]
Properties of the Path
Sequence 2:
code sequence:  .. 00 11 10 11 00 ..
state sequence: a0 b c a1
labeled: (D^2 LN)(DL)(D^2 L) = D^5 L^3 N
properties: w = 5, dinf = 1; diverges from the all-zero path by 3 branches.
Sequence 3:
code sequence:  .. 00 11 01 01 00 10 11 00 ..
state sequence: a0 b d c b c a1
labeled: (D^2 LN)(DLN)(DL)(DL)(LN)(D^2 L) = D^7 L^6 N^3
properties: w = 7, dinf = 3; diverges from the all-zero path by 6 branches.
Transfer Function
Input-output relations:
a0 = 1
b = D^2 LN a0 + LN c
c = DL b + DL d
d = DLN b + DLN d
a1 = D^2 L c
Solving these for the transfer function T(D, L, N) = a1/a0 gives:
T(D, L, N) = D^5 L^3 N / (1 − DLN(1 + L))
Transfer Function (contd)
Performing long division:
T = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + D^7 L^5 N^3 + ...
If interested in the Hamming distance property of
the code only, set N = 1 and L = 1 to get the
distance transfer function:
T(D) = D^5 + 2 D^6 + 4 D^7 + ...
There is one code sequence of weight 5; therefore
dfree = 5.
There are two code sequences of weight 6,
four code sequences of weight 7, and so on.
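The coefficients of T(D) can be checked numerically by enumerating every trellis path that leaves the all-zero state and first returns to it, counting paths by output Hamming weight. A sketch in plain Python (not from the slides), assuming the same g1 = 1 + D + D^2, g2 = 1 + D^2 encoder:

```python
from collections import Counter

def step(state, b):
    """One encoder transition from state (s1, s2) on input bit b."""
    s1, s2 = state
    out_w = (b ^ s1 ^ s2) + (b ^ s2)  # Hamming weight of the 2 output bits
    return (b, s1), out_w

def weight_counts(max_len=7, max_w=7):
    """Count first-return-to-zero paths by output Hamming weight."""
    counts = Counter()
    # The path must leave state 00 with input 1 (output weight 2, state 10).
    frontier = [((1, 0), 2)]
    for _ in range(max_len - 1):
        nxt = []
        for state, w in frontier:
            for b in (0, 1):
                s, dw = step(state, b)
                if w + dw > max_w:
                    continue              # prune: weight already too large
                if s == (0, 0):
                    counts[w + dw] += 1   # path has remerged: record it
                else:
                    nxt.append((s, w + dw))
        frontier = nxt
    return dict(counts)

print(weight_counts())  # -> {5: 1, 6: 2, 7: 4}
```

The counts {5: 1, 6: 2, 7: 4} match the coefficients of D^5, D^6, and D^7 in T(D). (Every path of weight at most 7 has at most 7 branches for this code, so the length cap is safe.)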
Decoding of Convolutional Codes
Let Cm be the set of allowable code sequences of length m.
Not all sequences in {0,1}^m are allowable code sequences!
Each code sequence c in Cm can be represented by a unique path
through the trellis diagram.
What is the probability that the code sequence c is sent and the
binary sequence y is received?
Pr[y | c] = p^d (1 − p)^(m − d), where d = dH(y, c)
and p is the probability of bit error of the BSC resulting from modulation.
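Because each bit flips independently with probability p < 1/2, this likelihood falls as the Hamming distance grows, which is why maximum likelihood decoding reduces to minimum-distance decoding. A quick numerical check (the values of p and m are illustrative, not from the slides):

```python
def likelihood(d, m, p):
    """Pr[y | c] over a BSC: d = Hamming distance, m = sequence length."""
    return (p ** d) * ((1 - p) ** (m - d))

p, m = 0.1, 10
probs = [likelihood(d, m, p) for d in range(m + 1)]
# For p < 1/2 the likelihood strictly decreases as d increases.
print(all(a > b for a, b in zip(probs, probs[1:])))  # -> True
```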
Decoding Rule for Convolutional Codes
Maximum Likelihood Decoding Rule:
ĉ = argmax over c in Cm of Pr[y | c] = argmin over c in Cm of dH(y, c)
Since p < 1/2, Pr[y | c] decreases as dH(y, c) grows, so maximizing
the likelihood is the same as minimizing the Hamming distance.
Choose the code sequence through the trellis which has the
smallest Hamming distance to the received sequence!
The Viterbi Algorithm
The Viterbi Algorithm (Viterbi, 1967) is a clever way of
implementing Maximum Likelihood Decoding.
Computer scientists will recognize the Viterbi Algorithm as an
example of a CS technique called "Dynamic Programming".
Reference: G. D. Forney, "The Viterbi Algorithm",
Proceedings of the IEEE, 1973.
Chips are available from many manufacturers which
implement the Viterbi Algorithm for K < 10.
It can be used for either hard or soft decision decoding.
We consider hard decision decoding initially.
Basic Idea of Viterbi Algorithm
There are 2^(rm) code sequences in Cm.
This number of sequences approaches infinity as m
becomes large.
Instead of searching through all possible sequences,
find the best code sequence "one stage at a time".
The Viterbi Algorithm
(Hamming Distance Metric)
Initialization:
Let time i = 0. We assign each state j a metric Z_j(0) at time 0.
We know that the code must start in state 0.
Therefore we assign:
Z_0(0) = 0
Z_j(0) = ∞ for all other states j
The Viterbi Algorithm (continued)
Consider decoding of the i-th segment:
Let y_i be the segment of n bits received between times i
and i + 1.
There are several code segments c_i of n bits which lead
into state j at time i + 1. We wish to find the most likely one.
Let s(c_i) be the state from which the code segment c_i emerged.
For each state j, we take c_i to be the path leading into
state j if
Z_{s(c_i)}(i) + dH(c_i, y_i)
is the smallest over all the code segments leading into state j.
The Viterbi Algorithm (continued)
Iteration:
Let Z_j(i + 1) = Z_{s(c_i)}(i) + dH(c_i, y_i).
Let i = i + 1 and repeat the previous step.
Incorrect paths drop out as i approaches infinity.
Viterbi Algorithm Decoding Example
r = 1/2, K = 3 code from previous example
c = (00 11 01 00 10 10 11) is sent
y = (01 11 01 00 10 10 11) is received.
What path through the trellis does the Viterbi Algorithm choose?
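The full add-compare-select procedure can be sketched in Python for a rate-1/2, K = 3 code with g1 = 1 + D + D^2 and g2 = 1 + D^2. The message, flush bits, and injected channel error below are illustrative choices, not the slide's example:

```python
def conv_encode(bits):
    """Rate-1/2, K = 3 encoder: g1(D) = 1 + D + D^2, g2(D) = 1 + D^2."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append((b ^ s1 ^ s2, b ^ s2))
        s1, s2 = b, s1
    return out

def viterbi_decode(pairs):
    """Hard-decision Viterbi decoding with the Hamming distance metric."""
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: (0 if s == (0, 0) else INF) for s in states}  # Z_j(0)
    paths = {s: [] for s in states}
    for r1, r2 in pairs:
        new_metric = {s: INF for s in states}
        new_paths = {}
        for (s1, s2) in states:
            if metric[(s1, s2)] == INF:
                continue                       # unreachable state
            for b in (0, 1):
                o1, o2 = b ^ s1 ^ s2, b ^ s2   # branch output bits
                d = (o1 ^ r1) + (o2 ^ r2)      # branch Hamming distance
                nxt = (b, s1)
                z = metric[(s1, s2)] + d       # add
                if z < new_metric[nxt]:        # compare, select survivor
                    new_metric[nxt] = z
                    new_paths[nxt] = paths[(s1, s2)] + [b]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])  # smallest final metric
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]                 # last two 0s flush the register
sent = conv_encode(msg)
recv = list(sent)
recv[0] = (recv[0][0] ^ 1, recv[0][1])   # flip one channel bit
print(viterbi_decode(recv) == msg)       # -> True
```

With dfree = 5 this code corrects any single channel error, so the survivor path recovers the original message despite the flipped bit.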
Viterbi Algorithm Decoding Example
(continued)
SUMMARY
Learnt about the concepts of entropy and the
source coding techniques
Statements and theorems of Shannon
Concepts of mutual information and
channel capacity
Understood error control coding techniques
and the concepts of linear block codes,
cyclic codes, convolutional codes & the Viterbi
decoding algorithm
Thank you