UCCN2043 Lecture Notes
5.0 Error Detection and Correction
In a digital communication system, totally error-free transmission is not possible due to
transmission impairments. At the receiving end, there must therefore be a mechanism for
detecting errors and, if possible, correcting them. In this lesson, we will study the various
techniques used for error detection and correction.
5.1 Need For Error Detection And Correction
Consider a communication system in which the transmitted bit stream is
101101110
The transmitted electrical signal corresponding to this bit stream and the received waveform
are shown in Figure 5.1.
Figure 5.1: Errors introduced by transmission medium
Due to the noise introduced by the transmission medium, the electrical signal is distorted. By
using a threshold, the receiver decides whether a 1 or a 0 was transmitted. In this case, the
receiver decodes the bit stream as
101001010
At two places the received bit is in error: a 1 has become a 0 in both places.
In a digital communication system, some bits are likely to be received in error due to the
noise in the communication channel. As a result, 1 may become 0 or 0 may become 1. The
Bit Error Rate (BER) is a parameter used to characterize communication systems.
How many errors can be tolerated by a communication system? It depends on the application.
For instance, if English text is transmitted, and a few letters are received in error, it is
tolerable. Studies indicate that even if 20% of the letters are missing, human beings can
understand the text.
Suppose the communication system is used to transmit digitized voice from one place to
another. Studies indicate that even if the Bit Error Rate is 10^-3, the listener will be able to
understand the speech. In other words, a voice communication system can tolerate one error
for every 1000 bits transmitted.
Note: The performance of a communication system can be characterized by the Bit Error
Rate (BER). If BER is 10^-3, there is one error per 1000 bits.
Now consider the case of a banking application. Suppose you need to transfer $100 from your
account to a friend's account through a data communication network. If the digit 1 becomes 3
due to one bit error during transmission, then instead of $100, $300 will be deducted from
your account! So, for such applications, not even a single error can be tolerated. Hence, it is
very important to detect the errors for data applications.
Errors can be classified as random errors and burst errors.
Random Errors
Appear at random intervals.
Occur at random places in the bit stream
Burst Errors
Caused by sudden disturbances in the medium such as
Lightning
Sudden interference with the nearby devices
Such disturbances cause many consecutive (sequence of) bits to be in error.
Detection and correction of errors is done through channel coding. In channel coding,
additional bits are added at the transmitter end, and these additional bits are used at the
receiving end to check whether the transmitted data is received correctly or not and, if
possible, to correct the errors.
5.2 Channel Coding
The following figure helps to identify the placement of channel coding and decoding module
in communication systems.
[Figure: Placement of channel coding in a communication system. Transmitter: information to be transmitted -> source coding -> channel coding -> modulation -> channel. Receiver: channel -> demodulation -> channel decoding -> source decoding -> information received.]
Channel coding is most often applied to communications links in order to improve the
reliability of the information being transferred.
By adding additional bits to the transmitted data stream, it is possible to detect and even
correct for errors in the receiver.
The added coding bits lower the raw data transmission rate through the channel (Coding
expands the occupied bandwidth for a particular message data rate).
Channel codes that are used to detect errors are called error detection codes, while codes
that can detect and correct errors are called error correction codes (also known as forward
error correction codes). There are three general types of channel codes: block codes,
convolutional codes and concatenated codes.
Because decoding is performed after the demodulation portion of the receiver, coding can
be considered to be a post-detection technique.
Channel coding is generally treated independently from the type of modulation used,
although this has changed recently with the use of
Trellis coded modulation (TCM) schemes
OFDM, and
New space-time processing that combines coding, antenna diversity, and
modulation to achieve large coding gains without any bandwidth expansion.
Error Detection
In its most elementary form, this involves recognizing which part of the received
information is in error and, if appropriate or possible, requesting a repeat transmission:
this is the basis of ARQ (Automatic Repeat Request) systems.
Error Detection and Correction
With added complexity, it is possible not only to detect errors, but also to build in the
ability to correct errors without recourse to retransmission.
This is particularly useful where there is no feedback path to the transmission source with
which to request a resend. This process is known as FEC (Forward Error Correction).
ARQ and FEC may be combined, such that minor errors are corrected without
retransmission, and major errors are corrected via a request for retransmission: this is
called Hybrid Automatic Repeat-Request (HARQ).
5.2.1 Automatic Repeat Request Systems (ARQ)
The following figure shows the implementation of an ARQ system in communication
systems.
[Figure: ARQ system block diagram. Transmitter side: source -> encoder -> transmit controller -> modulation; receiver side: demodulation -> decoder -> transmit controller -> destination. An acknowledgement path runs from the receiver back to the transmitter over the channel.]
There are basically three types of ARQ implementations
Stop and Wait ARQ
Go Back N ARQ
Selective ARQ
Stop and Wait ARQ
This is the simplest ARQ method where the transmitter waits after each message for an
acknowledgement of correct reception (known as an ACK) from the receiver.
If the message is received in error, a negative acknowledgment (NAK) is returned. While
this process is taking place, any new messages must be stored in a buffer at the
transmitter site.
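As a rough illustration of this behaviour, the following C sketch (not part of the original notes) simulates a stop-and-wait exchange; the number of frames and the error pattern (frame 2 corrupted on its first attempt) are invented purely for illustration.

#include <stdio.h>

/* Sketch of Stop and Wait ARQ: the sender transmits one frame, then waits
 * for ACK/NAK before sending the next one.  Channel errors are simulated
 * with a fixed, illustrative pattern. */

static int channel_corrupts(int frame, int attempt)
{
    /* assumed error pattern: frame 2 is hit by noise on its first attempt */
    return (frame == 2 && attempt == 1);
}

int main(void)
{
    int frame;

    for (frame = 1; frame <= 4; ++frame) {
        int attempt = 0, delivered = 0;

        while (!delivered) {
            ++attempt;
            printf("TX: send frame %d (attempt %d)\n", frame, attempt);

            if (channel_corrupts(frame, attempt)) {
                printf("RX: frame %d received in error -> NAK\n", frame);
                /* sender keeps the frame buffered and retransmits it */
            } else {
                printf("RX: frame %d OK -> ACK\n", frame);
                delivered = 1;      /* sender may now send the next frame */
            }
        }
    }
    return 0;
}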
[Figure: Stop and Wait ARQ timing. The transmitter sends a frame and waits; a frame received in error is answered with a NAK and is retransmitted, while a correctly received frame is answered with an ACK, after which the next frame is sent. ACK: Acknowledge, NAK: Negative ACK.]
Go Back N ARQ
As the name suggests, the transmitter in this case continues to transmit messages in
sequence until a NAK is received.
The NAK identifies which message was in error and the transmitter then 'back-tracks' to
this message, starting to retransmit all messages in the sequence from when the error
occurred.
Clearly, this has less signalling overhead (no ACKs used) than the Stop and Wait protocol.
[Figure: Go Back N ARQ timing. Frames 1 2 3 4 5 ... are transmitted continuously; when frame 3 is received in error, a NAK is returned and the transmitter goes back to frame 3 and retransmits all frames from that point (transmitted sequence 1 2 3 4 5 3 4 5 6 7 ...); a later error in frame 5 similarly causes retransmission from frame 5 onwards.]
Selective Repeat ARQ
By making the protocol slightly more complex, and by providing a buffer in the receiver
as well as the transmitter, it is of course possible for the receiver to inform the transmitter
of the specific message or packet that is in error.
The transmitter then needs to only send this specific message which the receiver can
reinsert in the correct place in the receiver buffer.
Although the most complex, this is also the most efficient type of ARQ protocol and the
most widely used. There are several variants of this protocol optimized for a given set of
channel characteristics.
[Figure: Selective Repeat ARQ timing. Frames 1 2 3 4 5 ... are transmitted continuously; when frame 3 is received in error, a NAK is returned and only frame 3 is retransmitted (transmitted sequence 1 2 3 4 5 3 6 7 8 9 7 ...). The receiver buffers the out-of-order frames and reinserts each retransmitted frame in its correct place, so the output data is delivered in order.]

5.3 Error Detection Schemes
A basic requirement of the ARQ system is for the receiving equipment to be able to detect
the presence of errors in the received data.
Error detection is most commonly realized using a suitable hash function (or checksum
algorithm).
A hash function adds a fixed-length tag to a message, which enables receivers to
verify the delivered message by re-computing the tag and comparing it with the
one provided.
There exists a variety of Error Detection Schemes which differ in terms of their
complexity and their suitability for detecting certain kinds of errors
Repetition Codes
Parity Bits
Checksums
Cyclic Redundancy Checks (CRCs)
5.3.1 Parity
One of the simplest yet most frequently used techniques for detecting errors is the parity
check bit.
The parity check bit is usually a single bit (1 or 0) appended to the end of a data word
such that the number of 1s in the new data word is even for even parity, or odd for odd
parity.
Parity is used in serial communication protocols whereby we transmit one character at a time.
For example, if the information bits are
1011010
then an additional bit is added, which is called a parity bit. The parity bit can be added in
such a way that the total number of ones becomes even. In such a case, it is called even parity.
In the above bit stream, already there are four ones, and hence a 0 is added as the parity bit.
The bit stream transmitted is
10110100
In case of odd parity, the additional bit added will make the total number of ones odd. For
odd parity, the additional bit added in the above case is 1 and the transmitted bit stream is
10110101
At the receiving end, from the first 7 bits, the receiver will calculate the expected parity bit. If
the received parity and the calculated parity match, it is assumed that the character received is
OK. Else, it can be concluded that at least one error has occurred during transmission and the
ARQ process can begin. Of course, if two bits are in error, the parity check will pass, and the
errors will go undetected.
It is easy to verify that parity can detect errors only if the number of errors is odd; if the
number of errors is 1, 3, 5 and so on, the error can be detected. If the number of errors is
even, the parity bit cannot detect them.
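A minimal C sketch of this parity scheme, using the 7-bit example 1011010 from the text; the specific error patterns tested below are chosen only for illustration.

#include <stdio.h>

/* Even-parity sketch for 7-bit characters: the information bits 1011010
 * already contain four 1s, so the appended even-parity bit is 0 and the
 * transmitted byte is 10110100. */

static unsigned parity_of(unsigned bits, int width)
{
    unsigned ones = 0;
    int i;
    for (i = 0; i < width; ++i)
        ones += (bits >> i) & 1u;
    return ones & 1u;                 /* 1 if the number of 1s is odd */
}

int main(void)
{
    unsigned info = 0x5A;                              /* 1011010          */
    unsigned tx   = (info << 1) | parity_of(info, 7);  /* append parity bit */

    printf("transmitted word: 0x%02X\n", tx);          /* 10110100 = 0xB4  */

    /* receiver: recompute parity over all 8 bits; 0 means the check passes */
    printf("check on clean word : %u\n", parity_of(tx, 8));
    printf("check on 1-bit error: %u\n", parity_of(tx ^ 0x08u, 8)); /* detected   */
    printf("check on 2-bit error: %u\n", parity_of(tx ^ 0x18u, 8)); /* undetected */
    return 0;
}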
Parity check is best suited to low noise, low distortion links where the error rate is known
to be very low.
For links with a high probability of error, more sophisticated error checking methods
must be used such as the block or convolutional codes which require the addition of
larger numbers of redundant bits.
5.3.2 Checksum
Checksum of information bits is calculated using simple binary arithmetic. Checksum is used
extensively because its computation is very easy. However, checksum cannot detect all errors.
Suppose we want to send two characters, C and U.
The 7-bit ASCII values for these characters are
C 1000011
U 1010101
In addition to transmitting these bit streams, the binary representation of the sum of these two
characters is also sent.
The value of C is 67 and the value of U is 85.
The sum is 152.
The binary representation of 152 is
1 0 0 1 1 0 0 0.
This bit stream is also attached to the original binary stream, corresponding to C and U, while
transmitting the data.
So, the transmitted bit stream is
1000011 1010101 10011000
At the receiving end, the checksum is again calculated. If the received checksum matches this
calculated checksum, then the receiver assumes that the received data is OK. The checksum
cannot detect all the errors.
Also, if the characters are sent in a different order, i.e., if the sequence is changed, the
checksum will be the same and hence the receiver assumes that the data is correct.
However, checksum is used mainly because its computation is very easy, and it provides a
reasonably good error detection capability.
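A minimal C sketch of this byte-sum checksum, using the characters C and U from the example above; the corrupted and reordered messages are assumed only for illustration.

#include <stdio.h>

/* Simple checksum sketch: the ASCII values of the characters are added
 * with ordinary binary arithmetic and the sum is sent along with the data
 * ('C' = 67, 'U' = 85, checksum = 152 = 10011000). */

static unsigned checksum(const char *msg)
{
    unsigned sum = 0;
    while (*msg)
        sum += (unsigned char)*msg++;
    return sum;
}

int main(void)
{
    printf("checksum of \"CU\": %u\n", checksum("CU"));   /* 152 */

    /* the receiver recomputes the checksum and compares it with the one received */
    printf("\"CV\" -> %u (single error detected)\n", checksum("CV"));
    printf("\"UC\" -> %u (reordering NOT detected)\n", checksum("UC"));
    return 0;
}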
Note: Checksum is used for error detection in TCP/IP protocols to check whether packets
are received correctly. Different algorithms are used for calculation of checksum.
5.3.3 Cyclic Redundancy Checks (CRCs)
One of the most common, and one of the most powerful, error-detecting codes is the
Cyclic Redundancy Check (CRC). CRC is well suited for detecting burst errors and is
particularly easy to implement in hardware, and is therefore commonly used in digital
networks and storage devices such as hard disk drives.
Implementation of CRC can be described as follows:
Given a k bit block of bits, or message, the transmitter generates an (n-k)-bit
sequence, known as a Frame Check Sequence (FCS), such that the resulting frame,
consisting of n bits, is exactly divisible by some predetermined number.
The receiver then divides the incoming frame by that number and, if there is no
remainder, assumes there was no error.
Additional bits added to the information bits are called the CRC bits. These bits can be 16 or
32. If the additional bits are 16, the CRC is represented as CRC-16. CRC-32 uses 32
additional bits. There are international standards for calculation of CRC-16 and CRC-32.
Since CRC calculation is very important, the C programs to calculate the CRC are given in
Appendix 1 and 2 at the end of the lecture note. When these programs are executed, the
information bits and the CRC in hexadecimal notation will be displayed.
CRC Code Creation & Detection
The CRC creation process is defined as follows:
Get the block of raw message (k bits).
Left-shift the raw message by (n-k) bits and then divide it by G (r + 1 bits).
Take the remainder R as the FCS (r = n - k bits).
Append the R to the raw message. The result (n bits) is the frame to be transmitted.
CRC is checked using the following process:
Receive the frame (n bits)
Divide it by G (r + 1 bits)
Check the remainder. If the remainder is not zero, then there is an error in the
frame.
These procedures can be further clarified in three ways: modulo-2 arithmetic,
polynomials, and digital logic.
Note: In CRC-polynomial calculation, a standard polynomial is used. This polynomial is
different for CRC-16 and CRC-32. The bit stream is divided by this polynomial to
calculate the CRC bits.
CRC Code Creation & Detection Modulo-2 Arithmetic
Modulo-2 arithmetic is in fact just the exclusive-OR (XOR) operation.
Now define:

T = n-bit frame to be transmitted
D = k-bit block of data, or message; the first k bits of T
F = (n - k)-bit FCS; the last (n - k) bits of T
P = pattern of (n - k + 1) bits; this is the predetermined divisor

The frame is the data block followed by the FCS:

T (n bits) = D (k bits) | F (n - k bits), that is, T = 2^(n-k) D + F

By multiplying D by 2^(n-k), we have in effect shifted it to the left by (n - k) bits and padded
out the result with zeroes. Adding F yields the concatenation of D and F, which is T.
Example:
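As one way to illustrate this modulo-2 division in code, the following C sketch (not part of the original notes) computes a 5-bit FCS; the 10-bit message D = 1010001101 and the 6-bit divisor pattern P = 110101 are values assumed here purely for illustration.

#include <stdio.h>

/* Sketch of FCS generation by modulo-2 (XOR) division.  With the assumed
 * values D = 1010001101 and P = 110101, the 5-bit FCS works out to 01110,
 * so the transmitted frame T = D | FCS leaves remainder 0 when divided by P. */

#define D_BITS 10
#define P_BITS 6            /* divisor has (n - k + 1) bits -> 5-bit FCS */

static unsigned mod2_div(unsigned dividend, unsigned divisor,
                         int div_bits, int divisor_bits)
{
    int i;
    /* align the divisor under the current leading bit and XOR whenever
       that bit is 1, exactly like binary long division without carries */
    for (i = div_bits - divisor_bits; i >= 0; --i) {
        if (dividend & (1u << (i + divisor_bits - 1)))
            dividend ^= divisor << i;
    }
    return dividend;                     /* what is left is the remainder */
}

int main(void)
{
    unsigned D = 0x28D;                  /* 1010001101                    */
    unsigned P = 0x35;                   /* 110101                        */
    int r = P_BITS - 1;                  /* number of FCS bits            */

    unsigned shifted = D << r;           /* D * 2^(n-k): append r zeroes  */
    unsigned fcs = mod2_div(shifted, P, D_BITS + r, P_BITS);
    unsigned T = shifted | fcs;          /* transmitted frame = D | FCS   */

    printf("FCS = 0x%X\n", fcs);                          /* 0xE = 01110  */
    printf("receiver check, T mod P = 0x%X\n",
           mod2_div(T, P, D_BITS + r, P_BITS));           /* 0 -> no error */
    return 0;
}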
CRC Code Creation & Detection - Polynomials
In polynomials approach, the CRC process can be viewed by expressing all values as
polynomials in a dummy variable X, with binary coefficients. The coefficients correspond
to the bits in the binary number.
The code word with n bits is expressed as

c(X) = c1·X^(n-1) + c2·X^(n-2) + ... + cn·X^0
A generator polynomial with constant coefficients is used as the divisor in a polynomial
long division over a finite field.
The input data (the block of input bits, called the message polynomial) is taken as the
dividend, and the remainder of the division becomes the FCS.
Note: Even parity is a special case of a single-bit CRC, where the single-bit FCS is
generated by the generator polynomial (divisor) x+1.
Message polynomial is divided by Generator polynomial giving quotient and remainder,
the coefficients of the remainder form the bits of final CRC.
Define:

M = the original frame (k bits) to be transmitted, before adding the Frame Check Sequence (FCS)
F = the resulting FCS of r bits to be added to M (usually r = 8, 16 or 32)
T = the cascading of M and F
G = the predefined CRC generating polynomial, with a pattern of r + 1 bits

The main idea in the CRC algorithm is that the FCS is generated so that the remainder of T/G
is zero.
Example:
Find the codewords c(x) if
m(x) = 1 + x + x2
and
g(x) = 1 + x + x3 for (7,4) CRC.
Answer:
We have
n = Total number of bits = 7,
k = Number of information bits = 4,
r = Number of parity bits = n - k = 3.
Remember

c(X) = c1·X^(n-1) + c2·X^(n-2) + ... + cn·X^0

m(x) = (0)x^3 + (1)x^2 + (1)x + (1)  ->  0 1 1 1
and
g(x) = (1)x^3 + (0)x^2 + (1)x + (1)  ->  1 0 1 1

Appending r = 3 zeros to the message (that is, multiplying m(x) by x^3) gives 0 1 1 1 0 0 0.
Dividing this by g(x) = 1 0 1 1 using modulo-2 arithmetic leaves the remainder 0 1 0, so the
FCS is 010 and the codeword is

c = 0 1 1 1 0 1 0,  i.e.  c(x) = x^5 + x^4 + x^3 + x
CRC Code Capabilities
An error E(X) will only be undetectable if it is divisible by G(X). It can be shown that all
of the following errors are not divisible by a suitably chosen G(X) and hence are
detectable:
All single-bit errors, if G(X) has more than one non-zero term.
All double-bit errors, as long as G(X) has a factor with at least three terms.
Any odd number of errors, as long as G(X) contains a factor (X + 1)
Any burst error for which the length of the burst is less than or equal to (n - k);
that is, less than or equal to the length of the FCS
A fraction of error bursts of length n - k + 1; the fraction equals 1 - 2^-(n-k-1)
A fraction of error bursts of length greater than n - k + 1; the fraction equals 1 - 2^-(n-k)
Common CRC Codes

Code          Generator polynomial g(x)             Parity check bits
CRC-12        1 + x + x^2 + x^3 + x^11 + x^12       12
CRC-16        1 + x^2 + x^15 + x^16                 16
CRC-CCITT     1 + x^5 + x^12 + x^16                 16

The CRC-12 system is used for transmission of streams of 6-bit characters and generates
a 12-bit FCS.
Both CRC-16 and CRC-CCITT are popular for 8-bit characters, in the United States and
Europe respectively, and both result in a 16-bit FCS.
5.4 Error Detection & Correction Schemes (Forward Error Correction Schemes)
Correction of errors using an error detection code requires that the block of data be
retransmitted, using the ARQ disciplines explained in Section 5.2.1. For wireless
applications this approach is inadequate for two reasons.
The bit error rate on a wireless link can be quite high, which would result in a
large number of retransmissions.
In some cases, especially satellite links, the propagation delay is very long
compared to the transmission time of a single frame. The result is a very
inefficient system.
Thus, Error Detection & Correction Schemes or more commonly called Forward Error
Correction (FEC) Schemes are inevitable.
The key idea of FEC is to transmit enough redundant data to allow the receiver to recover
from errors all by itself; no retransmission by the sender is required.
There are two main types of forward error correction coding scheme:
Block coding where a group (block) of bits is processed as a whole in order to
generate a new (longer) coded block for transmission. A complementary block
decoder is used in the receiver.
Convolutional coding operates on the incoming serial bit stream generating a
real-time encoded serial output stream. A complementary serial (convolutional)
decoder is used in the receiver.
In Forward Error Correction Schemes, there are 4 possible outcomes:
If there are no bit errors, the input to the FEC decoder is identical to the original
codeword, and the decoder produces the original data block as output.
For certain error patterns, it is possible for the decoder to detect and correct those
errors. Thus, even though the incoming data block differs from the transmitted
codeword, the FEC decoder is able to map this block into the original data block.
For certain error patterns, the decoder can detect but not correct the errors. In this
case, the decoder simply reports an uncorrectable error.
For certain, typically rare, error patterns, the decoder does not detect that any
errors have occurred and maps the incoming n-bit data block into a k-bit block
that differs from the original k-bit block.
5.4.1 FECC Block Codes
Information is divided into blocks of length k. r parity bits or check bits are added to each
block (total length n = k+r).
Each k-bit sequence is mapped into a unique n-bit codeword
An (n, k) block code encodes k data bits into n-bit codewords.
There are 2k valid codewords out of a total of 2n possible codewords.
The ratio of redundant bits to data bits, (n - k) / k, is called the Redundancy of the
code.
The ratio of data bits (k) to total bits (n), is called the Code Rate, R = k/n.
Decoder looks for codeword closest to received vector (code vector + error vector)
Tradeoffs between Efficiency, Reliability and Encoding/Decoding complexity.
Block Codes - Principles
The Hamming distance d(v1, v2) between two n-bit binary sequences v1 and v2 is the
number of bits in which v1 and v2 disagree.
For example, if v1 = 011011, v2 = 110001. Then d(v1, v2) = 3
For a code consisting of the codewords w1, w2, ..., ws, where s = 2^k, the Minimum
Distance dmin of the code is defined as:

dmin = min over i ≠ j of d(wi, wj)

The number of errors, t, that can be detected satisfies:

t ≤ dmin - 1

The maximum number of guaranteed correctable errors per codeword satisfies:

t = (dmin - 1) / 2, rounded down to the nearest integer
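A small C sketch (not part of the original notes) that computes the Hamming distance for the pair of sequences above and applies the detection and correction bounds, using dmin = 3 as an assumed example value.

#include <stdio.h>

/* The Hamming distance between two n-bit sequences is the number of bit
 * positions in which they differ, computed here by counting the 1s in
 * their XOR.  v1 = 011011 and v2 = 110001 are the values from the text,
 * giving d(v1, v2) = 3. */

static int hamming_distance(unsigned a, unsigned b)
{
    unsigned diff = a ^ b;       /* 1 wherever the two sequences disagree */
    int count = 0;
    while (diff) {
        count += diff & 1u;
        diff >>= 1;
    }
    return count;
}

int main(void)
{
    unsigned v1 = 0x1B;          /* 011011 */
    unsigned v2 = 0x31;          /* 110001 */
    int dmin = 3;                /* assumed minimum distance, e.g. a Hamming code */

    printf("d(v1, v2) = %d\n", hamming_distance(v1, v2));          /* 3 */
    printf("errors guaranteed detectable : %d\n", dmin - 1);       /* 2 */
    printf("errors guaranteed correctable: %d\n", (dmin - 1) / 2); /* 1 */
    return 0;
}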
Linear Block Codes and Examples
A Block Code is linear if and only if the modulo-2 sum of any two codewords is also a
valid codeword.
Linear Block Codes Hamming Codes

Hamming codes are a family of (n, k) block error-correcting codes that have the
following parameters:

Block length:           n = 2^x - 1
Number of data bits:    k = 2^x - x - 1
Number of check bits:   r = n - k = x
Minimum distance:       dmin = 3

where x ≥ 3. For example, x = 3 gives the (7, 4) Hamming code with r = 3 check bits.
Linear Block Codes Cyclic Codes
Most of the error-correcting block codes that are in use are in a category called cyclic
codes.
For such codes, if the n-bit sequence
c = (c0, c1, ... , cn-1)
is a valid codeword, then
(cn-1, c0, c1, ..., cn-2),
which is formed by cyclically shifting c one place to the right, is also a valid codeword.
This class of codes can be easily encoded and decoded using linear feedback shift
registers (LFSRs).
Examples of cyclic codes include the Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon codes.
Block Codes Codewords Generation and Error Detection

Let the uncoded k data bits be represented by the message vector:

m = (m1, m2, ..., mk)

and the corresponding codeword by the n-bit code vector:

c = (c1, c2, ..., ck, ck+1, ..., cn-1, cn)

Each parity bit is a weighted modulo-2 sum of the data bits, the modulo-2 sum being
represented by the symbol ⊕:

c1 = m1
c2 = m2
...
ck = mk
c(k+1) = m1·p1(k+1) ⊕ m2·p2(k+1) ⊕ ... ⊕ mk·pk(k+1)
...
cn     = m1·p1n ⊕ m2·p2n ⊕ ... ⊕ mk·pkn
The codeword C of the Linear Block Code is generated as

C = m G

where m is the information (message) vector and G is the generator matrix.

Generator matrix:

G = [Ik | P]   (a k x n matrix)

where pi = Remainder of [ x^(n-k+i-1) / g(x) ] for i = 1, 2, ..., k, and Ik is the k x k unit matrix.

The parity check matrix:

H = [P^T | I(n-k)]

where P^T is the transpose of the matrix P.
Operations of the generator matrix and the parity check matrix
[Figure: The message vector m multiplied by the generator matrix G produces the code vector C; the code vector C multiplied by the parity check matrix H^T produces the null vector 0.]
The parity check matrix H is used to detect errors in the received code by using the fact that

c · H^T = 0   (the null vector)

Let x = c ⊕ e be the received vector, where c is the correct codeword and e is the error
vector. Compute the syndrome:

s = x H^T
  = (c ⊕ e) H^T
  = c H^T ⊕ e H^T
  = e H^T

If s is 0, the message is assumed to be correct; otherwise there are errors in it, and from the
common known error patterns the correct message can be decoded.
Block Codes Generator Matrix (Example)
Find linear block code encoder G if code generator polynomial g(x)=1+x+x3 for a (7, 4)
code.
We have
n = Total number of bits = 7,
k = Number of information bits = 4,
r = Number of parity bits = n - k = 3.
G = [I | P] =
    | 1 0 ... 0   p1 |
    | 0 1 ... 0   p2 |
    | ...            |
    | 0 0 ... 1   pk |

where

pi = Remainder of [ x^(3+i-1) / g(x) ]

for i = 1, 2, ..., k, and I is the unit matrix.
p1 = Rem[ x^3 / (1 + x + x^3) ] = 1 + x          ->  [0 1 1]
p2 = Rem[ x^4 / (1 + x + x^3) ] = x + x^2        ->  [1 1 0]
p3 = Rem[ x^5 / (1 + x + x^3) ] = 1 + x + x^2    ->  [1 1 1]
p4 = Rem[ x^6 / (1 + x + x^3) ] = 1 + x^2        ->  [1 0 1]

Therefore

    | 0 1 1 |
P = | 1 1 0 |
    | 1 1 1 |
    | 1 0 1 |

and

    | 1 0 0 0   0 1 1 |
G = | 0 1 0 0   1 1 0 |
    | 0 0 1 0   1 1 1 |
    | 0 0 0 1   1 0 1 |
Block Codes Codewords Generation (Example)

Consider a (7,4) linear block code, where

    | 1 0 0 0 0 1 1 |
G = | 0 1 0 0 1 1 0 |
    | 0 0 1 0 1 1 1 |
    | 0 0 0 1 1 0 1 |

Show that the codeword c = [1 0 1 1 0 0 1] for input m = [1 0 1 1].

                             | 1 0 0 0 0 1 1 |
Codeword, c = mG = [1 0 1 1] | 0 1 0 0 1 1 0 |
                             | 0 0 1 0 1 1 1 |
                             | 0 0 0 1 1 0 1 |

= [ 1(1)⊕0(0)⊕1(0)⊕1(0),  1(0)⊕0(1)⊕1(0)⊕1(0),  1(0)⊕0(0)⊕1(1)⊕1(0),  1(0)⊕0(0)⊕1(0)⊕1(1),
    1(0)⊕0(1)⊕1(1)⊕1(1),  1(1)⊕0(1)⊕1(1)⊕1(0),  1(1)⊕0(0)⊕1(1)⊕1(1) ]

= [1 0 1 1 0 0 1]
Block Codes Error Detection with No Error (Example)
Assuming the information is m = [1 0 1 1].
Hence the data with FECC to be transmitted is c = [1 0 1 1 0 0 1]
If there is no error in transmission, error vector, e = [0 0 0 0 0 0 0]
The received vector,

x = c ⊕ e = [1 0 1 1 0 0 1] ⊕ [0 0 0 0 0 0 0] = [1 0 1 1 0 0 1] = c

Calculate s = x H^T, where H = [P^T | I].

As calculated earlier,

    | 0 1 1 |                     | 0 1 1 1 |
P = | 1 1 0 |     so that   P^T = | 1 1 1 0 |
    | 1 1 1 |                     | 1 0 1 1 |
    | 1 0 1 |

Therefore

                | 0 1 1 1 1 0 0 |
H = [P^T | I] = | 1 1 1 0 0 1 0 |
                | 1 0 1 1 0 0 1 |

and

      | 0 1 1 |
      | 1 1 0 |
      | 1 1 1 |
H^T = | 1 0 1 |
      | 1 0 0 |
      | 0 1 0 |
      | 0 0 1 |

s = x H^T = [1 0 1 1 0 0 1] H^T

  = [ 1(0)⊕0(1)⊕1(1)⊕1(1)⊕0(1)⊕0(0)⊕1(0),
      1(1)⊕0(1)⊕1(1)⊕1(0)⊕0(0)⊕0(1)⊕1(0),
      1(1)⊕0(0)⊕1(1)⊕1(1)⊕0(0)⊕0(0)⊕1(1) ]

  = [0 0 0]
Block Codes Error Detection with 1-bit Error (Example)
Now, assuming the 4th bit is corrupted at the receiving end.
In other words, the error vector e = [0 0 0 1 0 0 0]
Thus, the received vector,

x = c ⊕ e = [1 0 1 1 0 0 1] ⊕ [0 0 0 1 0 0 0] = [1 0 1 0 0 0 1]

The position of the erroneous bit can be identified by calculating s and comparing it with
the rows of H^T.

s = x H^T = [1 0 1 0 0 0 1] H^T

  = [ 1(0)⊕0(1)⊕1(1)⊕0(1)⊕0(1)⊕0(0)⊕1(0),
      1(1)⊕0(1)⊕1(1)⊕0(0)⊕0(0)⊕0(1)⊕1(0),
      1(1)⊕0(0)⊕1(1)⊕0(1)⊕0(0)⊕0(0)⊕1(1) ]

  = [1 0 1]

Referring to H^T,

      | 0 1 1 |
      | 1 1 0 |
      | 1 1 1 |
H^T = | 1 0 1 |
      | 1 0 0 |
      | 0 1 0 |
      | 0 0 1 |

the syndrome [1 0 1] matches row 4, so it is identified that the erroneous bit is bit 4.
Block Codes Error Detection with > 1-bit Error (Example)
Now, assuming the 2nd bit and 4th bit are corrupted at the receiving end.
In other words, the error vector e = [0 1 0 1 0 0 0]
Thus, the received vector,

x = c ⊕ e = [1 0 1 1 0 0 1] ⊕ [0 1 0 1 0 0 0] = [1 1 1 0 0 0 1]

Try to identify the erroneous bits!

s = x H^T = [1 1 1 0 0 0 1] H^T

  = [ 1(0)⊕1(1)⊕1(1)⊕0(1)⊕0(1)⊕0(0)⊕1(0),
      1(1)⊕1(1)⊕1(1)⊕0(0)⊕0(0)⊕0(1)⊕1(0),
      1(1)⊕1(0)⊕1(1)⊕0(1)⊕0(0)⊕0(0)⊕1(1) ]

  = [0 1 1]

Referring to H^T, the syndrome [0 1 1] matches row 1, which would wrongly point to bit 1.

Oops! Something is wrong!

=> We have hit the limitation of the (7, 4) block code!
   (Refer to the section Block Codes - Principles.)
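The matrix calculations in these examples can be reproduced with a short C sketch such as the following; the generator matrix, H^T and the corrupted bit follow the examples above, and the brute-force syndrome lookup is only an illustration of single-error correction.

#include <stdio.h>

/* Sketch of the (7,4) linear block code examples: encode m with the
 * generator matrix G, then compute the syndrome s = x H^T at the receiver.
 * Bit 1 of the examples corresponds to array index 0 here. */

static const int G[4][7] = {
    {1,0,0,0, 0,1,1},
    {0,1,0,0, 1,1,0},
    {0,0,1,0, 1,1,1},
    {0,0,0,1, 1,0,1},
};

static const int HT[7][3] = {      /* rows of H transposed */
    {0,1,1}, {1,1,0}, {1,1,1}, {1,0,1}, {1,0,0}, {0,1,0}, {0,0,1},
};

static void encode(const int m[4], int c[7])
{
    int i, j;
    for (j = 0; j < 7; ++j) {
        c[j] = 0;
        for (i = 0; i < 4; ++i)
            c[j] ^= m[i] & G[i][j];          /* modulo-2 sum */
    }
}

static void syndrome(const int x[7], int s[3])
{
    int i, j;
    for (j = 0; j < 3; ++j) {
        s[j] = 0;
        for (i = 0; i < 7; ++i)
            s[j] ^= x[i] & HT[i][j];
    }
}

int main(void)
{
    int m[4] = {1,0,1,1};
    int c[7], s[3], i;

    encode(m, c);
    printf("codeword : ");
    for (i = 0; i < 7; ++i) printf("%d", c[i]);      /* 1011001 */

    c[3] ^= 1;                     /* corrupt bit 4, as in the example     */
    syndrome(c, s);
    printf("\nsyndrome : %d%d%d", s[0], s[1], s[2]); /* 101 = row 4 of H^T */

    for (i = 0; i < 7; ++i)        /* locate the single-bit error          */
        if (s[0] == HT[i][0] && s[1] == HT[i][1] && s[2] == HT[i][2])
            printf("\nerror in bit %d\n", i + 1);
    return 0;
}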
5.4.2 FECC Convolutional Codes
Encoding of continuous information stream rather than information blocks.
Easy implementation using shift registers (assuming k inputs and n outputs)
The encoder has memory and stores the past input data in the shift registers and performs
modulo-2 sum of some of the past inputs with the current input to form the output bits.
The encoder is written as (n, k, K) encoder, where K is the constraint length defined as the
number of shift registers+1.
Generator vectors are commonly used to represent the actual behaviour of the encoder.
Several algorithms exist for decoding convolutional codes. For relatively small values of
K, Viterbi Algorithm is widely used. Longer constraint length codes are more practically
decoded with any of several sequential decoding algorithms.
Convolutional Codes Encoder
The following example shows a (2,1,3) convolutional encoder
(n, k, K) = (2, 1, 3)  =>  2 outputs, 1 input, constraint length 3
with generator vectors g1 = [1 0 1] and g2 = [1 1 0]
From the above figure:
There are two generator vectors (each K bits long) corresponding to the (n = 2)
output streams.
The bits from left to right represent the connections in the encoder circuit:
A 1 represents a link (which feeds the modulo-2 sum), and
A 0 represents no link.
Depending on its generator vectors, a (2,1,3) encoder may take various forms.
Generators are often expressed in octal format by grouping 3 bits together.
Therefore, the generator for the above example is (101, 110) in binary, i.e. (5, 6) in octal.
Convolutional Codes State Table
To describe the behaviour of a convolutional encoder when a particular bit stream is
applied to the input, a time-indexed state table may be handy.
The following example shows the state table when the bit stream [1 0 0 1 1 0] is applied
to the input of the convolutional encoder where
(n, k, K) = (2, 1, 3)
Generator = (5, 6) in octal
(In other words, the generator vectors are g1 = [1 0 1] and g2 = [1 1 0].)
Time index, i           1    2    3    4    5    6
Input bit, x            1    0    0    1    1    0
Encoder state, m1 m2    00   10   01   00   10   11
Output, c = y1 y2       11   01   10   11   10   11
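A minimal C sketch (not part of the original notes) that reproduces the state table above for the (2,1,3) encoder with g1 = [1 0 1] and g2 = [1 1 0]; the update rules y1 = x XOR m2 and y2 = x XOR m1 follow directly from these generator vectors.

#include <stdio.h>

/* (2,1,3) convolutional encoder: m1 and m2 are the two shift-register
 * stages (the encoder state); each input bit x produces two output bits. */

int main(void)
{
    int input[] = {1, 0, 0, 1, 1, 0};
    int n = sizeof input / sizeof input[0];
    int m1 = 0, m2 = 0;                 /* encoder starts in state 00 */
    int i;

    printf(" i  x  state  y1y2\n");
    for (i = 0; i < n; ++i) {
        int x  = input[i];
        int y1 = x ^ m2;                /* taps from g1 = 1 0 1 */
        int y2 = x ^ m1;                /* taps from g2 = 1 1 0 */

        printf("%2d  %d   %d%d     %d%d\n", i + 1, x, m1, m2, y1, y2);

        m2 = m1;                        /* shift the register */
        m1 = x;
    }
    return 0;
}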
To generally describe the behaviour of a convolutional encoder, three alternative methods
are often used:
State Diagram
Trellis Diagram
Tree Diagram
Convolutional Codes State Diagram
Operation of a convolutional encoder can be described by a state diagram. The state of the
encoder is defined as the content of its shift register.
The following example shows the state diagram of the convolutional encoder where
(n, k, K) = (2, 1, 3)
Generator = (5, 6) in octal
(In other words, the generator vectors are g1 = [1 0 1] and g2 = [1 1 0].)
Convolutional Codes Trellis Diagram
The trellis diagram is another way to present the general behaviour of a convolutional
encoder. In addition, it can be used to show the path of an actual input sequence.
Referring to the convolutional encoder described above, if a bit stream of [1 0 0 1 1] is
applied to the input, the trellis diagram would look as follows:

[Figure: trellis diagram for the input sequence 1 0 0 1 1]
Convolutional Codes Tree Diagram
Just like the Trellis diagram, a Tree diagram can present the general behaviour of a
convolutional encoder as well as showing the path of an actual input sequence.
Referring to the convolutional encoder described above, if a bit stream of [1 0 0 1 1] is
applied to the input, the tree diagram would look as follows:

[Figure: tree diagram, with branches labelled by the outputs y1 y2, for the input sequence 1 0 0 1 1]
5.4.3 FECC Interleaving
[Figure: Block interleaving. The input data a1, a2, a3, a4, a5, ... is written row by row into a 4 x 4 buffer:

a1   a2   a3   a4
a5   a6   a7   a8
a9   a10  a11  a12
a13  a14  a15  a16

and read out column by column, so the transmitted order is a1, a5, a9, a13, a2, a6, a10, a14, a3, ... At the receiver, the de-interleaver writes the received data column by column into the same buffer and reads it out row by row, restoring the original order a1, a2, a3, a4, a5, ...]
[Figure: Effect of interleaving on a burst error. A burst of four consecutive errors in the transmitted data (..., 1, 1, 1, 1, ...) is written column by column into the de-interleaver and read out row by row, so after de-interleaving the errors appear as isolated (discrete) single-bit errors: 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, ...]
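A small C sketch of the 4 x 4 block interleaver and de-interleaver shown in the figures above; the buffer size and the data a1 ... a16 are taken from the figure and are only illustrative.

#include <stdio.h>

/* Block interleaver sketch: data is written into the buffer row by row and
 * read out column by column; de-interleaving does the opposite, so a burst
 * of consecutive channel errors is spread into isolated single-bit errors. */

#define ROWS 4
#define COLS 4

static void interleave(const int in[ROWS * COLS], int out[ROWS * COLS])
{
    int r, c, k = 0;
    for (c = 0; c < COLS; ++c)           /* read column by column        */
        for (r = 0; r < ROWS; ++r)
            out[k++] = in[r * COLS + c]; /* data was written row by row  */
}

static void deinterleave(const int in[ROWS * COLS], int out[ROWS * COLS])
{
    int r, c, k = 0;
    for (c = 0; c < COLS; ++c)           /* write column by column ...   */
        for (r = 0; r < ROWS; ++r)
            out[r * COLS + c] = in[k++]; /* ... read back row by row     */
}

int main(void)
{
    int data[ROWS * COLS], tx[ROWS * COLS], rx[ROWS * COLS];
    int i;

    for (i = 0; i < ROWS * COLS; ++i)
        data[i] = i + 1;                 /* a1 ... a16 from the figure   */

    interleave(data, tx);                /* transmitted order: a1 a5 a9 a13 a2 ... */
    deinterleave(tx, rx);                /* receiver restores a1 a2 a3 ...         */

    printf("transmitted order    : ");
    for (i = 0; i < ROWS * COLS; ++i) printf("a%d ", tx[i]);
    printf("\nafter de-interleaving: ");
    for (i = 0; i < ROWS * COLS; ++i) printf("a%d ", rx[i]);
    printf("\n");
    return 0;
}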
Note: It needs to be noted that in source coding techniques, removing the redundancy in the
signal reduces the data rate.
For instance, in voice coding, low-bit rate coding techniques reduce the redundancy.
In contrast, in error correcting codes, redundancy is introduced to facilitate error
correction at the receiver.
Appendix 1
Program for calculation of CRC-16.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

unsigned int CRC = 0x0000;                 /* 16-bit CRC register           */
unsigned int GenPolynomial = 0x8005;       /* Divisor for CRC-16 polynomial */

void bitBybit(int bit);

int main()
{
    unsigned int MsgLength;
    int i=0,j=0;
    char SampleMsg[] = "Hello World";
    char tempBuffer[100];
    MsgLength = sizeof(SampleMsg)-1;
    printf("\nActual Message: %s\n",SampleMsg);
    strcpy(tempBuffer, SampleMsg);
    /* pad the message with sixteen 0 bits (two zero bytes) */
    tempBuffer[MsgLength] = 0x00;
    tempBuffer[MsgLength+1] = 0x00;
    tempBuffer[MsgLength+2] = '\0';
    printf("\nAfter padding 16 0-bits to the Message:");
    for(i=0;i<MsgLength+2;++i)
    {
        unsigned char ch = tempBuffer[i];
        unsigned char mask = 0x80;
        /* feed the message into the CRC register one bit at a time, MSB first */
        for(j=0;j<8;++j)
        {
            bitBybit(ch&mask);
            mask>>=1;
        }
        printf(" ");
    }
    printf("\n\nCalculated CRC:0x%x\n\n",CRC);
    return 0;
}

void bitBybit(int bit)
{
    unsigned int firstBit = (CRC & 0x8000);   /* bit about to be shifted out   */
    CRC = (CRC << 1);
    if(bit)
    {
        CRC = CRC ^ 1;                        /* shift the next message bit in */
        printf("1");
    }
    else
    {
        CRC = CRC ^ 0;
        printf("0");
    }
    if(firstBit)
    {
        CRC = (CRC^GenPolynomial);            /* modulo-2 subtract the divisor */
    }
    CRC &= 0xFFFF;                            /* keep the register 16 bits wide */
}
In this listing, the actual message to be transmitted is "Hello World". The message is padded
with sixteen 0 bits, and the message bit stream is
01001000 01100101 01101100 01101100 01101111 00100000 01010111
01101111 01110010 01101100 01100100 00000000 00000000
The program then displays the calculated 16-bit CRC value in hexadecimal notation.
Appendix 2
Program for calculation of CRC-32.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

unsigned long CRC = 0x00000000UL;              /* 32-bit CRC register           */
unsigned long GenPolynomial = 0x04c11db7UL;    /* Divisor for CRC-32 polynomial */

void bitBybit(int bit);

int main()
{
    unsigned int MsgLength;
    int i=0,j=0;
    char SampleMsg[] = "Hello World";
    char tempBuffer[100];
    MsgLength = sizeof(SampleMsg)-1;
    printf("\nActual Message: %s\n",SampleMsg);
    strcpy(tempBuffer, SampleMsg);
    /* pad the message with thirty-two 0 bits (four zero bytes) */
    tempBuffer[MsgLength] = 0x00;
    tempBuffer[MsgLength+1] = 0x00;
    tempBuffer[MsgLength+2] = 0x00;
    tempBuffer[MsgLength+3] = 0x00;
    tempBuffer[MsgLength+4] = '\0';
    printf("\nAfter padding 32 0-bits to the Message:");
    for(i=0;i<MsgLength+4;++i)
    {
        unsigned char ch = tempBuffer[i];
        unsigned char mask = 0x80;
        /* feed the message into the CRC register one bit at a time, MSB first */
        for(j=0;j<8;++j)
        {
            bitBybit(ch&mask);
            mask>>=1;
        }
        printf(" ");
    }
    printf("\n\nCalculated CRC:0x%lx\n\n",CRC);
    return 0;
}

void bitBybit(int bit)
{
    unsigned long firstBit = (CRC & 0x80000000UL); /* bit about to be shifted out   */
    CRC = (CRC << 1);
    if(bit)
    {
        CRC = CRC ^ 1;                             /* shift the next message bit in */
        printf("1");
    }
    else
    {
        CRC = CRC ^ 0;
        printf("0");
    }
    if(firstBit)
    {
        CRC = (CRC^GenPolynomial);                 /* modulo-2 subtract the divisor */
    }
    CRC &= 0xFFFFFFFFUL;                           /* keep the register 32 bits wide */
}
This listing gives the C program to calculate CRC-32. In this program the message for which
CRC has to be calculated is "Hello World". The message bit stream is
01001000 01100101 01101100 01101100 01101111 00100000 01010111
01101111 01110010 01101100 01100100 00000000 00000000 00000000
00000000
The calculated CRC is 0x31d1680c.