Linear block codes add redundancy by encoding k information bits into blocks of n coded bits, forming (n, k) block codes. The encoder uses a generator matrix to map the k message bits to an n-bit codeword. In systematic encoding, the k message bits appear unchanged in the codeword alongside (n-k) parity check bits. The parity check matrix H defines the parity check equations used to detect errors in the received codeword, and the syndrome of a received word identifies the error pattern. The minimum distance of a block code is the smallest Hamming distance between distinct codewords and determines its error correction capability. Linear block codes are widely used in communications and storage because error detection and correction are simple to implement.
CONTENTS
Introduction
Linear block codes
Generator matrix
Systematic encoding
Parity check matrix
Syndrome and error detection
Minimum distance of block codes
Applications
Advantages and Disadvantages
INTRODUCTION
The purpose of error control coding is to enable the
receiver to detect or even correct errors by introducing
some redundancy into the data to be transmitted.
There are basically two mechanisms for adding
redundancy:
1. Block coding
2. Convolutional coding
LINEAR BLOCK CODES
The encoder generates a block of n coded bits from k information
bits; this is called an (n, k) block code.
The coded bits are also called code word symbols.
Why linear?
A code is linear if the modulo-2 sum of two code words is
also a code word.
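This closure property can be checked by brute force; the sketch below (assuming NumPy is available) uses the (7, 4) generator matrix from the worked example later in the deck:

```python
import numpy as np
from itertools import product

# Generator matrix of the (7, 4) code used in the later example.
G = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
])

# All 2^k = 16 code words of the code.
codewords = {tuple(np.mod(np.array(m) @ G, 2)) for m in product([0, 1], repeat=4)}

# Linearity: the modulo-2 sum of any two code words is again a code word.
is_linear = all(
    tuple((np.array(a) + np.array(b)) % 2) in codewords
    for a in codewords for b in codewords
)
print(is_linear)  # True
```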
The n codeword symbols can take 2^n possible values. From
these we select 2^k code words to form the code.
A block code is useful only when there is a one-to-one
mapping between a message m and its code word c.
GENERATOR MATRIX
All code words can be obtained as linear combinations of a set of basis vectors.
The basis vectors can be designated as {g1, g2, g3, ..., gk}.
For a linear code, there exists a k-by-n generator matrix G such that
c_(1xn) = m_(1xk) . G_(kxn)
where c = (c1, c2, ..., cn) and m = (m1, m2, ..., mk)
BLOCK CODES IN SYSTEMATIC FORM
In this form, the code word consists of (n-k) parity check bits
followed by the k bits of the message, so the code word has the
structure (parity part | message part).
The rate or efficiency of the code is R = k/n.
G = [P  I_k]
c = m.G = [mP  m]
          (parity part, message part)

Example:
Consider the (7, 4) linear code with k = 4 and n = 7, and let m = (1110).

      [1 1 0 1 0 0 0]   <- g1
G  =  [0 1 1 0 1 0 0]   <- g2
      [1 1 1 0 0 1 0]   <- g3
      [1 0 1 0 0 0 1]   <- g4

c = m.G = m1.g1 + m2.g2 + m3.g3 + m4.g4
        = 1.g1 + 1.g2 + 1.g3 + 0.g4
c = (1101000)+ (0110100) + (1110010)
= (0101110)
Another method:
Let m = (m1, m2, m3, m4) and c = (c1, c2, c3, c4, c5, c6, c7).

                             [1 1 0 1 0 0 0]
c = m.G = (m1, m2, m3, m4) . [0 1 1 0 1 0 0]
                             [1 1 1 0 0 1 0]
                             [1 0 1 0 0 0 1]

By matrix multiplication (modulo 2) we obtain:
c1 = m1 + m3 + m4,  c2 = m1 + m2 + m3,  c3 = m2 + m3 + m4,
c4 = m1,  c5 = m2,  c6 = m3,  c7 = m4
The code word corresponding to the message (1110) is (0101110).
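The encoding above can be reproduced with a short sketch (a minimal example assuming NumPy; all arithmetic is modulo 2):

```python
import numpy as np

# Generator matrix G = [P | I_4] of the (7, 4) code from the example.
G = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
])

def encode(m, G):
    """Encode the message vector m with generator matrix G over GF(2)."""
    return np.mod(np.array(m) @ G, 2)

c = encode([1, 1, 1, 0], G)
print(c)  # [0 1 0 1 1 1 0]
```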
PARITY CHECK MATRIX (H)
When G is systematic, it is easy to determine the
parity check matrix H as:
H = [I_(n-k)  P^T]
The parity check matrix H of a generator matrix is
an (n-k)-by-n matrix satisfying:
H_((n-k)xn) . G^T_(nxk) = 0
Then the code words must satisfy the (n-k) parity
check equations:
c_(1xn) . H^T_(nx(n-k)) = m_(1xk) . G_(kxn) . H^T_(nx(n-k)) = 0
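The relation H.G^T = 0 can be checked numerically; this sketch (assuming NumPy) builds G = [P I_4] and H = [I_3 P^T] for the example code:

```python
import numpy as np

# Parity part P of the example (7, 4) code, so that G = [P | I_4]
# and the corresponding parity check matrix is H = [I_3 | P^T].
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])
H = np.hstack([np.eye(3, dtype=int), P.T])

# H.G^T = 0 over GF(2): every code word passes all (n-k) parity checks.
print(np.mod(H @ G.T, 2))  # 3x4 all-zero matrix
```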
SYNDROME AND ERROR DETECTION
For a code word c transmitted over a noisy channel, let r be
the received vector at the output of the channel, with error vector e:
r = c + e
where, for each bit position i,
e_i = 1 if r_i ≠ c_i, and e_i = 0 if r_i = c_i.
The syndrome of the received vector r is given by:
s = r.H^T = (s1, s2, s3, ..., s_(n-k))
Properties of syndrome:
The syndrome depends only on the error pattern and
not on the transmitted code word:
s = (c + e).H^T = c.H^T + e.H^T = e.H^T
All error patterns that differ by a code word
have the same syndrome s.
Example: suppose r = (0001110) is received. Its syndrome
identifies the error vector
e = (e1, e2, e3, e4, e5, e6, e7) = (0100000), so
c* = r + e
   = (0001110) + (0100000)
   = (0101110)
where c* is the actual transmitted code word.
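Syndrome decoding for single-bit errors can be sketched as follows (assuming NumPy; a lookup table maps the syndrome of each single-bit error pattern to that pattern, which works for this example code):

```python
import numpy as np

# Parity check matrix H = [I_3 | P^T] of the example (7, 4) code.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
H = np.hstack([np.eye(3, dtype=int), P.T])

def syndrome(r):
    """Syndrome s = r.H^T over GF(2)."""
    return tuple(np.mod(np.array(r) @ H.T, 2))

# Table mapping the syndrome of each single-bit error to that error.
table = {}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[syndrome(e)] = e

r = np.array([0, 0, 0, 1, 1, 1, 0])  # received word from the example
e = table[syndrome(r)]               # decoded error pattern (0100000)
c = np.mod(r + e, 2)                 # corrected code word
print(c)  # [0 1 0 1 1 1 0]
```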
MINIMUM DISTANCE OF A BLOCK CODE
Hamming weight w(c): the number of non-zero components of c.
For example, the Hamming weight of c = (11000110) is 4.
Hamming distance d(c, x): the number of places in which c and x differ.
The Hamming distance between c = (11000110) and x = (00100100) is 4.
The Hamming distance satisfies the triangle inequality:
d(c, x) + d(x, y) ≥ d(c, y)
The Hamming distance between two n-tuples c and x is equal to the
Hamming weight of the sum of c and x:
d(c, x) = w(c + x)
For example, the Hamming distance between c = (11000110) and
x = (00100100) is 4, and the weight of c + x = (11100010) is 4.
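These definitions translate directly into code (a minimal sketch):

```python
def hamming_weight(c):
    """Number of non-zero components of c."""
    return sum(bit != 0 for bit in c)

def hamming_distance(c, x):
    """Number of places in which c and x differ."""
    return sum(a != b for a, b in zip(c, x))

c = [1, 1, 0, 0, 0, 1, 1, 0]
x = [0, 0, 1, 0, 0, 1, 0, 0]
print(hamming_weight(c))       # 4
print(hamming_distance(c, x))  # 4
# d(c, x) equals the weight of the modulo-2 sum c + x:
print(hamming_weight([(a + b) % 2 for a, b in zip(c, x)]))  # 4
```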
Minimum Hamming distance d_min: the smallest Hamming
distance between any pair of code vectors in the code.
For a given block code C, d_min is defined as:
d_min = min{ d(c, x) : c, x ∈ C, c ≠ x }
Since the Hamming distance between two code vectors in C is equal
to the Hamming weight of a third code vector in C (their sum),
d_min = min{ w(c + x) : c, x ∈ C, c ≠ x }
      = min{ w(y) : y ∈ C, y ≠ 0 }
      = w_min
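For a small code, d_min can be found by enumerating all 2^k code words; this sketch applies the weight-based definition to the (7, 4) code from the earlier example (NumPy assumed):

```python
import numpy as np
from itertools import product

# Generator matrix of the (7, 4) code from the earlier example.
G = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
])

# For a linear code, d_min is the smallest Hamming weight
# of a nonzero code word.
codewords = [np.mod(np.array(m) @ G, 2) for m in product([0, 1], repeat=4)]
d_min = min(int(c.sum()) for c in codewords if c.any())
print(d_min)  # 3
```

A minimum distance of 3 means the code can correct any single-bit error, consistent with the syndrome decoding example above.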
APPLICATIONS
Communications:
Satellite and deep space communications.
Digital audio and video transmissions.
Storage:
Computer memory (RAM).
Single-error-correcting, double-error-detecting (SEC-DED) codes in ECC memory.
ADVANTAGES
It is one of the simplest
techniques for detecting
and correcting errors.
The error probability is
reduced.
DISADVANTAGES
The transmission bandwidth
requirement is higher.
The extra bits reduce the bit rate
of the transmitter and also reduce
the power available per information bit.