LINEAR BLOCK CODING 
Presented by 
NANDINI MILI 
JEEVANI KONDA
CONTENTS 
Introduction 
Linear block codes 
Generator matrix 
Systematic encoding 
Parity check matrix 
Syndrome and error detection 
Minimum distance of block codes 
Applications 
Advantages and Disadvantages
INTRODUCTION 
The purpose of error control coding is to enable the 
receiver to detect or even correct errors by introducing 
some redundancy into the data to be transmitted. 
There are basically two mechanisms for adding 
redundancy: 
1. Block coding 
2. Convolutional coding
LINEAR BLOCK CODES 
The encoder generates a block of n coded bits from k information 
bits; we call this an (n, k) block code. 
The coded bits are also called code word symbols. 
Why linear??? 
A code is linear if the modulo-2 sum of two code words is 
also a code word.
 The n code word symbols can take 2^n possible values. From 
these we select 2^k code words to form the code. 
 A block code is useful only when there is a one-to-one 
mapping between a message m and its code word c.
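As a quick check of the linearity property above, here is a minimal Python sketch using a toy (3, 2) even-parity code (the code set is illustrative, not from the slides):

```python
# A tiny linearity check: the (3, 2) even-parity code.
code = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}

def xor(a, b):
    """Modulo-2 (bitwise) sum of two code words."""
    return tuple((x + y) % 2 for x, y in zip(a, b))

# The code is linear: the modulo-2 sum of any two code words is a code word.
assert all(xor(a, b) in code for a in code for b in code)
print(xor((0, 1, 1), (1, 0, 1)))  # (1, 1, 0)
```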
GENERATOR MATRIX 
 All code words can be obtained as linear combinations of the basis vectors. 
 The basis vectors can be designated as {g1, g2, g3, ….., gk} 
 For a linear code, there exists a k-by-n generator matrix G such that 
c_(1×n) = m_(1×k) · G_(k×n) 
where c = (c1, c2, ….., cn) and m = (m1, m2, ……., mk)
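As a sketch, the product c = m·G (all arithmetic modulo 2) takes only a few lines of Python; the small (3, 2) generator matrix below is illustrative, not from the slides:

```python
def encode(m, G):
    """Encode a k-bit message m with a k-by-n generator matrix G over GF(2)."""
    k, n = len(G), len(G[0])
    assert len(m) == k
    # Each code bit is the mod-2 sum of message bits weighted by a column of G.
    return [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

# A toy (3, 2) code for illustration:
G = [[1, 0, 1],
     [0, 1, 1]]
print(encode([1, 1], G))  # [1, 1, 0]
```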
BLOCK CODES IN SYSTEMATIC FORM 
In this form, the code word consists of the (n-k) parity check bits 
followed by the k bits of the message. 
The rate or efficiency for this code is R = k/n. 
With the message bits in the last k positions, the generator matrix and code word take the form: 
G = [P I_k] 
C = m.G = [mP m] 
(parity part mP, followed by message part m)
Example: 
Let us consider the (7, 4) linear code where k=4 and n=7. 
Let m=(1110) and 
G = 
g1:  1 1 0 1 0 0 0 
g2:  0 1 1 0 1 0 0 
g3:  1 1 1 0 0 1 0 
g4:  1 0 1 0 0 0 1 
c = m.G = m1·g1 + m2·g2 + m3·g3 + m4·g4 
       = 1·g1 + 1·g2 + 1·g3 + 0·g4
c = (1101000) + (0110100) + (1110010) 
= (0101110) 
Another method: 
Let m=(m1, m2, m3, m4) and c=(c1, c2, c3, c4, c5, c6, c7) 
c = m.G = (m1, m2, m3, m4) · 
1 1 0 1 0 0 0 
0 1 1 0 1 0 0 
1 1 1 0 0 1 0 
1 0 1 0 0 0 1 
By matrix multiplication we obtain: 
c1 = m1 + m3 + m4,  c2 = m1 + m2 + m3,  c3 = m2 + m3 + m4,  c4 = m1, 
c5 = m2,  c6 = m3,  c7 = m4 
The code word corresponding to the message (1110) is (0101110).
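The encoding above can be reproduced with a short Python sketch using the slide's generator matrix:

```python
# Systematic encoding with the (7, 4) generator matrix from the example.
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

def encode(m):
    """c = m.G over GF(2): each code bit is a mod-2 column sum."""
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

print(encode([1, 1, 1, 0]))  # [0, 1, 0, 1, 1, 1, 0] -> code word (0101110)
```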
PARITY CHECK MATRIX (H) 
 When G is systematic, it is easy to determine the 
parity check matrix H as: 
H = [I_(n-k) P^T] 
 The parity check matrix H of a generator matrix is 
an (n-k)-by-n matrix satisfying: 
H_((n-k)×n) · G^T_(n×k) = 0 
 Then the code words should satisfy (n-k) parity 
check equations: 
c_(1×n) · H^T_(n×(n-k)) = m_(1×k) · G_(k×n) · H^T_(n×(n-k)) = 0
Example: 
Consider the generator matrix of the (7, 4) linear block code, 
with H = [I_(n-k) P^T] and G = [P I_k]. 
The corresponding parity check matrix is: 
H = 
1 0 0 1 0 1 1 
0 1 0 1 1 1 0 
0 0 1 0 1 1 1 
Multiplying G by H^T (the 7-by-3 transpose of H): 
G.H^T = 
1 1 0 1 0 0 0        1 0 0 
0 1 1 0 1 0 0        0 1 0 
1 1 1 0 0 1 0   ·    0 0 1 
1 0 1 0 0 0 1        1 1 0 
                     0 1 1 
                     1 1 1 
                     1 0 1 
= 0
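The identity G.H^T = 0 can be checked numerically; the sketch below uses the G and H of this example:

```python
# Verifying G.H^T = 0 over GF(2) for the (7, 4) code above.
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

# Entry (i, j) of G.H^T is the mod-2 dot product of row i of G and row j of H.
GHt = [[sum(g * h for g, h in zip(grow, hrow)) % 2 for hrow in H] for grow in G]
print(GHt)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
```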
SYNDROME AND ERROR DETECTION 
 For a code word c transmitted over a noisy channel, let r be 
the received vector at the output of the channel, with error pattern e: 
r = c + e 
where e_i = 1 if r_i ≠ c_i, and e_i = 0 if r_i = c_i. 
The syndrome of the received vector r is given by: 
s = r.H^T = (s1, s2, s3, ……, s_(n-k))
Properties of the syndrome: 
 The syndrome depends only on the error pattern and 
not on the transmitted code word: 
s = (c + e).H^T = c.H^T + e.H^T = e.H^T 
 All error patterns that differ by a code word 
have the same syndrome s.
Example: 
Let C=(0101110) be the transmitted code and r=(0001110) be the 
received vector. 
s = r.H^T = (s1, s2, s3) 
  = (r1, r2, r3, r4, r5, r6, r7) · 
1 0 0 
0 1 0 
0 0 1 
1 1 0 
0 1 1 
1 1 1 
1 0 1 
The syndrome digits are: 
푠1 = 푟1 + 푟4 + 푟6 + 푟7 = 0 
푠2 = 푟2 + 푟4 + 푟5 + 푟6 = 1 
푠3 = 푟3 + 푟5 + 푟6 + 푟7 = 0
The error vector is e = (e1, e2, e3, e4, e5, e6, e7) = (0100000) 
c* = r + e 
   = (0001110) + (0100000) 
   = (0101110) 
where c* is the actual transmitted code word.
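The whole correction step can be sketched in Python: the syndrome of a single-bit error equals the corresponding column of H, which locates the bit to flip.

```python
# Syndrome decoding of the slide's example (single-error correction sketch).
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]
r = [0, 0, 0, 1, 1, 1, 0]  # received vector (0001110)

# s = r.H^T: one mod-2 dot product per row of H.
s = [sum(h * x for h, x in zip(row, r)) % 2 for row in H]
print(s)  # [0, 1, 0]

# A single-bit error at position p gives a syndrome equal to column p of H,
# so find that column and flip the corresponding received bit.
columns = list(zip(*H))
p = columns.index(tuple(s))
c = r[:]
c[p] ^= 1
print(c)  # [0, 1, 0, 1, 1, 1, 0] -> the transmitted code word (0101110)
```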
MINIMUM DISTANCE OF A BLOCK CODE 
Hamming weight w(c ) : It is defined as the number of non-zero 
components of c. 
For ex: The hamming weight of c=(11000110) is 4 
Hamming distance d(c, x): It is defined as the number of places where 
c and x differ. 
The hamming distance between c=(11000110) and x=(00100100) is 4 
 The hamming distance satisfies the triangle inequality 
d(c, x)+d(x, y) ≥ d(c, y) 
 The hamming distance between two n-tuple c and x is equal to the 
hamming weight of the sum of c and x 
d(c, x) = w( c+ x) 
For ex: The hamming distance between c=(11000110) and 
x=(00100100) is 4, and the weight of c + x = (11100010) is also 4.
 Minimum hamming distance d_min: It is defined as the smallest 
distance between any pair of code vectors in the code. 
For a given block code C, d_min is defined as: 
d_min = min{ d(c, x) : c, x ∈ C, c ≠ x } 
 Since the Hamming distance between two code vectors in C is equal 
to the Hamming weight of a third code vector in C: 
d_min = min{ w(c + x) : c, x ∈ C, c ≠ x } 
      = min{ w(y) : y ∈ C, y ≠ 0 } 
      = w_min
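For a small code, d_min can be found by enumeration. The sketch below builds all 2^4 code words of the (7, 4) code from the earlier example and takes the minimum nonzero weight:

```python
from itertools import product

# Computing d_min of the earlier (7, 4) code by enumerating all code words.
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

def encode(m):
    return tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

codewords = {encode(m) for m in product([0, 1], repeat=4)}
# For a linear code, d_min equals the smallest weight of a nonzero code word.
d_min = min(sum(c) for c in codewords if any(c))
print(d_min)  # 3
```

A minimum distance of 3 means this code can correct any single-bit error, consistent with the syndrome decoding example above.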
APPLICATIONS 
 Communications: 
Satellite and deep space communications. 
Digital audio and video transmissions. 
 Storage: 
Computer memory (RAM). 
Single-error-correcting and double-error-detecting (SEC-DED) codes.
ADVANTAGES 
 It is the easiest and simplest technique to detect and correct errors. 
 Error probability is reduced. 
DISADVANTAGES 
 The transmission bandwidth requirement is higher. 
 The extra bits reduce the effective bit rate of the transmitter and also reduce its power per message bit.
Thank you……
