Communication Systems Lab Spring 2017 National Taiwan University
Lecture 6: Convolutional Code
Scribe: 王傑生、鄭煦 Lecture Date: 4/12, 2017 Lecturer: I-Hsiang Wang
Agenda
• Convolutional Encoding
• State Transition Diagram and Equivalent Trellis
• Decoding: Maximum Likelihood Sequence Detection (MLSD)
• MLSD via Viterbi Algorithm
Recap: Last time we showed that "random" linear block codes combined with simple modulation (binary PAM) can achieve rate-efficient and energy-efficient reliable communication.
• b·G=c
• b ∈ {0, 1}1×k
• c ∈ {0, 1}1×n
• k≤n
• rate R = k/n
Introduction to Convolutional Encoding
• In convolutional encoding, the linear transform is a "linear filter" (cf. in a linear block code, the linear transform is a "matrix multiplication").
Example 1
b = (b1 , b2 , ..., bn , ...)
c = (c_1^{(1)}, c_1^{(2)}, c_2^{(1)}, c_2^{(2)}, ..., c_n^{(1)}, c_n^{(2)}, ...)
where
c^{(1)}_n = b_n ⊕ b_{n-2} = (b ∗ g^{(1)})_n   (1st linear filter)
c^{(2)}_n = b_n ⊕ b_{n-1} ⊕ b_{n-2} = (b ∗ g^{(2)})_n   (2nd linear filter)
note: g^{(1)}_n = 1 if n = 0, 2 (0 else), and g^{(2)}_n = 1 if n = 0, 1, 2 (0 else)
• Block diagram of the two linear filters:
(Figure: block diagrams of the 1st and 2nd linear filters.)
Rate of this linear encoding: R = 1/2
• In the following, we discuss how the 2 bits stored in the 2 shift registers change with the input bit stream.
observation: length of the FIR filters = number of shift registers + 1
A running example:
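A minimal Python simulation of this encoder (a sketch under the Example 1 tap assumptions; the function name conv_encode and the test input are our own):

```python
# Rate-1/2 convolutional encoder of Example 1: g^(1) taps at n = 0, 2 and
# g^(2) taps at n = 0, 1, 2 (all arithmetic over GF(2), i.e. XOR).
def conv_encode(bits):
    s1, s2 = 0, 0              # shift registers: s1 = b_{n-1}, s2 = b_{n-2}
    out = []
    for b in bits:
        c1 = b ^ s2            # c^(1)_n = b_n XOR b_{n-2}
        c2 = b ^ s1 ^ s2       # c^(2)_n = b_n XOR b_{n-1} XOR b_{n-2}
        out += [c1, c2]
        s1, s2 = b, s1         # shift the registers
    return out

# Appending L = 2 zeros terminates the code: the registers return to (0, 0).
print(conv_encode([1, 0, 1, 1] + [0, 0]))  # -> [1,1,0,1,0,0,1,0,1,0,1,1]
```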
State Transition Diagram and Equivalent Trellis
• state transition diagram:
"state": (s1, s2), the contents of the two shift registers
• Trellis Representation:
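Both pictures can be generated mechanically from the filter equations. A small sketch (our own) that enumerates every branch of the state transition diagram for Example 1, labeled input/output:

```python
# Each state (s1, s2) has two outgoing branches, one per input bit b.
# The next state is (b, s1); the branch output is (c1, c2).
for s1 in (0, 1):
    for s2 in (0, 1):
        for b in (0, 1):
            c1, c2 = b ^ s2, b ^ s1 ^ s2
            print(f"({s1},{s2}) --{b}/{c1}{c2}--> ({b},{s1})")
```

Unrolling this diagram over time, one copy of the four states per time index, gives the trellis.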
Generalization
• c^{(l)} = Σ_{i=1}^{k} b^{(i)} ∗ g^{(i,l)}
No feedback (FIR): G(z) is polynomial (see the example after this block).
With feedback (IIR): G(z) is rational.
R = k/n
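For instance, for Example 1 (no feedback), writing D = z^{-1} for a unit delay (our shorthand), the transfer function is the 1×2 polynomial row G = [1 + D^2, 1 + D + D^2] with coefficients over GF(2), and the encoding reads C(z) = B(z) G(z).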
• Termination: In practice, due to packetization, we have to terminate at a given time K0 ≫ L (L: number of shift registers, called the "constraint length"). When termination happens, one "resets" the registers by appending L 0's at the end. Rate = K0 / (2(K0 + L)) ≈ 1/2 when K0 ≫ L; e.g., K0 = 1000 and L = 2 give 1000/2004 ≈ 0.499.
Decoding Convolutional Code
• Maximum Likelihood Sequence Detection (MLSD):
Detection Problem:
given
y_1, y_2, ..., y_n (soft decision, y_i ∈ R or C), or
d_1, d_2, ..., d_n (hard decision, d_i ∈ {0, 1}),
find the most likely input bit sequence b̂.
From the previous lecture, we know that ML detection equals minimum-distance (MD) detection, where the distance metric is the Euclidean distance for soft decisions and the Hamming distance for hard decisions,
thus
b̂ = arg min_b ∥y − x(b)∥²   (soft decision)
b̂ = arg min_b d_H(d, c(b))   (hard decision)
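To make the search explicit before exploiting the trellis structure, here is a brute-force hard-decision MLSD sketch (our own, reusing conv_encode from the encoder sketch above):

```python
from itertools import product

def mlsd_brute(d, n):
    """Exhaustive hard-decision MLSD: try every length-n input sequence."""
    best_cost, best_b = float("inf"), None
    for b in product((0, 1), repeat=n):
        c = conv_encode(list(b))
        cost = sum(di != ci for di, ci in zip(d, c))  # Hamming distance
        if cost < best_cost:
            best_cost, best_b = cost, b
    return list(best_b)
```

This search is exponential in n; the Viterbi algorithm below reduces the cost to linear in n.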
• Key observation
The target function can be decomposed with respect to time
– 1. For soft decisions, the target function is ∥y − x(b)∥₂² = Σ_{i=1}^{n} |y_i − x_i(b)|².
– 2. For hard decisions, the target function is d_H(d, c(b)) = Σ_{i=1}^{n} 1{d_i ≠ c_i(b)}, where 1{·} is the indicator function.
Unify the above two as a minimum-cost problem: b̂ = arg min_{all length-n paths p on the trellis} C_n(p, s), where C_n(p, s) is the accumulated cost of path p ending in state s.
• Viterbi Algorithm (dynamic programming for solving the minimum-cost problem on a trellis):
let V_k(s) ≜ min_{all length-k paths p on the trellis ending in state s} C_k(p, s)
V_{k+1}(s) = min_{r→s} {V_k(r) + U(r, s, d_k)}
P_{k+1}(s) = arg min_{r→s} {V_k(r) + U(r, s, d_k)}
where
U(r, s, d_k) = d_H(c(r → s), d_k)   (hard decision)
U(r, s, y_k) = ∥x(r → s) − y_k∥₂²   (soft decision)
Complexity: linear in the sequence length n, exponential in the constraint length L (the trellis has 2^L states).
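A minimal hard-decision Viterbi sketch for Example 1 (variable names are our own; states are the register pairs (s1, s2), and the branch cost U is the Hamming distance between the branch output and the received pair d_k):

```python
def viterbi_decode(d):
    """Decode hard decisions d (two bits per trellis step); assumes the
    encoder starts and, via termination, ends in state (0, 0)."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    V = {s: (0 if s == (0, 0) else float("inf")) for s in states}  # V_0(s)
    paths = {s: [] for s in states}               # survivor paths P_k(s)
    for k in range(0, len(d), 2):
        dk = d[k:k+2]
        newV = {s: float("inf") for s in states}
        newP = {s: [] for s in states}
        for (s1, s2) in states:                   # predecessor state r
            if V[(s1, s2)] == float("inf"):
                continue                          # state not yet reachable
            for b in (0, 1):                      # input bit on branch r -> s
                out = (b ^ s2, b ^ s1 ^ s2)       # branch output c(r -> s)
                cost = V[(s1, s2)] + sum(o != x for o, x in zip(out, dk))
                nxt = (b, s1)                     # successor state s
                if cost < newV[nxt]:              # keep the cheaper survivor
                    newV[nxt], newP[nxt] = cost, paths[(s1, s2)] + [b]
        V, paths = newV, newP
    return paths[(0, 0)]                          # terminated in (0, 0)

# Recovers the input of the encoder sketch, including the two padding zeros:
print(viterbi_decode([1,1,0,1,0,0,1,0,1,0,1,1]))  # -> [1, 0, 1, 1, 0, 0]
```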