EE436 Lecture Notes 1
EEE436 DIGITAL COMMUNICATION
Coding
En. Mohd Nazri Mahmud, MPhil (Cambridge, UK), BEng (Essex, UK)
[email protected]
Room 2.14
Convolutional Codes
Unlike block codes, which operate on a block-by-block basis, convolutional codes operate on message bits that arrive serially rather than in blocks.
The structure of a generic convolutional encoder can be written in the compact form (n, k, L), where n is the number of output streams (one modulo-2 adder per stream), k is the number of message bits shifted into the encoder at a time, and L is the number of stages (the memory) of the shift register.
For a message sequence of K bits, the encoder produces a coded output sequence of n(K + L) bits.
The encoder is implemented by a tapped shift register with L + 1 stages.
Convolutional Codes
The message bits in the register are combined by modulo-2 addition to form the encoded bit c_j:

c_j = g_0·m_j + g_1·m_{j-1} + … + g_L·m_{j-L} = Σ_{i=0}^{L} g_i·m_{j-i}   (modulo 2)
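As a small illustration of this formula, the sketch below computes one encoded bit c_j from a tap sequence and a message (the helper name conv_bit is mine, not from the notes; bits outside the message are taken as 0, i.e. zero initial conditions):

```python
def conv_bit(g, m, j):
    """c_j = sum over i of g_i * m_{j-i}, modulo 2.

    g: tap coefficients (g_0, ..., g_L); m: message bits.
    Bits outside the message are treated as 0 (zero initial conditions).
    """
    return sum(gi * (m[j - i] if 0 <= j - i < len(m) else 0)
               for i, gi in enumerate(g)) % 2

# taps (1, 1, 1) applied to message 10011: c_0 .. c_6 -> 1 1 1 1 0 0 1
print([conv_bit((1, 1, 1), [1, 0, 0, 1, 1], j) for j in range(7)])
```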
Convolutional Codes – Example
A (2,1,2) convolutional encoder with n=2, k=1 and L=2
[Encoder diagram: two-stage shift register; modulo-2 adders produce the outputs c'_j and c''_j]
Convolutional Codes
To provide the extra check bits for error control, the encoded bits from the multiple streams are interleaved. For example, consider a convolutional encoder with n = 2 (i.e. two streams of encoded bits):

Encoded bit from stream 1: c'_j = m_{j-2} + m_{j-1} + m_j   (modulo 2)
Encoded bit from stream 2: c''_j = m_{j-2} + m_j   (modulo 2)

The two streams are interleaved to give the output sequence
C = c'_1 c''_1 c'_2 c''_2 c'_3 c''_3 …
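The two-stream encoder and the interleaving step can be sketched in Python (a minimal illustration, assuming the register is flushed with L zeros after the message; the function name conv_encode is mine):

```python
def conv_encode(message, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/n convolutional encoder: one modulo-2 adder per
    generator in `gens`; the n output streams are interleaved."""
    L = len(gens[0]) - 1                   # encoder memory
    bits = list(message) + [0] * L         # flush the register with L zeros
    out = []
    for j in range(len(bits)):
        for g in gens:                     # emit c'_j, c''_j, ... in turn
            out.append(sum(gi * (bits[j - i] if j - i >= 0 else 0)
                           for i, gi in enumerate(g)) % 2)
    return out

# message 10011 -> 11 10 11 11 01 01 11
print(conv_encode([1, 0, 0, 1, 1]))
```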
Convolutional Codes
Each stream may be characterised in terms of its impulse response (the response of that stream to a symbol 1 with zero initial conditions). Each stream is also characterised by a generator polynomial, which is the unit-delay transform of the impulse response.

Let the generator sequence (g_0^(i), g_1^(i), g_2^(i), …, g_M^(i)) denote the impulse response of the i-th path. The generator polynomial of the i-th path is then given by

g^(i)(D) = g_0^(i) + g_1^(i)·D + g_2^(i)·D^2 + … + g_M^(i)·D^M

where D denotes the unit-delay variable.
Convolutional Codes – Example

A convolutional encoder with two streams of encoded bits takes the message sequence 10011 as its input.

[Encoder diagram: stream 1 output c'_j, stream 2 output c''_j]

First, we find the impulse response of both streams to a symbol 1:
Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)

Then, write the corresponding generator polynomial of both streams:

g^(i)(D) = g_0^(i) + g_1^(i)·D + g_2^(i)·D^2 + … + g_M^(i)·D^M
Convolutional Codes – Example
First, we find the impulse response of both streams to a symbol 1:
Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)

Then, write the corresponding generator polynomials of both streams:

g^(1)(D) = 1 + D + D^2
g^(2)(D) = 1 + D^2

Then, write the message polynomial for the input message (10011):

m(D) = 1 + D^3 + D^4

Then, find the output polynomial of each stream by multiplying its generator polynomial by the message polynomial.
Convolutional Codes – Example
Then, find the output polynomial of each stream by multiplying its generator polynomial by the message polynomial (all arithmetic modulo 2):

c^(1)(D) = g^(1)(D)·m(D)
         = (1 + D + D^2)(1 + D^3 + D^4)
         = 1 + D + D^2 + D^3 + D^6

c^(2)(D) = g^(2)(D)·m(D)
         = (1 + D^2)(1 + D^3 + D^4)
         = 1 + D^2 + D^3 + D^4 + D^5 + D^6

So, the output sequence for stream 1 is 1111001.
The output sequence for stream 2 is 1011111.

Interleaved: C = 11, 10, 11, 11, 01, 01, 11

Original message: 10011. Encoded sequence: 11101111010111.
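The polynomial products above can be checked mechanically. A minimal sketch (gf2_polymul is my own helper name; coefficients are listed lowest degree first):

```python
def gf2_polymul(a, b):
    """Multiply two polynomials over GF(2); coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj          # addition modulo 2 is XOR
    return out

m = [1, 0, 0, 1, 1]                        # m(D) = 1 + D^3 + D^4
print(gf2_polymul([1, 1, 1], m))           # 1 + D + D^2 + D^3 + D^6 -> 1111001
print(gf2_polymul([1, 0, 1], m))           # 1 + D^2 + D^3 + D^4 + D^5 + D^6 -> 1011111
```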
Convolutional Codes – Exercise

A convolutional encoder with two streams of encoded bits takes the message sequence 110111001 as its input.

[Encoder diagram: stream 1 output c'_j, stream 2 output c''_j]
Convolutional Codes – Exercise
First, we find the impulse response of both streams to a symbol 1:
Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)

Then, write the corresponding generator polynomials of both streams:

g^(1)(D) = 1 + D + D^2
g^(2)(D) = 1 + D^2

Then, write the message polynomial for the input message (110111001):

m(D) = 1 + D + D^3 + D^4 + D^5 + D^8

Then, find the output polynomial of each stream by multiplying its generator polynomial by the message polynomial.
Convolutional Codes – Exercise
Then, find the output polynomial of each stream by multiplying its generator polynomial by the message polynomial (all arithmetic modulo 2):

c^(1)(D) = g^(1)(D)·m(D) = 1 + D^5 + D^7 + D^8 + D^9 + D^10
c^(2)(D) = g^(2)(D)·m(D) = 1 + D + D^2 + D^4 + D^6 + D^7 + D^8 + D^10

So, the output sequence for stream 1 is 10000101111.
The output sequence for stream 2 is 11101011101.

Interleaved: C = 11 01 01 00 01 10 01 11 11 10 11

Original message: 110111001. Encoded sequence: 11 01 01 00 01 10 01 11 11 10 11.
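The exercise result can likewise be checked by multiplying polynomials over GF(2) and then interleaving the two streams (a sketch under the same assumptions as before; gf2_polymul is my own helper name):

```python
def gf2_polymul(a, b):
    """Multiply two polynomials over GF(2); coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj

    return out

m = [1, 1, 0, 1, 1, 1, 0, 0, 1]          # m(D) = 1 + D + D^3 + D^4 + D^5 + D^8
s1 = gf2_polymul([1, 1, 1], m)           # stream 1 -> 10000101111
s2 = gf2_polymul([1, 0, 1], m)           # stream 2 -> 11101011101
C = [b for pair in zip(s1, s2) for b in pair]   # interleave: c'_1 c''_1 c'_2 c''_2 ...
print(C)
```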
Convolutional Codes – Code Tree, Trellis and State Diagram
Draw a code tree for the convolutional encoder below.
[Encoder diagram: stream 1 output c'_j, stream 2 output c''_j]
Code Tree

[Figure: code tree of the encoder]

Trellis

[Figure: trellis of the encoder]

State Diagram

[Figure: state diagram of the encoder]
Maximum Likelihood Decoding of a Convolutional Code
Let m denote the message vector, c the corresponding code vector, and r the received vector (which may differ from c due to channel noise).
Given the received vector r, the decoder is required to make an estimate m^ of the message vector.
Since there is a one-to-one correspondence between the message vector m and the code vector c, m^ = m only if c^ = c; otherwise a decoding error is committed.
The decoding rule is said to be optimum when the probability of decoding error is minimised.
Maximum Likelihood Decoding of a Convolutional Code
Suppose that both the transmitted code vector c and the received vector r represent binary sequences of length N.
These two sequences may differ from each other in some locations because of errors due to channel noise.
Let c_i and r_i denote the i-th elements of c and r respectively. Assuming the channel is memoryless, we then have

p(r|c) = Π_{i=1}^{N} p(r_i|c_i)

The decoding rule is said to be optimum when the probability of decoding error is minimised.
p(r|c) = Π_{i=1}^{N} p(r_i|c_i)

Correspondingly, the log-likelihood is

log p(r|c) = Σ_{i=1}^{N} log p(r_i|c_i)

Let the transition probability p(r_i|c_i) be defined as

p(r_i|c_i) = p        if r_i ≠ c_i
p(r_i|c_i) = 1 − p    if r_i = c_i

The probability of decoding error is minimised if the log-likelihood function is maximised (the estimate c^ is chosen to maximise the function).
Suppose also that the received vector r differs from the transmitted code vector c in exactly d positions (where d is the Hamming distance between the vectors r and c).
We may then rewrite the log-likelihood function

log p(r|c) = Σ_{i=1}^{N} log p(r_i|c_i)

as

log p(r|c) = d·log p + (N − d)·log(1 − p)
           = d·log( p / (1 − p) ) + N·log(1 − p)
log p(r|c) = d·log( p / (1 − p) ) + N·log(1 − p)

In general, the probability of error is low enough for us to assume p < 1/2, and N·log(1 − p) is constant for all c.
Since log( p / (1 − p) ) is then negative, the function is maximised when d is minimised (i.e. the smallest Hamming distance):

“Choose the estimate c^ that minimises the Hamming distance between the received vector r and the transmitted vector c.”

The maximum-likelihood decoder is thus reduced to a minimum-distance decoder.
The received vector r is compared with each possible transmitted code vector c, and the particular one “closest” to r is chosen as the correct transmitted code vector.
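This minimum-distance rule can be sketched by brute force for the (2,1,2) encoder used in the examples: encode every possible k-bit message and pick the codeword closest to r. (The helper names are mine; exhaustive search is only feasible for short messages, which is exactly why the Viterbi algorithm in the next section matters.)

```python
from itertools import product

def encode(msg):
    """The (2,1,2) encoder with generators (1,1,1) and (1,0,1), register flushed."""
    bits = list(msg) + [0, 0]
    out = []
    for j in range(len(bits)):
        b = lambda i: bits[j - i] if j - i >= 0 else 0
        out += [(b(0) + b(1) + b(2)) % 2, (b(0) + b(2)) % 2]
    return out

def min_distance_decode(r, k):
    """Compare r with every codeword; choose the message whose codeword is closest."""
    dist = lambda c: sum(x != y for x, y in zip(c, r))
    return min(product([0, 1], repeat=k), key=lambda m: dist(encode(m)))

# encode 10011, flip two bits (positions 2 and 6), and recover the message
r = [1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
print(min_distance_decode(r, 5))
```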
The Viterbi Algorithm
The equivalence between maximum-likelihood decoding and minimum-distance decoding implies that we may decode a convolutional code by choosing a path in the code tree whose coded sequence differs from the received sequence in the fewest number of places.
Since a code tree is equivalent to a trellis, we may limit our choice to the possible paths in the trellis representation of the code.
The Viterbi algorithm operates by computing a metric, or discrepancy, for every possible path in the trellis.
The metric for a particular path is defined as the Hamming distance between the coded sequence represented by that path and the received sequence.
For each state in the trellis, the algorithm compares the two paths entering the node and retains the path with the lower metric.
The retained paths are called survivors.
The sequence along the path with the smallest metric is the maximum-likelihood choice and represents the transmitted sequence.
The Viterbi Algorithm – Example
In the circuit below, suppose that the encoder generates an all-zero sequence and that the received sequence is (0100010000…), in which there are two errors due to channel noise: one in the second bit and the other in the sixth bit.
We can show that this double-error pattern is correctable using the Viterbi algorithm.

[Encoder diagram: stream 1 output c'_j, stream 2 output c''_j]
The Viterbi Algorithm – Example

[Figure: trellis with survivor paths for the received sequence]
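The survivor bookkeeping described above can be sketched as a compact hard-decision Viterbi decoder for this (2,1,2) encoder (a minimal illustration, not the notes' own code; the state is taken as (m_{j-1}, m_{j-2}) and the function name is mine):

```python
def viterbi_decode(received, n_msg):
    """Hard-decision Viterbi decoding for the (2,1,2) encoder with
    generators (1,1,1) and (1,0,1). `received` is a flat bit list."""
    def branch_out(state, b):               # state = (m_{j-1}, m_{j-2})
        return ((b + state[0] + state[1]) % 2, (b + state[1]) % 2)

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}   # start in all-zero state
    paths = {s: [] for s in states}
    for j in range(len(received) // 2):
        r = tuple(received[2 * j:2 * j + 2])
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns = (b, s[0])              # shift the new bit into the register
                d = metric[s] + sum(x != y for x, y in zip(branch_out(s, b), r))
                if d < new_metric[ns]:      # keep only the survivor path
                    new_metric[ns], new_paths[ns] = d, paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])
    return paths[best][:n_msg]

# received 01 00 01 00 00 (two channel errors) decodes back to the all-zero message
print(viterbi_decode([0, 1, 0, 0, 0, 1, 0, 0, 0, 0], 5))
```

Tracing the metrics by hand against the trellis figure gives the same survivors: the all-zero path ends with metric 2, strictly smaller than every competitor, so the double-error pattern is corrected.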
The Viterbi Algorithm – Exercise
Using the same circuit, suppose that the received sequence is 110111. Using the Viterbi algorithm, what is the corresponding encoded sequence sent by the transmitter? What are the original message bits?