
convolutional codes: decoding

Prof. Luiz DaSilva, [email protected], +353 1 896-3660

error probability

• let C_m be the set of allowable code sequences of length m

– not all sequences in {0,1}^m are allowable

– each sequence c ∈ C_m can be represented by a path through the trellis diagram

• probability that sequence y is received given that sequence c is sent

P[y | c] = p^{d_H(y,c)} (1 − p)^{m − d_H(y,c)}

p = bit error probability resulting from modulation scheme
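A minimal Python sketch (not from the slides) of this likelihood, assuming hard-decision bits over a binary symmetric channel with crossover probability p; the function names are illustrative:

```python
# Likelihood of receiving y given that codeword c was sent over a binary
# symmetric channel with crossover probability p (hard-decision decoding).
def hamming_distance(y, c):
    return sum(yb != cb for yb, cb in zip(y, c))

def likelihood(y, c, p):
    m = len(y)                      # sequence length in bits
    d = hamming_distance(y, c)      # number of positions where y and c differ
    return (p ** d) * ((1 - p) ** (m - d))

# For p < 0.5 the likelihood shrinks as the Hamming distance grows,
# which is why maximum likelihood reduces to minimum distance.
y = [0, 1, 1, 1, 0, 1]
c = [0, 0, 1, 1, 0, 1]
print(likelihood(y, c, p=0.01))
```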


maximum likelihood decoding rule

• choose the sequence through the trellis with minimum distance to the received sequence

• identical to the rule for maximum likelihood demodulation

– simply replaces the k-dimensional vector that represents the symbol with the m-dimensional vector representing the received sequence of bits

max_{c ∈ C_m} { P[y | c] }  ⇒  min_{c ∈ C_m} { d_H(y, c) }
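One intermediate step the slide leaves implicit, assuming p < 1/2:

```latex
% Why maximum likelihood = minimum Hamming distance on a BSC with p < 1/2:
\begin{align*}
\log P[y \mid c] &= d_H(y,c)\,\log p + \bigl(m - d_H(y,c)\bigr)\log(1-p) \\
                 &= m\log(1-p) + d_H(y,c)\,\log\tfrac{p}{1-p}.
\end{align*}
% The first term does not depend on c, and \log\tfrac{p}{1-p} < 0 for p < 1/2,
% so maximizing the likelihood over c \in C_m is the same as minimizing d_H(y,c).
```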

the Viterbi algorithm (1967)

• a clever way of implementing maximum likelihood decoding

– computer scientists classify this as an example of dynamic programming

• chips are available from many manufacturers to implement Viterbi decoding

• can be used for hard or soft decoding


Viterbi algorithm: basic idea

• the number of length-m code sequences grows exponentially with m

– instead of searching the entire space for the closest vector, find the best code sequence one stage at a time

• consider two valid codewords (paths in the trellis), x and y, which merge at some point

[trellis sketch: paths x and y merge at a node; the eliminated path is marked with an X]

– if the distance between x and the received vector r (up to this node) is smaller than the distance between y and r, then y cannot be part of the shortest path

Viterbi algorithm: analogy

• to determine the driving distance between NYC and LA, send 1000 people driving across the US on different paths

• suppose 2 of these people end up in Dallas and know that their planned path from Dallas to LA is the same

– if person A arrived in Dallas through a shorter path than person B, then…

– person B might as well quit right now, because there is no way his/her path from NYC to LA is the shortest


Viterbi decoder: implementation

• complexity proportional to the number of states

– increases exponentially with constraint length

• well suited to parallel implementation

– each state has x transitions into it

– each node computes x path metrics, adds them to previous metrics, and compares (see the add–compare–select sketch below)

– much analysis goes into optimizing the implementation of this calculation
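A minimal sketch of the add–compare–select operation for a single state, under the assumption that state metrics are kept in a dictionary; the function and variable names are illustrative, not from the slides:

```python
# Add-compare-select (ACS) for one trellis state: each incoming transition
# contributes (previous state metric + branch metric); keep the smallest.
def acs(incoming, prev_metrics):
    """incoming: list of (prev_state, branch_metric) pairs entering one state."""
    best_prev, best_metric = None, float("inf")
    for prev_state, branch_metric in incoming:
        candidate = prev_metrics[prev_state] + branch_metric   # add
        if candidate < best_metric:                            # compare
            best_prev, best_metric = prev_state, candidate     # select survivor
    return best_metric, best_prev

# Example: two transitions enter this state, from states 0 and 1.
print(acs([(0, 1), (1, 0)], prev_metrics={0: 3, 1: 2}))   # -> (2, 1)
```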

initialization

• assign to each state j a metric Z_j(0) at time i = 0

• we know we need to start at state 0

• Z_0(0) = 0

• Z_j(0) = ∞ for all other states j


decoding the ith segment

• let y_i be the segment of n bits received between times i and i+1

• there are several code segments c_i that lead into state j at time i+1

– we want to find the most likely one

• let s(c_i) be the state from which code segment c_i emerged

• for each state j, we assume c_i was the segment that led to j if

Z_{s(c_i)}(i) + d_H(y_i, c_i)

is smaller than the corresponding value for any other code segment leading into state j

iteration

• let Z_j(i+1) = Z_{s(c_i)}(i) + d_H(y_i, c_i)

• let i = i+1

• repeat previous step

• incorrect paths drop out as i approaches infinity
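Putting initialization, the per-segment update Z_j(i+1) = Z_{s(c_i)}(i) + d_H(y_i, c_i), and a final traceback together, here is a compact hard-decision decoder sketch in Python. The trellis representation (each state mapping to its outgoing (input, output, next state) transitions) is an assumption made for illustration:

```python
# Hard-decision Viterbi decoding over a trellis described as:
#   trellis[state] = [(input_bit, output_bits, next_state), ...]
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def viterbi_decode(y, trellis, n, start_state=0):
    """y: received bit list; n: output bits per trellis segment."""
    num_segments = len(y) // n
    INF = float("inf")
    metrics = {s: (0 if s == start_state else INF) for s in trellis}   # Z_j(0)
    history = []                       # per stage: survivor (prev_state, input_bit)

    for i in range(num_segments):
        y_i = y[n * i : n * (i + 1)]
        new_metrics = {s: INF for s in trellis}
        survivors = {}
        for state, transitions in trellis.items():
            if metrics[state] == INF:
                continue               # state not yet reachable
            for bit, out, nxt in transitions:
                cand = metrics[state] + hamming(y_i, out)   # Z_s(i) + d_H(y_i, c_i)
                if cand < new_metrics[nxt]:                 # keep the better path
                    new_metrics[nxt] = cand
                    survivors[nxt] = (state, bit)
        metrics = new_metrics
        history.append(survivors)

    # trace back from the state with the smallest final metric
    state = min(metrics, key=metrics.get)
    bits = []
    for survivors in reversed(history):
        prev, bit = survivors[state]
        bits.append(bit)
        state = prev
    return list(reversed(bits))
```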


Viterbi algorithm decoding example

• r = ½, K = 3 code from previous example

• c = (00 11 01 00 10 10 11) is sent

• y = (01 11 01 00 10 10 11) is received

• which path through the trellis does the Viterbi algorithm choose?
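As a sanity check, the decoder sketch above can be run on this received sequence. The trellis below is read off the branch labels in step 1 (consistent with the common generators 5 and 7 in octal); treat the exact generator choice as an assumption, since the slides only say "code from previous example":

```python
# Trellis for the r = 1/2, K = 3 code, taken from the branch labels in the
# state transition diagram (e.g. state 00: 0/00 and 1/11).
trellis = {
    "00": [(0, (0, 0), "00"), (1, (1, 1), "10")],
    "10": [(0, (0, 1), "01"), (1, (1, 0), "11")],
    "01": [(0, (1, 1), "00"), (1, (0, 0), "10")],
    "11": [(0, (1, 0), "01"), (1, (0, 1), "11")],
}

def encode(bits, trellis, state="00"):
    """Re-encode decoded input bits along the trellis to recover the code sequence."""
    out = []
    for b in bits:
        _, output, nxt = next(t for t in trellis[state] if t[0] == b)
        out.extend(output)
        state = nxt
    return out

y = [0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1]        # received sequence
decoded = viterbi_decode(y, trellis, n=2, start_state="00")
print(encode(decoded, trellis))    # reproduces c = (00 11 01 00 10 10 11)
```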

step 1: state transition diagram and trellis

[figure: state transition diagram and one trellis section, states 00, 01, 10, 11; branch labels (input/output) — from 00: 0/00 and 1/11; from 10: 0/01 and 1/10; from 01: 0/11 and 1/00; from 11: 0/10 and 1/01]


step 2: initialize state metrics

y = (01 11 01 00 10 10 11)

[trellis figure: initial state metrics Z_j(0) — (00, 01, 10, 11) = (0, ∞, ∞, ∞)]

step 3: create branch metrics

[trellis figure: each branch for segment y_1 = 01 labeled with its metric d = d_H(y_1, branch output); state metrics entering this stage (00, 01, 10, 11) = (0, ∞, ∞, ∞)]


step 4: create new state metrics

[trellis figure: branch metrics added to the previous state metrics; new state metrics (00, 01, 10, 11) = (1, ∞, 1, ∞)]

• examine the paths entering each state

• compute the resulting state metrics and choose the path with the smallest new state metric

• eliminate the other path

step 5: update trellis

[trellis figure: survivor branches after segment 1; state metrics (00, 01, 10, 11) = (1, ∞, 1, ∞)]


step 6: create branch metrics

[trellis figure: branch metrics for segment y_2 = 11; state metrics entering this stage (00, 01, 10, 11) = (1, ∞, 1, ∞)]

step 7: update trellis

[trellis figure: survivor branches after segment 2; state metrics (00, 01, 10, 11) = (3, 2, 1, 2)]


step 8: create branch metrics

[trellis figure: branch metrics for segment y_3 = 01; state metrics entering this stage (00, 01, 10, 11) = (3, 2, 1, 2)]

step 9: update trellis

[trellis figure: survivor branches after segment 3; state metrics (00, 01, 10, 11) = (3, 1, 3, 2)]


step 10: create branch metrics

[trellis figure: branch metrics for segment y_4 = 00; state metrics entering this stage (00, 01, 10, 11) = (3, 1, 3, 2)]

step 11: update trellis

[trellis figure: survivor branches after segment 4; state metrics (00, 01, 10, 11) = (3, 3, 1, 3)]


step 12: create branch metrics

[trellis figure: branch metrics for segment y_5 = 10; state metrics entering this stage (00, 01, 10, 11) = (3, 3, 1, 3)]

step 13: update trellis

[trellis figure: survivor branches after segment 5; state metrics (00, 01, 10, 11) = (4, 3, 4, 1)]


step 14: create branch metrics

[trellis figure: branch metrics for segment y_6 = 10; state metrics entering this stage (00, 01, 10, 11) = (4, 3, 4, 1)]

step 15: update trellis

[trellis figure: survivor branches after segment 6; state metrics (00, 01, 10, 11) = (4, 1, 4, 3)]


step 16: create branch metrics

[trellis figure: branch metrics for segment y_7 = 11; state metrics entering this stage (00, 01, 10, 11) = (4, 1, 4, 3)]

step 17: update trellis

[trellis figure: survivor branches after segment 7; final state metrics (00, 01, 10, 11) = (1, 4, 3, 4)]


step 18: decoding – choose state with smallest metric and trace back

[trellis figure: final state metrics (00, 01, 10, 11) = (1, 4, 3, 4); state 00 has the smallest metric, so its survivor path is traced back through the trellis]

step 19: decoding – determine associated bits

[trellis figure: branch labels along the traced-back path, read from the final stage backwards: 0/11, 0/10, 1/10, 1/00, 0/01, 1/11, 0/00]

c_est = (00 11 01 00 10 10 11) = c, so the single channel error in y has been corrected


summary

• optimal decoder will find the path through the trellis that lies at minimum distance from the received signal

• Viterbi decoding decreases the complexity of the search by deciding on the optimal path one step at a time

• complexity of the Viterbi algorithm is proportional to the number of states

– grows exponentially with constraint length

continuous operation

• optimum decoding requires that all bits have arrived to be able to trace back the path

• in practice, usually safe to assume that paths have merged after approximately 5K time intervals (about five constraint lengths)

– diminishing returns beyond ~5K
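A rough sketch of one common way (an assumption, not something the slides specify) to exploit this in continuous operation: decide each bit once the decoder is about 5K segments past it, tracing back from the currently best state. It reuses the `history` and `metrics` structures from the decoder sketch earlier:

```python
# Fixed-lag ("truncated traceback") decision: once the decoder has processed
# depth = 5 * K segments beyond a stage, decide that stage's bit by tracing
# back depth steps from the currently best state, instead of waiting for the end.
def fixed_lag_decision(history, metrics, depth):
    """history[i][state] = (prev_state, input_bit) survivor entries per stage."""
    if len(history) < depth:
        return None                          # not enough lag accumulated yet
    state = min(metrics, key=metrics.get)    # best state right now
    bit = None
    for survivors in reversed(history[-depth:]):
        state, bit = survivors[state]        # walk back one stage at a time
    return bit                               # decision for the stage `depth` steps back

# Usage idea: inside the decoding loop, call fixed_lag_decision(history, metrics, 5 * K)
# after each segment and emit the returned bit as the continuous decoder output.
```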