
Division of Engineering and Applied Sciences

March 2004

Belief-Propagation with Information Correction: Near Maximum-Likelihood Decoding of LDPC Codes

Ned Varnica+, Marc Fossorier#, Alek Kavčić+

+Division of Engineering and Applied Sciences Harvard University

#Department of Electrical Engineering University of Hawaii

slide 2

Outline

• Motivation – BP vs ML decoding

• Improved iterative decoder of LDPC codes

• Types of BP decoding errors

• Simulation results

slide 3

LDPC Code Graph

• Bipartite Tanner code graph G = (V,E,C)

– Variable (symbol) nodes vi ∈ V, i = 0, 1, …, N−1

– Parity check nodes cj ∈ C, j = 0, 1, …, Nc−1

• Code rate – R = k/N, k ≥ N − Nc

• Belief Propagation – iterative propagation of conditional probabilities

[Figure: Tanner graph connecting variable nodes V to check nodes C; example degrees dG(v0) = 3, dG(v1) = 3, dG(v2) = 2, dG(v3) = 3, …]

• Parity check matrix H (Nc × N) – a non-zero entry in H corresponds to an edge in G
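To make the H ↔ G correspondence concrete, here is a small sketch (my own, using a toy matrix rather than any of the codes discussed in these slides) that stores the Tanner graph as two adjacency maps built from the non-zero entries of H:

```python
import numpy as np

# Toy parity-check matrix H (Nc x N); a stand-in, not one of the codes above.
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0]])

Nc, N = H.shape
# One edge of the Tanner graph per non-zero entry of H.
var_to_checks = {i: np.flatnonzero(H[:, i]).tolist() for i in range(N)}   # v_i -> checks
check_to_vars = {j: np.flatnonzero(H[j, :]).tolist() for j in range(Nc)}  # c_j -> variables
d_G = {i: len(var_to_checks[i]) for i in range(N)}                        # degrees d_G(v_i)

print(check_to_vars[0], d_G[0])   # e.g. [0, 1, 3] and 2
```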

slide 4

Standard Belief-Propagation on LDPC Codes

• Locally operating – optimal for cycle-free graphs, sub-optimal for graphs with cycles

• Optimized LDPC codes (Luby et al. 98; Richardson, Shokrollahi & Urbanke 99; Hou, Siegel & Milstein 01; Varnica & Kavcic 02)

• Good finite-length LDPC codes have an exponential number of cycles in their Tanner graphs (Etzion, Trachtenberg and Vardy 99)

• Encoder constructions

• BP to ML performance gap due to convergence to pseudo-codewords (Wiberg 95, Forney et al 01, Koetter & Vontobel 03)

slide 5

Examples

• Short codes – e.g. Tanner code with N = 155, k = 64, diam = 6, girth = 8, dmin = 20

• Long codes – e.g. Margulis code with N = 2640, k = 1320

[Figure: WER vs Eb/N0 [dB]. Left: Tanner (155,64) code, ML decoder vs BP decoder. Right: Margulis (2640,1320) code, BP decoder vs ML upper bound.]

slide 6

Goals

• Construct decoder

– Improved BP decoding performance

– More flexibility in performance versus complexity

– Can nearly achieve ML performance with much lower computational burden

• Reduce or eliminate LDPC error floors

• Applications

– Can use with any “off-the-shelf” LDPC encoder

– Can apply to any communication/data storage channel

slide 7

Subgraph Definitions

x ∈ {0,1}N (transmitted binary vector) → channel → r ∈ RN (received vector) → BCJR detect and/or BP decode → x̂(L) ∈ {0,1}N (decoded vector after L iterations)

• Channel with memory J: ri = Σk=0..J hk xi−k + wi, with wi the noise sample

• Syndrome s = H x̂(L)

• CS(L) – set of unsatisfied check nodes (SUC): CS(L) = {ci : (H x̂(L))i ≠ 0}

• VS(L) – set of variable nodes incident to some c ∈ CS(L)

• ES(L) – set of edges connecting VS(L) and CS(L)

Definition 1: The SUC graph GS(L) = (VS(L), ES(L), CS(L)) is the graph induced by the SUC CS(L)

• dGs(v) – degree in the SUC graph GS(L) for v ∈ V

• dGs(v) ≤ dG(v)
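These definitions translate into a few array operations. A minimal sketch (mine, not from the slides), with H a 0/1 NumPy matrix and x_hat the hard-decision vector x̂(L):

```python
import numpy as np

def suc_graph(H, x_hat):
    """Return (C_S, V_S, d_Gs): the unsatisfied checks, the variable nodes they
    touch, and each variable node's degree in the SUC graph G_S^(L)."""
    syndrome = H.dot(x_hat) % 2                     # s = H x^(L) (mod 2)
    C_S = np.flatnonzero(syndrome)                  # set of unsatisfied check nodes
    d_Gs = H[C_S, :].sum(axis=0)                    # d_Gs(v) for every v (0 if v not in V_S)
    V_S = np.flatnonzero(d_Gs > 0)                  # variables incident to some c in C_S
    return C_S, V_S, d_Gs
```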

slide 8

Properties of SUC graph

Observation 1: The higher the degree dGs(v) of a node v ∈ VS(L), the more likely v is to be in error

                                               dGs = 0   dGs = 1   dGs = 2   dGs = 3
Channel information LLR ( log(ptrue/pfalse) )    2.8       2.3       1.5       1.1
LLR messages received from check nodes           3.6       1.6       0.2      -0.7
Percentage of variable nodes in error            8.1       9.3      31.2      51.1

e.g. statistics over Tanner (155,64) code blocks for which BP failed on the AWGN channel at SNR = 2.5 dB

• Select a variable node vp

• Perform information correction on it

slide 9

Node Selection Strategy 1

Strategy 1: Determine the SUC graph GS(L) and select the node with maximal degree dGs in GS(L)

[Figure: example SUC graph with variable nodes VS(L) and unsatisfied checks CS(L); dGs(v0) = 2, dGs(v2) = 2, dGs(v12) = 2, and all other nodes have dGs = 1]

In this example: select node v0, v2, or v12
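A minimal sketch of Strategy 1, reusing the SUC computation above (my own rendering; the tie-breaking rule is arbitrary):

```python
import numpy as np

def select_node_strategy1(H, x_hat):
    """Strategy 1: return a variable node of maximal degree in the SUC graph,
    or None if the syndrome is already zero."""
    C_S = np.flatnonzero(H.dot(x_hat) % 2)
    if C_S.size == 0:                       # no unsatisfied checks: valid codeword
        return None
    d_Gs = H[C_S, :].sum(axis=0)            # degree of each variable node in G_S
    return int(np.argmax(d_Gs))             # ties broken by lowest index here
```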

slide 10

Properties of SUC graph, cont'd

Observation 2: The smaller the number of high-degree neighbors (with respect to the SUC graph), the more likely v is to be in error

Definition 2: Nodes v1 and v2 are neighbors with respect to the SUC if there exists c ∈ CS(L) incident to both v1 and v2

• nv(m) – number of neighbors of v with degree dGs = m

[Figure: a node v with dGs(v) = 2; among its SUC-neighbors one has dGs = 2 and four have dGs = 1, so nv(2) = 1 and nv(1) = 4]

slide 11

Node Selection Strategy 2

Strategy 2: Among the nodes with maximal degree dGs, select a node with the minimal number of highest-degree neighbors

[Figure: the same example SUC graph as before, with dGs(v0) = dGs(v2) = dGs(v12) = 2]

nv0(2) = nv12(2) = 1, nv2(2) = 2; nv0(1) = 4, nv12(1) = 6  →  select node v0
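A minimal sketch of Strategy 2 (my own rendering); it compares neighbor counts from the highest degree downward, which is how the example above separates v0 from v12:

```python
import numpy as np

def select_node_strategy2(H, x_hat):
    """Strategy 2: among the max-degree nodes of the SUC graph, pick one with
    the fewest highest-degree SUC-neighbors (then next-highest, and so on)."""
    C_S = np.flatnonzero(H.dot(x_hat) % 2)
    d_Gs = H[C_S, :].sum(axis=0)
    d_max = int(d_Gs.max())
    candidates = np.flatnonzero(d_Gs == d_max)

    def neighbor_counts(v):
        # SUC-neighbors of v: variables sharing an unsatisfied check with v
        neigh = set()
        for c in C_S[H[C_S, v] == 1]:
            neigh.update(np.flatnonzero(H[c, :]).tolist())
        neigh.discard(v)
        # compare n_v(d_max), then n_v(d_max - 1), ... as in the example above
        return tuple(sum(d_Gs[u] == m for u in neigh) for m in range(d_max, 0, -1))

    return int(min(candidates, key=neighbor_counts))
```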

slide 12

Alternatives to Strategy 2

• dGs^max = max{ dGs(v) : v ∈ V }

• Set of suspicious nodes Sv = {v : dGs(v) = dGs^max}

• Edge penalty function r(v,c) = max{ dGs(vn) : vn ∈ Nc \ {v} } if Nc \ {v} ≠ ∅, and r(v,c) = 0 if Nc \ {v} = ∅   (Nc – set of variable nodes incident to c)

• Penalty function R(v) – combines the edge penalties r(v,c) over the unsatisfied checks c ∈ Cs

• Select vp ∈ Sv as vp = argmin{ R(v) : v ∈ Sv }

• Numerous related approaches possible
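One concrete member of this family (a sketch of my own; since the exact combination used for R(v) is not spelled out above, it simply sums the edge penalties r(v,c) over the unsatisfied checks incident to v):

```python
import numpy as np

def select_node_penalty(H, x_hat):
    """Among the suspicious nodes Sv, pick the one minimizing R(v), here taken
    as the sum of the edge penalties r(v,c) over the unsatisfied checks on v
    (checks not touching v would contribute equally to every node)."""
    C_S = np.flatnonzero(H.dot(x_hat) % 2)
    d_Gs = H[C_S, :].sum(axis=0)
    S_v = np.flatnonzero(d_Gs == d_Gs.max())          # suspicious nodes

    def R(v):
        total = 0
        for c in C_S[H[C_S, v] == 1]:                 # unsatisfied checks on v
            others = np.flatnonzero(H[c, :])
            others = others[others != v]              # Nc \ {v}
            total += d_Gs[others].max() if others.size else 0
        return total

    return int(min(S_v, key=R))
```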

slide 13

Node Selection Strategy 3

Observation 3: A variable node v is more likely to be incorrect if its decoder input is less reliable, i.e., if |O(v)| is lower

Strategy 3: Among the nodes with maximal degree dGs, select the node with minimal input reliability |O(v)|

• Decoder input on node vi: O(vi) = log [ p(xi = 0 | r) / p(xi = 1 | r) ]

• Memoryless AWGN channel: O(vi) = 2 ri / σ² (BPSK mapping 0 → +1, 1 → −1)
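A minimal sketch of Strategy 3 (my own rendering; llr_in stands for the decoder input vector O):

```python
import numpy as np

def select_node_strategy3(H, x_hat, llr_in):
    """Strategy 3: among the variable nodes with maximal SUC degree, pick the
    one whose decoder input O(v) has the smallest magnitude (least reliable)."""
    C_S = np.flatnonzero(H.dot(x_hat) % 2)
    d_Gs = H[C_S, :].sum(axis=0)
    candidates = np.flatnonzero(d_Gs == d_Gs.max())
    return int(candidates[np.argmin(np.abs(llr_in[candidates]))])
```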

slide 14

Message Passing - Notation

• Set of log-likelihood ratio (LLR) messages on variable nodes: M = (C, O)

• Decoder input: O = [O(v0), …, O(vN−1)]

• Channel detector (BCJR) input: B = [B(v0), …, B(vN−1)]

[Figure: message-passing diagram showing check nodes C, variable nodes V, decoder inputs O(v0), O(v1), …, O(vN−1), and channel detector inputs B(v0), B(v1), …, B(vN−1) entering through trellis sections T]

slide 15

Symbol Correction Procedures

• Replace the decoder and detector input LLRs corresponding to the selected node vp:

1. O(vp) = +S and B(vp) = +S

2. O(vp) = –S and B(vp) = –S

• Perform correction in stages

• Test 2^j combinations at stage j

• For each test perform Kj additional iterations

• Maximum number of attempts (stages): jmax

[Figure: binary tree of correction attempts for jmax = 3. Each stage j branches on forcing the selected node's LLRs to +S or –S, producing message states M(0), M(1), …, M(6); the numbers label the order in which the tests are performed.]
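The staged procedure can be pictured as a search over sign assignments for the selected nodes. The sketch below is my own schematic rendering of the "first codeword" variant in its simplest "decoding restarted" form: it re-selects nodes on the current BP output and restarts decoding for every test; bp_decode and select_node are assumed helpers (select_node could be, e.g., the Strategy 3 sketch earlier).

```python
import itertools
import numpy as np

def correct_and_decode(H, llr_in, bp_decode, select_node,
                       S=20.0, j_max=3, L=100, K_j=10):
    """Schematic "first codeword" staged correction, "decoding restarted" flavor.
    Assumed helpers: bp_decode(H, llr, n_iter) -> hard-decision 0/1 vector,
    select_node(H, x_hat, llr) -> index of the node v_p to correct."""
    x_hat = bp_decode(H, llr_in, L)                 # initial BP attempt, L iterations
    if not (H.dot(x_hat) % 2).any():                # empty SUC: already a codeword
        return x_hat
    chosen = []                                     # nodes v_p selected so far
    for j in range(1, j_max + 1):
        v_p = select_node(H, x_hat, llr_in)
        if v_p is None or v_p in chosen:            # nothing new to correct
            break
        chosen.append(v_p)
        for signs in itertools.product((+S, -S), repeat=j):   # 2^j tests at stage j
            llr = llr_in.copy()
            llr[chosen] = signs                     # force O(v_p) to +S or -S
            x_hat = bp_decode(H, llr, K_j)          # K_j additional iterations
            if not (H.dot(x_hat) % 2).any():
                return x_hat                        # stop at the first valid codeword
    return x_hat                                    # all attempts failed
```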

slide 16

Symbol Correction Procedures

• “codeword listing” approach

– Test all 2^jmax possibilities

– W – collection of valid codeword candidates

– Pick the most likely candidate; e.g. for the AWGN channel set x̂ = argmin{ d(r,w) : w ∈ W }

• “first codeword” approach

– Stop at the first valid codeword

– Faster convergence, slightly worse performance for large jmax

[Figure: the same correction-attempt tree as on the previous slide, for jmax = 3]
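For the "codeword listing" variant, all valid candidates are collected into W and the most likely one is kept; for the AWGN channel that is the minimum-Euclidean-distance candidate. A minimal sketch (mine), assuming the BPSK mapping 0 → +1, 1 → −1:

```python
import numpy as np

def pick_most_likely(candidates, r):
    """x_hat = argmin over w in W of d(r, w): choose the collected valid codeword
    closest to the received vector r (squared Euclidean distance, AWGN case)."""
    best, best_dist = None, np.inf
    for w in candidates:
        s = 1.0 - 2.0 * np.asarray(w, dtype=float)   # assumed BPSK map: 0 -> +1, 1 -> -1
        dist = np.sum((r - s) ** 2)
        if dist < best_dist:
            best, best_dist = w, dist
    return best
```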

slide 17

Parallel and Serial Implementation (jmax = 3)

[Figure: two correction-attempt trees for jmax = 3, showing the order in which the 2^j tests are visited in the parallel implementation and in the serial implementation]

slide 18

Complexity - Parallel Implementation

[Figure: correction-attempt tree for jmax = 3, as on the previous slides]

• Decoding continued

– M needs to be stored – storage grows as 2^jmax

– lower Kj required

– “first codeword” procedure – fastest convergence

• Decoding restarted

– M need not be stored

– higher Kj required

slide 19

Can we achieve ML?

Fact 1: As jmax → N, the “codeword listing” algorithm with Kj = 0 for j < jmax and Kjmax = 1 becomes an ML decoder

• For low values of jmax (jmax << N) it performs very close to the ML decoder

– Tanner (N = 155, k = 64) code

– jmax = 11, Kj = 10

– Decoding continued – faster decoding, M needs to be stored

– ML almost achieved

[Figure: WER vs Eb/N0 [dB] for the Tanner (155,64) code: ML decoder, “codeword listing” procedure, and original BP (max 100 iterations)]

slide 20

Pseudo-codewords Elimination

• Pseudo-codewords compete with codewords in locally-operating BP decoding (Koetter & Vontobel 2003)

• c – a codeword in an m-cover of G

• ωi – fraction of the time that vi ∈ V assumes an incorrect value in c

• ω = (ω0, ω1, …, ωN−1) – pseudo-codeword

• Pseudo-distance (for AWGN): dp(ω) = ( Σi=0..N−1 ωi )² / Σi=0..N−1 ωi²

• Eliminate a large number of pseudo-codewords by forcing symbol ‘0’ or symbol ‘1’ on the selected nodes vp

– Pseudo-distance spectra improved

– Can increase the minimum pseudo-distance if jmax is large enough
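A small sketch (my own) of the AWGN pseudo-distance formula as reconstructed above:

```python
import numpy as np

def awgn_pseudo_distance(omega):
    """d_p(omega) = (sum_i omega_i)^2 / sum_i omega_i^2 for a pseudo-codeword omega."""
    omega = np.asarray(omega, dtype=float)
    return omega.sum() ** 2 / np.sum(omega ** 2)

print(awgn_pseudo_distance([1, 1, 1, 0, 0]))              # a 0/1 codeword -> its weight, 3.0
print(awgn_pseudo_distance([1, 1, 0.5, 0.5, 0.5, 0.5]))   # fractional entries -> ~5.33 < support size 6
```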

slide 21

Types of BP decoding errors

1. Very high SNRs (error floor region)

Stable errors on saturated subgraphs:

• decoder reaches a steady state and fails

• messages passed in the SUC graph are saturated

2. Medium SNRs (waterfall region)

Unstable errors:

• decoder does not reach a steady state

Definition 3: Decoder D has reached a steady state in the interval [L1, L2] if CS(L) = CS(L1) for all L ∈ [L1, L2]
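Definition 3 maps directly onto a diagnostic test: track the SUC over the iterations and flag a steady state once it stops changing. A minimal sketch (mine, not from the slides), where hard_decisions[L] is the hard-decision vector after iteration L:

```python
import numpy as np

def reached_steady_state(H, hard_decisions, L1, L2):
    """Definition 3: the SUC C_S^(L) is identical for every L in [L1, L2]."""
    suc = lambda x: frozenset(np.flatnonzero(H.dot(x) % 2).tolist())
    reference = suc(hard_decisions[L1])
    return all(suc(hard_decisions[L]) == reference for L in range(L1, L2 + 1))
```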

slide 22

SUC Properties in Error Floor Region

Theorem 1: In the error floor region, dGs(v) ≤ dG(v) / 2

Corollary: For regular LDPC codes with dG^max = 3, we have dGs^max ≤ 1

• Information correction at high SNRs (error floor region)

– Pros: small SUC graph, faster convergence

– Cons: dGs plays no role in node selection

slide 23

Simulation Results

• Tanner (155,64) code

– Regular (3,5) code

– Channel: AWGN

– Strategy 3

– jmax = 11, Kj = 10

– More than 1 dB gain

– ML almost achieved

[Figure: WER vs Eb/N0 [dB] for the Tanner (155,64) code: ML decoder, “codeword listing” procedure, “first codeword” procedure, and original BP (max 100 iterations)]

slide 24

Simulation Results

• Tanner (155,64) code

– Regular (3,5) code

– Channel: AWGN

– Strategy 3

– “First codeword” procedure

– jmax = 4,6,8 and 11

– Kj = 10

[Figure: WER vs Eb/N0 [dB] for the Tanner (155,64) code: original BP (400 iterations); Strategy 3 with L = 100, K = 10 and jmax = 4, 6, 8, 11; ML decoder]

slide 25

Simulation Results – Error Floors

• Margulis (2640,1320) code

– Regular (3,6) code

– Channel: AWGN

– Strategy 3

– “First codeword” procedure

– jmax = 5, Kj = 20

– More than 2 orders of magnitude WER improvement

[Figure: WER vs Eb/N0 for the Margulis (2640,1320) code: original BP vs Strategy 3 with jmax = 5, Kj = 20]

slide 26

Simulation Results – ISI Channels

– Tanner (155,64) code

– Channels: Dicode (1−D), EPR4 (1−D)(1+D)²

– Strategy 2

– jmax = 11, Kj = 20

– 1 dB gain

– 20% of the detected errors are ML errors

[Figure: WER vs Eb/N0 on the Dicode and EPR4 channels: Strategy 2 vs original BP (100 iterations) for each channel]

slide 27

Conclusion

• Information correction in BP decoding of LDPC codes

– More flexibility in performance vs complexity

– Can nearly achieve ML performance with much lower computational burden

– Eliminates a large number of pseudo-codewords

• Reduces or eliminates LDPC error floors

• Applications

– Can be used with any “off-the-shelf” LDPC encoder

– Can be applied to any communication/data storage channel