
[2006 IEEE International Symposium on Information Theory (ISIT 2006), Seattle, WA, USA, July 9-14, 2006. ©2006 IEEE]



Upper Bounds on the Error Exponents of LDPC Code Ensembles

David Burshtein¹
School of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
Email: [email protected]

Ohad Barak¹
School of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
Email: [email protected]

Abstract— We consider the ensemble of regular LDPC codes and use recent concentration results on the distance spectrum to derive upper bounds on the error exponent of a randomly chosen code from the ensemble. These bounds hold with some confidence level that approaches one as the connectivity of the graph increases. We show that the bounds can be used to obtain the true error exponent over some range of channel parameter values, with the above confidence level.

I. INTRODUCTION

The error exponent of the ensemble of random linear codes is well known [1]. This ensemble consists of all linear codes with binary {0, 1} generating matrices, whose elements are drawn at random, independently with uniform {1/2, 1/2} probability. The true error exponent of this ensemble is the random coding error exponent [1]. A similar result for the Shannon random coding ensemble was obtained by Gallager [2].

Now consider the ensemble of typical codes from the above ensemble of random linear codes. This ensemble consists of all codes in the ensemble of random linear codes except for those codes with minimum distance less than δGV(R) − ε, where R is the code rate, ε > 0 is a sufficiently small constant and δGV(R) is the threshold of the Gilbert-Varshamov bound. Denote by Erc(R) and Eexp(R) the random coding and expurgated error exponents, respectively. As ε → 0, the error exponent of the above expurgated ensemble approaches max(Erc(R), Eexp(R)) [1]. Recall that for any given ε > 0, the minimum distance of almost all codes in the ensemble of random linear codes is above δGV(R) − ε, when the block length N is sufficiently large. Hence almost all codes in the random linear codes ensemble also belong to the expurgated ensemble (a negligible number of codes are expurgated), so that this is indeed the ensemble of typical random linear codes.

Now consider the regular-(c,d) LDPC code ensemble. In [3] it is shown for this ensemble that the average error probability P̄e satisfies

lim_{N→∞} (−ln P̄e)/(ln N) = { c/2 − 1, c even;  c − 2, c odd }

¹This research was supported by the Israel Science Foundation, grant no. 927/05, and by a fellowship from The Yitzhak and Chaya Weinstein Research Institute for Signal Processing at Tel Aviv University.

It is also known [4], [3] that P̄e is dominated by a negligible fraction of bad codes with a small minimum distance. For sufficiently large values of c and d (with some fixed rate R = 1 − c/d), the error probability of a typical code in the ensemble is exponentially decreasing with an error exponent that is above Erc(R) − ε for any given ε > 0 (i.e. given any ε > 0 and rate R we can choose c and d sufficiently large such that R = 1 − c/d and the error exponent of a typical code is above Erc(R) − ε w.p. 1 − o(1)). In fact, since the minimum distance of a typical code in the ensemble is above δGV(R) − ε when c and d are sufficiently large, the union bound can be applied for LDPC ensembles as in [1]. We thus conclude that when c and d are sufficiently large, the error exponent is at least max(Erc(R), Eexp(R)) − ε. Above the critical rate, Rcrit, this implies that the true error exponent approaches Erc(R) as d → ∞.

Recently, concentration results on the spectrum of regular LDPC codes were obtained in [5] and [6]. When the graph connectivity is sufficiently large, the concentration property is guaranteed [5]. In this work we use these results to derive upper bounds on the error exponent of a typical code in the ensemble. These bounds hold with some confidence level. Combining these results with known lower bounds on the error exponent, we obtain confidence intervals on the error exponent. Over some range of channel parameter and transmission rate values, when the graph connectivity is sufficiently large, the upper bound of the interval approaches the lower bound with probability that approaches one. In fact, in this case the true error exponent approaches max(Erc(R), Eexp(R)) with probability that approaches one. The complete derivation of our results appears in [7].

II. NOTATIONS

Throughout this paper we shall use the following notations:

• Given two functions a(x) and b(x), where x ∈ X, such that b(x) > 0, a = O(b) will denote that |a(x)| < K·b(x) for all x ∈ X, where K is some constant.

• o(1) denotes an expression that approaches 0 as N → ∞.

• Let p be a vector of length l, whose elements are non-negative and their sum is less than 1. We indicate by H(p) = H(p1, p2, ..., pl) the entropy of the distribution p1, p2, ..., pl, 1 − ∑i pi, and by h(p) the binary entropy of p, both in nats.

• We define the inverse function y = h^{−1}(x) such that y ≤ 1/2.

• We denote by δGV(R) ∆= h^{−1}((1 − R) ln 2) the threshold of the Gilbert-Varshamov bound.

• We indicate by (N; k), where k = (k1, ..., kl), the number of ways to divide N distinct items into l + 1 unordered groups of sizes k1, ..., kl, N − ∑i ki. This number of combinations is N! / [(N − ∑i ki)! ∏i ki!].
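The threshold δGV(R) has no closed form, but it is easy to evaluate numerically. A minimal sketch (function names are ours, not from the paper), solving h(δ) = (1 − R) ln 2 on [0, 1/2] by bisection:

```python
import math

def h(x):
    # binary entropy in nats
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def delta_gv(R, tol=1e-12):
    # solve h(delta) = (1 - R) ln 2 for delta in [0, 1/2] by bisection;
    # h is strictly increasing on this interval, so the root is unique
    target = (1 - R) * math.log(2)
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, delta_gv(0.5) ≈ 0.110, the familiar Gilbert-Varshamov distance at rate 1/2.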

III. BACKGROUND

Consider an LDPC (c, d)-regular bipartite-graph ensemble. Let R be the design rate, R = 1 − c/d in bits per channel use, and let N denote the code length. Let Si be a random variable that equals the number of codewords of weight i in the drawn code. We set i = αN, where α is the normalized weight.

Let µi ∆= E Si and σi ∆= √(E(Si − µi)²). In [5] we showed that asymptotically, for large N, the ratio σi/µi approaches a constant which does not depend on N, but only on the normalized weight. We defined

β_α ∆= lim_{N→∞} σ_{αN} / µ_{αN}

and showed that β_α → 0 when c, d → ∞ (while R is kept constant). Furthermore, we showed that for a code drawn randomly from the ensemble, if µ_{αN} > 1,

(1/N) ln S_{αN} = (1/N) ln µ_{αN} + o(1)   (1)

w.p. at least 1 − β²_α − o(1).

Lower and upper bounds on the minimum distance of a code drawn randomly from the ensemble (the latter is subject to some confidence level) have been derived. We have [4] dmin ≥ Nδ(c, d) w.p. 1 − o(1), where

δ(c, d) ∆= inf{α > 0 : lim_{N→∞} (1/N) ln µ_{αN} ≥ 0}   (2)

is the typical normalized minimum distance, and [5],

dmin ≤ N · inf{α : β_α < ε, (1/N) ln µ_{αN} > 0}

w.p. at least 1 − ε² − o(1) (the confidence level).

In this paper we consider the error probability, Pe, of a code that is drawn at random with uniform probability from the ensemble and transmitted over a memoryless binary-input output-symmetric (MBIOS) channel. We denote the channel transition probability distribution by P(y|x), where x ∈ {0, 1} and y takes values from the channel output alphabet. For convenience we denote the Bhattacharyya parameter of the channel by

B ∆= ∑_y √(P(y|0) P(y|1))

We assume a discrete output channel, although the results immediately extend also to continuous output channels, with the obvious notational changes (converting summations to integrals). In the following sections we first recall known lower bounds and then obtain upper bounds on the error exponent −(1/N) ln Pe of a code in the ensemble that hold with some confidence level, similar to the bounds obtained in [5] on the spectrum and minimum distance.

IV. A LOWER BOUND ON THE ERROR EXPONENT

We consider two ensembles. The standard ensemble of regular-(c,d) LDPC codes, and the corresponding expurgated ensemble, that comprises the codes in the standard ensemble, except for those codes with codewords whose weight is either less than δ(c, d) − ε or larger than 1 − δ(c, d) + ε. Here ε > 0 can be chosen arbitrarily small, and δ(c, d) is the typical normalized minimum distance (see (2)). As we already noted above, for sufficiently large values of c and d, δ(c, d) approaches δGV(R). For sufficiently large block length N, the expurgated ensemble comprises almost all codes in the standard ensemble, and therefore this is the ensemble of typical codes [4], [3]². We denote the average distance spectrum of the expurgated ensemble by µ^x. Since for large values of N, almost all codes in the ensemble also belong to the expurgated ensemble [3], we have

µ^x_{αN} ≤ (1 + o(1)) µ_{αN}   (3)

Now consider a code drawn at random with uniform probability from the expurgated ensemble (i.e., we consider a typical code in the standard ensemble). The error probability of the code is Pe, and its weight distribution is S^x_{αN}.

Since the code is linear and the channel is MBIOS, the error probability is independent of the transmitted codeword. Hence, without loss of generality we assume that the all-zero codeword was transmitted. Let Y be the channel output random variable, and denote by {c_m}_{m=0}^{M−1} the codewords, where c_0 is the all-zero codeword and M = 2^{NR} is the number of codewords. Then,

Pe ≤ Pr{ ∪_{m=1}^{M−1} (Y : P(Y|c_m) ≥ P(Y|c_0)) }

Denote by P̄e the average error probability in the expurgated ensemble. Then by the union bound, it can be shown that

P̄e ≤ N max_{1≤i≤N} { µ^x_i Pr( ∑_{j=1}^{i} −ln[P(Yj|0)/P(Yj|1)] ≥ 0 ) }

²In [4], [3] the expurgated ensemble comprises all codes in the standard ensemble, except for those codes whose minimum distance is less than δ(c, d) − ε. In this paper it is more convenient to modify the definition of the expurgated ensemble. When d is even, the spectrum is symmetric, i.e. S_i = S_{N−i} for all i ≥ 0 (since the all-ones word is a codeword). Therefore, in this case the two definitions of the expurgated ensemble are identical. When d is odd, by [3, Lemma 2, (21)], the fraction of codes that have codewords with weight larger than 1 − h^{−1}(1 − R) is exponentially (in N) small. Thus the fraction of codes that have codewords with weight larger than 1 − δ(c, d) + ε is also exponentially (in N) small. Therefore we can follow the arguments in [3] and conclude that in this case too, the expurgated ensemble comprises almost all codes in the standard ensemble.


Using this and (3), it can be further shown that

−(1/N) ln P̄e ≥ E_LB + o(1)   (4)

where

E_LB ∆= − max_{δ(c,d)≤α≤1−δ(c,d)} { (1/N) ln µ_{αN} + α ln B }   (5)

In addition, by Markov's inequality, it follows that

Pr( −(1/N) ln Pe > E_LB + o(1) ) > 1 − o(1)   (6)

and eventually, that for any ε > 0, if d is sufficiently large,

−(1/N) ln P̄e ≥ { −δGV(R) ln B + o(1) − ε,  R < Rx;
                R0 − R ln 2 + o(1) − ε,  R ≥ Rx }   (7)

where R0 and Rx, the cutoff rate and the lowest rate at which the random coding and expurgated exponents are equal, respectively, are defined by

R0 ∆= −ln((B + 1)/2)

and

Rx ∆= 1 − (1/ln 2) · h(B/(B + 1))

Thus the bound (6) implies the following. For any ε > 0 and rate R < Rcrit, if d is sufficiently large then

Pr{ −(1/N) ln Pe > max(Erc(R), Eexp(R)) − ε + o(1) } > 1 − o(1)   (8)
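Both quantities are easy to evaluate once B is known. A minimal sketch (function names ours; note that R0 is in nats while Rx, like R, is in bits per channel use, which is why (7) carries the R ln 2 conversion):

```python
import math

def h(x):
    # binary entropy in nats
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def cutoff_rate(B):
    # R0 = -ln((B + 1)/2), in nats
    return -math.log((B + 1) / 2)

def rate_x(B):
    # Rx = 1 - (1/ln 2) h(B/(B + 1)), in bits per channel use
    return 1 - h(B / (B + 1)) / math.log(2)
```

For instance, for a BSC with p = 0.01 (so B ≈ 0.199) this gives R0 ≈ 0.512 nats and Rx ≈ 0.352 bits; at B = 1 (a useless channel) both are zero.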

V. A dmin-BASED UPPER BOUND

We use the confidence interval on the minimum distance that we obtained in [5] to derive an upper bound on the error exponent −(1/N) ln Pe of a code that is drawn at random with uniform probability from the ensemble. This bound holds with the same confidence level that is specified for the minimum distance. The bound is based on the confidence interval on the minimum distance together with the two-codewords upper bound on the error probability [8, Chapter 5]. Recall that without loss of generality we can assume that the zero codeword was transmitted. Suppose that our code has minimum distance dmin. Then,

Pe ≥ Pr( ∑_{i=1}^{dmin} −ln[P(Yi|0)/P(Yi|1)] > 0 )   (9)

where Yi is the i'th channel output random variable given that the transmitted codeword (and in particular the i'th bit) was zero. If for some fixed δ, dmin ≤ δN, then it can be shown that

−(1/N) ln Pe ≤ −δ ln B + o(1)   (10)

In [5] we obtained upper bounds on the minimum distance. For d sufficiently large these bounds assert that δ approaches δGV(R) with probability that approaches one. Note in addition, that when δ = δGV(R), the two-codewords bound (10) coincides with the expurgated exponent lower bound on the error exponent [8, Chapter 5] for rates R ≤ Rx. Combining these results with (8), we obtain that when R < Rx the union bound-based analysis is tight, that is, the true error exponent is the expurgated exponent, with confidence level that approaches one as d → ∞.

VI. SECOND UPPER BOUND ON THE ERROR EXPONENT

In this section we use the lower bound on the probability of a union due to Dawson and Sankoff [9], which asserts the following. Let {Ai}_{i∈I} be a finite family of events in a probability space (Ω, P). Denote by S1 = ∑_{i∈I} P(Ai) and S2 = ∑_{i∈I, j∈I} P(Ai ∩ Aj), where we sum only over unordered pairs of distinct events. Then

P( ∪_{i∈I} Ai ) ≥ (2/(r + 1)) S1 − (2/(r(r + 1))) S2   (11)

for any positive integer r. Note that this bound is an improvement of the Bonferroni inequality, which can be obtained from (11) by setting r = 1. We also note that an improvement on [9] was proposed by de Caen [10]. Applying (11) in our case yields

Pe ≥ (2/(r + 1)) P1 − (2/(r(r + 1))) P2   (12)

where r is any positive integer,

P1 = ∑_{m=1}^{M−1} Pr{Y : Am}

P2 = ∑_{0<m1<m2<M} Pr{Y : A_{m1} ∩ A_{m2}}   (13)

and the event Am ∆= {P(Y|c_m) > P(Y|c_0)}.
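As a sanity check on the Dawson-Sankoff bound (11), one can compare both sides exactly on a toy probability space (the space and the three events below are our own illustration, not from the paper):

```python
from fractions import Fraction
from itertools import combinations

# toy probability space: uniform on {0,...,5}, three overlapping events
events = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}]

def prob(A):
    return Fraction(len(A), 6)

union = prob(set().union(*events))                         # exact P(A1 u A2 u A3)
S1 = sum(prob(A) for A in events)                          # sum of P(Ai)
S2 = sum(prob(A & B) for A, B in combinations(events, 2))  # unordered distinct pairs

# Dawson-Sankoff lower bound (11) for several integer values of r
bounds = {r: Fraction(2, r + 1) * S1 - Fraction(2, r * (r + 1)) * S2
          for r in (1, 2, 3)}
assert all(b <= union for b in bounds.values())  # the bound holds for every r
```

Here the tightest value is attained at r = 2, consistent with choosing r near 1 + 2S2/S1.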

Now it can be verified that

P1 = ∑_{i=1}^{N} S^x_i Pr( ∑_{j=1}^{i} −ln[P(Yj|0)/P(Yj|1)] > 0 )

By our concentration results [5] (recall (1)), and the fact that almost all codes in the standard ensemble also belong to the expurgated one, we obtain that if δ(c, d) < α < 1 − δ(c, d) (so that ln µ_{αN} > 0) then

(1/N) ln S^x_{αN} = (1/N) ln µ_{αN} + o(1)   (14)

w.p. at least 1 − β²_α − o(1), and β²_α → 0 as d → ∞. Hence it can be obtained that

−(1/N) ln P1 < E_LB + o(1)   (15)

(recall (5)) w.p. at least 1 − β²_{α0} − o(1), where α0 maximizes the expression within {·} on the right hand side of (5) (hence the confidence level is determined by the maximizing α). Repeating the previous development of the lower bound to −ln P̄e/N, (7), we obtain that for any ε > 0, if d is sufficiently large then w.p. at least 1 − ε,

−(1/N) ln P1 < { −δGV(R) ln B + o(1) + ε,  R < Rx;
               R0 − R ln 2 + o(1) + ε,  R ≥ Rx }   (16)


Now P2 can also be expressed as

P2 = ∑_{i,j,k} S^x_{i,j,k} e_{i,j,k}

where the first term, S^x_{i,j,k}, is the number of unordered codeword pairs with Hamming weights i and k and with an overlap of j ones, of some code in the expurgated ensemble. The second term, e_{i,j,k}, is the probability that, under the all-zero transmitted codeword assumption, the channel output is such that both codewords are more likely than the zero codeword. That is,

e_{i,j,k} ∆= Pr( ∑_{l=1}^{i} −ln[P(Yl|0)/P(Yl|1)] > 0 ,  ∑_{l=1}^{j} −ln[P(Yl|0)/P(Yl|1)] + ∑_{l=i+1}^{i+k−j} −ln[P(Yl|0)/P(Yl|1)] > 0 )

where {Yl}_{l=1}^{i+k−j} are i.i.d. random variables that correspond to the channel output given that zero was transmitted. Now,

P̄2 = ∑_{i,j,k} S̄^x_{i,j,k} e_{i,j,k}   (17)

where S̄^x_{i,j,k} = E S^x_{i,j,k}. Denote i = αN, j = δN, k = βN. Note that the range of summation is restricted to

δ(c, d) − ε < α < 1 − δ(c, d) + ε   (18)
δ(c, d) − ε < β < 1 − δ(c, d) + ε   (19)
δ(c, d) − ε < α + β − 2δ < 1 − δ(c, d) + ε   (20)

(the restriction (20) is due to the fact that the sum (modulo-2) of the two given codewords is also a codeword).

Similar to (3) we have

S̄^x_{αN,δN,βN} ≤ (1 + o(1)) S̄_{αN,δN,βN}   (21)

and using the methods in [5], one may obtain that

(1/N) ln S̄^x_{αN,δN,βN} ≤ h(α − δ, δ, β − δ) − 2(1 − R) ln 2 + (1 − R) ln[1 + 3|1 − 2(δ(c, d) − ε)|^d] + o(1)   (22)

Now consider Shannon's random coding ensemble. We can assume that the first codeword is the all-zero codeword, without changing the error probability of the ensemble. Let P̄^rc_2 denote the average value of P2, defined in (13), corresponding to this ensemble. Then similar to (17) we have

P̄^rc_2 = ∑_{i,j,k} S̄^rc_{i,j,k} e_{i,j,k}   (23)

where

S̄^rc_{αN,δN,βN} = (N; (α−δ)N, δN, (β−δ)N) · ((M − 1)(M − 2)/2) · 2^{−2N}

Hence,

(1/N) ln S̄^rc_{αN,δN,βN} = h(α − δ, δ, β − δ) − 2(1 − R) ln 2 + o(1)   (24)
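The entropy term in (24) is just the exponent of the multinomial coefficient from Section II. A quick numerical spot-check (our own sketch, with arbitrary sample values of α, δ, β), using lgamma as a Stirling approximation:

```python
import math

def log_multinomial(N, ks):
    # ln[ N! / (k1! ... kl! (N - sum ki)!) ], computed via lgamma
    rest = N - sum(ks)
    return (math.lgamma(N + 1) - sum(math.lgamma(k + 1) for k in ks)
            - math.lgamma(rest + 1))

def H(ps):
    # entropy in nats of the distribution (p1, ..., pl, 1 - sum pi)
    qs = list(ps) + [1 - sum(ps)]
    return -sum(q * math.log(q) for q in qs if q > 0)

# with i = alpha N, j = delta N, k = beta N, the normalized log of the
# multinomial coefficient tends to h(alpha - delta, delta, beta - delta)
N = 10**6
alpha, delta, beta = 0.30, 0.10, 0.25
ks = [round((alpha - delta) * N), round(delta * N), round((beta - delta) * N)]
stirling = log_multinomial(N, ks) / N
entropy = H([alpha - delta, delta, beta - delta])
```

The remaining −2(1 − R) ln 2 in (24) then comes from the factor (M − 1)(M − 2)/2 · 2^{−2N} with M = 2^{NR}.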

From (22) and (24) we conclude that

(1/N) ln S̄^x_{αN,δN,βN} ≤ (1/N) ln S̄^rc_{αN,δN,βN} + ∆ + o(1)

where

∆ ∆= (1 − R) ln[1 + 3|1 − 2(δ(c, d) − ε)|^d]   (25)
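Since δ(c, d) ∈ (0, 1/2), the term |1 − 2(δ(c, d) − ε)|^d decays geometrically in d, so ∆ vanishes as the graph connectivity grows. A small sketch (our own; δ(c, d) is treated as a free parameter here, since computing it requires the ensemble spectrum):

```python
import math

def spectrum_gap(R, delta_cd, d, eps=0.0):
    # the gap term (25); delta_cd stands in for delta(c, d)
    return (1 - R) * math.log(1 + 3 * abs(1 - 2 * (delta_cd - eps)) ** d)

# for fixed R and 0 < delta_cd < 1/2 the gap vanishes as d grows
gaps = [spectrum_gap(0.2, 0.24, d) for d in (10, 50, 250)]
```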

Combining this with (17) and (23) we have

P̄2 < P̄^rc_2 · e^{N·[∆+o(1)]}

where ε > 0 in (25) can be chosen arbitrarily small. If we express P̄^rc_2 as in [2], it can be shown that

P̄^rc_2 ≤ exp{−N[E0(2, Q) − 2R ln 2]}

where Q is the uniform distribution, Q(0) = Q(1) = 1/2, and

E0(ρ, Q) ∆= −ln ∑_y ( ∑_k Q(k) P(y|k)^{1/(1+ρ)} )^{1+ρ}
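For a BSC the inner sum has two equal output terms, so E0(ρ, Q) is straightforward to evaluate. A hedged sketch (function name ours), including the standard consistency check E0(1, Q) = R0:

```python
import math

def E0_bsc(rho, p):
    # Gallager's E0(rho, Q) for a BSC with uniform Q(0) = Q(1) = 1/2;
    # both output symbols contribute the same inner sum
    inner = 0.5 * (1 - p) ** (1 / (1 + rho)) + 0.5 * p ** (1 / (1 + rho))
    return -math.log(2 * inner ** (1 + rho))

p = 0.05
B = 2 * math.sqrt(p * (1 - p))
R0 = -math.log((B + 1) / 2)  # cutoff rate, in nats
assert abs(E0_bsc(1, p) - R0) < 1e-12  # E0(1, Q) = R0
```

Setting ρ = 2 gives the E0(2, Q) appearing in the bound above.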

Now, Markov's inequality yields

(1/N) ln P2 < (1/N) ln P̄2 + o(1)

w.p. 1 − o(1). Hence

−(1/N) ln P2 > E0(2, Q) − 2R ln 2 − ∆ + o(1)   (26)

w.p. 1 − o(1). Denote

φ1 ∆= − max_{δ(c,d)≤α≤1−δ(c,d)} { (1/N) ln µ_{αN} + α ln B }   (27)

φ2 ∆= E0(2, Q) − 2R ln 2 − (1 − R) ln[1 + 3|1 − 2δ(c, d)|^d]

Note that φ1 = E_LB. By (15), P1 > e^{−N[φ1+o(1)]} w.p. at least 1 − β²_{α0} − o(1) (recall that α0 is the value of α that maximizes the expression within {·} in (27)), and by (26), P2 < e^{−N[φ2+o(1)]} w.p. 1 − o(1). If φ2 > φ1 we use (12) with r = 1. Otherwise we use (12) with r = e^{N(φ1−φ2+ε)}, where ε > 0 can be arbitrarily small. We thus obtain the following upper bound on the error exponent,

−(1/N) ln Pe < { φ1 + o(1),  if φ2 > φ1;
               2φ1 − φ2 + o(1),  if φ2 ≤ φ1 }   (28)

w.p. at least 1 − β²_{α0} − o(1). Hence when φ2 > φ1, the lower bound (6) (which is a consequence of the union bound) is tight. By the previous development in Section V we already know that when R < Rx and d → ∞, the true error exponent is the expurgated exponent. When R ≥ Rx, we have shown (see (16)) that for any ε > 0, if d is sufficiently large then φ1 approaches R0 − R ln 2, which is the random coding error exponent for R < Rcrit. Hence, in the region Rx ≤ R ≤ Rcrit, for large enough d the true error exponent approaches the random coding error exponent with probability that approaches one, provided that φ2 > φ1, which for sufficiently large d is equivalent to

R < (E0(2, Q) − R0) / ln 2


This result can be combined with the recent results of Barg and McGregor [11, Theorem 17]. It can be verified numerically that when the crossover probability of a binary symmetric channel (BSC) satisfies 0.075 < p < 0.5, the true error exponent approaches max(Erc(R), Eexp(R)) as d → ∞ for all rates.

VII. EXAMPLE

We consider a BSC with crossover probability p. In Fig. 1 we plot the minimum distance-based upper bound on the error exponent (10) for the regular-(5,25) LDPC code ensemble at a confidence level of 95%, and compare it to the random coding and sphere packing bounds.

[Figure 1: error exponent vs. crossover probability; curves: random coding, sphere packing, minimum distance-based bound (95%).]

Fig. 1. The random coding, sphere packing and minimum distance-based bound (95% confidence level) on the error exponent for the regular-(5,25) LDPC ensemble transmitted over a BSC. The Shannon limit is 0.031. The normalized minimum distance is 0.015 with a confidence level of 95%.

In Fig. 2 we plot the minimum distance-based upper bound on the error exponent (10) and the second upper bound (28), both at a confidence level which is at least 99.9%, for the regular-(8,10) LDPC code ensemble. We compare these bounds to the lower (union) bound (6) and to the sphere packing bound. The figure also shows the region where the union bound coincides with the upper bound, and is thus the true error exponent with confidence level at least 99.9%.

VIII. CONCLUSION

We obtained upper bounds on the error exponent of regular LDPC code ensembles, and showed that these bounds can be used to obtain the true error exponent over some range of channel parameter values with some confidence level that approaches one as the graph connectivity increases.

However, further research is required in order to determine the true error exponent over the entire range of the parameter values.

[Figure 2: error exponent vs. crossover probability; curves: sphere packing, dmin-based bound (99.9%), second upper bound, lower (union) bound.]

Fig. 2. The union bound, sphere packing, minimum distance-based bound and the second upper bound on the error exponent (both at a confidence level of at least 99.9%) for the regular-(8,10) LDPC ensemble transmitted over a BSC. The Shannon limit is 0.243. The normalized minimum distance is 0.242 with a confidence level of 99.9%. The union bound is the true error exponent for crossover probabilities between zero and the point marked by 'x' with confidence level at least 99.9%.

REFERENCES

[1] A. Barg and G. D. Forney, "Random codes: Minimum distances and error exponents," IEEE Transactions on Information Theory, vol. 48, no. 9, pp. 2568-2573, Sept. 2002.

[2] R. G. Gallager, "The random coding bound is tight for the average code," IEEE Transactions on Information Theory, vol. 19, no. 2, pp. 244-246, Mar. 1973.

[3] G. Miller and D. Burshtein, "Bounds on the maximum likelihood decoding error probability of low density parity check codes," IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 2696-2710, Nov. 2001.

[4] R. G. Gallager, Low Density Parity Check Codes. Cambridge, Massachusetts: M.I.T. Press, 1963.

[5] O. Barak and D. Burshtein, "Lower bounds on the spectrum and error rate of LDPC code ensembles," in Proceedings of the International Symposium on Information Theory (ISIT 2005), Adelaide, Australia, Sept. 2005, pp. 42-46.

[6] V. Rathi, "On the asymptotic weight distribution of regular LDPC ensembles," in Proceedings of the International Symposium on Information Theory (ISIT 2005), Adelaide, Australia, Sept. 2005, pp. 2161-2165.

[7] O. Barak and D. Burshtein, "Lower bounds on the spectrum and error rate of LDPC code ensembles," submitted to IEEE Transactions on Information Theory.

[8] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.

[9] D. A. Dawson and D. Sankoff, "An inequality for probabilities," Proc. Amer. Math. Soc., vol. 18, pp. 504-507, 1967.

[10] D. de Caen, "A lower bound on the probability of a union," Discrete Mathematics, vol. 169, pp. 217-220, 1997.

[11] A. Barg and A. McGregor, "Distance distribution of binary codes and the error probability of decoding," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4237-4246, Dec. 2005.
