
An Introduction to Distributed Channel Coding

Alexandre Graell i Amat† and Ragnar Thobaben‡

†Department of Signals and Systems, Chalmers University of Technology,

Gothenburg, Sweden

‡School of Electrical Engineering, Royal Institute of Technology (KTH),

Stockholm, Sweden

October 1, 2013

Abstract

This chapter provides an introductory survey on distributed channel coding techniques for relay networks. The main focus is on decode-and-forward relaying for the basic three-node relay channel. We show how linear block code structures can be deduced from fundamental information theoretic communication strategies. Code design and optimization are discussed taking low-density parity-check (LDPC) block codes and spatially-coupled LDPC codes as particular examples. We also provide an overview on distributed codes that are based on convolutional codes and turbo-like codes, and discuss extensions to multi-source cooperative relay networks.


Contents

1 Introduction

2 The Three-node Relay Channel
  2.1 Basic Model
  2.2 Relaying Strategies
  2.3 Fundamental Coding Strategies for Decode-and-Forward Relaying
    2.3.1 Full-duplex Relaying
    2.3.2 Half-Duplex Relaying
    2.3.3 Design Objectives: Achieving the Optimal Decode-and-Forward Rates

3 Distributed Coding for the Three-node Relay Channel
  3.1 LDPC Code Designs for the Relay Channel
    3.1.1 Code Structures for Decode-and-Forward Relaying
    3.1.2 Irregular LDPC Codes
    3.1.3 Spatially-coupled LDPC Codes
  3.2 Distributed Turbo-Codes and Related Code Structures
    3.2.1 Code optimization
    3.2.2 Noisy Relay

4 Relaying with Uncertainty at the Relay
  4.1 Compress-and-Forward Relaying
  4.2 Soft-Information Forwarding and Estimate-and-Forward

5 Cooperation with Multiple Sources
  5.1 Two-user Cooperative Network: Coded Cooperation
  5.2 Multi-source Cooperative Relay Network

6 Summary and Conclusions


Figure 1: The three-node relay channel. A source communicates with a destination with the help of a relay.

1 Introduction

Since Marconi's first radio link between a land-based station and a tugboat, wireless communications have flourished tremendously and have become central in our everyday life. In the past decades, wireless communications have expanded at an unprecedented pace. The number of worldwide mobile subscribers has increased from a few million in 1990 to more than 4 billion in 2010. To enable an ever-increasing number of wireless devices and applications, the challenge for researchers and engineers has always been to design communication systems that achieve high reliability, spectral and power efficiency, and are able to mitigate fading. A way to tackle this challenge is by exploiting diversity in time, frequency, or space. A well-known technique to exploit spatial diversity consists of employing more than one antenna at the transmitter. However, many wireless devices have limited size or hardware capabilities, and it is therefore not always possible to employ multiple antennas. Cooperative communications offers an alternative way to achieve spatial diversity.

In traditional wireless communication networks, communication is performed over point-to-point links and nodes operate as store-and-forward packet routers. In this scenario, if a source communicates with a destination and the direct link cannot provide error-free transmission, the communication is performed in a multi-hop fashion through one or multiple relay nodes. However, this model is unnecessarily wasteful because, due to the broadcast nature of the wireless channel, nodes within a certain range may overhear the transmission of other nodes. Therefore, it seems reasonable that these nodes help each other, i.e., cooperate somehow, to transmit information to the destination. This paradigm is known as cooperative communication. Similarly to multiple-antenna systems, cooperative communications achieve transmit diversity by generating a virtual multiple-antenna transmitter, where the antennas are distributed over the wireless nodes. Cooperative communications have been shown to yield significant improvements in terms of reliability, throughput, power efficiency, and bandwidth efficiency.

The basic principles of cooperative communications can be traced back to the 1970s, when van der Meulen introduced the relay channel [1], depicted in Fig. 1. It is a simple three-node cooperative network where one source communicates with a destination with the help of a relay, yet it captures the main features and characteristics of cooperation. In a conventional single-hop system, the destination in Fig. 1 would decode the message transmitted by the source solely based on the direct transmission. However, due to the broadcast nature of the channel, the device at the top overhears the transmission from the source. Therefore, it can help improve the communication between the source and the destination by forwarding additional information about the source message. The destination can then decode the source message based on the combination of the two signals


from the source and the relay. As each transmission undergoes a different path, spatial diversity is achieved.

For the classical three-node relay channel, Cover and El Gamal [2] described two fundamental relaying strategies where the relay either decodes (decode-and-forward) or compresses (compress-and-forward) the received source transmission before forwarding it to the destination. As an alternative, the relay may simply amplify and retransmit the signal received from the source, a strategy known as amplify-and-forward. Cover and El Gamal also derived inner and outer bounds on the capacity of the relay channel [2]. The key result of this pioneering work is that, in many instances, the overall capacity is better than the capacity of the source-to-destination channel. In their work, it was assumed that all nodes operate in the same frequency band. Hence, the system can be decomposed into a broadcast channel from the viewpoint of the source and a multiple-access channel from the viewpoint of the destination, leading to interference at the destination. They also assumed full-duplex operation at the relay, i.e., the relay is able to transmit and receive simultaneously in the same frequency band.

Despite the early works by van der Meulen and by Cover and El Gamal in the 1970s, relaying and cooperative communications in wireless networks remained mostly unexplored for three decades. However, it has probably been one of the most intensively researched topics in the information theory and communication theory communities in the last ten years. The boom in research on cooperative communications occurred in the early 2000s and was triggered by the seminal paper by Laneman, Tse and Wornell [3] on cooperative diversity, and the work by Hunter and Nosratinia [4]. The goal of these works was to provide transmit diversity to single-antenna nodes in wireless networks through cooperation. To achieve cooperation, nodes typically exchange their messages in a first step and perform a cooperative transmission of all messages in a second step. These works also triggered a large amount of work in the information theory community, identifying the fundamental limits of cooperative strategies [5]. Nevertheless, although a great deal of work has been done in this field, it is remarkable that even the capacity of the basic three-node relay channel is only known for special cases. For example, for the degraded relay channel, decode-and-forward relaying achieves capacity.

From an information theoretic point of view, the highest gains can be achieved when the source and the relay transmit over the same channel and full-duplex operation at the relay is considered [2]. Nevertheless, due to practical constraints, it is considered a challenge to provide full-duplex operation at the relay [6]. Likewise, without enforcing further multiple-access constraints, interference becomes another significant practical challenge [7]. It is therefore relevant to consider a scenario where transmissions take place over orthogonal channels (using, e.g., time division multiple access (TDMA)) and the relay operates in half-duplex mode. Most of the relevant literature in cooperative communications makes this assumption. Fundamental limits of the scenario with orthogonal channels have been derived in [3]–[9].

Distributed Channel Coding

Since the early works on cooperative communications, the concept of cooperation has been extended to a myriad of communication networks. Many different cooperative strategies and network topologies, consisting of one or multiple transmitters, relays, and receivers, have been considered and studied in recent years. These cooperative strategies are often based on multiple-antenna techniques like, e.g., distributed space-time coding or beamforming. Other approaches have their roots in channel coding techniques. To harvest the potential gains of cooperative communications, point-to-point channel coding can indeed be extended to the network scenario, a concept that is


known as distributed channel coding. Consider for example that the source in Fig. 1 transmits an uncoded message and that the relay, after decoding it, forwards another copy of the source message. The destination receives two (noisy) versions of the same message; therefore, repetition coding distributed between the source and the relay has been realized. This trivial concept can be generalized to more sophisticated coding structures. Assume that each transmit node in the network uses an error correcting code, which may be very simple, e.g., a short block code, or very advanced, e.g., a low-density parity-check (LDPC) code. The main idea of distributed channel coding is that a more powerful code, distributed over the network nodes, can be constructed by properly joining together the codes used by each node. The way channel coding for cooperation is implemented in communication networks depends heavily on the network topology, the considered cooperative strategy, the channel model, and the purpose of the cooperation. Some code designs follow intuitively from the topology of the network, while other approaches are directly inspired by communication strategies proposed in the information theory literature. Yet another set of solutions use channel coding as a tool to implement distributed source coding schemes that are an integral part of compress-and-forward schemes. Depending on the network topology, ideas from network coding may be integrated, and depending on the purpose of the cooperation, the schemes may be optimized to perform close to the highest achievable rates or they may be optimized for diversity and outage.
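The distributed repetition code just described can be made concrete with a small simulation. The sketch below (a toy Python illustration with arbitrary parameters of our choosing, not a scheme from the literature) sends a BPSK symbol over two independent unit-variance AWGN links, assumes the relay has decoded correctly, and lets the destination average the two noisy copies before deciding:

```python
import random

def transmit(x, noise_std, rng):
    """One AWGN channel use: received symbol = sent symbol + Gaussian noise."""
    return x + rng.gauss(0.0, noise_std)

def error_rate(noise_std, use_relay_copy, trials=20000, seed=1):
    """BPSK error rate with and without the relay's repeated copy."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        bit = rng.randint(0, 1)
        x = 1.0 if bit else -1.0
        y = transmit(x, noise_std, rng)
        if use_relay_copy:
            # The relay decoded correctly and forwards the same symbol;
            # the destination averages the two noisy observations,
            # halving the effective noise variance.
            y = 0.5 * (y + transmit(x, noise_std, rng))
        errors += (y > 0.0) != (bit == 1)
    return errors / trials

p_direct = error_rate(1.0, use_relay_copy=False)
p_coop = error_rate(1.0, use_relay_copy=True)
print(p_direct, p_coop)
```

With these settings, the cooperative error rate is roughly half that of the direct link alone, reflecting the SNR gain obtained by combining two independently disturbed observations.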

This chapter provides an introductory survey on distributed channel coding techniques for cooperative communications. From a code design perspective, decode-and-forward relaying is the most attractive relaying strategy, as it guarantees that the transmitted messages are known at the relay nodes such that a distributed coding scheme can be set up. Hence, our focus in this chapter is on decode-and-forward relaying. For pedagogical purposes, the main principles underlying distributed channel coding are developed for the basic three-node relay channel. We introduce the main information-theoretic concepts and show how these concepts translate in terms of channel code designs. We discuss code design and optimization taking LDPC block codes and the recently introduced spatially-coupled LDPC codes as examples. We also provide an overview of distributed channel coding based on convolutional and turbo-like codes and discuss extensions of the code constructions to other cooperative network topologies.

The chapter is organized as follows. Section 2 introduces the basic model of the three-node relay channel, and gives an overview of the fundamental coding strategies for decode-and-forward relaying. In Section 3, a survey on distributed channel coding for the relay channel is provided, with focus on LDPC block codes, spatially-coupled LDPC codes, and turbo-like codes. In Section 4, we briefly discuss relaying strategies when reliable decoding cannot be guaranteed at the relay. In Section 5 we discuss generalizations of the distributed channel coding schemes of Section 3 to multi-source cooperative relay networks. Finally, Section 6 concludes the chapter and highlights some of the challenges of distributed channel coding.

Notations

To ease the presentation in the remainder of the chapter, we introduce the following notation. Throughout the chapter, we use bold lowercase letters a to denote vectors, bold uppercase letters A to denote matrices, and uppercase letters A to denote random variables. We assume all vectors to be row vectors.


Figure 2: Three-node relay channel model.

2 The Three-node Relay Channel

In this section, we focus on the simplest cooperative channel model, i.e., the three-node relay channel [1]. After introducing the general model, we briefly describe the three main relaying strategies: amplify-and-forward, decode-and-forward, and compress-and-forward. We then summarize fundamental bounds on the achievable rates under the decode-and-forward relaying strategy, and discuss the fundamental communication strategies proposed in the information theory literature to achieve these bounds for both half-duplex and full-duplex relaying.

2.1 Basic Model

The three-node relay channel describes the scenario where a source node conveys a message W ∈ {0, . . . , 2^k − 1} to a destination with the help of a single relay. The message W, which may equivalently be represented by a length-k binary vector b, is encoded into a codeword xS and transmitted by the source. The corresponding vectors of channel observations at the relay and the destination are denoted as yR and y, respectively. The codeword transmitted from the relay is denoted by xR. The three-node relay channel is illustrated in Figure 2.

In the most general case, the relation between the two channel input symbols XS and XR from the source and the relay, respectively, and the channel output symbols YR and YD at the relay and the destination is described by the conditional distribution p(YR, YD|XS, XR). For independent channel observations YR and YD, we obtain

p(YR, YD|XS, XR) = p(YR|XS, XR)p(YD|XS, XR).

Note that in some cases (e.g., for the full-duplex relay channel with decode-and-forward relaying) the channel observations at the relay YR depend on symbols previously transmitted by the relay XR due to the chosen transmit strategy.

AWGN Relay Channel

For the additive white Gaussian noise (AWGN) relay channel we can refine the model and characterize the input-output relation of the channel as

YR = HSRXS + ZR,

YD = HSDXS + HRDXR + ZD,

where ZR and ZD are independent real-valued white Gaussian noise samples with zero mean and unit variance, HSR, HSD, and HRD are channel coefficients on the source-relay, source-destination, and relay-destination links, and XS and XR are the code symbols with power constraints E{XS^2} = PS and E{XR^2} = PR which are transmitted from the source and the relay, respectively. In this model, the two competing transmissions from the source and the relay form a multiple-access channel (MAC). The model is depicted in Figure 3(a).

Figure 3: AWGN relay channel with competing transmissions from the source and the relay (a), AWGN relay channel with orthogonal transmissions from the source and the relay (b), and binary erasure relay channel (c).

As handling interference is in general a practical challenge [7], in some cases it is convenient to assume that transmissions from the source and the relay are carried out on orthogonal channels (see Figure 3(b)). In this case, the channel observation at the destination may be replaced by a vector of channel observations Y = [Y1, Y2], with

Y1 = HSDXS + Z1,

Y2 = HRDXR + Z2,

where Z1 and Z2 denote independent real-valued white Gaussian noise samples with zero mean and unit variance.

Binary Erasure Relay Channel

From a code design point of view, it is also convenient to consider the binary erasure relay channel as shown in Figure 3(c). This model again considers orthogonal channels for the links to the destination, and the source-relay, source-destination, and relay-destination links are given by binary erasure channels (BECs) with erasure probabilities εSR, εSD, and εRD, respectively.
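The channel models above translate directly into a few lines of simulation code. The following Python sketch implements one use of the orthogonal AWGN relay channel of Figure 3(b); the channel gains and inputs are arbitrary illustrative values:

```python
import random

def relay_channel_use(x_s, x_r, h_sr, h_sd, h_rd, rng):
    """One use of the orthogonal AWGN relay channel:
    YR = HSR*XS + ZR,  Y1 = HSD*XS + Z1,  Y2 = HRD*XR + Z2,
    with independent unit-variance Gaussian noise on every link."""
    y_r = h_sr * x_s + rng.gauss(0.0, 1.0)  # observation at the relay
    y1 = h_sd * x_s + rng.gauss(0.0, 1.0)   # direct source-destination link
    y2 = h_rd * x_r + rng.gauss(0.0, 1.0)   # relay-destination link
    return y_r, (y1, y2)

rng = random.Random(0)
y_r, (y1, y2) = relay_channel_use(1.0, -1.0, h_sr=2.0, h_sd=0.5, h_rd=1.0, rng=rng)
print(y_r, y1, y2)
```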


2.2 Relaying Strategies

According to the way the information is processed at the relay, it is possible to define several cooperative strategies. The three major relaying strategies, amplify-and-forward, decode-and-forward, and compress-and-forward, are briefly described in the following.

• Amplify-and-forward: Amplify-and-forward is perhaps the conceptually easiest cooperative strategy to understand. The relay simply retransmits a scaled version of the signal it receives from the source, subject to a power constraint. The destination receives two independently-faded versions of the information and is thus able to make better decisions. The main drawback of this strategy is that it leads to noise amplification.

• Decode-and-forward: The relay attempts to decode the received signal, then generates an estimate of the source message and re-encodes it prior to forwarding it to the destination. The decode-and-forward strategy performs very well in the case of successful decoding at the relay. However, when the relay fails to correctly decode the received signal, an error propagation phenomenon is observed, and the decode-and-forward strategy may not be beneficial. For this reason, adaptive decode-and-forward methods have been proposed, where the relay detects and forwards the source information only in the case of a high instantaneous signal-to-noise ratio on the source-to-relay link.

• Compress-and-forward: The relay is no longer required to decode the information transmitted by the source but simply to describe its observation to the destination. The compress-and-forward strategy is used when the relay cannot decode the information sent by the source. The relay compresses the received signal using the side information from the direct link and forwards the compressed information to the destination. Unlike decode-and-forward, compress-and-forward remains beneficial even when the source-to-relay link is not error-free. Furthermore, as opposed to decode-and-forward, in compress-and-forward the relay does not use any knowledge of the codebook used by the source.

In [10], a comparison of decode-and-forward and compress-and-forward was performed according to the relay location. It was shown that the achievable rate of decode-and-forward is higher when the relay is close to the source, while compress-and-forward outperforms decode-and-forward when the relay gets closer to the destination. In this chapter, as our main focus is on distributed coding, we consider only the decode-and-forward strategy.
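The noise amplification incurred by amplify-and-forward can be quantified with the standard equivalent-SNR expression for a two-hop AWGN link, γAF = γ1γ2/(γ1 + γ2 + 1), where γ1 and γ2 denote the source-relay and relay-destination SNRs. The sketch below uses this well-known result (not derived in this chapter; the numerical values are arbitrary):

```python
def af_equivalent_snr(snr_sr, snr_rd):
    """End-to-end SNR of a two-hop amplify-and-forward AWGN link.

    The relay scales its noisy observation to meet its power constraint,
    so the first-hop noise is amplified along with the signal, giving the
    standard equivalent SNR gamma1*gamma2 / (gamma1 + gamma2 + 1)."""
    return (snr_sr * snr_rd) / (snr_sr + snr_rd + 1.0)

# The end-to-end SNR always falls below the weaker of the two hops:
for g1, g2 in [(10.0, 10.0), (100.0, 5.0), (3.0, 30.0)]:
    assert af_equivalent_snr(g1, g2) < min(g1, g2)
```

The assertion holds for any positive SNR pair, which is one way of seeing the noise-amplification drawback mentioned above.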

2.3 Fundamental Coding Strategies for Decode-and-Forward Relaying

Among the different relaying strategies described in the previous section, decode-and-forward is the most relevant one when distributed coding is considered. In this section, we summarize fundamental coding strategies for decode-and-forward relaying for the AWGN relay channel of Figure 3(a). We consider both full-duplex and half-duplex relaying. We also discuss the corresponding achievable rates. Here, the proofs of achievability are typically based on random-coding arguments, and they do not directly provide practical coding schemes. However, as we will see later in this section, the achievability proofs provide guidance on how practical coding schemes can be designed.


2.3.1 Full-duplex Relaying

A relay operating in full-duplex mode is capable of simultaneously transmitting and receiving on the same frequency band. Full-duplex relaying is beneficial since it leads to the most efficient utilization of the resources (compared to half-duplex relaying) and it enables the highest achievable rates. Unfortunately, hardware implementations of full-duplex relaying are still considered to be a challenge since the received power level of the self-interference exceeds by far the received power level of the desired signal. This issue has recently been addressed, e.g., in [11], where spatial filtering is proposed to mitigate the effect of self-interference. For further details, we refer the reader to [11] and the references therein. In the following, we follow the approach commonly used in the information and coding theory literature and do not explicitly address the issue of self-interference.

For decode-and-forward relaying, all rates R up to [2]

RFD−DF = sup_{p(XS, XR)} min{ I(XS; YR|XR), I(XS, XR; Y) }   (1)

are achievable. Here, we say that a rate is achievable if there exists a sequence of (2^(nR), n) codes for which the error probability Pe(n) can be made arbitrarily small for sufficiently large n. In the special cases of the physically degraded relay channel, the reversely degraded relay channel, the relay channel with feedback, and the deterministic relay channel, the rate in (1) coincides with the channel capacity [2]. For the general relay channel, however, (1) establishes an achievable rate, and it is not known whether it can be improved.
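For the scalar Gaussian relay channel, the optimization in (1) reduces to a maximization over the correlation ρ between the source and relay inputs, yielding the well-known rate min{C((1 − ρ²)HSR²PS), C(HSD²PS + HRD²PR + 2ρHSDHRD√(PSPR))} with C(x) = ½ log2(1 + x). The sketch below evaluates this standard expression (not derived in this chapter) by a simple grid search over ρ; the channel coefficients are arbitrary illustrative values:

```python
import math

def c(x):
    """Gaussian capacity function C(x) = 0.5 * log2(1 + x)."""
    return 0.5 * math.log2(1.0 + x)

def fd_df_rate(h_sr, h_sd, h_rd, p_s, p_r, steps=10001):
    """Full-duplex decode-and-forward rate (1) for the scalar Gaussian relay
    channel, maximized over the source-relay input correlation rho by grid
    search: min of the relay-decoding term and the cooperative MAC term."""
    best = 0.0
    for i in range(steps):
        rho = i / (steps - 1)  # correlation in [0, 1]
        r_listen = c((1.0 - rho**2) * h_sr**2 * p_s)
        r_mac = c(h_sd**2 * p_s + h_rd**2 * p_r
                  + 2.0 * rho * h_sd * h_rd * math.sqrt(p_s * p_r))
        best = max(best, min(r_listen, r_mac))
    return best

# With a strong source-relay link, DF clearly beats the direct link alone:
r_df = fd_df_rate(h_sr=2.0, h_sd=0.5, h_rd=1.0, p_s=1.0, p_r=1.0)
assert r_df > c(0.5**2 * 1.0)
```

The grid search makes the trade-off in (1) visible: increasing ρ strengthens the cooperative MAC term but weakens the relay's ability to decode.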

In the following, we summarize two important strategies that show how the boundary RFD−DF of the set of achievable rates can be approached. Both strategies follow the same general approach; however, they differ in the rate allocation at the source and the relay. The main principle underlying both strategies is block Markov superposition coding, which requires that:

1. The message W, of length nRB bits, is split into B blocks W1, . . . , WB of length nR bits each, i.e., Wk ∈ {0, . . . , 2^(nR) − 1}, which are transmitted in successive time slots.

2. The codes CS and CR that are used by the source and the relay, respectively, are designed following the factorization p(XS|XR)p(XR) of the joint distribution p(XS, XR). This is achieved by explicitly designing a code CS(xR) for every codeword realization xR ∈ CR.

The general steps for transmitting the B messages are as follows: consider the transmission of the message wk during the k-th block and assume that the relay has successfully decoded the previously transmitted messages w1, . . . , wk−1 and the destination has successfully decoded the messages w1, . . . , wk−2. For both strategies, the source transmits the current message wk by using a codeword xS[k] that is chosen from the code CS(xR[k]). Here, the code CS(xR[k]) is selected by the codeword xR[k] that is simultaneously sent from the relay during the k-th block. The source has knowledge of this codeword since the relay processes the received messages with a delay of one block. Accordingly, the codeword xR[k] carries information on the previous message wk−1. At the end of the k-th block, the destination decodes the message wk−1 based on the channel observations y[k−1] and y[k] and by using its knowledge of the previously decoded messages w1, . . . , wk−2. The transmission of B subsequent blocks using the two different strategies is illustrated in Figure 4 and Figure 5. Here, we assume that the transmission is initialized by a predefined message w0 = 0. It is important to note that the transmission is carried out over B + 1 blocks, leading to a reduction in rate by a factor B/(B + 1). This rate loss can, however, be made small for sufficiently large B and is therefore neglected in the following.
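The timing of block Markov superposition coding can be sketched as a small scheduling table (a toy illustration of the block structure only; no actual encoding is performed):

```python
def block_markov_schedule(num_blocks):
    """Per-block timing of block Markov superposition coding: in block k the
    source sends the new message w_k conditioned on the relay's codeword,
    which carries the previous message w_{k-1} (w_0 = 0 initializes)."""
    schedule = []
    for k in range(1, num_blocks + 1):
        schedule.append({
            "block": k,
            "source_sends": f"x_S(w{k}|w{k-1})",  # superimposed on relay's word
            "relay_sends": f"x_R(w{k-1})",        # one-block processing delay
        })
    # An extra (B+1)-th block lets the relay forward the final message w_B,
    # which causes the rate loss by the factor B/(B+1):
    schedule.append({"block": num_blocks + 1,
                     "source_sends": "-",
                     "relay_sends": f"x_R(w{num_blocks})"})
    return schedule

for row in block_markov_schedule(3):
    print(row)
```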


Figure 4: Full-duplex decode-and-forward relaying with regular encoding and sliding window decoding.

Strategy 1 (Regular encoding and sliding window decoding)

The first strategy (see also [12] for a more detailed description) is based on regular encoding and sliding window decoding. The codes CS and CR employed by the source and the relay, respectively, have the same rate R. They are designed in two steps and used in the following way.

1. The relay generates a rate-R code CR of length n (with i.i.d. symbols following the distribution p(XR)). The code is used for encoding the message wk−1 in time slot k to the codeword xR[k] = xR(wk−1) ∈ CR, where the notation x(w) is used to denote that message w is encoded to codeword x.

2. Since the source knows the previously transmitted message wk−1, which is transmitted in the k-th time slot from the relay, it generates the codebook of a length-n rate-R code CS(wk−1) conditioned on wk−1, and it uses the code for transmitting the message wk. That is, xS[k] = xS(wk|wk−1).

The destination decodes wk−1 based on the channel observations y[k], which depend on wk and wk−1, and y[k−1], which depend on wk−1 and wk−2 (see Figure 4). In order to do so, the destination makes use of the fact that it already has knowledge of the previously decoded message wk−2. On the other hand, the presence of the message wk is treated as interference.

Strategy 2 (Binning)

The second strategy employs a so-called binning scheme at the relay (see, e.g., [2]), which leads to an irregular rate assignment at the source and the relay with rates R and R0, respectively. To implement the binning, the relay splits the message alphabet W = {0, . . . , 2^(nR) − 1} into 2^(nR0) disjoint subsets {S0, . . . , S2^(nR0)−1}, so-called bins, such that each message w ∈ W is assigned randomly, according to a uniform distribution, to a bin Ss. Then, instead of directly forwarding the message wk−1 during time slot k, as in the previous strategy, the relay forwards the index sk−1 that identifies the bin Ssk−1, which contains the message wk−1.
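Random binning itself is straightforward to sketch in code. The toy example below (with unrealistically small message and bin counts standing in for 2^(nR) and 2^(nR0)) assigns messages uniformly at random to bins and lets the relay forward only the bin index:

```python
import random

def make_bins(num_messages, num_bins, seed=0):
    """Assign each of num_messages messages uniformly at random to one of
    num_bins bins; returns the bin index of every message."""
    rng = random.Random(seed)
    return [rng.randrange(num_bins) for _ in range(num_messages)]

NUM_MESSAGES, NUM_BINS = 16, 4  # toy stand-ins for 2^(nR) and 2^(nR0)
bin_of = make_bins(NUM_MESSAGES, NUM_BINS)

def relay_forward(w_prev):
    """Instead of the message w_{k-1} itself, forward only its bin index."""
    return bin_of[w_prev]

s = relay_forward(11)
assert 0 <= s < NUM_BINS
```

Since log2(NUM_BINS) < log2(NUM_MESSAGES), the relay spends fewer bits than a full retransmission of the message would require, which is exactly the point of the irregular rate assignment.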

The codes CS and CR that are now used for transmission from the source and the relay, respectively, are constructed as described above. We have, however, to take into account the irregular rate assignment; that is, the relay transmits xR[k] = xR(sk−1), with sk−1 satisfying wk−1 ∈ Ssk−1, drawn from a rate-R0 code CR of length n. For every possible bin index s, the source constructs a


Figure 5: Full-duplex decode-and-forward relaying with irregular encoding.

rate-R code CS(s). Then, the source lets the bin index sk−1 select the code CS(sk−1) that is used for transmitting wk using the codeword xS[k] = xS(wk|sk−1).

In a first step, the destination decodes the codeword xR(sk−1) sent by the relay based on the observation y[k] to recover the bin index sk−1. Again, the message wk is treated as interference. In a second step, the receiver recovers wk−1 by using a list decoder based on y[k−1] and intersecting the resulting list with the bin Ssk−1.
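The second decoding step can be illustrated with a toy example: the destination forms a list of messages that are plausible given y[k−1] and intersects it with the bin announced by the relay (the sets below are hypothetical):

```python
def decode_with_bin(candidate_list, bin_index, bin_of):
    """Intersect the list decoder's candidates with the announced bin;
    decoding succeeds when exactly one candidate survives."""
    survivors = [w for w in candidate_list if bin_of[w] == bin_index]
    return survivors[0] if len(survivors) == 1 else None

# Toy setup: 8 messages split into 4 bins (hypothetical assignment).
bin_assignment = [0, 1, 2, 3, 0, 1, 2, 3]
# The list decoder, run on y[k-1], left three plausible messages ...
candidates = [1, 4, 6]
# ... and the relay announced bin 2, to which only message 6 belongs:
assert decode_with_bin(candidates, 2, bin_assignment) == 6
```

Decoding fails (returns None here) when two or more list entries fall into the announced bin, which in the random-coding argument happens with vanishing probability for suitably chosen rates.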

2.3.2 Half-Duplex Relaying

In the case of half-duplex relaying, it is assumed that the relay cannot simultaneously transmit and receive on the same frequency band. Half-duplex relaying is therefore considered to be more practical since the self-interference issue is avoided. While in the full-duplex case coding over a large number of blocks is required in order to optimally utilize the capabilities of the full-duplex relay and to mitigate the loss due to processing delay at the relay, only two blocks are required for the transmission in the half-duplex case. The achievable rates are accordingly reduced compared to the full-duplex case.

In the following, we assume a total of n channel uses for the transmission; the fractions of channel uses allocated to the first and second time slots are given by the time-sharing parameters α ∈ [0, 1] and ᾱ = 1 − α. For this setup, it has been shown in [13] that all rates up to

\[
R_{\text{HD-DF}} = \sup_{\alpha \in [0,1],\, p(X_S, X_R | T)} \min \left\{
\begin{array}{l}
\alpha I(X_S; Y_R | X_R, T=1) + \bar{\alpha} I(X_S; Y | X_R, T=2), \\
\alpha I(X_S; Y | X_R, T=1) + \bar{\alpha} I(X_S, X_R; Y | T=2)
\end{array}
\right\} \quad (2)
\]

are achievable. Here, the random variable $T$ indicates whether the first time slot ($T=1$) or the second time slot ($T=2$) is considered. Note that $p(T=1) = \alpha$ and $p(T=2) = \bar{\alpha}$. This distinction is relevant since the source may allocate power differently to the two time slots. It is furthermore convenient to keep track of the fact that $X_R[1] = 0$ due to the half-duplex constraint.

In order to achieve this rate, the message $W \in \{0, \dots, 2^{nR}-1\}$ is first split into two messages $U \in \mathcal{U}$, with $\mathcal{U} = \{0, \dots, 2^{nR_U}-1\}$, and $V \in \{0, \dots, 2^{nR_V}-1\}$, with $R = R_U + R_V$, such that $W = [U, V]$. The message $U$ is transmitted during the first phase. The transmission is overheard by the relay and the destination. After successfully decoding, the relay uses a binning scheme as described in the previous section to split the message set into bins. In the second phase, the relay forwards the bin index to the destination. The source simultaneously transmits the message $V$ using the same channel. Even though this strategy is very similar to the second full-duplex strategy presented in the previous section, we summarize for completeness the three different channel codes and the binning scheme that are used during the transmission:


1. The source employs a rate-$R_{S,1}$ code $\mathcal{C}_{S,1}$ of length $\alpha n$ for transmitting the message $U$ to the relay and the destination during the first transmission phase, $x_S[1] = x_{S,1}(u)$. Clearly, $R_U = \alpha R_{S,1}$.

2. The relay splits the message set $\mathcal{U}$ into $2^{nR_0}$ bins $\{\mathcal{S}_0, \dots, \mathcal{S}_{2^{nR_0}-1}\}$ of equal size. For every message $u$ that is successfully decoded at the end of the first phase, the relay determines the bin index $s$ of the bin $\mathcal{S}_s$ that contains the message $u$.

3. The relay transmits the bin index $s$ using a length-$\bar{\alpha} n$ rate-$R_R$ code $\mathcal{C}_R$ in the second phase, $x_R[2] = x_R(s)$. Accordingly, we get the following relation between the binning rate $R_0$ and the rate $R_R$: $R_0 = \bar{\alpha} R_R$.

4. Since the source knows the bin index $s$, it chooses to cooperate with the relay by generating, for each realization $s$ of the bin index $S$, a code $\mathcal{C}_{S,2}(s)$ with rate $R_{S,2}$ and length $\bar{\alpha} n$ that is used for encoding $V$, $x_S[2] = x_{S,2}(v|s)$. Clearly, we have $R_V = \bar{\alpha} R_{S,2}$.

The destination starts decoding in the second time slot. It first decodes the bin index $s$ and then the second message $v$. In a second step, the message $u$ is decoded by utilizing knowledge of the bin index. All steps are summarized in Figure 6.

Figure 6: Half-duplex decode-and-forward relaying with irregular encoding.

2.3.3 Design Objectives: Achieving the Optimal Decode-and-Forward Rates

In the following, we discuss the requirements that need to be fulfilled in order to achieve the optimal decode-and-forward rates and formulate design objectives for distributed channel coding.

Full-Duplex Relaying Using Regular Encoding
In this case, two codes, $\mathcal{C}_S$ and $\mathcal{C}_R$, need to be designed, which are used by the source and the relay, respectively, and produce a desired joint distribution $p(X_S, X_R)$. This is achieved by exploiting the factorization $p(X_S | X_R) p(X_R)$. As we can see from (1), the joint distribution $p(X_S, X_R)$ is a design parameter that provides a generic description of the set of parameters (e.g., symbol alphabets, power allocation, time sharing, and correlation) that need to be optimized for maximizing the overall rate.


To identify the challenges in the design of the codes $\mathcal{C}_S$ and $\mathcal{C}_R$, we consider the case where the disturbances introduced by the channels are of a non-binary nature. In this case, a certain class of joint distributions can be realized by using superposition coding as follows. Assume that the relay employs a rate-$R$ code $\mathcal{C}_R$ of length $n$ with power constraint $E\{X_R^2\} \leq P_R$ for transmitting $w_{k-1}$ in the $k$-th block. Assume also that the source has available a rate-$R$ code $\mathcal{C}_S^*$ of length $n$, with symbols $X_S^*$ independent of the code symbols $X_R$ sent from the relay, for encoding $w_k$ in the $k$-th block. For convenience, we assume that this code has unit power. The codewords $x_S(w_k | w_{k-1})$ are then generated as a weighted superposition of the codewords $x_S^*(w_k)$ and $x_R(w_{k-1})$,
\[
x_S(w_k | w_{k-1}) = \sqrt{P_S} \left( \sqrt{\rho}\, x_S^*(w_k) + \sqrt{\frac{1-\rho}{P_R}}\, x_R(w_{k-1}) \right), \quad (3)
\]
where $E\{X_S^2\} \leq P_S$ defines the power constraint at the source. The factor $\rho$ controls the allocation of the power at the source that is spent for the transmission of the message $w_k$ and the cooperative transmission of $x_R(w_{k-1})$.
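As a quick sanity check of the power accounting in (3), the following sketch superposes a unit-power source codeword and a relay codeword of power $P_R$ and verifies that the result meets the source power constraint $P_S$. All vectors and constants are hypothetical toy values; the two codewords are chosen orthogonal so that the cross term vanishes exactly.

```python
import math

def superimpose(x_s_star, x_r, p_s, p_r, rho):
    """Superposition encoding, eq. (3): combine the source's own
    (unit-power) codeword x_S*(w_k) with the relay codeword
    x_R(w_{k-1}) of power p_r, spending the fraction rho of the
    source power p_s on the new message."""
    a = math.sqrt(p_s * rho)               # weight on x_S*(w_k)
    b = math.sqrt(p_s * (1 - rho) / p_r)   # weight on x_R(w_{k-1})
    return [a * xs + b * xr for xs, xr in zip(x_s_star, x_r)]

def avg_power(x):
    return sum(v * v for v in x) / len(x)

# Toy orthogonal codewords: x_s has unit power, x_r has power p_r = 4.
x_s = [1, 1, -1, -1]
x_r = [2, -2, 2, -2]
x = superimpose(x_s, x_r, p_s=2.0, p_r=4.0, rho=0.3)
```

Since the two components are uncorrelated, the average power of the superposition is $P_S \rho \cdot 1 + P_S (1-\rho) / P_R \cdot P_R = P_S$, independent of $\rho$.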

To get further insights, we specialize to the AWGN relay channel and assume the realizations of the channel coefficients $h_{SR}$, $h_{SD}$, and $h_{RD}$ to be fixed for the duration of $n \cdot B$ channel uses. In this setup, the channel outputs at the relay and the destination at the end of the $k$-th block are given by
\[
y_R[k] = \sqrt{\rho P_S}\, h_{SR}\, x_S^*(w_k) + \sqrt{\frac{(1-\rho) P_S}{P_R}}\, h_{SR}\, x_R(w_{k-1}) + z_R[k]
\]
and
\[
y[k] = \sqrt{\rho P_S}\, h_{SD}\, x_S^*(w_k) + \left( \sqrt{\frac{(1-\rho) P_S}{P_R}}\, h_{SD} + h_{RD} \right) x_R(w_{k-1}) + z[k],
\]
respectively. Under the assumption that the previous decoding stages have been successful, interference from previously transmitted symbols can be removed. Thus, the relay decodes $w_k$ based on
\[
y_R[k] = \sqrt{\rho P_S}\, h_{SR}\, x_S^*(w_k) + z_R[k]. \quad (4)
\]

Similarly, the destination decodes $w_k$ based on
\[
y[k+1] = \sqrt{\rho P_S}\, h_{SD}\, x_S^*(w_{k+1}) + \left( \sqrt{\frac{(1-\rho) P_S}{P_R}}\, h_{SD} + h_{RD} \right) x_R(w_k) + z[k+1] \quad (5)
\]
and
\[
y[k] = \sqrt{\rho P_S}\, h_{SD}\, x_S^*(w_k) + z[k], \quad (6)
\]

treating the interference from the codeword $x_S^*(w_{k+1})$ as noise. The overall code structure that results from this coding strategy is illustrated in Figure 7. It can be interpreted as the concatenation of the codes $\mathcal{C}_S^*$ and $\mathcal{C}_R$, which defines a length-$2n$ rate-$R/2$ code $\mathcal{C}$ with codewords $x = [x_S^*(w_k), x_R(w_k)]$, where the first and second segments of the codewords are transmitted over different channels.

Note that the first constraint on the achievable rate in (1) is induced by the channel described in (4), and the second constraint results from the channels described in (5) and (6).

For Gaussian codebooks, it is now straightforward to show that (1) can be reformulated as
\[
R_{\text{FD-DF}} = \sup_{\rho \in [0,1]} \min \left\{
\begin{array}{l}
\frac{1}{2} \log\left(1 + \rho\, \text{SNR}_{SR}\right), \\
\frac{1}{2} \log\left(1 + \text{SNR}_{SD} + 2\sqrt{(1-\rho)\, \text{SNR}_{SD}\, \text{SNR}_{RD}} + \text{SNR}_{RD}\right)
\end{array}
\right\}, \quad (7)
\]


Figure 7: Overall code structure resulting from regular encoding.

where we defined $\text{SNR}_{ij} = h_{ij}^2 P_i$. We observe that the first bound is monotonically increasing in $\rho$, starting from zero for $\rho = 0$, and the second bound is monotonically decreasing. We can make three interesting observations regarding the optimal power allocation $\rho^\star$, which affect the code design:

1. By evaluating the expression in (7) for $\rho = 1$, we see that whenever $\text{SNR}_{SR} < \text{SNR}_{SD} + \text{SNR}_{RD}$, the optimal power allocation is $\rho^\star = 1$. In this case, the link between the source and the relay limits the performance while the second constraint is inactive. It follows that the code $\mathcal{C}_S^*$ has to be capacity achieving for the source-to-relay channel. Since the second bound in (7) is not tight, the overall code $\mathcal{C}$ does not need to be capacity achieving as long as it is decodable at the destination.

2. For $\text{SNR}_{SR} \geq \text{SNR}_{SD} + \text{SNR}_{RD}$, the optimal power allocation can be found by equating the first constraint with the second constraint. In other words, both constraints have to be satisfied with equality simultaneously. This implies that both the code $\mathcal{C}_S^*$ and the overall code $\mathcal{C}$ need to be capacity achieving.

3. Finally, if $\text{SNR}_{SR} \geq \text{SNR}_{SD} + \text{SNR}_{RD}$ and the power allocation is not optimally chosen, two different cases can occur:

(a) If $\rho < \rho^\star$, the first bound is tight while the second bound is loose. Hence, the code $\mathcal{C}_S^*$ has to be capacity achieving while the overall code $\mathcal{C}$ is not required to be capacity achieving as long as it is decodable at the destination.

(b) If $\rho > \rho^\star$, the second bound is tight while the first bound is loose. In this case, the overall code $\mathcal{C}$ has to be capacity achieving while the code $\mathcal{C}_S^*$ only needs to be decodable at the relay (without achieving the capacity of the source-to-relay link).

In the above discussion, the exact SNR constraints are only valid for Gaussian inputs, and they may hold in good approximation for other input distributions if very low SNR regimes are considered. Nevertheless, the three different design objectives for the different regimes identified above, namely

Case 1: Capacity-achieving component code $\mathcal{C}_S^*$ and capacity-achieving overall code $\mathcal{C}$;

Case 2: Capacity-achieving component code $\mathcal{C}_S^*$ and decodability of the overall code $\mathcal{C}$;

Case 3: Capacity-achieving overall code $\mathcal{C}$ and decodability of the component code $\mathcal{C}_S^*$;

will be relevant in other cases as well. For a related discussion on the special case of binary-input additive white Gaussian relay channels, we refer the reader to [14].
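The case distinction above can be reproduced numerically. The sketch below, with arbitrary hypothetical SNR values, evaluates the two bounds of (7) on a grid over $\rho$ and returns the maximizing power allocation; for $\text{SNR}_{SR} < \text{SNR}_{SD} + \text{SNR}_{RD}$ it recovers $\rho^\star = 1$, as stated in observation 1.

```python
import math

def fd_df_bounds(rho, snr_sr, snr_sd, snr_rd):
    """The two constraints of eq. (7) (Gaussian full-duplex DF),
    in bits per channel use."""
    r1 = 0.5 * math.log2(1 + rho * snr_sr)
    r2 = 0.5 * math.log2(1 + snr_sd + snr_rd
                         + 2 * math.sqrt((1 - rho) * snr_sd * snr_rd))
    return r1, r2

def fd_df_rate(snr_sr, snr_sd, snr_rd, steps=2000):
    """Maximize min(r1, r2) over the power-allocation factor rho by a
    simple grid search; returns (rate, rho_star)."""
    best_rate, best_rho = 0.0, 0.0
    for i in range(steps + 1):
        rho = i / steps
        r = min(fd_df_bounds(rho, snr_sr, snr_sd, snr_rd))
        if r > best_rate:
            best_rate, best_rho = r, rho
    return best_rate, best_rho

# SNR_SR < SNR_SD + SNR_RD: the source-to-relay link is the
# bottleneck, so rho* = 1 (observation 1 above).
rate, rho_star = fd_df_rate(snr_sr=3.0, snr_sd=2.0, snr_rd=2.0)
```

With these values the first bound at $\rho = 1$ equals $\frac{1}{2}\log_2 4 = 1$ bit per channel use, while the second bound remains larger, so the grid search returns $\rho^\star = 1$.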

From the structure of the distributed code (see Figure 7) and the decoding schedule, it is apparent that the problem of designing good codes for the full-duplex relay channel with regular encoding is closely related to the design of parallel concatenated codes and rate-compatible codes. This shows that distributed turbo codes (see Section 3.2), which are typically considered to be an engineering approach to distributed channel coding, can indeed be related to the fundamental coding strategies provided by the information theory literature. As we will see in Section 3.1, rate-compatible code structures play an important role in the LDPC code design for the relay channel.

Full-Duplex Relaying Using Irregular Encoding
We start the discussion with the code $\mathcal{C}_R$ that is used by the relay to forward the bin index $s_{k-1}$ to the destination in time slot $k$. We assume again that superposition coding is employed for generating the codewords of the codes $\mathcal{C}_S(s_{k-1})$, as described in (3), and that the code $\mathcal{C}_S^*$, which is used for encoding $w_k$, is decodable at the relay but not at the destination.

The code $\mathcal{C}_R$ is solely used as a point-to-point code in order to forward the bin index to the destination. In contrast to the previous case, it does not become part of an extended code structure. The destination will decode the bin index $s_{k-1}$ based on
\[
y[k] = \sqrt{\rho P_S}\, h_{SD}\, x_S^*(w_k) + \left( \sqrt{\frac{(1-\rho) P_S}{P_R}}\, h_{SD} + h_{RD} \right) x_R(s_{k-1}) + z[k], \quad (8)
\]

treating the interference from the codeword $x_S^*(w_k)$ as noise. The optimization of the code $\mathcal{C}_R$ can be done by using standard tools like extrinsic information transfer (EXIT) charts [15] or density evolution [16], taking into account the accurate distribution of the noise-plus-interference.

After successfully decoding the bin index $s_{k-1}$ and after removing the interference due to $x_S^*(w_{k-2})$ from the channel output $y[k-1]$, the destination decodes $w_{k-1}$ using
\[
y[k-1] = \sqrt{\rho P_S}\, h_{SD}\, x_S^*(w_{k-1}) + z[k-1]. \quad (9)
\]

Since the code $\mathcal{C}_S^*$ is not directly decodable at the destination, decoding $w_{k-1}$ is performed considering the code $\mathcal{C}_S(s_{k-1})$, which contains the codewords corresponding to the set of messages contained in the bin $\mathcal{S}_{s_{k-1}}$, i.e., $\mathcal{C}_S(s_{k-1}) = \{x_S(w) \in \mathcal{C}_S^* \mid w \in \mathcal{S}_{s_{k-1}}\}$. The code $\mathcal{C}_S(s_{k-1})$ has the following properties:

1. Since each bin $\mathcal{S}_s$ contains $2^{n(R-R_0)}$ messages (the $2^{nR}$ messages are grouped into $2^{nR_0}$ bins due to the binning) and codewords of length $n$ are considered, it follows that the code $\mathcal{C}_S(s)$ has rate $\bar{R} = R - R_0$.

2. Since for a given $s$ all codewords $x_S \in \mathcal{C}_S(s)$ are as well codewords of the code $\mathcal{C}_S^*$, the codes $\mathcal{C}_S^*$ and $\mathcal{C}_S(s)$ form a pair of nested codes¹, $\mathcal{C}_S^*$ being the fine code and $\mathcal{C}_S(s)$ being the coarse code.

In Section 3.1.1, we will see how nested codes can be implemented with linear codes.

We can now identify the requirements that need to be satisfied to approach the boundary of the set of achievable rates in (1):

1. Whenever the first bound on the achievable rate is tight, the fine code $\mathcal{C}_S^*$ has to be capacity achieving for the source-to-relay channel. This follows from the same arguments as in the regular-encoding case.

¹We say that two codes $\mathcal{C}$ and $\bar{\mathcal{C}}$ are nested if $\bar{\mathcal{C}} \subset \mathcal{C}$, i.e., each codeword of $\bar{\mathcal{C}}$ is also a codeword of $\mathcal{C}$. We call $\mathcal{C}$ the fine code and $\bar{\mathcal{C}}$ the coarse code.


2. Whenever the second bound in (1) is tight, the code $\mathcal{C}_R$ has to be capacity achieving for the relay-to-destination link specified in (8), and the coarse code $\mathcal{C}_S(s)$ has to be capacity achieving for the source-to-destination link described in (9). As a consequence, the binning rate $R_0$ has to be equal to the capacity of the relay-to-destination link.

Clearly, whenever one of the constraints in (1) is loose, the capacity-achieving properties of the respective codes can be relaxed and suboptimal code designs are sufficient.

Half-Duplex Relaying
For the half-duplex relay channel, the optimization of the achievable rate in (2) involves the optimization of both the time-sharing parameter $\alpha$ and the power allocation at the source (the source has to distribute its power between the first and second time slots; for the second time slot, the source has to allocate power for its own transmission as well as for the cooperative transmission). Under the assumption that $I(X_S; Y_R | X_R, T=1) > I(X_S; Y | X_R, T=2)$, we can see that the first constraint in (2) is an increasing linear function of $\alpha$. This assumption requires that the channel to the relay supports higher rates than the channel to the destination. It is a reasonable assumption for decode-and-forward relaying since the relay has to be able to decode at higher rates than the destination in order to be of any help. Since furthermore $I(X_S; Y | X_R, T=1) < I(X_S, X_R; Y | T=2)$, it is easy to conclude that the second constraint in (2) is a decreasing function of $\alpha$. For a given power allocation, the optimal time-sharing parameter $\alpha^\star$ is therefore found by equating the first constraint in (2) with the second one. As a consequence, both bounds in (2) are always tight under half-duplex relaying if the time-sharing parameter is chosen optimally. This is in contrast to the full-duplex case, where the second constraint may be loose. If a suboptimal split of the channel uses between the first and second time slots is considered, the situation becomes similar to full-duplex relaying with a suboptimal power allocation, as discussed above: for $\alpha < \alpha^\star$, only the first constraint needs to be considered when specifying the code design, and for $\alpha > \alpha^\star$, only the second constraint needs to be taken into account.
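Since both constraints in (2) are linear in $\alpha$ for a fixed input distribution, the optimal time-sharing parameter $\alpha^\star$ can be computed in closed form by equating them. The sketch below assumes fixed, hypothetical values for the four per-slot mutual-information terms.

```python
def hd_df_alpha_star(i_sr1, i_sd1, i_sd2, i_srd2):
    """Closed-form optimal time sharing for eq. (2) with fixed
    (hypothetical) per-slot mutual informations:
      i_sr1  = I(X_S; Y_R | X_R, T=1)    i_sd2  = I(X_S; Y | X_R, T=2)
      i_sd1  = I(X_S; Y | X_R, T=1)      i_srd2 = I(X_S, X_R; Y | T=2)
    Constraint 1, a*i_sr1 + (1-a)*i_sd2, increases in a; constraint 2,
    a*i_sd1 + (1-a)*i_srd2, decreases. Equating them gives alpha*."""
    alpha = (i_srd2 - i_sd2) / ((i_sr1 - i_sd2) + (i_srd2 - i_sd1))
    rate = alpha * i_sr1 + (1 - alpha) * i_sd2
    return alpha, rate

alpha_star, rate = hd_df_alpha_star(i_sr1=2.0, i_sd1=0.5,
                                    i_sd2=1.0, i_srd2=1.5)
```

At $\alpha^\star$ both constraints evaluate to the same rate, consistent with the observation that both bounds in (2) are tight under optimal time sharing.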

The optimal code design can now be found using the same arguments as in the previous discussion. The source uses the code $\mathcal{C}_{S,1}$ during the first time slot for transmitting $u$ to the relay. In the second time slot, the relay uses the code $\mathcal{C}_R$ for transmitting the bin index $s$, and the source encodes $v$ using the code $\mathcal{C}_{S,2}^*$. The code $\mathcal{C}_{S,2}(s)$ is then obtained by superposing codewords of the codes $\mathcal{C}_{S,2}^*$ and $\mathcal{C}_R$ similarly to Equation (3). The destination decodes $u$ by considering the code $\mathcal{C}_S(s)$, which is obtained by restricting the codeword set of $\mathcal{C}_{S,1}$ to codewords that are included in the bin $\mathcal{S}_s$. $\mathcal{C}_{S,1}$ and $\mathcal{C}_S(s)$ form a pair of nested codes similar to the previous case. We can now conclude the following design objectives:

1. In order to achieve the first bound in (2), $\mathcal{C}_{S,1}$ and $\mathcal{C}_{S,2}^*$ have to be designed to be capacity achieving for the interference-free source-to-relay channel in the first time slot and the interference-free source-to-destination channel in the second time slot, respectively.

2. Since $\mathcal{C}_{S,2}^*$ is designed to achieve the capacity of the interference-free source-to-destination link in the second time slot, $\mathcal{C}_R$ has to achieve the capacity of the relay-to-destination link in the presence of interference from the codewords transmitted by the source during the second time slot in order to reach the second constraint in (2).

3. Achieving the second constraint in (2) furthermore requires that the code $\mathcal{C}_S(s)$ with rate $\bar{R} = R_{S,1} - R_0/\alpha$ be capacity achieving over the interference-free source-to-destination link during the first time slot. Therefore, both the fine code $\mathcal{C}_{S,1}$ and the coarse code $\mathcal{C}_S(s)$ have to be designed to be capacity achieving.

As a final remark, we note that the code structure that is illustrated in Figure 7 and that we discussed in the full-duplex case can also be adopted for the half-duplex scenario. That is, the half-duplex rates are also achievable without explicit binning. Similarly to the full-duplex case, the destination considers the code $\mathcal{C}$, with rate $R = \alpha R_{S,1}$ and codewords $x(u) = [x_{S,1}(u), x_R(u)]$, when decoding the message $u$, and it decodes using the channel observations of both time slots while treating the interference from $x_{S,2}^*$ as noise. After successfully decoding $u$, the message $v$ is decoded based on the interference-free channel outputs in the second time slot. This coding scheme leads to the highest achievable rates if both the code $\mathcal{C}_{S,1}$ and the extended code $\mathcal{C}$ are capacity achieving for the considered channels and the respective rates.

3 Distributed Coding for the Three-node Relay Channel

In Section 2.3, we summarized the fundamental coding strategies that achieve the decode-and-forward rates in the three-node relay channel. We identified the different component codes that have to be used during the transmission, and we stated fundamental constraints that limit the achievable rates. In this section, we discuss distributed coding for the three-node relay channel. In particular, our focus is on the extension of the two main families of modern codes, LDPC codes and turbo codes, to the relaying scenario.

3.1 LDPC Code Designs for the Relay Channel

Distributed coding for relaying based on LDPC codes is highly inspired by the information-theoretic analysis addressed in Section 2.3. In this section, we discuss different code structures based on LDPC codes that are useful for implementing the coding strategies introduced in Section 2.3. We also discuss the optimization of irregular LDPC codes and spatially-coupled LDPC (SC-LDPC) codes.

3.1.1 Code Structures for Decode-and-Forward Relaying

In Section 2.3, we showed that in both the full-duplex and the half-duplex cases, the highest achievable rate under decode-and-forward relaying can be achieved either by using rate-compatible code structures or by using a binning scheme, which leads to a nested code design. In the following, we introduce code structures that can be employed to realize the desired coding schemes. We start with different implementations of the binning scheme.

Nested Codes
Binning was introduced in Section 2.3.1 as a random partitioning of the message set of the source, performed at the relay. In Section 2.3.3, we have then shown that restricting the code used by the source to codewords contained in the bin, which is indicated by the relay, defines a pair of nested codes. Since we are interested in the design of linear codes, the question that arises is how pairs of good nested linear codes can be constructed. The answer to this question is given in [17], and we summarize the main points here.

Let us consider a pair of length-$n$ nested codes $(\mathcal{C}, \bar{\mathcal{C}})$ with rates $R$ and $\bar{R}$, respectively, such that $\bar{\mathcal{C}} \subset \mathcal{C}$. It follows that $\bar{R} < R$. Let in the following $H_1$ denote the $(n - k_1) \times n$ parity-check matrix


Figure 8: Tanner graph of a bilayer/two-edge type expurgated code.

that defines $\mathcal{C}$, and let $H$ be the $(n - k_2) \times n$ parity-check matrix of the code $\bar{\mathcal{C}}$. Accordingly, $H_1 x_1^T = 0$ for all $x_1 \in \mathcal{C}$, and $H x_2^T = 0$ for all $x_2 \in \bar{\mathcal{C}}$. Then, if
\[
H = \begin{bmatrix} H_1 \\ H_2 \end{bmatrix}, \quad (10)
\]
where $H_2$ is a $(k_1 - k_2) \times n$ matrix, the codes $\mathcal{C}$ and $\bar{\mathcal{C}}$ indeed form a pair of nested linear codes satisfying $\bar{\mathcal{C}} \subset \mathcal{C}$. This follows directly from the fact that the parity-check matrix $H_1$ is included in $H$; hence $H_1 x_2^T = 0$ for all $x_2 \in \bar{\mathcal{C}}$. On the other hand, it is clear that only some codewords $x_1 \in \mathcal{C}$ are also included in the coarse code $\bar{\mathcal{C}}$.

Since the additional constraints defined by $H_2$ remove codewords from the codeword set of $\mathcal{C}$, the code $\bar{\mathcal{C}}$ is referred to as an expurgated code. Furthermore, the parity-check matrix $H$, as specified in (10), describes a bilayer linear block code, also referred to as a two-edge type code. These two terms are motivated by the structure of the parity-check matrix and the corresponding Tanner graph, which is illustrated in Figure 8 for a simple example. As can be seen from the figure, the Tanner graph consists of three different types of nodes: the variable nodes, the check nodes associated with the parity-check matrix $H_1$, and the check nodes associated with the parity-check matrix $H_2$, which are connected through two different types/layers of edges. The two layers are distinguished by solid and dashed lines in Figure 8. In the considered example, $H_1$ is the parity-check matrix of a rate-$2/3$ code with regular variable node degree $d_{v,1} = 2$ and check node degree $d_{c,1} = 6$. By adding the check nodes specified by the matrix $H_2$, an overall rate-$1/2$ code with regular variable node degree $d_v = 3$ and check node degree $d_c = 6$ is obtained.

In order to implement a binning scheme based on this code structure, we can assign to every codeword $x_1 \in \mathcal{C}$ a length-$(k_1 - k_2)$ syndrome $s$, defined as $s = H_2 x_1^T$. Since there exist $2^{k_1 - k_2}$ unique syndrome vectors $s$, we can define a set of cosets of the coarse code $\bar{\mathcal{C}}$, with elements $\bar{\mathcal{C}}(s)$ labeled by the syndrome vectors $s$, as follows:
\[
\bar{\mathcal{C}}(s) = \left\{ x \,\middle|\, H x^T = \begin{bmatrix} H_1 \\ H_2 \end{bmatrix} x^T = \begin{bmatrix} 0 \\ s \end{bmatrix} \right\}.
\]
The union of all cosets $\bar{\mathcal{C}}(s)$ reproduces the fine code $\mathcal{C}$, i.e.,
\[
\mathcal{C} = \bigcup_{s \in \{0,1\}^{k_1 - k_2}} \bar{\mathcal{C}}(s).
\]
We conclude that the cosets $\bar{\mathcal{C}}(s)$ provide a structured approach for partitioning the fine code $\mathcal{C}$ into $2^{k_1 - k_2}$ disjoint bins of equal size. The bin index for a given codeword $x \in \mathcal{C}$ is then given by the corresponding syndrome $s$, and it can easily be calculated as $s = H_2 x^T$.

This code structure can now be applied to the relay channel in the following way (see, e.g., [14, 18–21]): The source uses the fine code $\mathcal{C}$ for transmitting its message (i.e., $\mathcal{C}$ corresponds to the code $\mathcal{C}_S^*$ in the full-duplex case and to the code $\mathcal{C}_{S,1}$ in the half-duplex case). After successfully decoding the transmitted codeword $x$ at the relay, the relay calculates the syndrome $s = H_2 x^T$ and forwards it to the destination. The destination only considers codewords that are included in the coset $\bar{\mathcal{C}}(s)$, i.e., it searches for codewords $x$ that are compatible with the channel observations and that satisfy
\[
H x^T = \begin{bmatrix} H_1 \\ H_2 \end{bmatrix} x^T = \begin{bmatrix} 0 \\ s \end{bmatrix}.
\]
If sparse-graph codes like, e.g., LDPC codes are considered, this decoding step can easily be implemented by using the message-passing decoder on the graph that is defined by $H$ and by taking into account the non-zero check constraints that are provided by the syndrome bits $s$.
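The structured binning described above can be illustrated with a toy example. In the sketch below, the matrices $H_1$ and $H_2$ are hypothetical and far too small to be useful codes; it enumerates the fine code defined by $H_1$ and sorts its codewords into bins indexed by the syndrome $s = H_2 x^T$.

```python
from itertools import product

def matvec_gf2(M, v):
    """Multiply a binary matrix (list of rows) by a vector over GF(2)."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

# Fine code C: the [n=4, k1=3] single-parity-check code, H1 x^T = 0.
H1 = [[1, 1, 1, 1]]
# Binning layer H2: k1 - k2 = 2 extra checks, i.e. four bins.
H2 = [[1, 0, 1, 0],
      [0, 1, 1, 0]]

# Enumerate the fine code and sort its 2^3 codewords into cosets of
# the coarse code, indexed by the syndrome (= bin index).
bins = {}
for x in product([0, 1], repeat=4):
    if matvec_gf2(H1, x) == [0]:          # x is a fine-code codeword
        s = tuple(matvec_gf2(H2, x))      # bin index s = H2 x^T
        bins.setdefault(s, []).append(x)
```

As predicted by the coset argument, the $2^{k_1}$... more precisely, the $2^{k_1 - (n - n)}$ codewords split into $2^{k_1-k_2} = 4$ disjoint bins of equal size $2^{k_2} = 2$.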

An alternative implementation of the decode-and-forward binning was proposed in [18] and developed further in, e.g., [22, 23]. The strategy is based on the assumption that the parity-check matrix $H$ of the code $\mathcal{C}$, which is used by the source and which is to be modified through the binning (again, $\mathcal{C}$ corresponds to the code $\mathcal{C}_S^*$ in the full-duplex case and to the code $\mathcal{C}_{S,1}$ in the half-duplex case), has the following structure:
\[
H = [H_1, H_2]. \quad (11)
\]
Assuming that the rate of the code $\mathcal{C}$ is $R = k/n$, then $H$ is of dimension $(n-k) \times n$, $H_1$ is of dimension $(n-k) \times n_1$, and $H_2$ is of dimension $(n-k) \times n_2$, where $n = n_1 + n_2$. The codewords of the code $\mathcal{C}$ can accordingly be written as $x = [x_1, x_2]$. If $x_1$ is now a valid codeword of a lower-rate code, the code $\mathcal{C}$ is referred to as a lengthened code. An example of a Tanner graph for this code structure is shown in Figure 9. Again, we have a bilayer or two-edge type code.

Figure 9: Tanner graph of a bilayer/two-edge type linear block code with parity-check matrix $H = [H_1, H_2]$.

For transmissions over the relay channel, the source uses a channel code that is structured as shown in (11). Then the relay uses the parity-check matrix $H_{SR}$ of a capacity-achieving code for the interference-free source-to-relay channel to generate a syndrome $s$ for the second segment of the codeword $x$, i.e., $s = H_{SR} x_2^T$. The syndrome $s$ is forwarded to the destination. The destination uses the syndrome to first decode the segment $x_2$ based on its channel observations, using the decoder defined by the parity-check matrix $H_{SR}$ and considering the syndrome bits $s$. In a second step, the destination decodes $x_1$ using the code defined by $H_1$ and by considering another syndrome $\bar{s} = H_2 x_2^T$. The second syndrome follows from the fact that
\[
H x^T = [H_1, H_2] \begin{bmatrix} x_1^T \\ x_2^T \end{bmatrix} = H_1 x_1^T + H_2 x_2^T = 0,
\]
so that $H_1 x_1^T = \bar{s}$ over GF(2). In order to guarantee reliable transmission using this strategy, it is required that $H_1$ be the parity-check matrix of a capacity-achieving code for the interference-free source-to-destination link, while the code defined by $H$ has to be capacity achieving over the source-to-relay channel.


Figure 10: Tanner graph of a three-edge type linear rate-compatible block code.

Rate-compatible Extended Codes
In the literature, there are mainly two different approaches to rate-compatible code design: rate-compatible puncturing (e.g., [24–26]) and code extension (e.g., [27, 28]). Puncturing is a simple approach, which however suffers from the drawback that the gaps between the decoding thresholds and the capacity limit increase with increasing rates. This performance loss may be acceptable if a suboptimal power allocation in the full-duplex case or a suboptimal time-sharing parameter for half-duplex relaying is chosen. Code extension methods, on the other hand, overcome this drawback at the cost of an increased optimization overhead if irregular LDPC codes are considered. They can be designed to have an approximately uniform gap to the capacity limit for different rates. Motivated by this benefit, we focus in this chapter on code designs that are based on graph extension (see, e.g., [29, 30]). Alternative design approaches that use puncturing as in [31] or that use the same code at the source and the relay as in [32, 33] are not considered.

Let a code $\mathcal{C}$ with rate $R_1 = k_1/n_1$ and length $n_1$ be given, and let $H_1$ denote its $(n_1 - k_1) \times n_1$ parity-check matrix. The goal of code extension is to construct a code $\tilde{\mathcal{C}}$ with rate $R_2 = k_1/n_2 < R_1$ and length $n_2 > n_1$ such that its codewords $x_2$ are obtained by appending $n_E$ additional code symbols $x_E$ to the codewords $x_1 \in \mathcal{C}$, i.e., $x_2 = [x_1, x_E]$ and $n_2 = n_1 + n_E$. In the context of the relay channel, $x_1$ is associated with the codeword transmitted from the source, and $x_E$ is associated with the code symbols sent from the relay. Since the code $\mathcal{C}$ is fixed and the codewords $x_1$ by definition satisfy $H_1 x_1^T = 0$, it can be expected that the parity-check matrix $H_1$ will be a sub-matrix of the $(n_2 - k_1) \times n_2$ parity-check matrix $H$ of the code $\tilde{\mathcal{C}}$. It is easy to see that the following structure of the parity-check matrix $H$ of the extended code $\tilde{\mathcal{C}}$ is compatible with this constraint,
\[
H = \begin{bmatrix} H_1 & 0 \\ H_2 & H_3 \end{bmatrix}. \quad (12)
\]
Since $n_2 = n_1 + n_E$, the dimensions of the parity-check matrix $H$ can be rewritten as $(n_1 - k_1 + n_E) \times (n_1 + n_E)$, and we observe that both the number of rows and the number of columns in $H$ grow linearly with $n_E$. It follows that $H_3$ has to be an $n_E \times n_E$ square matrix. In order to obtain a proper parity-check matrix $H$, it is furthermore required that $H_3$ be full rank. Hence, it follows that the dimensions of $H_2$ are $n_E \times n_1$. The structure of the parity-check matrix is illustrated in Figure 10, which shows an example of the corresponding Tanner graph. The Tanner graph includes four different types of nodes, two types of check nodes and two types of variable nodes, which are connected through three different types of edges. The code structure in (12) accordingly defines a three-edge type LDPC code.

It is interesting to note that the structure of the parity-check matrix $H$ allows us to make a connection to the structured binning scheme described above. This becomes obvious if we consider the encoding for a given codeword $x_1$: In a first step, a syndrome vector $s$ is generated using the parity-check matrix $H_2$: $s = H_2 x_1^T$. As above, the syndrome vector plays the role of a bin index. The syndrome $s$ is then mapped into the code symbols $x_E$ by utilizing the fact that $H_3$ is full rank, i.e., $x_E^T = H_3^{-1} s$.
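The encoding rule $x_E^T = H_3^{-1} s$ can be illustrated with a toy example over GF(2). All matrices below are hypothetical; $H_3$ is chosen to be full rank over GF(2) (and happens to be its own inverse).

```python
def matvec_gf2(M, v):
    """Multiply a binary matrix (list of rows) by a vector over GF(2)."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

# Source code: the [4, 3] single-parity-check code, H1 x1^T = 0.
H1 = [[1, 1, 1, 1]]
H2 = [[1, 0, 1, 0],       # binning layer: s = H2 x1^T
      [0, 1, 1, 0]]
H3 = [[1, 1],             # full rank over GF(2); this particular H3
      [0, 1]]             # is its own inverse
H3_INV = [[1, 1],
          [0, 1]]

def extend(x1):
    """Append the n_E = 2 relay symbols x_E = H3^{-1} s to x1."""
    assert matvec_gf2(H1, x1) == [0], "x1 must be a codeword of H1"
    s = matvec_gf2(H2, x1)           # syndrome = bin index
    x_e = matvec_gf2(H3_INV, s)      # solves H3 x_E^T = s
    return x1 + x_e

x2 = extend([1, 1, 0, 0])            # -> [1, 1, 0, 0, 0, 1]
```

The extended codeword satisfies all checks of the stacked matrix in (12): the $[H_1\; 0]$ rows because $x_1$ is a codeword of $\mathcal{C}$, and the $[H_2\; H_3]$ rows by construction of $x_E$.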

3.1.2 Irregular LDPC Codes

In the previous section, we introduced code structures that are useful for implementing the fundamental decode-and-forward relaying strategies. In all cases, the code structures lead to multi-edge type codes. In the following, we introduce degree distributions for multi-edge type LDPC block codes in order to specify ensembles of codes. We also briefly explain the multi-edge density evolution method, and discuss code design and optimization.

Degree Distributions of Multi-edge Type Codes
Multi-edge type codes are characterized by structured parity-check matrices that consist of different sub-matrices. Each sub-matrix in the overall parity-check matrix defines a specific type of edges. To define ensembles of multi-edge type codes, degree distributions need to be introduced for each edge type. The degree distributions are defined to take into account that variable and/or check nodes may have connections to edges of different types. Degree distributions of multi-edge type codes can now be defined as follows:

1. We first consider variable and check nodes that are connected to edges of a single type $k$. The corresponding variable degree distribution defined from a node perspective specifies the fractions $\Lambda_i^{(k)}$ of variable nodes of degree $i$ that are connected to type-$k$ edges. It is convenient to express the degree distribution as a polynomial $\Lambda^{(k)}(x) = \sum_i \Lambda_i^{(k)} x^i$. The check degree distribution $\Gamma^{(k)}(x) = \sum_j \Gamma_j^{(k)} x^j$ from a node perspective is defined in a similar way, with coefficients $\Gamma_j^{(k)}$ defining the fraction of degree-$j$ check nodes. For the performance analysis of the ensemble, it is furthermore helpful to introduce the degree distributions from an edge perspective. For edges of type $k$, we have
\[
\lambda^{(k)}(x) = \sum_i \lambda_i^{(k)} x^{i-1} \quad \text{and} \quad \rho^{(k)}(x) = \sum_j \rho_j^{(k)} x^{j-1},
\]
with coefficients $\lambda_i^{(k)}$ and $\rho_j^{(k)}$ specifying the fractions of edges that are connected to degree-$i$ variable nodes and degree-$j$ check nodes, respectively. It is easy to see that the degree distributions from an edge perspective are obtained as the normalized first derivative of the node degree distributions, i.e., $\lambda^{(k)}(x) = \Lambda^{(k)\prime}(x)/\Lambda^{(k)\prime}(1)$ and $\rho^{(k)}(x) = \Gamma^{(k)\prime}(x)/\Gamma^{(k)\prime}(1)$.

2. For the code structures that are considered in this section, degree distributions for nodes that are connected to two different types of edges k and l are relevant. Similar to the previous case, we define the variable and check degree distributions from a node perspective as

Λ^{(k,l)}(x_k, x_l) = Σ_{i_k,i_l} Λ_{i_k,i_l}^{(k,l)} x_k^{i_k} x_l^{i_l}   and   Γ^{(k,l)}(x_k, x_l) = Σ_{j_k,j_l} Γ_{j_k,j_l}^{(k,l)} x_k^{j_k} x_l^{j_l}.

Here, the coefficients Λ_{i_k,i_l}^{(k,l)} (respectively, the coefficients Γ_{j_k,j_l}^{(k,l)}) give the fractions of variable nodes (respectively, check nodes) that are connected to i_k type k edges and i_l type l edges (respectively, j_k type k edges and j_l type l edges). The corresponding degree distributions from


an edge perspective are obtained as normalized partial derivatives of the degree distributions from a node perspective, i.e.,

λ^{(k)}(x_k, x_l) = Σ_{i_k,i_l} λ_{i_k,i_l}^{(k)} x_k^{i_k−1} x_l^{i_l} = [∂Λ^{(k,l)}(x_k, x_l)/∂x_k] / [∂Λ^{(k,l)}(x_k, x_l)/∂x_k evaluated at (1, 1)]

and

ρ^{(k)}(x_k, x_l) = Σ_{j_k,j_l} ρ_{j_k,j_l}^{(k)} x_k^{j_k−1} x_l^{j_l} = [∂Γ^{(k,l)}(x_k, x_l)/∂x_k] / [∂Γ^{(k,l)}(x_k, x_l)/∂x_k evaluated at (1, 1)],

with coefficients λ_{i_k,i_l}^{(k)} (coefficients ρ_{j_k,j_l}^{(k)}) indicating the fractions of type k edges that are connected to variable nodes (check nodes) with i_k connections to type k edges and i_l connections to type l edges (with j_k connections to type k edges and j_l connections to type l edges).

It is important to note that the single-layer degree distributions Λ_{H_k}(x) and Γ_{H_k}(x) that characterize the sub-matrix H_k corresponding to the layer k edges can be obtained from the degree distributions above by marginalization, i.e., we can write Λ_{H_k}(x) = Λ^{(k,l)}(x, 1) and Γ_{H_k}(x) = Γ^{(k,l)}(x, 1). This relation is important in order to formulate constraints for the code optimization. Another important parameter of an ensemble of codes is the design rate. It is defined through the number of variable nodes N_V and the number of check nodes N_C as follows:

R = (N_V − N_C)/N_V. (13)

By evaluating the overall number of variable and check nodes in the code structure, this definition can also be applied to multi-edge type codes.
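As a small illustration of the definitions above, the node-to-edge conversion and the design rate (13) can be computed in a few lines of Python. This is a sketch with our own helper names; degree distributions are represented as dicts mapping a degree to the fraction of nodes with that degree.

```python
def node_to_edge(node_dist):
    """Edge-perspective distribution as the normalized first derivative:
    lambda_i = i * Lambda_i / sum_i(i * Lambda_i)."""
    avg_degree = sum(i * f for i, f in node_dist.items())
    return {i: i * f / avg_degree for i, f in node_dist.items()}

def design_rate(var_dist, chk_dist):
    """Design rate R = (N_V - N_C)/N_V = 1 - (avg var degree)/(avg chk degree),
    using N_V * avg_var_degree = N_C * avg_chk_degree (edge counting)."""
    dv = sum(i * f for i, f in var_dist.items())
    dc = sum(j * f for j, f in chk_dist.items())
    return 1.0 - dv / dc

# A regular single-type example: all variable nodes of degree 3,
# all check nodes of degree 6, giving design rate 1/2.
Lambda = {3: 1.0}
Gamma = {6: 1.0}
print(design_rate(Lambda, Gamma))
```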

Multi-edge Type Density Evolution

To assess the performance of an ensemble of codes that is defined by a certain set of degree distributions, the density evolution method (see, e.g., [16]) can be applied to predict the decoding threshold under BP decoding (i.e., the limit channel parameter for which successful decoding is possible). The conventional density evolution tracks the probability density functions of the messages that are exchanged along the edges in the graph during BP decoding. For irregular codes, the densities are averaged over the different node degrees by considering the degree distributions from an edge perspective. Density evolution is complex for general channel models; however, for the BEC, density evolution is equivalent to tracking the average erasure probability of the messages that are exchanged during the iterations. Convenient closed-form expressions can be obtained in this case.

When applying density evolution to multi-edge type codes, the structure of the graph requires that densities are evaluated for each type of edges separately in order to account for the differences in the statistical properties of the edges of different types [16]. This is demonstrated in the following examples, which show the multi-edge density evolution recursions for the three code structures defined in Section 3.1.1. Here, we define the considered code ensemble by giving the set of degree distributions from a node perspective. The degree distributions from an edge perspective that are used to calculate the average erasure probabilities are obtained as normalized derivatives of the degree distributions from a node perspective.

1. Ensembles of expurgated codes are defined by the set of degree distributions {Γ_E^{(1)}(x_1), Γ_E^{(2)}(x_2), Λ_E^{(1,2)}(x_1, x_2)}. Here, we associate the type-1 edges with the sub-matrix H_1 in (10) and the type-2 edges with the sub-matrix H_2. Now, let p_m^{(k)} denote the average erasure probability for the messages sent from the variable nodes along the type-k edges during the m-th iteration. Then, the two-dimensional density evolution recursion is

p_m^{(1)} = ǫ λ_E^{(1)}(1 − ρ_E^{(1)}(1 − p_{m−1}^{(1)}), 1 − ρ_E^{(2)}(1 − p_{m−1}^{(2)})),
p_m^{(2)} = ǫ λ_E^{(2)}(1 − ρ_E^{(2)}(1 − p_{m−1}^{(2)}), 1 − ρ_E^{(1)}(1 − p_{m−1}^{(1)})), (14)

where ǫ is the erasure probability of the channel.

2. An ensemble of lengthened codes can be specified by the set of degree distributions {Γ_L^{(1,2)}(x_1, x_2), Λ_L^{(1)}(x_1), Λ_L^{(2)}(x_2)}, where we again associate the type-1 edges with the sub-matrix H_1 in (11) and the type-2 edges with the sub-matrix H_2. The two-dimensional density evolution recursion is obtained as

p_m^{(1)} = ǫ λ_L^{(1)}(1 − ρ_L^{(1)}(1 − p_{m−1}^{(1)}, 1 − p_{m−1}^{(2)})),
p_m^{(2)} = ǫ λ_L^{(2)}(1 − ρ_L^{(2)}(1 − p_{m−1}^{(2)}, 1 − p_{m−1}^{(1)})).

3. Ensembles of rate-compatible extended codes are finally defined by the degree distributions {Γ_RC^{(1)}(x_1), Λ_RC^{(1,2)}(x_1, x_2), Γ_RC^{(2,3)}(x_2, x_3), Λ_RC^{(3)}(x_3)}, where the types 1, 2 and 3 are associated with the sub-matrices H_1, H_2, and H_3, respectively, in (12). The following three-dimensional density evolution recursion is obtained:

p_m^{(1)} = ǫ_1 λ_RC^{(1)}(1 − ρ_RC^{(1)}(1 − p_{m−1}^{(1)}), 1 − ρ_RC^{(2)}(1 − p_{m−1}^{(2)}, 1 − p_{m−1}^{(3)})),
p_m^{(2)} = ǫ_1 λ_RC^{(2)}(1 − ρ_RC^{(2)}(1 − p_{m−1}^{(2)}, 1 − p_{m−1}^{(3)}), 1 − ρ_RC^{(1)}(1 − p_{m−1}^{(1)})),
p_m^{(3)} = ǫ_2 λ_RC^{(3)}(1 − ρ_RC^{(3)}(1 − p_{m−1}^{(3)}, 1 − p_{m−1}^{(2)})).

Two different channel parameters ǫ_1 and ǫ_2 show up in the density evolution recursion. This is due to the fact that the two segments of the code are transmitted over two different channels: the source-destination link is characterized by ǫ_1 and the relay-destination link by ǫ_2.
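The recursion (14) is straightforward to evaluate numerically. The sketch below runs it for a toy two-edge-type ensemble of our own choosing (every variable node has two type-1 and one type-2 edge, and both check layers are regular with degree 6, so the edge-perspective distributions reduce to monomials); it is an illustration, not one of the optimized ensembles from the literature.

```python
# Toy expurgated ensemble: lambda_E^(1)(y1, y2) = y1*y2, lambda_E^(2) = y1^2,
# rho_E^(k)(x) = x^5 (degree-6 checks in both layers). The fine code H1 has
# rate 1 - 2/6 = 2/3; the coarse (expurgated) code has rate 1 - 2/6 - 1/6 = 1/2.

def de_expurgated(eps, n_iter=500):
    p1 = p2 = eps  # erasure probabilities on type-1 / type-2 edges
    for _ in range(n_iter):
        q1 = 1.0 - (1.0 - p1) ** 5  # erasure prob. from a layer-1 check
        q2 = 1.0 - (1.0 - p2) ** 5  # erasure prob. from a layer-2 check
        p1, p2 = eps * q1 * q2, eps * q1 ** 2
    return max(p1, p2)

print(de_expurgated(0.30) < 1e-6)  # True: below threshold, DE converges to 0
print(de_expurgated(0.48) > 0.1)   # True: stuck at a nonzero fixed point
```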

Code Optimization for Expurgated Codes

The goal of the code optimization is now to find degree distributions {Γ_E^{(1)}(x_1), Γ_E^{(2)}(x_2), Λ_E^{(1,2)}(x_1, x_2)} that satisfy the design objectives identified in Section 2.3.3. In the following, we focus on the most restrictive case, where the code used by the source has to be capacity achieving while the lower-rate sub-codes are capacity achieving for the source-destination channel. A natural approach is to start from a good set of degree distributions Λ*_{H_1}(x) and Γ*_{H_1}(x) that specify the fine code used by the source, and to extend the graph in order to obtain a two-edge type code with the desired properties. In this way, we directly obtain Γ_E^{(1)}(x_1) = Γ*_{H_1}(x) and a constraint

Λ_E^{(1,2)}(x_1, 1) = Λ*_{H_1}(x). (15)

To formulate the optimization problem for finding Γ_E^{(2)}(x_2) and Λ_E^{(1,2)}(x_1, x_2), it is helpful to note that the design rate of the fine code R_{H_1} and the coarse code R_H are related as

R_H = (N_V − N_C^{(1)} − N_C^{(2)})/N_V = 1 − d_v^{(1)}/d_c^{(1)} − d_v^{(2)}/d_c^{(2)} = R_{H_1} − d_v^{(2)}/d_c^{(2)}, (16)


where N_C^{(k)} is the number of check nodes connected to type k edges and

d_v^{(k)} = Σ_{i_1,i_2} i_k Λ_{E,i_1,i_2}^{(1,2)}   and   d_c^{(k)} = Σ_{j_k} j_k Γ_{E,j_k}^{(k)}, with k ∈ {1, 2},

are, respectively, the average variable node and check node degrees with respect to type k edges. For a given channel parameter of the source-destination link, the goal of the optimization is now to maximize the design rate R_H in (16).

Since a joint optimization of the degree distributions Γ_E^{(2)}(x_2) and Λ_E^{(1,2)}(x_1, x_2) for the type 2 edges is complicated, a common approach is to fix either Γ_E^{(2)}(x_2) or Λ_E^{(1,2)}(x_1, x_2) and optimize the other distribution. Furthermore, since concentrated check-degree distributions are sufficient to design capacity achieving codes, it is a standard approach to restrict the optimization to check-degree distributions of the form

Γ_E^{(2)}(x_2) = Γ_{E−}^{(2)} x_2^{⌊d_c^{(2)}⌋} + Γ_{E+}^{(2)} x_2^{⌈d_c^{(2)}⌉}, (17)

where ⌊·⌋ and ⌈·⌉ are the floor function and the ceiling function, respectively, and Γ_{E−}^{(2)} and Γ_{E+}^{(2)} are appropriately chosen to obtain the average check degree d_c^{(2)}. Based on this, we can now formulate the optimization in two different ways:

are appropriately chosen to obtain the average check degree d(2)c . Based on this, we can now

formulate the optimization in two different ways:

1. Select an average check node degree d(k)c and generate Γ

(2)E (x2) using (17). To maximize the

design rate

minimize∑

i1,i2

i2Λ(1,2)i1,i2

subject to constraint (15), the normalization constraint Λ(1,2)E (1, 1) = 1, and a convergence

constraint for BP decoder. For the BEC, the convergence constraint ensures that the densityevolution recursion given above converges to zero erasure probability. For other channelmodels, convergence criteria are formulated as requirements for reaching a given performancethreshold within a given number of iterations.

2. Generate a variable degree distribution that satisfies constraint (15) at random. To maximizethe design rate

maximize d(2)c

subject to Γ(2)E (1) = 1 and a convergence constraint for the BP decoder.

The first optimization problem and variations of it are often solved iteratively (see, e.g., [18,34]).For example, sampling points of the density evolution recursion that are obtained from codes inprevious optimization stages can be used to approximate the convergence constraints by linearconstraints. The second optimization problem can be solved more easily. However, the performancewill heavily rely on the initial variable node degree distribution. An efficient approach has beenproposed in [21] to generate an initial set of codes which are refined in a second optimization stepby using a differential evolution algorithm that simulates mutation, recombination, and selectionof codes.
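The concentrated construction (17) amounts to solving two linear equations: the two fractions must sum to one and reproduce the target average degree. A minimal sketch, with our own function name:

```python
import math

def concentrated_check(dc_avg):
    """Split a target average check degree into a two-degree 'concentrated'
    distribution as in (17): Gamma_- x^floor(dc) + Gamma_+ x^ceil(dc)."""
    lo, hi = math.floor(dc_avg), math.ceil(dc_avg)
    if lo == hi:  # integer average: a single check degree suffices
        return {lo: 1.0}
    g_plus = dc_avg - lo  # solves g_-*lo + g_+*hi = dc_avg with g_- + g_+ = 1
    return {lo: 1.0 - g_plus, hi: g_plus}

dist = concentrated_check(6.4)
print(dist)  # approximately {6: 0.6, 7: 0.4}: average 6*0.6 + 7*0.4 = 6.4
```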

In the related literature, it has been observed that expurgated irregular LDPC codes suffer from a relatively large gap to the ultimate decoding threshold, especially if concentrated check degree distributions are chosen. This observation has motivated the alternative binning strategy by using lengthened codes. This problem has furthermore been addressed in [22], where irregular check degree distributions are considered. In [21], a recursive approach was proposed, where the sub-matrix H_2 in (10) is split into stacked sub-matrices that are recursively optimized.


Code Optimization for Lengthened Codes

For lengthened codes, the goal of the code design is to optimize the overall code structure to be capacity approaching for the source-relay channel while the code defined by the sub-matrix H_1 simultaneously approaches the capacity of the source-destination link. As in the previous case, we start by selecting a single-layer code with check matrix H_1 that is characterized by the degree distributions Λ*_{H_1}(x) and Γ*_{H_1}(x). By this choice, we fix the variable degrees for edges of type 1, i.e., Λ_L^{(1)}(x_1) = Λ*_{H_1}(x), and we obtain the constraint

Γ_L^{(1,2)}(x_1, 1) = Γ*_{H_1}(x). (18)

To formulate the optimization problem, we consider again the design rate

R_H = 1 − N_C/(N_V^{(1)} + N_V^{(2)}) = 1 − 1/(d_c^{(1)}/d_v^{(1)} + d_c^{(2)}/d_v^{(2)}). (19)

Here, the average node degrees are obtained from the degree distributions Γ_L^{(1,2)}(x_1, x_2), Λ_L^{(1)}(x_1), and Λ_L^{(2)}(x_2) as

d_v^{(k)} = Σ_{i_k} i_k Λ_{L,i_k}^{(k)}   and   d_c^{(k)} = Σ_{j_1,j_2} j_k Γ_{L,j_1,j_2}^{(1,2)}, with k ∈ {1, 2}.

Since the node degrees for type 1 edges are already fixed, the design rate can be maximized using one of the following two optimization problems, similar to the optimization of expurgated codes:

1. Select a check degree distribution Γ_L^{(1,2)}(x_1, x_2) that satisfies (18). To maximize the design rate,

minimize Σ_{i_2} i_2 Λ_{L,i_2}^{(2)}

subject to the normalization constraint Λ_L^{(2)}(1) = 1 and a convergence constraint for the BP decoder.

2. Generate a degree distribution Λ_L^{(2)}(x_2) at random. To maximize the design rate,

maximize d_c^{(2)}

subject to Γ_L^{(1,2)}(1, 1) = 1 and a convergence constraint for the BP decoder.

As in the previous case, the first optimization problem can be solved iteratively. If concentrated check degree distributions are considered during the optimization, for example by imposing

Γ_L^{(1,2)}(1, x_2) = Γ_{L−}^{(2)} x_2^{⌊d_c^{(2)}⌋} + Γ_{L+}^{(2)} x_2^{⌈d_c^{(2)}⌉},

then the second problem is simplified to finding the maximum value of the scalar d_c^{(2)} for which convergence is reached. Again, this approach can be employed for generating an initial set of codes that are further refined using differential evolution.


Code Optimization for Extended Codes

As in the two previous cases, the goal of the design is to optimize the code structure such that the sub-matrix H_1 determines a capacity approaching code for the source-relay link while the overall code approaches the capacity of the channel that is composed of the channel observations of the source-destination link and the relay-destination link. As for the expurgated code, we select good degree distributions Λ*_{H_1}(x) and Γ*_{H_1}(x) for H_1 and obtain Γ_RC^{(1)}(x_1) = Γ*_{H_1}(x) and the constraint

Λ_RC^{(1,2)}(x_1, 1) = Λ*_{H_1}(x). (20)

Before we express the design rate of the code, it is useful to identify relationships between the numbers of edges, variable nodes, and check nodes in the different layers, N_E^{(k)}, N_V^{(k)}, and N_C^{(k)}, respectively, and the average node degrees d_v^{(k)} and d_c^{(k)}:

N_E^{(k)} = N_C^{(k)} d_c^{(k)} = N_V^{(k)} d_v^{(k)}.

Since N_C^{(3)} = N_V^{(3)}, it follows immediately that d_v^{(3)} = d_c^{(3)}. Furthermore, since N_V^{(1)} = N_V^{(2)} and N_C^{(2)} = N_C^{(3)} = N_V^{(3)}, we can express the design rate as

R_H = 1 − (N_C^{(1)} + N_C^{(2)})/(N_V^{(1)} + N_V^{(3)}) = 1 − (d_v^{(1)}/d_c^{(1)} + d_v^{(2)}/d_c^{(2)})/(1 + d_v^{(2)}/d_c^{(2)}). (21)

Similar to the case of expurgated codes, the design rate can be maximized by minimizing the ratio d_v^{(2)}/d_c^{(2)}. It is interesting to note that the average node degrees d_v^{(3)} and d_c^{(3)} of the sub-matrix H_3 do not affect the design rate, which is due to the fact that H_3 is a square matrix and its dimension is controlled by the number of layer-2 check nodes. Nevertheless, d_v^{(3)} and d_c^{(3)} are still important design parameters that have an impact on the convergence of the overall code.

Different optimization approaches now become possible. For example, regular node degrees d_v^{(3)} = d_c^{(3)} may be chosen for the sub-matrix H_3. In this case, the optimization procedures that were described for expurgated codes become applicable. Alternatively, if the sub-matrix H_2 is chosen to be check regular and H_3 is the check matrix of an accumulator, the type 2 and type 3 edges form an irregular repeat-accumulate code. The variable node degree distribution can then be optimized to minimize d_v^{(2)} using, for example, the first optimization method that we described for expurgated codes.
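For quick sanity checks of candidate ensembles, the design rates (16), (19), and (21) can be evaluated directly from the average node degrees. This is a sketch with our own function names; the example values are illustrative.

```python
def rate_expurgated(dv1, dc1, dv2, dc2):
    # (16): R_H = 1 - dv1/dc1 - dv2/dc2
    return 1 - dv1 / dc1 - dv2 / dc2

def rate_lengthened(dv1, dc1, dv2, dc2):
    # (19): R_H = 1 - 1 / (dc1/dv1 + dc2/dv2)
    return 1 - 1 / (dc1 / dv1 + dc2 / dv2)

def rate_extended(dv1, dc1, dv2, dc2):
    # (21): R_H = 1 - (dv1/dc1 + dv2/dc2) / (1 + dv2/dc2)
    return 1 - (dv1 / dc1 + dv2 / dc2) / (1 + dv2 / dc2)

# Example: a (3,6) fine code (rate 1/2) expurgated by a second layer with
# dv2 = 1, dc2 = 6 drops the overall rate to 1/2 - 1/6 = 1/3.
print(rate_expurgated(3, 6, 1, 6))
```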

3.1.3 Spatially-coupled LDPC Codes

In the previous section, we gave an overview of design approaches for irregular LDPC codes applied to the relay channel. Schemes based on irregular LDPC codes present a major drawback: For every set of channel conditions, the degree distributions of multi-edge type codes need to be optimized, which may be a complex task. If strong codes with small gaps to the ultimate decoding threshold are desired, iterative optimization approaches are required.

As an alternative to irregular LDPC code constructions, in this section we discuss code design and optimization for SC-LDPC codes, which have been proposed recently for the relay channel in, e.g., [33, 35–37]. SC-LDPC codes can be seen as a generalization of LDPC convolutional codes, which were first introduced in [38] as a time-varying periodic LDPC code family. The codes in [38] are characterized by a parity-check matrix which has a convolutional-like structure. For this reason, they were originally nick-named LDPC convolutional codes. In [39], it was shown that the BP


decoding threshold of regular LDPC convolutional code ensembles on the BEC closely approaches the maximum a posteriori (MAP) decoding threshold of the underlying regular LDPC block code ensemble (i.e., the block code ensemble with the same node degrees). This result, known as threshold saturation, was analytically proven in [40] for a more general ensemble of SC-LDPC codes. Furthermore, as the MAP threshold of the underlying block code ensembles tends to the Shannon limit when the node degrees grow large, this implies that SC-LDPC codes achieve the BEC capacity in the limit of infinite node degrees under BP decoding. This result was recently generalized in [41], where it is shown that SC-LDPC codes universally achieve capacity for the family of binary memoryless symmetric (BMS) channels. This property is important since it tremendously simplifies the code optimization: Every code that is shown to be capacity achieving over the BEC can be conjectured to be capacity achieving for the entire family of BMS channels. In the following, we summarize a few important definitions and demonstrate how SC-LDPC codes can be applied to the code structures presented in Section 3.1.1.

Code Structure

We consider terminated SC-LDPC codes. A time-varying binary terminated SC-LDPC code with L termination positions is defined by the parity-check matrix [39]

H = ⎡ H_0(1)                                  ⎤
    ⎢    ⋮         ⋱                          ⎥
    ⎢ H_{w−1}(1)      H_0(t)                  ⎥
    ⎢              ⋱     ⋮        ⋱           ⎥
    ⎢                 H_{w−1}(t)     H_0(L)   ⎥
    ⎢                             ⋱     ⋮     ⎥
    ⎣                               H_{w−1}(L)⎦

with sparse binary sub-matrices H_i(t) and zero-valued entries otherwise. For a regular SC-LDPC code with variable and check node degrees {d_v, d_c}, each sub-matrix H_i(t) has dimensions (M d_v/d_c) × M, where M corresponds to the number of variable nodes in each position and M d_v/d_c is the number of check nodes in each position.

In this section, we consider a specific SC-LDPC code ensemble, which was shown in [40] to be capacity achieving. The ensemble is characterized by the parameter set {d_v, d_c, M, L, w} [40]. In short, the Tanner graph of a code in the ensemble is generated in the following way: A variable node at position t has d_v connections to check nodes at positions from the range [t, t + w − 1], where the parameter w is a positive integer. For each connection, the position of the check node is uniformly and independently chosen from that range. As a consequence, each of the d_c connections of a check node at position t is uniformly and independently connected to variable nodes from the range [t − w + 1, t]. This randomization results in simple density evolution equations and thus renders the ensemble accessible to analysis. Here, it is important to notice that the check node degrees at the boundaries of the graph show some irregularities, i.e., they have lower degree. This effect is illustrated in Figure 11, which shows a protograph representation of the code. Since the code is asymptotically regular in the limit of large L, it is commonly referred to as a regular code.

In the limit of large M, L, and w, the ensemble {d_v, d_c, M, L, w} exhibits the following properties [40]:


Figure 11: Protograph representation of a {d_v, d_c, M, L, w} SC-LDPC code for d_v = 3, d_c = 6, L = 9, and w = 3.

1. The design rate converges to the design rate of the underlying LDPC block code ensemble of the same node degrees {d_v, d_c}:

lim_{w→∞} lim_{L→∞} lim_{M→∞} R(d_v, d_c, M, L, w) = 1 − d_v/d_c. (22)

2. The BP threshold on the BEC converges to the MAP threshold of the LDPC block code ensemble of the same node degrees {d_v, d_c}:

lim_{w→∞} lim_{L→∞} lim_{M→∞} ǫ^BP(d_v, d_c, M, L, w) = ǫ^MAP(d_v, d_c). (23)

These two properties allow us to conclude that the ensemble is in fact capacity achieving: For a given rate R, the Shannon limit of the BEC is given by ǫ_Sh = 1 − R. If we increase the node degrees d_v and d_c while keeping the ratio λ = d_c/d_v fixed, we see that the MAP threshold in (23) approaches the Shannon limit,

lim_{d_v→∞} ǫ^MAP(d_v, d_c = λ d_v) = 1/λ = 1 − R = ǫ_Sh. (24)

Thus, in the limit, SC-LDPC code ensembles are capacity achieving for the BEC. This result was generalized to the family of BMS channels in [41].
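Threshold saturation can be observed numerically. The sketch below iterates the BEC density evolution recursion of the {d_v, d_c, M, L, w} ensemble (in the limit M → ∞, with positions outside the chain treated as perfectly known); the parameter values L = 20 and w = 3 are our own illustrative choice.

```python
def de_uncoupled(dv, dc, eps, n_iter=2000):
    """BEC density evolution for a (dv, dc)-regular LDPC block code."""
    x = eps
    for _ in range(n_iter):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

def de_coupled(dv, dc, L, w, eps, n_iter=20000):
    """BEC density evolution for the coupled {dv, dc, M, L, w} ensemble:
    each connection is averaged uniformly over a window of w positions."""
    x = [eps] * L
    get = lambda i: x[i] if 0 <= i < L else 0.0  # termination: known bits
    for _ in range(n_iter):
        x = [eps * (sum(1 - (sum(1 - get(i + j - k) for k in range(w)) / w)
                        ** (dc - 1) for j in range(w)) / w) ** (dv - 1)
             for i in range(L)]
        if max(x) < 1e-12:  # fully decoded
            break
    return max(x)

# At eps = 0.45 the uncoupled (3,6) ensemble is stuck (its BP threshold is
# ~0.429), while the coupled chain decodes: a wave starting at the
# termination boundaries propagates inwards, and the threshold saturates
# towards the MAP threshold ~0.488 of the block ensemble.
print(de_uncoupled(3, 6, 0.45) > 0.1)        # True: nonzero fixed point
print(de_coupled(3, 6, 20, 3, 0.45) < 1e-6)  # True: decoding succeeds
```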

Structured SC-LDPC Codes

It was shown in [37, 42] that structured SC-LDPC codes can be constructed for which the sub-matrices H_k of the overall parity-check matrix H as well as the overall code structure belong to capacity achieving SC-LDPC codes. This property can be achieved as follows:

1. Each sub-matrix H_k in the overall parity-check matrix H is taken from a SC-LDPC code ensemble with parameters {d_v^{(k)}, d_c^{(k)}, M^{(k)}, L, w}, where the number of positions L and the parameter w are chosen to be the same for all sub-matrices.

2. The node degrees d_v^{(k)} and d_c^{(k)} and the parameters M^{(k)} of the sub-matrices H_k are chosen such that

(a) the design rate of the overall code structure is identical to the design rate of an equivalent single-layer code with parameters {d_v, d_c, M, L, w};


Figure 12: Protograph representation of an expurgated SC-LDPC code. Both layers are specified by the parameters {3, 6, M, 9, 3}. Layer 1 edges are depicted by solid lines and layer 2 edges by dashed lines.

(b) the multi-dimensional density evolution recursion reduces to the one-dimensional recursion of an equivalent single-layer code with parameters {d_v, d_c, M, L, w}.

In the following, we show how this approach can be applied to the code structures that were introduced in Section 3.1.1.

Expurgated Codes

To construct capacity achieving expurgated codes, we select the sub-matrices H_1 and H_2 from the ensembles {d_v^{(1)}, d_c^{(1)}, M^{(1)}, L, w} and {d_v^{(2)}, d_c^{(2)}, M^{(2)}, L, w}, respectively. The protograph representation of the resulting structure is shown in Figure 12. Note that in this example the expurgated code has rate R = 0. The main purpose of the example is to illustrate the extension of the protograph.

Since the sub-matrices are stacked, it is required that M^{(1)} = M^{(2)}, i.e., both sub-matrices have the same number of variable nodes. Furthermore, if d_c^{(1)} = d_c^{(2)} = d_c, it can be shown that the design rate and the decoding threshold of the overall code structure are identical to those of a single-layer code with parameters {d_v = d_v^{(1)} + d_v^{(2)}, d_c, M, L, w}. Since this code is capacity achieving in the limit of infinite parameters, the expurgated code achieves capacity as well.

To demonstrate how the node degrees of the ensembles can be selected, we assume that C_SR is the mutual information between the channel input and output on the link between the source and relay and C_SD is the mutual information between the channel input and output on the link between the source and destination. For a fixed check degree d_c = d_c^{(1)} = d_c^{(2)}, the variable node degree in H_1 is given by

d_v^{(1)} = ⌈(1 − C_SR) d_c⌉.

Likewise, the overall variable node degree d_v is obtained, from which d_v^{(2)} can be easily computed,

d_v = d_v^{(1)} + d_v^{(2)} = ⌈(1 − C_SD) d_c⌉.

In the limit of infinite code parameters (i.e., M, L, w, d_c → ∞), this assignment will provide the desired pair of nested capacity achieving codes. However, for finite parameters, the node degrees need to be increased in order to compensate for the unavoidable gap to capacity.
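The degree assignment above can be sketched in a few lines (the function name is our own; the capacities in the example are illustrative and chosen so that no floating-point rounding artifacts occur):

```python
import math

def expurgated_sc_degrees(c_sr, c_sd, dc):
    """Layer variable degrees of an expurgated SC-LDPC code for a common
    check degree dc, given the link mutual informations C_SR and C_SD."""
    dv1 = math.ceil((1 - c_sr) * dc)  # layer 1 alone targets rate ~ C_SR
    dv = math.ceil((1 - c_sd) * dc)   # overall code targets rate ~ C_SD
    dv2 = dv - dv1                    # layer 2 supplies the extra constraints
    return dv1, dv2

# Example: C_SR = 0.75, C_SD = 0.5, dc = 8 gives dv1 = 2 and dv2 = 2.
print(expurgated_sc_degrees(0.75, 0.5, 8))
```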

Lengthened Codes

As in the previous case, we select the sub-matrices H_1 and H_2 from the ensembles {d_v^{(1)}, d_c^{(1)}, M^{(1)}, L, w} and {d_v^{(2)}, d_c^{(2)}, M^{(2)}, L, w}, respectively. In this case, the overall code structure can be shown to


Figure 13: Protograph representation of a lengthened SC-LDPC code. Layer 1 edges (solid lines) are specified by the parameters {3, 6, 2M/3, 9, 3}, and layer 2 edges (dashed lines) are specified by the parameters {3, 3, M/3, 9, 3}.

exhibit the same performance as a single-layer code {d_v, d_c, M, L, w} if d_v^{(1)} = d_v^{(2)} = d_v. The remaining parameters of the overall code are then related to the parameters of the different layers as follows: d_c = d_c^{(1)} + d_c^{(2)} and M = M^{(1)} + M^{(2)}. The protograph of the resulting code is illustrated in Figure 13.

For C_SR and C_SD as defined above and for a fixed variable degree d_v, the check degrees d_c^{(1)} and d_c^{(2)} can be obtained as

d_c^{(1)} = d_v/(1 − C_SD)   and   d_c^{(1)} + d_c^{(2)} = d_v/(1 − C_SR).

Again, this assignment will lead to a pair of capacity achieving codes as the parameters M^{(1)}, M^{(2)}, L, w, d_v → ∞.
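The corresponding check-degree assignment can be sketched as follows (names and example values are our own; a practical design would round the resulting degrees to integers and compensate for the finite-length gap to capacity):

```python
def lengthened_sc_check_degrees(c_sr, c_sd, dv):
    """Layer check degrees of a lengthened SC-LDPC code: layer 1 alone
    targets rate C_SD, the full code targets rate C_SR, for fixed dv."""
    dc1 = dv / (1 - c_sd)       # from 1 - dv/dc1 = C_SD
    dc_total = dv / (1 - c_sr)  # from 1 - dv/(dc1 + dc2) = C_SR
    return dc1, dc_total - dc1

# Example: C_SR = 0.75, C_SD = 0.5, dv = 4 gives dc1 = 8 and dc2 = 8.
print(lengthened_sc_check_degrees(0.75, 0.5, 4))
```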

Rate-Compatible Extended Codes

Let the sub-matrices H_1, H_2, and H_3 be taken from the ensembles {d_v^{(1)}, d_c^{(1)}, M^{(1)}, L, w}, {d_v^{(2)}, d_c^{(2)}, M^{(2)}, L, w}, and {d_v^{(3)}, d_c^{(3)}, M^{(3)}, L, w}, respectively. Using the same arguments as above, we can state the following relations between the parameters of the three component ensembles and the resulting single-layer ensemble {d_v, d_c, M, L, w}:

d_v = d_v^{(1)} + d_v^{(2)} = d_v^{(3)}
d_c = d_c^{(1)} = d_c^{(2)} + d_c^{(3)}
d_c^{(3)} = d_v^{(3)}
M = M^{(1)} + M^{(3)} = M^{(2)} + M^{(3)}. (25)

An example is shown in Figure 14.

If we now denote the accumulated mutual information that is provided by the source-destination and relay-destination links, based on which the overall code is decoded, as C_D,acc, the node degrees can be computed for a fixed check degree d_c as

d_v = ⌈(1 − C_D,acc) d_c⌉
d_v^{(1)} = ⌈(1 − C_SR) d_c⌉.

The remaining parameters d_v^{(2)}, d_v^{(3)}, d_c^{(2)}, and d_c^{(3)} can be obtained from the set of equations in (25). The resulting choice of parameters will lead to a capacity achieving extended code as


Figure 14: Protograph representation of a rate-compatible extended SC-LDPC code. Layer 1 edges (solid lines) are specified by {3, 6, 2M/3, 9, 3}, layer 2 edges (dash-dotted lines) are specified by {1, 2, 2M/3, 9, 3}, and layer 3 edges (dashed lines) are specified by {4, 4, M/3, 9, 3}.

M^{(1)}, M^{(2)}, L, w, d_c become sufficiently large. If this code is applied to the relay channel, it is important to note that the two segments of the codeword experience two different channels with different channel parameters. Under these conditions, it is no longer possible to reduce the three-dimensional density evolution recursion to a one-dimensional recursion in a straightforward manner, such that it becomes difficult to prove the capacity achieving performance analytically for this type of channel. However, numerical evidence was presented in [36], and the fact that the considered family of codes universally achieves the capacity for the family of BMS channels lets us conjecture that the presented code structure is optimal for the relay channel as well. Recently, an alternative technique to prove the threshold saturation phenomenon was introduced in [43] and [44], based on the notion of potential functions. The proof relies on the observation that a fixed point of the density evolution corresponds to a stationary point of the corresponding potential function. In [43], for a class of coupled systems characterized by a scalar density evolution recursion, this technique was used to prove that the BP threshold saturates to the conjectured MAP threshold, known as the Maxwell threshold. This result was later extended in [44] to coupled systems characterized by vector density evolution recursions. As the density evolution of SC-LDPC codes for the relay channel, where the codeword experiences different channels, can be reduced to a vector recursion, the technique in [44] may be useful to prove achievability for this case.
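Returning to the construction itself, solving (25) for the remaining degrees given d_c, C_SR, and C_D,acc can be sketched as follows (function name and example values are our own):

```python
import math

def extended_sc_degrees(c_sr, c_d_acc, dc):
    """Remaining node degrees of a rate-compatible extended SC-LDPC code,
    obtained from the relations in (25)."""
    dv = math.ceil((1 - c_d_acc) * dc)  # overall variable degree
    dv1 = math.ceil((1 - c_sr) * dc)    # layer 1 variable degree
    dv2 = dv - dv1                      # from dv = dv1 + dv2
    dv3 = dv                            # from dv = dv3
    dc3 = dv3                           # H3 is square: dc3 = dv3
    dc2 = dc - dc3                      # from dc = dc2 + dc3
    return {"dv1": dv1, "dv2": dv2, "dv3": dv3, "dc2": dc2, "dc3": dc3}

# Illustrative example: C_SR = 0.5, C_D,acc = 0.25, dc = 8.
print(extended_sc_degrees(0.5, 0.25, 8))
```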

A comparison of decoding thresholds of SC-LDPC codes and the irregular LDPC codes from [18] has been provided in [37]. The results show that for finite node degrees and code parameters of the SC-LDPC code (d_c = 10, w = 3, L = 100) the decoding thresholds for both code families are comparable. The advantages of SC-LDPC codes are however that (1) this performance is obtained without requiring any optimization overhead, (2) the decoding thresholds of SC-LDPC codes can be further improved by increasing the code parameters L and w for fixed node degrees, and (3) SC-LDPC codes are universal for the family of BMS channels.

3.2 Distributed Turbo-Codes and Related Code Structures

In this section, we discuss distributed turbo-coding for the three-node relay channel, and related coding structures. In contrast to the LDPC code constructions discussed in Section 3.1, which are driven by an information theoretic approach, distributed turbo-codes find their roots in an engineering approach to the relaying problem. Indeed, due to their modular structure, consisting of component encoders, turbo-codes allow for a very intuitive approach to distributed coding, as we will see in the following.

Turbo-codes have gained considerable attention since their introduction by Berrou et al. in


Figure 15: (a) Turbo code, and (b) distributed turbo-code.

Figure 16: Decoder of a distributed turbo-code.

1993 [1], due to their near-capacity performance and low decoding complexity. The conventionalturbo-code is a parallel concatenation of two identical recursive systematic convolutional encodersseparated by a pseudo-random interleaver. The block diagram of a turbo-code is depicted inFigure 15(a). It consists of two component encoders, CU and CL, linked by an interleaver: Theinformation bits at the input of the upper encoder CU are scrambled by the interleaver beforeentering the lower encoder CL.

Turbo-codes for point-to-point communications can be generalized to the cooperative commu-nications scenario. Distributed turbo-codes were first proposed by Valenti and Zhao in [45], andsubsequently they have been considered by a large number of research groups, see, e.g., [45–66].Figure 15(b) shows the block diagram of a distributed turbo-code for the three-node relay channel.The source uses a convolutional encoder Cs to encode the source data into codeword xs, whichis broadcasted to both the relay and the destination. The relay attempts to decode the receivednoisy codeword and generates an estimate of the source information u. It then interleaves andre-encodes u into the codeword xr using another convolutional encoder Cr, prior to forwardingit to the destination. The destination receives two encoded copies of the original message: Thecodeword transmitted by the source, xs, and the coded interleaved information transmitted by therelay, xr. Therefore, we have realized a turbo-code distributed over the source and relay nodes: Itscomponent encoders are the convolutional encoders at the source and at the relay, Cs and Cr. Theparallel with a conventional turbo-code is apparent by comparing Figure 15(b) with Figure 15(a).It is easy to see that turbo-codes are a natural fit to the relay problem: The source encoder Csand the relay encoder Cr of the distributed turbo-code correspond to the upper encoder CU andthe lower encoder CL, respectively, of the conventional turbo-code. The basic difference betweenstandard turbo-codes and distributed turbo-codes is that for the distributed case (i) errors mayoccur at the relay, and (ii) the codewords xs and xr undergo different channel conditions. Thedecoder of the distributed turbo-code is depicted in Figure 16. The destination receives two noisyobservations, ys and yr, corresponding to the codewords from the source and the relay, respec-tively. Thus, it can decode them jointly to estimate the transmitted data. 
The decoder consists of two soft-input soft-output decoders, Cs^-1 and Cr^-1, matched to the source encoder Cs and to



Figure 17: Comparison of the parallel code structure of a multiple turbo-code (a) and a distributed turbo-code constructed by multiple relays (b).

the relay encoder Cr, respectively, and performs iterative decoding between the decoders, as for a conventional turbo-code, to estimate the information message.

The concept of distributed turbo-coding can be easily extended to other types of concatenations, such as distributed serially concatenated codes or hybrid concatenated codes. If the relay, instead of re-encoding the estimate of the message, re-encodes the decoded codeword xs prior to forwarding it to the destination, then a serially concatenated turbo-code is effectively realized. Furthermore, distributed turbo-codes can be naturally generalized to the presence of multiple parallel relays operating in a TDMA/FDMA mode, as illustrated in Figure 17. In this case, we obtain a distributed multiple turbo-code (Figure 17(b)). Finally, the combination of distributed turbo-coding with hybrid automatic repeat request (ARQ) techniques for the relay channel has also been studied in the literature, see, e.g., [67–70].

3.2.1 Code optimization

Since the source-to-destination link and the relay-to-destination link may have different quality, a conventional code design for the point-to-point channel is not necessarily a good choice for the relay channel. The performance of distributed turbo-like codes depends on the encoders Cs and Cr and on the time allocation between the source and the relay (i.e., the amount of redundancy assigned to the source and to the relay) to adapt to the link qualities. A joint optimization of the encoder polynomials and the time allocation to optimize the error floor performance and the convergence threshold is prohibitively complex. A simpler, yet well-performing, optimization strategy is to perform a two-step optimization [71, 72]. First, the encoders Cs and Cr are chosen according to design criteria for the error floor [73]. Then, the time allocation is optimized according to other criteria. For instance, the time allocation between the source and the relay may be determined using an information-theoretic approach, as discussed in Section 2.3. This approach, however, is useful only when capacity-approaching codes are used on all links (e.g., for the LDPC code designs of Section 3.1), which is in general not the case for distributed turbo-coding, where practical aspects like the use of simple constituent codes and short block lengths are considered. This is a fundamental difference between the philosophy behind the distributed coding schemes based on LDPC codes discussed in the previous section and schemes based on turbo-like codes. While the former are conceived to mimic information theoretic results, i.e., capacity-achieving codes are designed for each link in the network, criteria like complexity are at the basis of distributed coding schemes based on turbo-like codes. For distributed turbo-coding, methods based

33

Page 34: An Introduction to Distributed Channel Coding · known as distributed channel coding. Consider for example that the source in Fig. 1 transmits an uncoded message and that the relay,

on EXIT charts [15], which are a standard technique for analyzing concatenated coding schemes, can be applied. By applying EXIT charts, the rate allocation between the source and the relay can be optimized to minimize the convergence threshold of the distributed turbo-like code [71, 72]. Improved adaptability to different channel qualities can also be obtained by considering more general code concatenations, as proposed in, e.g., [71, 72, 74–77].
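EXIT analysis tracks the mutual information between the coded bits and the LLRs exchanged by the component decoders. As a small illustration of the measurement step, the sketch below estimates I(X; L) for channel LLRs of BPSK over an AWGN channel by Monte Carlo, using the identity I = 1 - E[log2(1 + exp(-L))], which holds for consistent LLR distributions; the function name and parameters are ours, not taken from [15].

```python
import numpy as np

def llr_mutual_info(snr_db, n=200_000, seed=0):
    """Monte Carlo estimate of I(X; L) for BPSK on an AWGN channel,
    the quantity tracked on the axes of an EXIT chart."""
    rng = np.random.default_rng(seed)
    sigma2 = 10 ** (-snr_db / 10)                    # noise variance, Es = 1
    y = 1.0 + rng.normal(0.0, np.sqrt(sigma2), n)    # all-zero codeword, x = +1
    llr = 2.0 * y / sigma2                           # channel LLRs
    # I = 1 - E[log2(1 + exp(-L))] for consistent LLRs
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-llr)))
```

The estimate increases monotonically from 0 towards 1 bit as the SNR grows, as expected for an EXIT-chart input measure.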

3.2.2 Noisy Relay

One of the main problems of distributed turbo-coding is error propagation. Indeed, decoding errors may occur at the relay, which, if not handled properly, may be catastrophic for the overall performance. For the LDPC code constructions discussed in Section 3.1, error propagation was avoided by considering capacity-achieving codes for the source-relay channel. In contrast, distributed turbo-codes use stand-alone convolutional codes at the source, i.e., non-capacity-achieving codes. For distributed turbo-codes it is therefore usually assumed that the channel between the source and the relay is of high quality, hence the overall performance is not limited by this link. This assumption holds when the relay is close to the source. However, in practical situations this might not always be the case. A severe error floor may then appear, dictated by the performance of the source-relay channel. The design of distributed turbo-codes when imperfect decoding occurs at the relay is a challenging problem. One way to handle it is to properly modify the likelihood ratios exchanged between the constituent soft-input soft-output decoders to account for errors at the relay, see, e.g., [78]. Alternative relaying strategies to decode-and-forward for noisy relaying are discussed in the next section.
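One simple way to account for relay decision errors, sketched below under the assumption that the relay's re-encoded bits are wrong with a known probability p (a BSC model; an illustrative stand-in, not the specific method of [78]), is to saturate the magnitude of the relay LLRs before they enter the iterative decoder:

```python
import math

def relay_corrected_llr(llr, p):
    """Pass a relay LLR through an equivalent BSC with crossover probability p.
    The output magnitude is capped at log((1-p)/p), so overconfident relay
    decisions cannot dominate the turbo decoder at the destination."""
    e = math.exp(llr)  # assumes |llr| is moderate; clip first in practice
    return math.log(((1.0 - p) * e + p) / ((1.0 - p) + p * e))
```

For p = 0.1 the corrected LLR saturates at about ±2.2 regardless of how confident the relay decoder was.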

4 Relaying with Uncertainty at the Relay

In the previous section, we focused on decode-and-forward relaying, and all considered code designs assumed that the message was perfectly decoded at the relay. This approach has two main drawbacks: requiring that the relay decodes the message is a severe constraint that limits the performance in terms of achievable rate if the source-relay link is weak. Furthermore, reliable decoding may not be guaranteed under practical constraints, for example, due to lack of precise channel state information at the transmitter and for short and moderate block lengths. The classical approach to deal with this situation is to employ the so-called compress-and-forward relaying strategy. Recently, alternative approaches that are inspired by distributed soft-decoding ideas have been considered as well.

4.1 Compress-and-Forward Relaying

The goal of compress-and-forward relaying is, instead of decoding the messages from the source at the relay, to provide the receiver with a quantized version of the relay's channel observation. To improve efficiency in terms of rate, the relay utilizes the fact that the channel output at the receiver is correlated with the relay's channel observations (both include the same codeword transmitted from the source) and uses Wyner-Ziv compression for its transmission.

Compress-and-forward relaying can be implemented with different families of channel codes (see, e.g., [79–82]). Polar codes are attractive candidates since they provide an optimal solution to the underlying noisy source coding problem if discrete sources are considered. In the following, we focus on an efficient implementation of compress-and-forward relaying where Wyner-Ziv coding is implemented through subsequent quantization and Slepian-Wolf coding. This approach is


illustrated in Figure 18, where the relay channel with orthogonal receive components is considered. We note that a generalization to other models is straightforward.


Figure 18: Practical implementation of compress-and-forward relaying.

The source uses a point-to-point code CS which is designed to approach the capacity of the channel XS → (Y1, ŶR), where ŶR is a compressed version of the relay's observation YR. When compressing YR into ŶR, the source coding rate must not exceed the rate of communication that is supported by the link from the relay to the destination. A practical way to implement the compression at the relay is to apply a standard quantizer in a first step, followed by a Slepian-Wolf encoder in a second step. Slepian-Wolf coding can be implemented with linear codes using the coset coding method. That is, a syndrome s is generated from the bit vector b, which represents the quantized channel observation ŶR, by multiplying it with the parity-check matrix H of a linear code. The syndrome is then encoded and transmitted to the destination. After decoding the syndrome, the bit vector b, and hence ŶR, is recovered by decoding b based on the channel observations Y1. This is possible since b is included in the coset of the code described by H which is specified by the syndrome vector s. After decoding b, ŶR is combined with the channel observation Y1. The resulting channel observation is used for decoding the transmitted message w.
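The coset-coding step can be made concrete with a toy example. The sketch below uses the (7,4) Hamming code as the Slepian-Wolf code: the relay sends only the 3-bit syndrome s = Hb, and the destination recovers b from s and a side-information vector that differs from b in at most one position (a crude stand-in for the correlated observation Y1); the function names are ours.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1, so a single-error syndrome points at the error position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def sw_encode(b):
    """Relay side: compress the bit vector b to its syndrome s = H b (mod 2)."""
    return H @ b % 2

def sw_decode(s, side_info):
    """Destination side: recover b from s and side information that differs
    from b in at most one position (coset decoding)."""
    e = (H @ side_info + s) % 2                     # syndrome of the difference
    b_hat = side_info.copy()
    if e.any():
        pos = int(''.join(map(str, e)), 2) - 1      # column index of H
        b_hat[pos] ^= 1                             # flip the differing bit
    return b_hat
```

With 3 transmitted bits instead of 7, the relay exploits the destination's side information exactly as in the coset coding method described above.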

From a code design perspective, it is interesting to note that the choice of the quantizer has a huge impact on the resulting code structure. To see this, we first note that the mapping from the quantization bits b to the reconstruction ŶR can be interpreted as a (channel) code. This channel code is observed at the destination through the channel YR → Y1. This is possible since YR and Y1 are coupled via XS. Since the quantization bits b are furthermore turned into a codeword of a coset code of H due to the syndrome, the overall code that is decoded in order to recover ŶR is a serially concatenated code. If a scalar quantizer is used, the inner code is equivalent to a simple modulator. If a vector quantizer based on trellis-coded quantization methods is used, the inner code corresponds to a trellis code.

4.2 Soft-Information Forwarding and Estimate-and-Forward

The question of how decoding errors at the relay should be treated when designing distributed channel coding approaches has opened a new branch of research in the coding community. Inspired by iterative decoding techniques, soft-information forwarding techniques have been proposed (see, e.g., [78, 83–94]). Instead of discarding erroneously decoded codewords, the idea is to perform some processing at the relay based on the log-likelihood ratios (LLRs) that are provided by soft-output decoding algorithms. It has been proposed, for example, to perform soft re-encoding based



Figure 19: The two-user cooperative network: Each user acts as a relay for its partner node.

on LLRs (see, e.g., [84]). Furthermore, different non-linearities have been proposed for efficiently mapping LLRs into transmitted (analog) symbols. This approach has the benefit that a reliability-based power allocation is implicitly obtained.

5 Cooperation with Multiple Sources

Since the pioneering work by Van der Meulen on the three-node relay channel, the concept of cooperation has been generalized to a variety of cooperative networks. In this section we briefly discuss distributed coding for two cooperative networks widely studied in the literature: The two-user cooperative network and a multi-source cooperative network.

5.1 Two-user Cooperative Network: Coded Cooperation

A classical form of cooperation is the two-user cooperative network, consisting of two users which help each other in transmitting their information to a common destination. In such a network, each user transmits its own information and also acts as a cooperative agent, forwarding to the destination extra parity bits for the partner user. When channel coding is integrated into this cooperation strategy, the system is known as coded cooperation [95–98].

The network is depicted in Figure 19. Two users s1 and s2 cooperate to communicate data to a single destination d. Each user can either transmit its own local information (transmission mode) or help the partner node by relaying its information (relaying mode). Both users are equipped with two encoders Ca and Cb. In general, it is assumed that the users transmit on orthogonal channels. For clarity, without loss of generality, we focus on the information generated by user s1. The transmission of user s1's data is performed over two phases. In the first phase, referred to as the broadcast phase, user s1 encodes its data u1 using encoder Ca into codeword x1B of length na and broadcasts it to the destination and to user s2. Thus, s2 receives a noisy version of x1B. If user s2 can successfully decode x1B, it switches to the relaying mode; in the second phase, referred to as the cooperation phase, s2 encodes u1 into codeword x2C of length nb using encoder Cb and forwards it to the destination. In this case, the codeword of user s1 is partitioned into two sets: One partition transmitted by the user itself, x1B, and the other partition by s2, x2C. On the other hand, if s2 cannot correctly decode its partner's data, it operates in the transmission mode; in the second phase it encodes u2 into codeword x2B and forwards it to the destination. In both cases each user always transmits a total of n = na + nb bits. However, note that the number of effective bits transmitted for each user message varies according to whether the nodes are able to decode their partner node's codeword or not. We can distinguish four possible cooperative cases, illustrated in Fig. 20. In Case 1, decoding at both nodes s1 and s2 is successful, resulting in the fully cooperative scenario (both users cooperate); a codeword of length n = na + nb is transmitted for each user.



Figure 20: The four possible cooperative cases, depending on whether each source successfully decodes its partner node's information.


Figure 21: A multi-source cooperative relay network: Multiple sources transmit to a single destination with the help of a common relay.

In Case 2, decoding at node s2 is successful, but decoding at node s1 fails. In this case neither of the users transmits the second set of code bits for s2, and both transmit the second set for s1. Consequently, s2 cooperates and s1 does not, resulting in a codeword of length n = na + 2nb for user s1 and a codeword of length n = na for user s2. Case 3 is similar to Case 2 with the roles of s1 and s2 reversed. Finally, in Case 4, decoding at both nodes s1 and s2 fails and the system reverts to non-cooperative transmission; a codeword of length n = na + nb is transmitted for each user.
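The bookkeeping for the four cases can be summarized in a small helper (an illustrative function of ours, following the case analysis above):

```python
def codeword_lengths(s1_decodes_partner, s2_decodes_partner, na, nb):
    """Number of code bits the destination receives for each user's message
    in the four cooperative cases of Fig. 20. Returns (user 1, user 2)."""
    if s1_decodes_partner and s2_decodes_partner:       # Case 1: full cooperation
        return na + nb, na + nb
    if s2_decodes_partner and not s1_decodes_partner:   # Case 2: only s2 cooperates
        return na + 2 * nb, na
    if s1_decodes_partner and not s2_decodes_partner:   # Case 3: only s1 cooperates
        return na, na + 2 * nb
    return na + nb, na + nb                             # Case 4: non-cooperative
```

Note that in every case the total number of transmitted bits over both users is 2(na + nb), consistent with each user always transmitting n = na + nb bits.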

In general, several channel coding strategies can be used for coded cooperation. For example, the overall code may be a convolutional code, a block code, or a more advanced turbo-like code or an LDPC code. In [97], a coded cooperation scheme using rate-compatible punctured convolutional codes was proposed. More advanced schemes based on turbo-codes [96] and on LDPC codes [95] can also be found in the literature. A network coding approach for this scenario was proposed in [98]. Coded cooperation with turbo and LDPC codes achieves diversity and a noticeable gain over non-cooperative networks.

5.2 Multi-source Cooperative Relay Network

In emerging networks, such as cellular relaying networks, a relay will be shared by multiple sources and utilized to forward messages to one or several common destinations. This scenario is also relevant for wireless sensor networks, which are generally built as hierarchical structures consisting of a cluster of sensors, intermediate relays which have less stringent restrictions on resources than the sensors and serve exclusively as forwarders, and a central server or access point.

The multi-source cooperative relay network is depicted in Fig. 21. The network consists of M sources, s1 to sM, that transmit to a single destination with the help of a common relay, which uses the decode-and-forward strategy and typically operates in half-duplex mode. The relay decodes the received noisy observations from the sources, and properly generates some extra redundancy


bits for each source which are forwarded to the destination. Such a system is modeled by the multiple access relay channel (MARC). Capacity results for the MARC with independent sources were given in [99–101]. Information-theoretic bounds for the MARC with correlated sources have recently been given in [102]. A common assumption for this network is that transmissions are orthogonalized using TDMA. For the TDMA-MARC, several coding schemes have been proposed in the literature based on regular LDPC codes [103], irregular LDPC codes [104, 105], turbo-codes [106], product codes [107], and serially-concatenated codes [71, 72]. As for the three-node relay channel, the LDPC code constructions find their roots in information theory and require long block sizes to perform well. On the other hand, turbo-like code constructions are better suited to deal with short blocks, a scenario relevant for, e.g., sensor networks. For instance, the serially-concatenated code scheme proposed in [71, 72], despite the use of very simple constituent encoders at the source and at the relay (4-state convolutional encoders) and very short block lengths (96 information bits), achieves very low error rates and offers significant performance gains with respect to the non-cooperative scenario, even for a very large number of sources (e.g., 100). Furthermore, the amount of redundancy per source generated by the relay can be tuned according to performance requirements and/or constraints in terms of throughput, power and quality of the source-to-relay links, by using EXIT charts or density evolution techniques.

Spatially-coupled LDPC codes

The SC-LDPC codes discussed in Section 3.1.3 for the three-node relay channel can be generalized to the multi-source cooperative relay network. For the case of two (possibly correlated) sources, a relaying scheme based on regular punctured systematic SC-LDPC codes was proposed in [108, 109]. The proposed system uses joint source-channel coding for the transmission to both the relay and the destination to exploit the correlation. Since the SC-LDPC codes used in [108, 109] are regular, their design is simple, does not involve the optimization of degree distributions, and reduces to choosing appropriate node degrees of the component codes for given link qualities.

Code structure. Sources s1 and s2 use codes from the ensembles with parameters {d_v^(1), d_c^(1), M^(1), L, w} and {d_v^(2), d_c^(2), M^(2), L, w}, denoted by C1(d_v^(1), d_c^(1), M^(1), L, w) and C2(d_v^(2), d_c^(2), M^(2), L, w), respectively, with parity-check matrices H^1 and H^2. The rates of C1 and C2 must be designed such that the relay is able to decode the source data error-free. In this case, the relay can recover the codewords x^(1) and x^(2) transmitted by sources s1 and s2, respectively. It then generates additional syndrome bits according to

    s = [ H_synd^1  H_synd^2 ] [ x^(1) ; x^(2) ],   (26)

using parity-check matrices H_synd^1 and H_synd^2 from the code ensembles C_synd^1(d_v,synd^(1), d_c,synd^(1), M^(1), L, w) and C_synd^2(d_v,synd^(2), d_c,synd^(2), M^(2), L, w), respectively.

The syndrome bits are transmitted to the destination after encoding by another SC-LDPC code

of rate equal to the capacity of the relay-destination channel (for which the design is independent of the other codes). It is assumed that this code can be decoded error-free, and separately from the other codes in the system. With that assumption, the destination can now decode the source bits using the parity-check matrix H of the overall code

    H [ x^(1) ; x^(2) ] = [ H^1, 0 ; 0, H^2 ; H_corr^1, H_corr^2 ; H_synd^1, H_synd^2 ] [ x^(1) ; x^(2) ] = [ 0 ; 0 ; 0 ; s ].   (27)

Here, H_corr^i = H_Z H_sys^i denotes a matrix which represents the parity-check equations that come from the correlation model. The correlation between the sources is modeled in the following way. Let Z be a binary random variable. The source bits U^(1) and U^(2) for source 1 and source 2, respectively, are independent Bernoulli-1/2 if Z = 0 and identical Bernoulli-1/2 if Z = 1, with Pr(Z = 1) = p. The matrices H_sys^i are used to "extract" the systematic bits of the corresponding codeword, i.e., H_sys^i x^(i) = u^(i). H_Z is a suitable diagonal matrix. Note that the resulting overall code described by H is a bilayer SC-LDPC code where C1 and C2 constitute the first layer of the bilayer structure and C_synd^1 and C_synd^2 constitute the second layer of the bilayer code. In addition, also note that the relay implicitly uses network coding to combine the data from the sources before forwarding it to the destination.
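The network-coding effect of the syndrome former in (26) can be seen in a toy numpy sketch with randomly chosen stand-in matrices (the actual SC-LDPC syndrome formers are structured; these are hypothetical placeholders): the relay's syndrome is the mod-2 sum of the two per-source syndromes, so it jointly protects both codewords.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7                                     # toy codeword length per source
H1_synd = rng.integers(0, 2, (3, n))      # stand-in syndrome-former matrices
H2_synd = rng.integers(0, 2, (3, n))
x1 = rng.integers(0, 2, n)                # codewords recovered at the relay
x2 = rng.integers(0, 2, n)

# Eq. (26): s = [H_synd^1  H_synd^2] [x^(1); x^(2)]  (mod 2)
s = (np.hstack([H1_synd, H2_synd]) @ np.concatenate([x1, x2])) % 2

# The relay implicitly network-codes: s is the XOR of per-source syndromes.
assert (s == (H1_synd @ x1 + H2_synd @ x2) % 2).all()
```

A single vector s thus carries check information about both sources, which is why the destination must decode them jointly.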

It is demonstrated in [108, 109] that the bilayer SC-LDPC codes described above approach the capacity of this relay network for both independent and correlated sources. Furthermore, it is observed that the typical threshold saturation effect for spatially-coupled systems also applies in this scenario.

In Figure 22 we give two examples of the performance of SC-LDPC codes for the multi-source cooperative relay network with two sources, for the case of independent sources (left) and correlated sources (right). Here, we consider the case of BEC links. To compare the performance of SC-LDPC codes with the theoretical limits, we define the achievable (εs1d, εs2d)-region, where εsid denotes the erasure probability of the link between source si and the destination, as the region of all pairs of erasure probabilities (εs1d, εs2d) for which both users can be decoded successfully at the destination. This region can be computed using density evolution [109]. In the figure we plot the achievable (εs1d, εs2d)-region together with the theoretical limit [109]. There is a uniform gap of less than 0.02 between the density evolution results and the theoretical limit. It can also be seen that a lower link quality for one user is compensated by a better quality on the other link. Note that this is possible only because the data of both sources are combined via network coding at the relay and jointly decoded at the destination. When the correlation is increased, the theoretical region as well as the achievable region of the SC-LDPC code become bigger. For the highly-correlated case, the additional parity checks due to the correlation model in the factor graph of the decoder allow successful decoding at higher channel erasure probabilities.
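Density evolution on the BEC, used above to compute the achievable region, reduces to iterating a scalar recursion. The sketch below illustrates the principle for a single regular (dv, dc) LDPC ensemble (the two-source bilayer recursion of [109] is more involved): bisect on the channel erasure probability and check whether the erasure fraction is driven to zero.

```python
def de_converges(eps, dv, dc, max_iter=2000, tol=1e-9):
    """BEC density evolution for a regular (dv, dc) ensemble:
    x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1)."""
    x = eps
    for _ in range(max_iter):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True          # erasure probability driven to zero
    return False

def bec_threshold(dv, dc, steps=30):
    """Bisect for the largest channel erasure probability with vanishing
    decoding erasures (the ensemble's BP threshold)."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if de_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3, 6) ensemble this bisection lands close to the well-known BEC threshold of about 0.429.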

6 Summary and Conclusions

When designing a communication scheme for the relay channel, we can choose from a large set of relaying strategies like, e.g., decode-and-forward, amplify-and-forward, and compress-and-forward. From a code design perspective, decode-and-forward relaying is the most attractive strategy. It ensures that the transmitted messages are known by the cooperating relay nodes such that a distributed coding scheme can be set up. The information theory literature has provided fundamental communication strategies for such scenarios. In this chapter, we started with the simplest cooperative network, the three-node relay channel under decode-and-forward relaying, to explain



Figure 22: Achievable (εs1d, εs2d)-region for an SC-LDPC code and theoretical limit for independent sources (left) and correlated sources with p = 0.3 (right) over the binary erasure channel.

the main idea of the information theoretic concepts and to show how these concepts can be interpreted in terms of channel code designs. We considered both half-duplex and full-duplex relaying and explained two different strategies for obtaining the highest possible achievable rates. The first strategy, regular encoding if full-duplex relaying is considered, can be conveniently mimicked by rate-compatible code designs, which also makes an adaptation to the half-duplex case possible. The second strategy, a so-called binning strategy, can be realized by nested codes. We identified three linear block code structures that can be employed to implement the different strategies. We discussed code design and optimization issues taking low-density parity-check (LDPC) block codes and spatially-coupled LDPC codes as special cases. Accordingly, two different approaches for designing optimal LDPC codes for the relay channel were presented in this chapter. We also provided a survey on distributed code designs that are based on convolutional codes and turbo-like codes. These code designs are especially relevant in practical systems. We then drew examples to illustrate how the different code design approaches can be extended to more complex network topologies like, e.g., the multi-source cooperative relay channel. To complement our survey on code design for decode-and-forward relaying, we also provided a brief summary on implementation aspects of compress-and-forward relaying schemes.

The survey in this chapter has shown that coding for the three-node relay channel under idealized conditions is well understood. Unfortunately, these ideal conditions do not apply in reality, and there are challenges that remain to be solved. One of the biggest challenges is scalability. In the three-node setup it may be a reasonable assumption that the channel-state information (CSI) that is required to determine the optimal code parameters can be distributed among the cooperating terminals. However, this becomes an issue in large network topologies where a distributed code is constructed over many hops, involving a large number of cooperating nodes. In this setup, techniques are required that allow the cooperating nodes to identify the optimal transmission, decoding and cooperation schedule in a distributed fashion. We see another challenge in identifying the optimal performance of cooperative communication schemes under finite-length and complexity constraints and in designing practical coding schemes that achieve this optimal performance.

Acknowledgements

The authors would like to thank Christian Häger for proofreading this chapter and helping draw some of the figures.

References

[1] E. C. van der Meulen, “Three-terminal communication channels,” Advances in Applied Probability, vol. 3, pp. 120–154, 1971.

[2] T. Cover and A. El Gamal, “Capacity theorems for the relay channel,” IEEE Transactions on Information Theory, vol. 25, no. 5, pp. 572–584, Sep. 1979.

[3] J. Laneman, D. Tse, and G. Wornell, “Cooperative diversity in wireless networks: Efficient protocols and outage behavior,” IEEE Transactions on Information Theory, vol. 50, no. 12, pp. 3062–3080, Dec. 2004.

[4] M. Janani, A. Hedayat, T. Hunter, and A. Nosratinia, “Coded cooperation in wireless communications: space-time transmission and iterative decoding,” IEEE Transactions on Signal Processing, vol. 52, no. 2, pp. 362–371, Feb. 2004.

[5] A. Høst-Madsen and J. Zhang, “Capacity bounds and power allocation for wireless relay channel,” IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 2020–2040, Jun. 2005.

[6] D. M. Pozar, Microwave Engineering. John Wiley & Sons, 1998.

[7] M. L. Honig (Editor), Advances in Multiuser Detection. John Wiley & Sons, 2009.

[8] A. El Gamal and S. Zahedi, “Capacity of a class of relay channels with orthogonal components,” IEEE Transactions on Information Theory, vol. 51, no. 5, pp. 1815–1817, May 2005.

[9] Y. Liang and V. V. Veeravalli, “Gaussian orthogonal relay channels: Optimal resource allocation and capacity,” IEEE Transactions on Information Theory, vol. 51, no. 9, pp. 3284–3289, Sep. 2005.

[10] G. Kramer, P. Gupta, and M. Gastpar, “Cooperative strategies and capacity theorems for relay networks,” IEEE Transactions on Information Theory, vol. 51, no. 9, pp. 3036–3063, Sep. 2005.

[11] T. Riihonen, S. Werner, and R. Wichman, “Mitigation of loopback self-interference in full-duplex MIMO relays,” IEEE Transactions on Signal Processing, vol. 59, no. 12, pp. 5983–5993, Dec. 2011.

[12] G. Kramer, Topics in Multi-User Information Theory. Now Publishers, 2008, vol. 4.

[13] M. A. Khojastepour, A. Sabharwal, and B. Aazhang, “On capacity of Gaussian ‘cheap’ relay channel,” in Proc. IEEE Global Telecommunications Conference (GLOBECOM), Jan. 2003.


[14] J. Ezri and M. Gastpar, “On the performance of independently designed ldpc codes for therelay channel,” in Proc. IEEE International Symposium on Information Theory (ISIT), 2006,pp. 977–981.

[15] S. ten Brink, “Convergence behavior of iteratively decoded parallel concatenated codes,”IEEE Transactions on Communications, vol. 49, no. 10, pp. 1727–1737, Oct. 2001.

[16] T. Richardson and R. Urbanke, Model Coding Theory. Cambridge University Press, 2008.

[17] R. Zamir, S. Shamai, and U. Erez, “Nested linear/lattice codes for structured multiterminalbinning,” IEEE Transactions on Information Theory, vol. 48, no. 6, pp. 1250–1276, 2002.

[18] P. Razaghi and W. Yu, “Bilayer low-density parity-check codes for decode-and-forward inrelay channels,” IEEE Transactions on Information Theory, vol. 53, no. 10, pp. 3723–3739,2007.

[19] A. Chakrabarti, A. De Baynast, A. Sabharwal, and B. Aazhang, “Low density parity checkcodes for the relay channel,” IEEE-JSAC, vol. 25, no. 2, pp. 280–291, 2007.

[20] J. Cances and V. Meghdadi, “Optimized low density parity check codes designs for halfduplex relay channels,” IEEE Transactions on Wireless Communications, vol. 8, no. 7, pp.3390–3395, 2009.

[21] M. Azmi, J. Yuan, G. Lechner, and L. Rasmussen, “Design of multi-edge-type bilayer-expurgated LDPC codes for decode-and-forward in relay channels,” IEEE Transactions onCommunications, vol. 59, no. 11, pp. 2993 –3006, Nov. 2011.

[22] M. Azmi, J. Yuan, J. Ning, and H. Huynh, “Improved bilayer LDPC codes using irregularcheck node degree distribution,” in Proc. IEEE International Symposium on InformationTheory (ISIT), 2008, pp. 141–145.

[23] T. V. Nguyen, A. Nosratinia, and D. Divsalar, “Bilayer protograph codes for half-duplexrelay channels,” in Proc. IEEE International Symposium on Information Theory (ISIT),Jun. 2010, pp. 948 –952.

[24] J. Ha, J. Kim, and S. McLaughlin, “Rate-compatible puncturing of low-density parity-checkcodes,” IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2824–2836, 2004.

[25] C. Hsu and A. Anastasopoulos, “Capacity achieving LDPC codes through puncturing,” IEEETransactions on Information Theory, vol. 54, no. 10, pp. 4698–4706, 2008.

[26] M. El-Khamy, J. Hou, and N. Bhushan, “Design of rate-compatible structured LDPC codesfor hybrid ARQ applications,” IEEE Journal on Selected Areas in Communications, vol. 27,no. 6, pp. 965–973, 2009.

[27] G. Yue, X. Wang, and M. Madihian, “Design of rate-compatible irregular repeat accumulatecodes,” IEEE Transactions on Communications, vol. 55, no. 6, pp. 1153–1163, 2007.

[28] N. Jacobsen and R. Soni, “Design of rate-compatible irregular LDPC codes based on edge growth and parity splitting,” in Proc. IEEE Vehicular Technology Conference (VTC-2007 Fall), 2007, pp. 1052–1056.


[29] C. Li, G. Yue, X. Wang, and M. Khojastepour, “LDPC code design for half-duplex cooperative relay,” IEEE Transactions on Wireless Communications, vol. 7, no. 11, pp. 4558–4567, Nov. 2008.

[30] T. Hashimoto, T. Wakiyama, K. Ishibashi, and T. Wada, “A proposal of new hybrid ARQ scheme using rate compatible LDPC codes for multi-hop transmissions,” in Proc. IEEE Vehicular Technology Conference (VTC), Sep. 2009.

[31] J. Hu and T. Duman, “Low density parity check codes over wireless relay channels,” IEEE Transactions on Wireless Communications, vol. 6, no. 9, pp. 3384–3394, 2007.

[32] C. Li, G. Yue, M. Khojastepour, X. Wang, and M. Madihian, “LDPC-coded cooperative relay systems: performance analysis and code design,” IEEE Transactions on Communications, vol. 56, no. 3, pp. 485–496, 2008.

[33] H. Uchikawa, K. Kasai, and K. Sakaniwa, “Spatially coupled LDPC codes for decode-and-forward in erasure relay channel,” in Proc. IEEE International Symposium on Information Theory (ISIT), Aug. 2011, pp. 1474–1478.

[34] V. Rathi, M. Andersson, R. Thobaben, J. Kliewer, and M. Skoglund, “Performance analysis and design of two edge-type LDPC codes for the BEC wiretap channel,” IEEE Transactions on Information Theory, vol. 59, no. 2, pp. 1048–1064, Feb. 2013.

[35] Z. Si, R. Thobaben, and M. Skoglund, “Bilayer LDPC convolutional codes for half-duplex relay channels,” in Proc. IEEE International Symposium on Information Theory (ISIT), 2011, pp. 1464–1468.

[36] ——, “Dynamic decode-and-forward relaying with rate-compatible LDPC convolutional codes,” in Proc. International Symposium on Communications, Control and Signal Processing (ISCCSP), May 2012, pp. 1–5.

[37] ——, “Bilayer LDPC convolutional codes for decode-and-forward relaying,” IEEE Transactions on Communications (submitted), 2012.

[38] A. J. Felström and K. S. Zigangirov, “Time-varying periodic convolutional codes with low-density parity-check matrix,” IEEE Transactions on Information Theory, vol. 45, no. 6, pp. 2181–2191, Sep. 1999.

[39] M. Lentmaier, A. Sridharan, D. J. Costello, and K. S. Zigangirov, “Iterative decoding threshold analysis for LDPC convolutional codes,” IEEE Transactions on Information Theory, vol. 56, no. 10, pp. 5274–5289, Oct. 2010.

[40] S. Kudekar, T. Richardson, and R. Urbanke, “Threshold saturation via spatial coupling: Why convolutional LDPC ensembles perform so well over the BEC,” IEEE Transactions on Information Theory, vol. 57, no. 2, pp. 803–834, Feb. 2011.

[41] ——, “Spatially coupled ensembles universally achieve capacity under belief propagation,” in Proc. IEEE International Symposium on Information Theory (ISIT), Jul. 2012, pp. 453–457.

[42] Z. Si, R. Thobaben, and M. Skoglund, “Rate-compatible LDPC convolutional codes achieving the capacity of the BEC,” IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 4021–4029, Jun. 2012.


[43] A. Yedla, Y.-Y. Jian, P. Nguyen, and H. Pfister, “A simple proof of threshold saturation for coupled scalar recursions,” in Proc. 7th International Symposium on Turbo Codes and Iterative Information Processing (ISTC), Aug. 2012, pp. 51–55.

[44] ——, “A simple proof of threshold saturation for coupled vector recursions,” in Proc. Information Theory Workshop, Sep. 2012.

[45] M. Valenti and B. Zhao, “Distributed turbo codes: towards the capacity of the relay channel,” in Proc. IEEE Vehicular Technology Conference (VTC), vol. 1, 2003, pp. 322–326.

[46] B. Zhao and M. Valenti, “Distributed turbo coded diversity for relay channel,” Electronics Letters, vol. 39, no. 10, pp. 786–787, 2003.

[47] M. Janani, A. Hedayat, T. Hunter, and A. Nosratinia, “Coded cooperation in wireless communications: space-time transmission and iterative decoding,” IEEE Transactions on Signal Processing, vol. 52, no. 2, pp. 362–371, 2004.

[48] Z. Zhang, I. Bahceci, and T. Duman, “Capacity approaching codes for relay channels,” in Proc. IEEE International Symposium on Information Theory (ISIT), 2004, p. 2.

[49] Z. Zhang and T. Duman, “Capacity approaching turbo coding for half duplex relaying,” in Proc. IEEE International Symposium on Information Theory (ISIT), 2005, pp. 1888–1892.

[50] ——, “Capacity-approaching turbo coding and iterative decoding for relay channels,” IEEE Transactions on Communications, vol. 53, no. 11, pp. 1895–1905, 2005.

[51] Y. Zhang and M. Amin, “Distributed Turbo-BLAST for cooperative wireless networks,” in Proc. IEEE Workshop on Sensor Array and Multichannel Processing, 2006, pp. 452–455.

[52] S. Roy and T. Duman, “Performance bounds for turbo coded half duplex relay systems,” in Proc. IEEE International Conference on Communications (ICC), vol. 4, 2006, pp. 1586–1591.

[53] Z. Zhang and T. Duman, “Capacity-approaching turbo coding for half-duplex relaying,” IEEE Transactions on Communications, vol. 55, no. 10, pp. 1895–1906, 2007.

[54] Y. Li, B. Vucetic, and J. Yuan, “Distributed turbo coding with hybrid relaying protocols,” in Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2008, pp. 1–6.

[55] M. Karim, J. Yuan, and Z. Chen, “Improved distributed turbo code for relay channels,” in Proc. IEEE Vehicular Technology Conference (VTC), 2009, pp. 1–5.

[56] L. Cao, J. Zhang, and N. Kanno, “Relay-coded multi-user cooperative communications for uplink LTE-Advanced 4G systems,” in Proc. International Conference on Wireless Communications, Networking and Mobile Computing (WiCom), 2009, pp. 1–6.

[57] E. Obiedat, G. Chen, and L. Cao, “Distributed turbo product codes over multiple relays,” in Proc. IEEE Consumer Communications and Networking Conference (CCNC), 2010, pp. 1–5.

[58] M. Karim, J. Yuan, and Z. Chen, “Nested distributed turbo code for relay channels,” in Proc. IEEE Vehicular Technology Conference (VTC), 2010, pp. 1–5.


[59] P. Tan, C. Ho, and S. Sun, “Design of distributed multiple turbo codes for block-fading relay channels,” in Proc. IEEE International Conference on Communications (ICC), 2010, pp. 1–5.

[60] K. Ishibashi, K. Ishii, and H. Ochiai, “Dynamic coded cooperation using multiple turbo codes in wireless relay networks,” IEEE Journal on Selected Areas in Communications, vol. 5, no. 1, pp. 197–207, 2011.

[61] C. Xu, S. Ng, and L. Hanzo, “Near-capacity irregular convolutional coded cooperative differential linear dispersion codes using multiple-symbol differential decoding aided non-coherent detection,” in Proc. IEEE International Conference on Communications (ICC), 2011, pp. 1–5.

[62] K. Anwar and T. Matsumoto, “Accumulator-assisted distributed turbo codes for relay systems exploiting source-relay correlation,” IEEE Communications Letters, 2012.

[63] R. Liu, P. Spasojevic, and E. Soljanin, “User cooperation with punctured turbo codes,” in Proc. Allerton Conference on Communication, Control, and Computing, vol. 41, no. 3, 2003, pp. 1690–1699.

[64] ——, “Cooperative diversity with incremental redundancy turbo coding for quasi-static wireless networks,” in Proc. IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2005, pp. 791–795.

[65] J. Liu and X. Cai, “Cooperative communication using complementary punctured convolutional (CPC) codes over wireless fading channels,” in Proc. IEEE Wireless Communications and Networking Conference (WCNC), 2008, pp. 1290–1293.

[66] K. Ishibashi, K. Ishii, and H. Ochiai, “Design of adaptive coded cooperation using rate compatible turbo codes,” in Proc. IEEE Vehicular Technology Conference (VTC), 2009, pp. 1–5.

[67] A. Agustin, J. Vidal, E. Calvo, M. Lamarca, and O. Muñoz, “Hybrid turbo FEC/ARQ systems and distributed space-time coding for cooperative transmission in the downlink,” in Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), vol. 1, 2004, pp. 380–384.

[68] A. Agustin, J. Vidal, E. Calvo, and O. Muñoz, “Evaluation of turbo H-ARQ schemes for cooperative MIMO transmission,” in Proc. International Workshop on Wireless Ad-Hoc Networks, 2004, pp. 20–24.

[69] A. Agustin, J. Vidal, and O. Muñoz, “Hybrid turbo FEC/ARQ systems and distributed space-time coding for cooperative transmission,” International Journal of Wireless Information Networks, vol. 12, no. 4, pp. 263–280, 2005.

[70] I. Khalil, A. Khan et al., “Turbo-coded HARQ with switchable relaying mechanism in single hop cooperative wireless system,” in Proc. Frontiers of Information Technology (FIT), 2011, pp. 253–257.

[71] R. Youssef and A. Graell i Amat, “Distributed turbo-like codes for multi-user cooperative relay networks,” in Proc. IEEE International Conference on Communications (ICC), May 2010, pp. 1–5.


[72] ——, “Distributed serially concatenated codes for multi-source cooperative relay networks,” IEEE Transactions on Wireless Communications, vol. 10, no. 1, pp. 253–263, Jan. 2011.

[73] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, “Serial concatenation of interleaved codes: Performance analysis, design and iterative decoding,” IEEE Transactions on Information Theory, vol. 44, no. 3, pp. 909–926, May 1998.

[74] Y. Cao and B. Vojcic, “Cooperative coding using serial concatenated convolutional codes,” in Proc. IEEE Wireless Communications and Networking Conference (WCNC), vol. 2, 2005, pp. 1001–1006.

[75] Z. Si, R. Thobaben, and M. Skoglund, “On distributed serially concatenated codes,” in Proc. IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2009, pp. 653–657.

[76] ——, “A practical approach to adaptive coding for the three-node relay channel,” in Proc. IEEE Wireless Communications and Networking Conference (WCNC), 2010, pp. 1–6.

[77] ——, “Adaptive channel coding for the three-node relay channel with limited channel-state information,” in Proc. International Symposium on Communications, Control and Signal Processing (ISCCSP), 2010, pp. 1–5.

[78] R. Thobaben, “On distributed codes with noisy relays,” in Proc. Asilomar Conference on Signals, Systems, and Computers, 2008, pp. 1010–1014.

[79] Z. Liu, V. Stankovic, and Z. Xiong, “Wyner-Ziv coding for the half-duplex relay channel,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, 2005, p. V-1113.

[80] Z. Liu, M. Uppal, V. Stankovic, and Z. Xiong, “Compress-forward coding with BPSK modulation for the half-duplex Gaussian relay channel,” in Proc. IEEE International Symposium on Information Theory (ISIT), Jul. 2008, pp. 2395–2399.

[81] M. Uppal, Z. Liu, V. Stankovic, and Z. Xiong, “Compress-forward coding with BPSK modulation for the half-duplex Gaussian relay channel,” IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4467–4481, Nov. 2009.

[82] R. Blasco-Serrano, R. Thobaben, V. Rathi, and M. Skoglund, “Polar codes for compress-and-forward in binary relay channels,” in Proc. Asilomar Conference on Signals, Systems, and Computers, 2010, pp. 1743–1747.

[83] H. Sneesens and L. Vandendorpe, “Soft decode and forward improves cooperative communications,” in Proc. IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2005, pp. 157–160.

[84] Y. Li, B. Vucetic, T. Wong, and M. Dohler, “Distributed turbo coding with soft information relaying in multihop relay networks,” IEEE Journal on Selected Areas in Communications, vol. 24, no. 11, pp. 2040–2050, 2006.


[85] A. Chakrabarti, A. de Baynast, A. Sabharwal, and B. Aazhang, “Half-duplex estimate-and-forward relaying: Bounds and code design,” in Proc. IEEE International Symposium on Information Theory (ISIT), 2006, pp. 1239–1243.

[86] H. Sneessens, J. Louveaux, and L. Vandendorpe, “Turbo-coded decode-and-forward strategy resilient to relay errors,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2008, pp. 3213–3216.

[87] P. Weitkemper, D. Wübben, V. Kühn, and K. Kammeyer, “Soft information relaying for wireless networks with error-prone source-relay link,” in Proc. International ITG Conference on Source and Channel Coding (SCC), 2008, pp. 1–6.

[88] G. Al-Habian, A. Ghrayeb, M. Hasna, and A. Abu-Dayya, “Distributed turbo coding using log-likelihood thresholding for cooperative communications,” in Proc. Asilomar Conference on Signals, Systems, and Computers, 2008, pp. 1005–1009.

[89] E. Obiedat, W. Xiang, J. Leis, and L. Cao, “Soft incremental redundancy for distributed turbo product codes,” in Proc. IEEE Consumer Communications and Networking Conference (CCNC), 2010, pp. 1–5.

[90] E. Obiedat and L. Cao, “Soft information relaying for distributed turbo product codes (SIR-DTPC),” IEEE Signal Processing Letters, vol. 17, no. 4, pp. 363–366, 2010.

[91] Y. Peng, M. Wu, H. Zhao, W. Wang et al., “Cooperative network coding with soft information relaying in two-way relay channels,” Journal of Communications, vol. 4, no. 11, pp. 849–855, 2009.

[92] R. Lin, P. Martin, and D. Taylor, “Cooperative signaling with soft information combining,” Journal of Electrical and Computer Engineering, vol. 2010, p. 10, 2010.

[93] M. Molu and N. Goertz, “A study on relaying soft information with error prone relays,” in Proc. Allerton Conference on Communication, Control, and Computing, 2011, pp. 1785–1792.

[94] M. Azmi, J. Li, J. Yuan, and R. Malaney, “Soft decode-and-forward using LDPC coding in half-duplex relay channels,” in Proc. IEEE International Symposium on Information Theory (ISIT), 2011, pp. 1479–1483.

[95] D. Duyck, J. Boutros, and M. Moeneclaey, “Low-density graph codes for coded cooperation on slow fading relay channels,” IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4202–4218, 2011.

[96] M. Janani, A. Hedayat, T. E. Hunter, and A. Nosratinia, “Coded cooperation in wireless communications: space-time transmission and iterative decoding,” IEEE Transactions on Signal Processing, vol. 52, no. 2, pp. 362–371, Feb. 2004.

[97] A. Nosratinia, T. E. Hunter, and A. Hedayat, “Cooperative communication in wireless networks,” IEEE Communications Magazine, vol. 42, pp. 68–73, Oct. 2004.

[98] L. Xiao, T. Fuja, J. Kliewer, and D. Costello, “A network coding approach to cooperative diversity,” IEEE Transactions on Information Theory, vol. 53, no. 10, pp. 3714–3722, Oct. 2007.


[99] G. Kramer and A. van Wijngaarden, “On the white Gaussian multiple-access relay channel,” in Proc. IEEE International Symposium on Information Theory (ISIT), Jun. 2000, p. 40.

[100] G. Kramer, P. Gupta, and M. Gastpar, “Information-theoretic multihopping for relay networks,” in Proc. International Zurich Seminar on Communications, 2004, pp. 192–195.

[101] L. Sankaranarayanan, G. Kramer, and N. Mandayam, “Capacity theorems for the multiple-access relay channel,” in Proc. Allerton Conference on Communications, Control and Computing, 2004.

[102] Y. Murin, R. Dabora, and D. Gündüz, “Source-channel coding theorems for the multiple-access relay channel,” CoRR, vol. abs/1106.3713, 2011.

[103] C. Hausl, F. Schreckenbach, I. Oikonomidis, and G. Bauch, “Iterative network and channel decoding on a Tanner graph,” in Proc. Allerton Conference on Communications, Control and Computing, 2005.

[104] J. Li, J. Yuan, R. Malaney, M. Azmi, and M. Xiao, “Network coded LDPC code design for a multi-source relaying system,” IEEE Transactions on Wireless Communications, vol. 10, no. 5, pp. 1538–1551, May 2011.

[105] Y. Li, G. Song, and L. Wang, “Design of joint network-low density parity check codes based on the EXIT charts,” IEEE Communications Letters, vol. 13, no. 8, pp. 600–602, Aug. 2009.

[106] S. Tang, J. Cheng, C. Sun, R. Suzuki, and S. Obana, “Turbo network coding for efficient and reliable relay,” in Proc. IEEE Singapore International Conference on Communication Systems (ICCS), 2008, pp. 1603–1608.

[107] R. Pyndiah, F. Guilloud, and K. Amis, “Multiple source cooperative coding using turbo product codes with a noisy relay,” in Proc. 6th International Symposium on Turbo Codes and Iterative Information Processing (ISTC), Sep. 2010, pp. 98–102.

[108] S. Schwandter, A. Graell i Amat, and G. Matz, “Spatially coupled LDPC codes for two-user decode-and-forward relaying,” in Proc. 7th International Symposium on Turbo Codes and Iterative Information Processing (ISTC), Aug. 2012, pp. 46–50.

[109] ——, “Spatially-coupled LDPC codes for decode-and-forward relaying of two correlated sources over the binary erasure channel,” IEEE Transactions on Communications (submitted).
