
    Source-Channel Coding for Fading Channels

    with Correlated Interference

    Ahmad Abou Saleh, Wai-Yip Chan, and Fady Alajaji

    Abstract

    We consider the problem of sending a Gaussian source over a fading channel with Gaussian

    interference known to the transmitter. We study joint source-channel coding schemes for the case of

    unequal bandwidth between the source and the channel and when the source and the interference are

    correlated. An outer bound on the system’s distortion is first derived by assuming additional information

    at the decoder side. We then propose layered coding schemes based on proper combination of power

    splitting, bandwidth splitting, Wyner-Ziv and hybrid coding. More precisely, a hybrid layer, that uses

    the source and the interference, is concatenated (superimposed) with a purely digital layer to achieve

    bandwidth expansion (reduction). The achievable (square error) distortion region of these schemes under

    matched and mismatched noise levels is then analyzed. Numerical results show that the proposed

schemes perform close to the best derived bound and are resilient to channel noise mismatch. As

    an application of the proposed schemes, we derive both inner and outer bounds on the source-channel-

    state distortion region for the fading channel with correlated interference; the receiver, in this case, aims

    to jointly estimate both the source signal as well as the channel-state (interference).

    Index Terms

    Joint source-channel coding, distortion region, correlated interference, dirty paper coding, hybrid

    digital-analog coding, fading channels.

    This work was supported in part by NSERC of Canada.

    A. Abou Saleh and W-Y. Chan are with the Department of Electrical and Computer Engineering, Queen’s University, Kingston,

    ON, K7L 3N6, Canada (e-mail: [email protected]; [email protected]).

    F. Alajaji is with the Department of Mathematics and Statistics, Queen’s University, Kingston, ON, K7L 3N6, Canada (e-mail:

    [email protected]).


    I. INTRODUCTION

    The traditional approach for analog source transmission in point-to-point communications

    systems is to employ separate source and channel coders. This separation is (asymptotically)

    optimal given unlimited delay and complexity in the coders [1]. There are, however, two disad-

    vantages associated with digital transmission. One is the threshold effect: the system typically

    performs well at the design noise level, while its performance degrades drastically when the

true noise level is higher than the design level. This effect is due to the quantizer's sensitivity

    to channel errors and the eventual breakdown of the employed error correcting code at high

noise levels (no matter how powerful it is). The other is the levelling-off effect: as the noise

    level decreases, the performance remains constant beyond a certain threshold. This is due to the

    non-recoverable distortion introduced by the quantizer which limits the system performance at

    low noise levels. Joint source-channel coding (JSCC) schemes are more robust to noise level

    mismatch than tandem systems which use separate source and channel coding. Analog JSCC

    schemes are studied in [2]–[11]. These schemes are based on the so-called direct source-channel

mappings. A family of hybrid digital-analog (HDA) schemes is introduced in [12]–[14] to

    overcome the threshold and the levelling-off effects. In [15]–[17], HDA schemes are proposed

    for broadcast channels and Wyner-Ziv systems.

    It is well known that for the problem of transmitting a Gaussian source over an additive white

    Gaussian noise (AWGN) channel with interference that is known to the transmitter, a tandem

    Costa coding scheme, which comprises an optimal source encoder followed by Costa’s dirty

    paper channel code [18], is optimal in the absence of correlation between the source and the

    interference. In [19], the authors studied the same problem as in [18] and proposed an HDA

    scheme (for the matched bandwidth case) that is able to achieve the optimal performance (same

    as the tandem Costa scheme). In [20], the authors adapted the scheme proposed in [19] for

    the bandwidth reduction case. In [21], the authors proposed an HDA scheme for broadcasting

    correlated sources and showed that their scheme is optimal whenever the uncoded scheme

    of [22] is not. In [23], the authors studied HDA schemes for broadcasting correlated sources

    under mismatched source-channel bandwidth; in [24], the authors studied the same problem

    and proposed a tandem scheme based on successive coding. In [25], we derived inner and outer

    bounds on the system’s distortion for the broadcast channel with correlated interference. Recently,


[26] studied a joint source-channel coding scheme for transmitting an analog Gaussian source over an

    AWGN channel with interference known to the transmitter and correlated with the source. The

    authors proposed two schemes for the matched source-channel bandwidth; the first one is the

    superposition of the uncoded signal and a digital signal resulting from the concatenation of a

    Wyner-Ziv coder [27] and a Costa coder, while in the second scheme the digital part is replaced

by an HDA part proposed in [19]. In [28], we considered the problem of [26] under bandwidth

    expansion; more precisely, we studied both low and high-delay JSCC schemes. The limiting

case of this problem, where the source and the interference are fully correlated, was studied

    in [29]; the authors showed that a purely analog scheme (uncoded) is optimal. Moreover, they

    also considered the problem of sending a digital (finite alphabet) source in the presence of

interference, where the interference is independent of the source. More precisely, the optimal

    tradeoff between the achievable rate for transmitting the digital source and the distortion in

    estimating the interference is studied; they showed that the optimal rate-state-distortion tradeoff

    is achieved by a coding scheme that uses a portion of the power to amplify the interference and

    uses the remaining power to transmit the digital source via Costa coding. In [30], the authors

    considered the same problem as in [29] but with imperfect knowledge of the interference at the

    transmitter side.

    In this work, we study the reliable transmission of a memoryless Gaussian source over a

    Rayleigh fading channel with known correlated interference at the transmitter. More precisely,

    we consider equal and unequal source-channel bandwidths and analyze the achievable distortion

    region under matched and mismatched noise levels. We propose a layered scheme based on hybrid

coding. One application of JSCC with correlated interference can be found in sensor networks
and cognitive radio channels where two nodes interfere with each other. One node transmits
its signal directly; the other, however, is able to detect its neighbouring node's transmission and treat

    it as a correlated interference. In [31], we studied this problem under low-delay constraints;

    more specifically, we designed low-delay source-channel mappings based on joint optimization

    between the encoder and the decoder. One interesting application of this problem is to study

    the source-channel-state distortion region for the fading channel with correlated interference;

    in that case, the receiver side is interested in estimating both the source and the channel-state

    (interference). Inner and outer bounds on the source-interference distortion region are established.

    Our setting contains several interesting limiting cases. In the absence of fading and for the


    matched source-channel bandwidth, our system reverts to that of [26]; for the uncorrelated source-

    interference scenario without fading, our problem reduces to the one in [20] for the bandwidth

    reduction case. Moreover, the source-channel-state transmission scenario generalizes the setting

    in [29] to include fading and correlation between source and interference. The rest of the paper

    is organized as follows. In Section II, we present the problem formulation. In Section III, we

    derive an outer bound and introduce linear and tandem digital schemes. In Section IV, we

    derive inner bounds (achievable distortion region) under both matched and mismatched noise

    levels by proposing layered hybrid coding schemes. We extend these inner and outer bounds to

    the source-channel-state communication scenario in Section V. Finally, conclusions are drawn

    in Section VI.

    Throughout the paper, we will use the following notation. Vectors are denoted by characters

superscripted by their dimensions. For a given vector X^N = (X(1), ..., X(N))^T, we let [X^N]_1^K and [X^N]_{K+1}^N denote the sub-vectors [X^N]_1^K ≜ (X(1), ..., X(K))^T and [X^N]_{K+1}^N ≜ (X(K+1), ..., X(N))^T, respectively, where (·)^T is the transpose operator. When there is no confusion, we also write [X^N]_1^K as X^K. When all samples in a vector are independent and identically distributed (i.i.d.), we drop the indexing when referring to a sample in a vector (i.e., X(i) = X).

    II. PROBLEM FORMULATION AND MAIN CONTRIBUTIONS

    We consider the transmission of a Gaussian source V K = (V (1), ..., V (K))T ∈ RK over

    a Rayleigh fading channel in the presence of Gaussian interference SN ∈ RN known at the

    transmitter (see Fig. 1). The source vector V K represents the first K samples of V max (K,N); SN

    is similarly defined. The source vector V K , which is composed of i.i.d. samples, is transformed

    into an N dimensional channel input XN ∈ RN using a nonlinear mapping function, in general,

    α(.) : RK × RN → RN . The received symbol is Y N = FN(XN + SN) + WN , where addition

    and multiplication are component-wise, FN represents an N -block Rayleigh fading that is

    independent of (V K ;SN ;WN) and known to the receiver side only, XN = α(V K , SN), SN

    is an i.i.d. Gaussian interference vector (with each sample S ∼ N (0, σ2S)) that is considered to

    be the output of a side channel with input V max (K,N) as shown in Fig. 1, and each sample in the

    additive noise WN is drawn from a Gaussian distribution (W ∼ N (0, σ2W )) independently from

    both the source and the interference. Unlike the typical dirty paper problem which assumes

    an AWGN channel with interference (that is uncorrelated to the source) [18], we consider a


    fading channel and assume that V K and SN are jointly Gaussian. Since the fading realization

    is known only at the receiver, we have partial knowledge of the actual interference FNSN at

    the transmitter. In this work, we assume that only V (i) and S(i), i = 1, ...,min (K,N), are

    correlated according to the following covariance matrix

Σ_VS = [ σ_V^2        ρσ_V σ_S
         ρσ_V σ_S     σ_S^2 ]     (1)

where σ_V^2, σ_S^2 are, respectively, the variance of the source and the interference, and ρ is the source-interference correlation coefficient. The system operates under an average power constraint P

E[ ||α(V^K, S^N)||^2 ]/N ≤ P     (2)

    where E[(·)] denotes the expectation operator. The reconstructed signal is given by V̂ K =

    γ(Y N , FN), where the decoder is a mapping from RN × RN → RK . The rate of the system is

given by r = N/K channel use/source symbol. When r = 1, the system has equal bandwidth

    between the source and the channel. For r < 1 (r > 1), the system performs bandwidth reduction

    (expansion). According to the correlation model described above, note that for r < 1, the first

    N source samples [V K ]N1 and SN are correlated via the covariance matrix in (1), while the

    remaining K − N samples [V K ]KN+1 and SN are independent. For r > 1, however, V K and

    [SN ]K1 are correlated via the covariance matrix in (1), while VK and [SN ]NK+1 are uncorrelated.

Fig. 1. A K : N system structure over a fading channel with interference known at the transmitter side. The interference S^{max(K,N)} is assumed to be the output of a noisy side channel with input V^{max(K,N)}. V^K represents the first K samples of V^{max(K,N)} (S^N is defined similarly). The fading coefficient is assumed to be known at the receiver side; the transmitter side, however, knows the fading distribution only.

    In this paper, we aim to find a source-channel encoder α and decoder γ that minimize the mean

    square error (MSE) distortion D = E[||V K − V̂ K ||2]/K under the average power constraint in

    (2). For a particular coding scheme (α, γ), the performance is determined by the channel power

    constraint P , the fading distribution, the system rate r, and the incurred distortion D at the


    receiver. For a given power constraint P , fading distribution and rate r, the distortion region is

    defined as the closure of all distortions Do for which (P,Do) is achievable. A power-distortion

    pair is achievable if for any δ > 0, there exist sufficiently large integers K and N with N/K = r,

    a pair of encoding and decoding functions (α, γ) satisfying (2), such that D < Do + δ. In this

    work, we analyze the distortion for equal and unequal bandwidths between the source and the

channel with no constraint on the delay (i.e., both N and K tend to infinity with N/K = r fixed).
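To make the channel model concrete, the following Python sketch simulates one block of the system above: it draws a correlated (V, S) pair according to (1), applies an encoder, and produces Y^N = F^N(X^N + S^N) + W^N with Rayleigh fading known at the receiver only. It is illustrative only; the block length, correlation coefficient, noise level and the uncoded placeholder encoder/decoder are our own example choices, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not prescribed by the paper).
K, N = 1000, 1000            # matched bandwidth, r = N/K = 1
sigma_v, sigma_s = 1.0, 1.0
rho, P, sigma_w2 = 0.5, 1.0, 0.1

# Correlated (V, S) samples following the covariance matrix in (1).
cov = [[sigma_v**2, rho * sigma_v * sigma_s],
       [rho * sigma_v * sigma_s, sigma_s**2]]
V, S = rng.multivariate_normal([0.0, 0.0], cov, size=K).T

# Placeholder encoder: uncoded transmission scaled to meet E[||X||^2]/N <= P.
X = np.sqrt(P / sigma_v**2) * V

# Rayleigh fading (E[F^2] = 1), known at the receiver only, plus AWGN.
F = rng.rayleigh(scale=np.sqrt(0.5), size=N)
W = rng.normal(0.0, np.sqrt(sigma_w2), size=N)
Y = F * (X + S) + W

# Placeholder decoder gamma(Y, F): per-sample LMMSE estimate of V from (Y, F),
# treating F*S as part of the effective noise (purely illustrative).
num = F * (np.sqrt(P / sigma_v**2) * sigma_v**2 + rho * sigma_v * sigma_s)
den = F**2 * (P + sigma_s**2
              + 2 * np.sqrt(P / sigma_v**2) * rho * sigma_v * sigma_s) + sigma_w2
V_hat = (num / den) * Y
print("empirical MSE:", np.mean((V - V_hat) ** 2))
```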

    Our main contributions can be summarized as follows

    • We derive inner and outer bounds for the system’s distortion region for a Gaussian source

    over fading channel with correlated interference under equal and unequal source-channel

    bandwidths. The outer bounds are found by assuming full/partial knowledge of the inter-

    ference at the decoder side. The inner bounds are derived by proposing hybrid coding

    schemes and analyzing their achievable distortion region. These schemes are based on

    proper combination of power splitting, bandwidth splitting, Wyner-Ziv and hybrid coding; a

    hybrid layer that uses the source and the interference is concatenated (superimposed) with a

    purely digital layer to achieve bandwidth expansion (reduction). Different from the problem

    considered in [26], we consider the case of fading and mismatch in the source-channel

    bandwidth. Our scheme offers better performance than the one in [26] under matched

    bandwidth (when accommodating the Costa coder in their scheme for fading channels).

    Moreover, our scheme is optimal when there is no fading and when the source-interference

    are either uncorrelated or fully correlated.

    • As an application of the proposed schemes, we consider source-channel-state transmission

    over fading channels with correlated interference. In such case, the receiver aims to jointly

    estimate both the source signal as well as the channel-state. Inner and outer bounds are

    derived for this scenario. For the special case of uncorrelated source-interference over

    AWGN channels, we obtain the optimal source-channel-state distortion tradeoff; this result

is analogous to the optimal rate-state-distortion tradeoff for the transmission of a finite discrete source

    over a Gaussian state interference derived in [29]. For correlated source-interference and

    fading channels, our inner bound performs close to the derived outer bound and outperforms

    the adapted scheme of [29].


    III. OUTER BOUNDS AND REFERENCE SYSTEMS

    A. Outer Bounds

    In [26] and [32], outer bounds on the achievable distortion were derived for point-to-point

communication over a Gaussian channel with correlated interference under matched bandwidth

    between the source and the channel. This was done by assuming full/partial knowledge of the

    interference at the decoder side. In this section, for the correlation model considered above, we

    derive outer bounds for the fading interference channel under unequal source-channel bandwidth.

Since S(i) and V(i) are correlated for i = 1, ..., min(K,N), we can write S(i) = S_I(i) + S_D(i), where S_D(i) = (ρσ_S/σ_V) V(i) and S_I ∼ N(0, (1−ρ^2)σ_S^2) are independent of each other. To derive an

    outer bound, we assume knowledge of both (S̃K , [SN ]NK+1) and FN at the decoder side for the

    case of bandwidth expansion, where S̃K = η1SKI +η2SKD (the linear combination S̃ is motivated

    by [32]), and (η1, η2) is a pair of real parameters. For the bandwidth reduction case, we assume

    knowledge of S̃N and FN at the decoder to derive a bound on the average distortion for the

    first N samples; the derivation of a bound on the average distortion for the remaining K − N

    samples assumes knowledge of [V K ]N1 in addition to S̃N .

    Definition 1 Let MSE(Y ; S̃) be the distortion incurred from estimating Y based on S̃ using

    a linear minimum MSE estimator (LMMSE) denoted by γlmse(S̃K , fK). This distortion, which

is a function of η_1, η_2, E[XS_I] and E[XS_D], is given by MSE(Y; S̃) = E[(Y − γ_lmse(S̃^K, f^K))^2] = E[Y^2] − (E[Y S̃])^2 / E[S̃^2], where E[Y^2] = f^2(P + σ_S^2 + 2(E[XS_I + XS_D])) + σ_W^2, E[Y S̃] = f( E[X(η_1 S_I + η_2 S_D)] + E[η_1 S_I^2 + η_2 S_D^2] ), and E[S̃^2] = E[η_1^2 S_I^2 + η_2^2 S_D^2]. These terms will be used in Lemmas 1

    and 2.

    Lemma 1 For a K : N bandwidth expansion system with N ≥ K (the matched case is treated

    as a special case), the outer bound on the system’s distortion D can be expressed as follows

D ≥ D_ob ≜ sup_{η_1,η_2}  inf_{X : |E[XS_I]| ≤ √(E[X^2]E[S_I^2]), |E[XS_D]| ≤ √(E[X^2]E[S_D^2])}  Var(V|S̃) / exp{ E_F[ log( (MSE(Y; S̃)/σ_W^2) ((f^2 P + σ_W^2)/σ_W^2)^{r−1} ) ] }     (3)

where Var(V|S̃) = σ_V^2 ( 1 − η_2^2 ρ^2 / (η_1^2(1−ρ^2) + η_2^2 ρ^2) ) is the variance of V given S̃.


    Proof: For a K : N system with N ≥ K, we have the following

K/2 log( Var(V|S̃)/D ) ≤ I(V^K; V̂^K | S̃^K, [S^N]_{K+1}^N, F^N) ≤ I(V^K; Y^N | S̃^K, [S^N]_{K+1}^N, F^N)
= h(Y^N | S̃^K, [S^N]_{K+1}^N, F^N) − h(Y^N | V^K, S^N, F^N)
≤ h(Y^K | S̃^K, F^K) + h([Y^N]_{K+1}^N | [S^N]_{K+1}^N, [F^N]_{K+1}^N) − h(Y^N | V^K, S^N, F^N)
= E_F[ h(Y^K | S̃^K, f^K) + h([Y^N]_{K+1}^N | [S^N]_{K+1}^N, [f^N]_{K+1}^N) ] − h(W^N)
≤ E_F[ K/2 log 2πe(MSE(Y; S̃)) + (N−K)/2 log 2πe(f^2 P + σ_W^2) ] − N/2 log 2πe σ_W^2
= E_F[ K/2 log( MSE(Y; S̃)/σ_W^2 ) + (N−K)/2 log( (f^2 P + σ_W^2)/σ_W^2 ) ]     (4)

where we used h(Y^K | S̃^K, f^K) ≤ h(Y^K − γ_lmse(S̃^K, f^K)) ≤ K/2 log 2πe(MSE(Y; S̃)). By the Cauchy-Schwarz inequality, we have |E[XS_I]| ≤ √(E[X^2]E[S_I^2]) and |E[XS_D]| ≤ √(E[X^2]E[S_D^2]). For a given η_1 and η_2, we have to choose the highest value of MSE(Y; S̃) over E[XS_D] and E[XS_I]; then we need to maximize the right-hand side of (3) over η_1 and η_2. Note that most inequalities follow from rate-distortion theory, the data processing inequality and the facts that conditioning reduces differential entropy and that the Gaussian distribution maximizes differential entropy.
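The bound in (3) lends itself to direct numerical evaluation. The sketch below is a minimal Monte Carlo/grid-search evaluation under illustrative parameters (ρ, the CSNR, the grid resolutions and the Monte Carlo size are our own choices): the fading expectation is replaced by a sample average, the inner infimum is a grid over the (E[XS_I], E[XS_D]) pairs permitted by Cauchy-Schwarz, and the outer supremum is a grid over (η_1, η_2).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: P = 1, sigma_S^2 = 1, CSNR = 10 dB, expansion r = 2.
P, sigma_v2, sigma_s2, rho, r = 1.0, 1.0, 1.0, 0.5, 2.0
sigma_w2 = P / 10**(10 / 10)
f2 = rng.rayleigh(scale=np.sqrt(0.5), size=20_000) ** 2   # fading samples, E[F^2] = 1

sig_I2 = (1 - rho**2) * sigma_s2        # variance of S_I
sig_D2 = rho**2 * sigma_s2              # variance of S_D

def bound(eta1, eta2):
    """Inner inf in (3) for a fixed (eta1, eta2), by grid over E[XS_I], E[XS_D]."""
    var_v_s = sigma_v2 * (1 - eta2**2 * rho**2 /
                          (eta1**2 * (1 - rho**2) + eta2**2 * rho**2))
    s2 = eta1**2 * sig_I2 + eta2**2 * sig_D2          # E[S~^2]
    best = np.inf
    for a in np.linspace(-np.sqrt(P * sig_I2), np.sqrt(P * sig_I2), 11):
        for b in np.linspace(-np.sqrt(P * sig_D2), np.sqrt(P * sig_D2), 11):
            ey2 = f2 * (P + sigma_s2 + 2 * (a + b)) + sigma_w2
            eys = np.sqrt(f2) * (eta1 * (a + sig_I2) + eta2 * (b + sig_D2))
            mse = np.maximum(ey2 - eys**2 / s2, 1e-12)   # guard infeasible corners
            ex = np.mean(np.log(mse / sigma_w2)
                         + (r - 1) * np.log((f2 * P + sigma_w2) / sigma_w2))
            best = min(best, var_v_s / np.exp(ex))
    return best

# Outer sup over (eta1, eta2) by grid search.
D_ob = max(bound(e1, e2) for e1 in np.linspace(0.1, 2.0, 6)
                          for e2 in np.linspace(0.0, 2.0, 6))
print("outer bound on D ~ %.4f (SDR %.2f dB)" % (D_ob, 10 * np.log10(sigma_v2 / D_ob)))
```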

    Lemma 2 For K : N bandwidth reduction (K > N ), the outer bound on D is given by

D ≥ D_ob(ξ*) ≜ sup_{η_1,η_2} inf_ξ  inf_{X : |E[XS_I]| ≤ √((1−ξ)P E[S_I^2]), |E[XS_D]| ≤ √((1−ξ)P E[S_D^2])}  [ r Var(V|S̃) / exp{ E_F[ log( MSE(Y; S̃)/(ξPf^2 + σ_W^2) ) ] } + (1−r) σ_V^2 / exp{ E_F[ N/(K−N) log( (ξPf^2 + σ_W^2)/σ_W^2 ) ] } ]     (5)

    where ξ ∈ [0, 1].

    Proof: We start by decomposing the average MSE distortion as follows

D = (1/K) E[||V^K − V̂^K||^2] = (1/K) ( E[||V^N − V̂^N||^2] + E[||[V^K]_{N+1}^K − [V̂^K]_{N+1}^K||^2] )
= (N/K) ( (1/N) E[||V^N − V̂^N||^2] ) + ((K−N)/K) ( (1/(K−N)) E[||[V^K]_{N+1}^K − [V̂^K]_{N+1}^K||^2] )
= r D_1 + (1−r) D_2     (6)

where D_1 and D_2 are the average distortions in reconstructing V^N and [V^K]_{N+1}^K, respectively. To find an outer bound on D, we derive bounds on both D_1 and D_2. To bound D_1, we can write the following expression

N/2 log( Var(V|S̃)/D_1 ) ≤ I(V^N; V̂^N | S̃^N, F^N) ≤ I(V^N; Y^N | S̃^N, F^N)
= h(Y^N | S̃^N, F^N) − h(Y^N | S̃^N, V^N, F^N)
= h(Y^N | S̃^N, F^N) − h(Y^N | S^N, V^N, F^N)
(a)≤ E_F[ N/2 log 2πe(MSE(Y; S̃)) − N/2 log 2πe(ξPf^2 + σ_W^2) ]
≤ sup_{Y∈A} E_F[ N/2 log( MSE(Y; S̃)/(ξPf^2 + σ_W^2) ) ]     (7)

where the set A = {Y : h(Y^N | S^N, V^N, F^N) = E_F[ N/2 log 2πe(ξPf^2 + σ_W^2) ]}. Note that in (7)-(a) we use the fact that h(Y^N | S^N, V^N, F^N) = E_F[ N/2 log 2πe(ξPf^2 + σ_W^2) ] for some ξ ∈ [0, 1]. This can be shown by noting that the following inequality holds: N/2 log 2πe(σ_W^2) = h(W^N) ≤ h(Y^N | S^N, V^N, F^N) ≤ h(F^N X^N + W^N | F^N) = E_F[ N/2 log 2πe(Pf^2 + σ_W^2) ]; as a result, there is a ξ ∈ [0, 1] such that h(Y^N | S^N, V^N, F^N) = E_F[ N/2 log 2πe(ξPf^2 + σ_W^2) ]. Moreover, in (7)-(a) we used the fact that

h(Y^N | S̃^N, F^N) = E_F[ h(Y^N | S̃^N, f^N) ] = E_F[ h(Y^N − γ_lmse(S̃^N, f^N) | S̃^N, f^N) ]
≤ E_F[ h(Y^N − γ_lmse(S̃^N, f^N)) ] ≤ N/2 E_F[ log 2πe(MSE(Y; S̃)) ].     (8)

Similarly, to derive a bound on D_2, we have the following

(K−N)/2 log( σ_V^2/D_2 ) ≤ I([V^K]_{N+1}^K; [V̂^K]_{N+1}^K | S^N, V^N, F^N) ≤ I([V^K]_{N+1}^K; Y^N | S^N, V^N, F^N)
= h(Y^N | S^N, V^N, F^N) − h(Y^N | S^N, V^N, [V^K]_{N+1}^K, F^N)
= h(Y^N | S^N, V^N, F^N) − h(Y^N | S^N, V^K, F^N)
= E_F[ N/2 log( (ξPf^2 + σ_W^2)/σ_W^2 ) ]     (9)

where in the last equality we used h(Y^N | S^N, V^N, F^N) = E_F[ N/2 log 2πe(ξPf^2 + σ_W^2) ] as shown earlier. Note that since we do not know the value of ξ, the overall distortion has to be minimized over the parameter ξ. Now using (7) and (9) in (6), we have the following bound

D ≥ inf_ξ inf_{Y∈A} [ r Var(V|S̃) / exp{ E_F[ log( MSE(Y; S̃)/(ξPf^2 + σ_W^2) ) ] } + (1−r) σ_V^2 / exp{ E_F[ N/(K−N) log( (ξPf^2 + σ_W^2)/σ_W^2 ) ] } ]     (10)

where the sup in (7) is manifested as an inf on the distortion. Note that the above sequence of inequalities in (8) becomes equalities when Y is conditionally Gaussian given F and when Y − γ_lmse(S̃, f) and S̃ are jointly Gaussian and orthogonal to each other given F; this happens when X* is jointly Gaussian with S, V and W given F. Hence, the sup in (7) happens when X* is Gaussian. Now we write X* = N*_ξ + X*_ξ, where N*_ξ ∼ N(0, ξP) is independent of (V, S) and X*_ξ ∼ N(0, (1−ξ)P) is a function of (V, S). Note that X*_ξ is independent of N*_ξ. As a result, the equality h(Y^N | S^N, V^N, F^N) = E_F[ N/2 log 2πe(ξPf^2 + σ_W^2) ] still holds and hence Y* ∈ A, E[Y^2] = f^2( P + σ_S^2 + 2(E[X*_ξ S_I + X*_ξ S_D]) ) + σ_W^2 and E[Y S̃] = f( E[X*_ξ(η_1 S_I + η_2 S_D)] + E[η_1 S_I^2 + η_2 S_D^2] ). By the Cauchy-Schwarz inequality, |E[X* S_I]| = |E[X*_ξ S_I]| ≤ √(E[(X*_ξ)^2]E[S_I^2]) and |E[X* S_D]| = |E[X*_ξ S_D]| ≤ √(E[(X*_ξ)^2]E[S_D^2]). Hence we maximize the value of MSE(Y; S̃) over X or, equivalently, over E[X*_ξ S_I] and E[X*_ξ S_D] satisfying the above constraints. Finally, the parameters η_1 and η_2 are chosen so that the right hand side of (5) is maximized.

    B. Linear Scheme

    In this section, we assume that the encoder transforms the K-dimensional signal V K into an

    N -dimensional channel input XN using a linear transformation according to

X^N = α(V^K, S^N) = T V^K + M S^N     (11)

where T and M are R^{N×K} and R^{N×N} matrices, respectively. In such case, Y^N is conditionally Gaussian given F^N and the minimum MSE (MMSE) decoder is a linear estimator, with V̂^K = Σ_VY Σ_Y^{−1} Y^N, where Σ_VY = E[ (V^K)(Y^N)^T ] and Σ_Y = E[ (Y^N)(Y^N)^T ]. The matrices T and M can be found (numerically) by minimizing the MSE distortion D_linear = E_F[ (1/K) tr{ σ_V^2 I_{K×K} − Σ_VY Σ_Y^{−1} Σ_VY^T } ] under the power constraint in (2), where tr(·) is the trace operator and I_{K×K} is a K × K identity matrix. Note that by setting M to be the zero matrix and T = √(P/σ_V^2) I_{N×K}, the system reduces to the uncoded scheme. Focusing on the matched case (K = N), we have the following lemma for finite block length K.
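Since Lemma 3 below shows that the optimal linear code can be taken diagonal (single-letter), the numerical search over T and M reduces to a search over scalars. The following sketch is an illustrative scalar-code evaluation (the correlation coefficient, noise level, grid ranges and Monte Carlo size are our own choices): it estimates D_linear for X = tV + mS by averaging the per-fading LMMSE distortion over Rayleigh fading samples and grid-searching (t, m) subject to the power constraint.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters.
P, sigma_v2, sigma_s2, rho, sigma_w2 = 1.0, 1.0, 1.0, 0.5, 0.1
f = rng.rayleigh(scale=np.sqrt(0.5), size=20_000)   # E[F^2] = 1

def d_linear(t, m):
    """Per-sample MMSE distortion of the scalar code X = t*V + m*S over fading."""
    Ex2 = (t**2 * sigma_v2 + m**2 * sigma_s2
           + 2 * t * m * rho * np.sqrt(sigma_v2 * sigma_s2))
    if Ex2 > P + 1e-9:                 # violates the average power constraint (2)
        return np.inf
    # Y = f (X + S) + W; given f, (V, Y) are jointly Gaussian.
    Evy = f * (t * sigma_v2 + (m + 1) * rho * np.sqrt(sigma_v2 * sigma_s2))
    Ey2 = f**2 * (Ex2 + sigma_s2
                  + 2 * (t * rho * np.sqrt(sigma_v2 * sigma_s2) + m * sigma_s2)) + sigma_w2
    return np.mean(sigma_v2 - Evy**2 / Ey2)

# Coarse grid search over the scalar pair (t, m).
grid = np.linspace(-1.5, 1.5, 41)
best = min(((d_linear(t, m), t, m) for t in grid for m in grid), key=lambda x: x[0])
print("D_linear ~ %.4f at t = %.2f, m = %.2f" % best)
```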

    Lemma 3 For the matched-bandwidth source-channel coding of a Gaussian source transmitted

    over an AWGN fading channel with correlated interference, the distortion lower bound for any

    linear scheme is achieved with single-letter linear codes.


Proof: Recall that since V^K and S^K are correlated, we have S^K = (ρσ_S/σ_V) V^K + N_ρ^K, where the samples in N_ρ^K are i.i.d. Gaussian with common variance σ_S^2(1−ρ^2). As a result, and using (11),

Y^K = F( T + (ρσ_S/σ_V) M + (ρσ_S/σ_V) I_{K×K} ) V^K + F( M + I_{K×K} ) N_ρ^K + W^K
= F T̃ V^K + F M̃ N_ρ^K + W^K     (12)

where F = diag(F^K) is a diagonal matrix that represents the fading channel, M̃ = (M + I_{K×K}) and T̃ = ( T + (ρσ_S/σ_V) M + (ρσ_S/σ_V) I_{K×K} ). After some manipulation, the distortion D_linear is given by

D_linear = (1/K) E_F[ tr{ ( T̃^T F^T [ σ_S^2(1−ρ^2) F M̃ M̃^T F^T + σ_W^2 I_{K×K} ]^{−1} F T̃ + σ_V^{−2} I_{K×K} )^{−1} } ]
= (1/K) E_F[ tr{ ( Q F^T R F + σ_V^{−2} I_{K×K} )^{−1} } ]     (13)

where we define Q = T̃ T̃^T, R = [ σ_S^2(1−ρ^2) F M̃ M̃^T F^T + σ_W^2 I_{K×K} ]^{−1} and use the fact that for any square matrices A and B, tr(I + AB)^{−1} = tr(I + BA)^{−1} [33]. Now by noting that for any positive-definite K × K square matrix D, tr(D^{−1}) ≥ Σ_{i=1}^K D_{ii}^{−1} [33], where D_ii denotes the diagonal elements of D and equality holds iff D is diagonal, we can write the following

D_linear ≥ (1/K) Σ_{i=1}^K 1 / ( Q_ii |F_ii|^2 R_ii + σ_V^{−2} ).     (14)

Equality in (14) holds iff Q and R are diagonal; hence the optimal solution gives a diagonal T and M. Thus, any linear coding can be achieved in a scalar form without performance loss.

    C. Tandem Digital Scheme

    In [34], Gel’fand and Pinsker showed that the capacity of a point-to-point communication

    with side information (interference) known at the encoder side is given by

C = max_{p(u,x|s)} I(U; Y) − I(U; S)     (15)

where the maximum is over all joint distributions of the form p(s)p(u, x|s)p(y|x, s) and U denotes an auxiliary random variable. In [18], Costa showed that using U = X + αS, with α = P/(P + σ_W^2), over an AWGN channel with interference known at the transmitter, the achievable capacity is C = 1/2 log(1 + P/σ_W^2), which coincides with the capacity when both encoder and decoder know the interference S. As a result, this choice of U is optimal in terms of maximizing capacity. Next, we adapt the Costa scheme for the fading channel; we choose U = X + αS as

    above, where α is redesigned to fit our problem. Using (15) and by interpreting the fading F as

    a second channel output, an achievable rate R is given by

R = I(U; Y, F) − I(U; S) = I(U; Y|F) − I(U; S)     (16)

    where we used the fact that I(U ;F ) = 0. After some manipulations, the rate R is

R = E_F[ 1/2 log( P[f^2(P + σ_S^2) + σ_W^2] / ( Pσ_S^2 f^2(1−α)^2 + σ_W^2(P + α^2 σ_S^2) ) ) ].     (17)

To find α, we minimize the expected value of the denominator in (17) (i.e., E_F[ Pσ_S^2 f^2(1−α)^2 + σ_W^2(P + α^2 σ_S^2) ]). As a result, we choose α = P E[f^2]/(P E[f^2] + σ_W^2) for finite noise levels. Note that this choice of α is independent of S and depends on the second order statistics of the fading. In [35], the authors show that by choosing α = P/(P + σ_W^2), Costa coding maximizes the achievable rate for fading channels in the limits of both high and low noise levels.
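As a quick sanity check of this choice, the expected denominator is a simple quadratic in α and can be minimized numerically. The sketch below (illustrative noise level; E[f^2] = 1 for the normalized Rayleigh fading) confirms that the grid minimizer matches α = P E[f^2]/(P E[f^2] + σ_W^2).

```python
import numpy as np

# Illustrative values; E[f^2] = 1 for the normalized Rayleigh fading.
P, sigma_s2, sigma_w2, Ef2 = 1.0, 1.0, 0.1, 1.0

alphas = np.linspace(0.0, 1.0, 100_001)
# E_F[ P*sigma_S^2*f^2*(1-a)^2 + sigma_W^2*(P + a^2*sigma_S^2) ] as a function of a.
expected_den = P * sigma_s2 * Ef2 * (1 - alphas)**2 + sigma_w2 * (P + alphas**2 * sigma_s2)

alpha_grid = alphas[np.argmin(expected_den)]
alpha_closed = P * Ef2 / (P * Ef2 + sigma_w2)       # P*E[f^2]/(P*E[f^2]+sigma_W^2)
print("grid minimizer:", round(alpha_grid, 4))
print("closed form   :", round(alpha_closed, 4))
```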

    The tandem scheme is based on the concatenation of an optimal source code and the adapted

    Costa coding (described above). The optimal source code quantizes the analog source with a

    rate close to that in (17), and the adapted Costa coder achieves a rate equal to (17). Hence, from

    the lossy JSCC theorem, the MSE distortion for a K : N system can be expressed as follows

D_tandem = σ_V^2 / exp{ E_F[ r log( P[f^2(P + σ_S^2) + σ_W^2] / ( Pσ_S^2 f^2(1−α)^2 + σ_W^2(P + α^2 σ_S^2) ) ) ] }     (18)

where r = N/K is the system's rate. Note that the performance of this scheme does not improve when the noise level decreases (levelling-off effect) or in the presence of correlation between the source and the interference.
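The expectation over the fading in (18) has no simple closed form, but it is straightforward to estimate by Monte Carlo. The sketch below is an illustrative evaluation of D_tandem under the setup of the numerical section (P = 1, σ_S^2 = 1, E[F^2] = 1, CSNR = 10 dB); the Monte Carlo size is our own choice.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative setup: P = 1, sigma_S^2 = 1, E[F^2] = 1, CSNR = 10 dB; r = N/K.
P, sigma_v2, sigma_s2, r = 1.0, 1.0, 1.0, 1.0
sigma_w2 = P / 10**(10 / 10)
f2 = rng.rayleigh(scale=np.sqrt(0.5), size=500_000) ** 2

alpha = (P * 1.0) / (P * 1.0 + sigma_w2)     # alpha = P*E[f^2]/(P*E[f^2]+sigma_W^2)
num = P * (f2 * (P + sigma_s2) + sigma_w2)
den = P * sigma_s2 * f2 * (1 - alpha)**2 + sigma_w2 * (P + alpha**2 * sigma_s2)

D_tandem = sigma_v2 / np.exp(r * np.mean(np.log(num / den)))   # Monte Carlo form of (18)
print("D_tandem ~ %.4f  (SDR ~ %.2f dB)"
      % (D_tandem, 10 * np.log10(sigma_v2 / D_tandem)))
```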

    Remark 1 For the AWGN channel, the distortion of the tandem scheme in (18) can be simplified

as follows: D_tandem = σ_V^2 / (1 + P/σ_W^2)^r. This can be shown by setting α = P/(P + σ_W^2) and cancelling out the expectation in (18). This scheme is optimal for the uncorrelated case (ρ = 0).

    IV. DISTORTION REGION FOR THE LAYERED SCHEMES

    In this section, we propose layered schemes based on Wyner-Ziv and HDA coding for trans-

    mitting a Gaussian source over a fading channel with correlated interference. These schemes

    require proper combination of power splitting, bandwidth splitting, rate splitting, Wyner-Ziv and

    HDA coding. A performance analysis in the presence of noise mismatch is also conducted.


    A. Scheme 1: Layering Wyner-Ziv Costa and HDA for Bandwidth Expansion

This scheme comprises two layers that output X_1^K and X_2^{N−K}. The channel input is obtained by multiplexing (concatenating) the output codewords of both layers, X^N = [X_1^K X_2^{N−K}], as shown in Fig. 2. The first layer is composed of two sublayers that are superimposed to produce the first K samples of the channel input X_1^K = X_a^K + X_d^K. The first sublayer is purely analog and consumes an average power of P_a; the output of this sublayer is given by X_a^K = √a (β_1 V^K + β_2 S^K), where β_1, β_2 ∈ [−1, 1], a = P_a / ( β_1^2 σ_V^2 + β_2^2 σ_S^2 + 2β_1 β_2 ρ σ_V σ_S ) with 0 ≤ P_a ≤ P. The second sublayer, which outputs X_d^K and consumes the remaining power P_d = P − P_a, encodes the source V^K using a Wyner-Ziv coder followed by a (generalized) Costa coder. The Wyner-Ziv encoder, which uses the fact that an estimate of V^K can be obtained at the decoder side, forms a random variable T_1^K as follows

T_1^K = α_wz1 V^K + B_1^K     (19)

where each sample in B_1^K is a zero mean i.i.d. Gaussian; α_wz1 and the variance of B_1 are defined later. The encoding process starts by generating a K-length i.i.d. Gaussian codebook T_1 of size 2^{K I(T_1;V)} and randomly assigning the codewords into 2^{K R_1} bins with R_1 defined later. For each source realization V^K, the encoder searches for a codeword T_1^K ∈ T_1 such that (V^K, T_1^K) are jointly typical. In the case of success, the Wyner-Ziv encoder transmits the bin index of this codeword using Costa coding. The Costa coder, which treats the analog sublayer X_a^K in addition to S^K as interference, forms the following auxiliary random variable U_c1^K = X_d^K + α_c1 Š^K, where Š^K = (X_a^K + S^K), the samples in X_d^K are i.i.d. zero mean Gaussian with variance P_d = P − P_a and 0 ≤ α_c1 ≤ 1 is a real parameter. Note that X_d^K is independent of V^K and S^K. The encoding process of the Costa coding can be summarized as follows

• Codebook Generation: Generate a K-length i.i.d. Gaussian codebook U_c1 with 2^{K I(U_c1; Y_1, F)} codewords, where Y_1^K is the first K samples of the received signal Y^N. Every codeword is generated following the random variable U_c1^K and uniformly distributed over 2^{K R_1} bins. The codebook is revealed to both encoder and decoder.

• Encoding: For a given bin index (the output of the Wyner-Ziv encoder), the Costa encoder searches for a codeword U_c1^K such that the bin index of U_c1^K is equal to the Wyner-Ziv output and (U_c1^K, Š^K) are jointly typical. In the case of success, the Costa encoder outputs X_d^K = U_c1^K − α_c1 Š^K. Otherwise, an encoding failure is declared. Note that the probability of encoder failure vanishes by using R_1 = I(U_c1; Y_1, F) − I(U_c1; Š).

Fig. 2. Scheme 1 (bandwidth expansion) encoder structure.

The second layer, which outputs X_2^{N−K}, encodes V^K using a Wyner-Ziv coder with rate R_2 and a Costa coder that treats [S^N]_{K+1}^N as interference. The Wyner-Ziv encoder, which uses the fact that an estimate of V^K is obtained from the first layer, forms the random variable T_2^K as follows

T_2^K = α_wz2 V^K + B_2^K     (20)

where the samples in B_2^K are i.i.d. and follow a zero mean Gaussian distribution; α_wz2 and the variance of B_2 are defined later. The Costa coder forms the auxiliary random variable U_c2^{N−K} = X_2^{N−K} + α_c2 [S^N]_{K+1}^N, where the samples in X_2^{N−K} are i.i.d. zero mean Gaussian with variance P, and the real parameter α_c2 is defined later. The encoding process of the Wyner-Ziv and the Costa coder for the second layer is very similar to the one described for the first layer; hence, no details are provided.

At the receiver side, as shown in Fig. 3, from the first K components of the received signal Y^N = [Y_1^K, Y_2^{N−K}] = F^N(X^N + S^N) + W^N, where Y_1^K = [Y^N]_1^K and Y_2^{N−K} = [Y^N]_{K+1}^N, the Costa decoder estimates the codeword U_c1^K by searching for a codeword U_c1^K such that (U_c1^K, Y_1^K, F^K) are jointly typical. By the result of Gelfand-Pinsker [34] (or Costa [18]) and by treating the fading coefficient F^K as a second channel output, the error probability of encoding and decoding the codeword U_c1^K vanishes as K → ∞ if

R_1 = I(U_c1; Y_1, F) − I(U_c1; Š) = I(U_c1; Y_1|F) − ( h(U_c1) − h(U_c1|Š) )
= h(U_c1) + h(Y_1|F) − h(U_c1, Y_1|F) − h(U_c1) + h(U_c1|Š)
= E_F[ 1/2 log( P_d[f^2(P_d + σ_Š^2) + σ_W^2] / ( P_d σ_Š^2 f^2(1−α_c1)^2 + σ_W^2(P_d + α_c1^2 σ_Š^2) ) ) ]     (21)

where σ_Š^2 = E[(X_a + S)^2]. We then obtain a linear MMSE estimate of V^K (based on Y_1^K and U_c1^K), denoted by V_a^K. The distortion from estimating the source using V_a^K is given by

D_a = E_F[ σ_V^2 − Γ Λ^{−1} Γ^T ]     (22)

where Λ = E[ [U_c1 Y_1]^T [U_c1 Y_1] ] is the covariance of [U_c1 Y_1] and Γ = E[ V [U_c1 Y_1] ] is the correlation vector between V and [U_c1 Y_1]. By using rate R_1 on the Wyner-Ziv encoder, the bin index of the Wyner-Ziv code can be decoded correctly (with high probability). The Wyner-Ziv decoder then looks for a codeword T_1^K in this bin such that (T_1^K, V_a^K) are jointly typical (as K → ∞, the probability of error in decoding T_1^K vanishes). A better estimate of V^K is then obtained based on V_a^K and the decoded codeword T_1^K. The distortion in the estimated source Ṽ^K is then

D̃ = D_a / exp{ E_F[ log( P_d[f^2(P_d + σ_Š^2) + σ_W^2] / ( P_d σ_Š^2 f^2(1−α_c1)^2 + σ_W^2(P_d + α_c1^2 σ_Š^2) ) ) ] }.     (23)

Note that this distortion is equal to the distortion incurred when assuming that the side information V_a^K is also known at the transmitter side; this can be achieved by choosing α_wz1 = √(1 − D̃/D_a) and B_1 ∼ N(0, D̃) in (19) and using a linear MMSE estimator based on V_a^K and T_1^K. In contrast to the AWGN channel with correlated interference [26], a purely analog layer is not sufficient to accommodate for the correlation over the AWGN fading channel with correlated interference; indeed, using the knowledge of U_c1^K as side information to obtain a better description of the Wyner-Ziv codewords T_1^K will achieve a better performance. From the last N−K received symbols Y_2^{N−K},

Fig. 3. Scheme 1 (bandwidth expansion) decoder structure.

the Costa decoder estimates the codeword U_c2^{N−K} by searching for a codeword U_c2^{N−K} such that (U_c2^{N−K}, Y_2^{N−K}, [F^N]_{K+1}^N) are jointly typical. The probability of error in encoding and decoding the codeword U_c2^{N−K} goes to zero by choosing

R_2 = I(U_c2; Y_2, F) − I(U_c2; S) = E_F[ 1/2 log( P[f^2(P + σ_S^2) + σ_W^2] / ( Pσ_S^2 f^2(1−α_c2)^2 + σ_W^2(P + α_c2^2 σ_S^2) ) ) ]     (24)

    where αc2 = PE[f 2]/(PE[f 2]+σ2W ) is found in a similar way as done in Sec. III-C. By using this

    rate, the Wyner-Ziv bin index can be decoded correctly (with high probability). The Wyner-Ziv

    decoder then looks for a codeword TK2 in the decoded bin such that TK2 and the side information

    from the first layer Ṽ K are jointly typical. A refined estimate of the source can be found using

    the side information Ṽ K and the decoded codeword TK2 . The resulting distortion is then

D_Scheme1 = inf_{β_1, β_2, P_a, α_c1}  D̃ / exp{ E_F[ log( ( P[f^2(P + σ_S^2) + σ_W^2] / ( Pσ_S^2 f^2(1−α_c2)^2 + σ_W^2(P + α_c2^2 σ_S^2) ) )^{r−1} ) ] }.     (25)

Note that this distortion is equal to the distortion realized when assuming Ṽ^K is also known at the transmitter side; this can be achieved using a linear MMSE estimator based on [T_1 T_2 Y_1], and by setting α_wz2 = √(1 − D_Scheme1/D̃) and B_2 ∼ N(0, D_Scheme1) in (20).

    Remark 2 For AWGN channels with no fading, the same scheme can be used. In this case, the

    distortion from reconstructing the source can be expressed as follows

D_Scheme1 = inf_{β_1, β_2, P_a} { D_a / [ (1 + P/σ_W^2)^{r−1} (1 + P_d/σ_W^2) ] }.     (26)

This distortion can be found by setting the fading coefficient F = 1, α_c1 = P_d/(P_d + σ_W^2) and α_c2 = P/(P + σ_W^2) in (25). The distortion in (26) can be also achieved by replacing the sublayer that outputs X_d^K by an HDA Costa layer as we proposed in [28]. Note that using only Y_1^K as input to the LMMSE estimator in Fig. 3 is enough for the AWGN case. In such case, D_a in (26) can be simplified as follows

D_a = σ_V^2 − ( √a β_1 σ_V^2 + (√a β_2 + 1) ρ σ_V σ_S )^2 / ( P + (2√a β_2 + 1) σ_S^2 + 2√a β_1 ρ σ_V σ_S + σ_W^2 ).     (27)

    Moreover, one can check that this scheme is optimal (for the AWGN channel) for ρ = 0 and

    ρ = 1. For ρ = 0, this happens by shutting down the analog sublayer (i.e., Pa = 0) in the scheme

    and using (η1 = 1, η2 = 1) on the outer bound in (3). For the case of ρ = 1, the optimal power

    allocation for the scheme is (Pa = P, Pd = 0). The resulting system’s distortion can be shown

    to be equal to the outer bound in (3) for (η1 = 1, η2 = 0).
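The AWGN expressions (26)-(27) make Scheme 1 easy to evaluate numerically. The sketch below is an illustrative grid search over (β_1, β_2, P_a) for one example correlation coefficient and CSNR (the grid resolution and parameter values are our own choices, not the paper's).

```python
import numpy as np

# Illustrative AWGN evaluation of Scheme 1 via (26)-(27): grid search over
# (beta1, beta2, Pa); P = 1, sigma_S^2 = sigma_V^2 = 1, CSNR = 10 dB, r = 2.
P, sigma_v2, sigma_s2, rho, r = 1.0, 1.0, 1.0, 0.7, 2.0
sigma_w2 = P / 10**(10 / 10)
sv, ss = np.sqrt(sigma_v2), np.sqrt(sigma_s2)

best = (np.inf, None)
for b1 in np.linspace(-1, 1, 41):
    for b2 in np.linspace(-1, 1, 41):
        denom = b1**2 * sigma_v2 + b2**2 * sigma_s2 + 2 * b1 * b2 * rho * sv * ss
        if denom <= 1e-9:
            continue
        for Pa in np.linspace(0.0, P, 41):
            a = Pa / denom                      # gain factor of the analog sublayer
            Pd = P - Pa
            sa = np.sqrt(a)
            Da = sigma_v2 - (sa * b1 * sigma_v2 + (sa * b2 + 1) * rho * sv * ss) ** 2 / (
                P + (2 * sa * b2 + 1) * sigma_s2 + 2 * sa * b1 * rho * sv * ss + sigma_w2)
            D = Da / ((1 + P / sigma_w2) ** (r - 1) * (1 + Pd / sigma_w2))   # eq. (26)
            if D < best[0]:
                best = (D, (b1, b2, Pa))
print("D_Scheme1 (AWGN) ~ %.5f, SDR ~ %.2f dB, at (beta1, beta2, Pa) = %s"
      % (best[0], 10 * np.log10(sigma_v2 / best[0]), best[1]))
```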

Scheme 1 under mismatch in noise levels: Next, we study the distortion of the proposed scheme in the presence of noise mismatch between the transmitter and the receiver. The actual channel noise power σ_Wa^2 is assumed to be lower than the design one σ_W^2 (i.e., σ_Wa^2 < σ_W^2). Under such an assumption, the Costa and Wyner-Ziv decoders are still able to decode correctly all codewords with low probability of error. After decoding T_1^K and T_2^K, a symbol-by-symbol linear MMSE estimator of V^K based on Y_1^K, T_1^K and T_2^K is calculated. Hence Scheme 1's distortion under noise mismatch is D_(Scheme 1)-mis = E_F[ σ_V^2 − Γ^T Λ^{−1} Γ ], where Λ is the covariance matrix of [T_1 T_2 Y_1], and Γ is the correlation vector between V and [T_1 T_2 Y_1]. Note that σ_Wa^2 is used in the covariance matrix Λ instead of σ_W^2.
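The mismatched-decoder estimate above is a standard LMMSE computation from second-order statistics. The helper below only illustrates that linear-algebra step: the numeric entries of Γ and Λ are hypothetical placeholders (the true entries depend on the scheme's parameters, D̃, D_Scheme1 and the fading realization and are not reproduced here); the actual noise power σ_Wa^2 would enter through the Y_1 entry of Λ.

```python
import numpy as np

def lmmse_distortion(sigma_v2, Gamma, Lambda):
    """MMSE of estimating V from an observation vector Z, given
    Gamma = E[V Z] (row vector) and Lambda = E[Z Z^T]."""
    Gamma = np.atleast_2d(Gamma)
    return float(sigma_v2 - Gamma @ np.linalg.solve(Lambda, Gamma.T))

# Toy, hypothetical second-order statistics for Z = [T1, T2, Y1] at one fading
# realization (placeholders, not derived from the scheme's actual parameters).
sigma_v2 = 1.0
Gamma = np.array([0.90, 0.95, 0.60])        # E[V T1], E[V T2], E[V Y1]
Lambda = np.array([[1.00, 0.85, 0.55],      # E[Z Z^T]; the (Y1, Y1) entry is where
                   [0.85, 1.00, 0.58],      # the actual noise power sigma_Wa^2
                   [0.55, 0.58, 2.05]])     # (rather than sigma_W^2) enters.
print("per-sample distortion:", lmmse_distortion(sigma_v2, Gamma, Lambda))
```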

Remark 3 When σ_Wa^2 > σ_W^2, the codewords cannot all be decoded correctly at the receiver side; as a result we can only estimate the source vector V^K by applying a linear MMSE estimator based on the noisy received signal Y_1^K. The system's distortion in this case is given by

D_(Scheme 1)-mis = E_F[ σ_V^2 − f^2( √a β_1 σ_V^2 + (√a β_2 + 1) ρ σ_V σ_S )^2 / ( f^2( P + (2√a β_2 + 1) σ_S^2 + 2√a β_1 ρ σ_V σ_S ) + σ_Wa^2 ) ].     (28)

    B. Scheme 2: Layering Wyner-Ziv Costa and HDA for Bandwidth Reduction

In this section, we present a layered scheme for bandwidth reduction. This scheme comprises three layers that are superposed to produce the channel input X^N = X_a^N + X_1^N + X_2^N, where X_a^N, X_1^N and X_2^N denote the outputs of the first, second and third layers, respectively. The scheme's encoder structure is depicted in Fig. 4. Recall that we denote the first N samples of V^K by V^N and the last K − N samples by [V^K]_{N+1}^K. The first layer is an analog layer that outputs X_a^N = √a (β_1 V^N + β_2 S^N), a linear combination of V^N and S^N, and consumes P_a ≤ P as average power, where β_1, β_2 ∈ [−1, 1], and a = P_a / ( β_1^2 σ_V^2 + β_2^2 σ_S^2 + 2β_1 β_2 ρ σ_V σ_S ) is a gain factor related to the power constraint P_a. The second layer, which operates on the first N samples of the source, encodes V^N using a Wyner-Ziv coder with rate R_1 followed by a Costa coder. The Wyner-Ziv encoder forms a random variable

T_1^N = α_wz1 V^N + B_1^N     (29)

where the samples in B_1^N are i.i.d. and follow a zero mean Gaussian distribution; the parameter α_wz1 and the variance of B_1 are related to the side information from the first layer and hence defined later. The Costa coder that treats both X_a^N and S^N as interference forms the following auxiliary random variable U_c1^N = X_1^N + α_c1 Š^N, where the samples in X_1^N are i.i.d. zero mean Gaussian with variance P_1 ≤ P − P_a and independent of the source and the interference, Š^N = X_a^N + S^N and 0 ≤ α_c1 ≤ 1 is a real parameter. The last layer encodes [V^K]_{N+1}^K using an optimal source encoder with rate R_2 followed by a Costa coder. The Costa encoder, which treats the outputs of the first two layers (X_a^N, X_1^N) as well as S^N as known interference, forms the following auxiliary random variable U_c2^N = X_2^N + α_c2 S̃^N, where S̃^N = (X_a^N + X_1^N + S^N), the samples in X_2^N are zero mean i.i.d. Gaussian with variance P_2 = P − P_1 − P_a and α_c2 = P_2 E[f^2]/(P_2 E[f^2] + σ_W^2).

Fig. 4. Scheme 2 (bandwidth reduction) encoder structure.

At the receiver, as shown in Fig. 5, from the received signal Y^N the Costa decoder estimates U_c1^N. By using a rate

R_1 = I(U_c1; Y, F) − I(U_c1; Š) = E_F[ 1/2 log( P_1[f^2(P_1 + σ_Š^2 + P_2) + σ_W^2] / ( P_1 σ_Š^2 f^2(1−α_c1)^2 + (σ_W^2 + f^2 P_2)(P_1 + α_c1^2 σ_Š^2) ) ) ]

where σ_Š^2 = E[(X_a + S)^2], the Costa decoder (of the second layer) is able to estimate the codewords U_c1^N with vanishing error probability. We then obtain an estimate of V^N, denoted by V_a^N, using a linear MMSE estimator based on Y^N and U_c1^N. The distortion from estimating V^N using V_a^N is then given by

D_a = E_F[ σ_V^2 − Γ Λ^{−1} Γ^T ]     (30)

where Λ is the covariance of [U_c1 Y] and Γ is the correlation vector between V and [U_c1 Y]. The Wyner-Ziv decoder (of the second layer) then looks for a codeword T_1^N such that (T_1^N, V_a^N) are jointly typical (as N → ∞, the probability of error in decoding T_1^N vanishes). A better estimate of V^N is then obtained based on the side information V_a^N and the decoded codeword T_1^N. The distortion from reconstructing V^N is then given by

D_1 = D_a / exp{ E_F[ log( P_1[f^2(P_1 + σ_Š^2 + P_2) + σ_W^2] / ( P_1 σ_Š^2 f^2(1−α_c1)^2 + (σ_W^2 + f^2 P_2)(P_1 + α_c1^2 σ_Š^2) ) ) ] }.     (31)

Note that the distortion in (31) can be found by choosing α_wz1 = √(1 − D_1/D_a) and B_1 ∼ N(0, D_1) in (29) and using a linear MMSE estimator based on V_a^N and T_1^N. To get an estimate of

[V^K]_{N+1}^K, we use a Costa decoder followed by a source decoder. Codewords of this layer can be decoded correctly (with high probability) by choosing the rate

R_2 = I(U_c2; Y, F) − I(U_c2; S̃) = E_F[ 1/2 log( P_2[f^2(P_2 + σ_S̃^2) + σ_W^2] / ( P_2 σ_S̃^2 f^2(1−α_c2)^2 + σ_W^2(P_2 + α_c2^2 σ_S̃^2) ) ) ]

where σ_S̃^2 = E[(X_a + X_1 + S)^2]. The distortion in reconstructing [V^K]_{N+1}^K can be found by equating the rate-distortion function to the transmission rate R_2; this means that (K−N)/2 log( σ_V^2/D_2 ) = N R_2. As a result, the distortion in reconstructing [V^K]_{N+1}^K, denoted by D_2, is given by

D_2 = σ_V^2 / exp{ E_F[ r/(1−r) log( P_2[f^2(P_2 + σ_S̃^2) + σ_W^2] / ( P_2 σ_S̃^2 f^2(1−α_c2)^2 + σ_W^2(P_2 + α_c2^2 σ_S̃^2) ) ) ] }.     (32)

Hence, the system's distortion is given by D_Scheme2 = inf_{β_1, β_2, P_a, P_1, α_c1} { r D_1 + (1−r) D_2 }.
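The fading expectations in (31)-(32) can again be replaced by Monte Carlo averages. The sketch below evaluates r D_1 + (1−r) D_2 for one fixed, illustrative parameter choice (β_1, β_2, P_a, P_1 and the Costa-style heuristic α_c1 are our own example values, not the optimized ones); for simplicity, V_a is estimated from Y alone, which the text notes is exact for AWGN but slightly pessimistic under fading, so the printed value is an upper bound on the scheme's distortion for these parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative Scheme 2 evaluation (r = 1/2), P = 1, sigma_S^2 = 1, CSNR = 10 dB.
P, sigma_v2, sigma_s2, rho, r = 1.0, 1.0, 1.0, 0.7, 0.5
sigma_w2 = P / 10**(10 / 10)
sv, ss = 1.0, 1.0
b1, b2, Pa, P1 = 0.8, -0.2, 0.3, 0.4                     # example layer parameters
P2 = P - Pa - P1
a = Pa / (b1**2 * sigma_v2 + b2**2 * sigma_s2 + 2 * b1 * b2 * rho * sv * ss)
sa = np.sqrt(a)

f2 = rng.rayleigh(scale=np.sqrt(0.5), size=200_000) ** 2
Ef2 = 1.0
alpha_c1 = P1 * Ef2 / (P1 * Ef2 + sigma_w2 + Ef2 * P2)   # heuristic: X2 as extra noise
alpha_c2 = P2 * Ef2 / (P2 * Ef2 + sigma_w2)              # as defined in the text

sig_sh2 = Pa + sigma_s2 + 2 * sa * (b1 * rho * sv * ss + b2 * sigma_s2)  # E[(Xa+S)^2]
sig_st2 = sig_sh2 + P1                                                   # E[(Xa+X1+S)^2]

# D_a estimated from Y only (cf. (27), with the fading term included).
Evy = np.sqrt(f2) * (sa * b1 * sigma_v2 + (sa * b2 + 1) * rho * sv * ss)
Ey2 = f2 * (P + (2 * sa * b2 + 1) * sigma_s2 + 2 * sa * b1 * rho * sv * ss) + sigma_w2
Da = np.mean(sigma_v2 - Evy**2 / Ey2)

ratio1 = (P1 * (f2 * (P1 + sig_sh2 + P2) + sigma_w2) /
          (P1 * sig_sh2 * f2 * (1 - alpha_c1)**2
           + (sigma_w2 + f2 * P2) * (P1 + alpha_c1**2 * sig_sh2)))
ratio2 = (P2 * (f2 * (P2 + sig_st2) + sigma_w2) /
          (P2 * sig_st2 * f2 * (1 - alpha_c2)**2 + sigma_w2 * (P2 + alpha_c2**2 * sig_st2)))

D1 = Da / np.exp(np.mean(np.log(ratio1)))                       # eq. (31)
D2 = sigma_v2 / np.exp(r / (1 - r) * np.mean(np.log(ratio2)))   # eq. (32)
D = r * D1 + (1 - r) * D2
print("D1 ~ %.4f, D2 ~ %.4f, D_Scheme2 ~ %.4f (SDR %.2f dB)"
      % (D1, D2, D, 10 * np.log10(sigma_v2 / D)))
```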

Fig. 5. Scheme 2 (bandwidth reduction) decoder structure.

Remark 4 For the AWGN channel, the distortions D_1 and D_2 for the reduction case are

D_1 = D_a / ( 1 + P_1/(P_2 + σ_W^2) )   and   D_2 = σ_V^2 / ( 1 + P_2/σ_W^2 )^{r/(1−r)}.     (33)

Since for the AWGN channel the use of U_c1^N as input to the LMMSE estimator in Fig. 5 does not improve the performance, the distortion D_a admits a simplified expression as given in (27). The distortions in (33) can be derived by choosing α_c1 = P_1/(P_1 + P_2 + σ_W^2) and α_c2 = P_2/(P_2 + σ_W^2). Note that this scheme is optimal for uncorrelated source-interference and for full correlation between the source and the interference. For the uncorrelated case, the analog layer is not needed (P_a = 0, D_a = σ_V^2) and the optimal power allocation between the two other layers can be derived by minimizing the resulting distortion with respect to P_1; the optimal power P_1 is P_1* = σ_W^2[ 1 − (1 + P/σ_W^2)^{1−r} ] + P. For the case of full correlation between the (first N samples of the) source and the interference (ρ = 1), the second layer can be shut down (P_1 = 0) and the optimal P_a* satisfies

σ_W^2 ( 1 + σ_W/√P_a ) ( 1 + (P − P_a)/σ_W^2 )^{1/(1−r)} ( P + σ_W^2 + √(P_a σ_V^2) ) − ( P + σ_W^2 + σ_V^2 + 2√(P_a σ_V^2) )^2 = 0.
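For the uncorrelated case, the closed-form P_1* can be checked directly against a numerical minimization of r D_1 + (1−r) D_2 from (33). The sketch below does this for illustrative values (P = 1, r = 1/2, AWGN with σ_W^2 = P/10); the grid resolution is our own choice.

```python
import numpy as np

# Uncorrelated case (rho = 0) of Remark 4: Pa = 0, Da = sigma_V^2, and the
# power split P1 + P2 = P is optimized. Illustrative values: P = 1, r = 1/2,
# CSNR = 10 dB over AWGN (sigma_W^2 = P/10).
P, sigma_v2, r = 1.0, 1.0, 0.5
sigma_w2 = P / 10.0

def total_distortion(P1):
    P2 = P - P1
    D1 = sigma_v2 / (1 + P1 / (P2 + sigma_w2))             # eq. (33) with Da = sigma_V^2
    D2 = sigma_v2 / (1 + P2 / sigma_w2) ** (r / (1 - r))
    return r * D1 + (1 - r) * D2

P1_grid = np.linspace(0.0, P, 100_001)
P1_num = P1_grid[np.argmin(total_distortion(P1_grid))]
P1_closed = sigma_w2 * (1 - (1 + P / sigma_w2) ** (1 - r)) + P
print("numerical P1* = %.4f, closed form P1* = %.4f" % (P1_num, P1_closed))
print("optimal distortion ~ %.4f" % total_distortion(P1_closed))
```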


Scheme 2 under mismatch in noise levels: We next examine the distortion of the proposed scheme in the presence of noise mismatch between the transmitter and the receiver. The actual channel noise power σ_Wa^2 is assumed to be lower than the design one σ_W^2 (i.e., σ_Wa^2 < σ_W^2). Under such an assumption, the Costa and Wyner-Ziv decoders can decode all codewords with vanishing probability of error. The distortion in reconstructing [V^K]_{N+1}^K, D_2-mis, is hence the same as in the matched noise level case; and the distortion from reconstructing V^N is D_1-mis = E_F[ σ_V^2 − Γ^T Λ^{−1} Γ ], where Λ is the covariance matrix of [T_1 Y], and Γ is the correlation vector between V and [T_1 Y]. As a result, the system's distortion is D_(Scheme 2)-mis = r D_1-mis + (1−r) D_2-mis. Note that σ_Wa^2 is used in Λ instead of σ_W^2 when computing D_1-mis.

Remark 5 When σ_Wa^2 > σ_W^2, the codewords cannot all be decoded correctly at the receiver side; as a result we can only estimate the source vector V^N by applying a linear MMSE estimator based on the noisy received signal Y^N. The system's distortion is then given by

D_(Scheme 2)-mis = r E_F[ σ_V^2 − f^2( √a β_1 σ_V^2 + (√a β_2 + 1) ρ σ_V σ_S )^2 / ( f^2( P + (2√a β_2 + 1) σ_S^2 + 2√a β_1 ρ σ_V σ_S ) + σ_Wa^2 ) ] + (1−r) σ_V^2.

    C. Numerical Results

In this section, we assume an i.i.d. zero-mean Gaussian source with unit variance that is

    transmitted over an AWGN Rayleigh fading channel with Gaussian interference. The interference

    power is σ2S = 1, the power constraint is set to P = 1 and the Rayleigh fading has E[F 2] = 1. To

evaluate the performance, we consider the signal-to-distortion ratio (SDR = E[||V^K||^2]/E[||V^K − V̂^K||^2]); the designed channel signal-to-noise ratio (CSNR ≜ P E[F^2]/σ_W^2) is set to 10 dB for all

    numerical results. Fig. 6, which considers the AWGN channel, shows the SDR performance

    versus the correlation coefficient ρ for bandwidth expansion (r = 2) and matched noise levels

    between the transmitter and receiver. We note that the proposed scheme outperforms the tandem

    Costa reference scheme (described in Sec. III-C) and performs very close to the “best” derived

    outer bound for a wide range of correlation coefficients. Although not shown, the proposed

    scheme also outperforms significantly the linear scheme of Sec. III-B. For the limiting cases of

    ρ = 0 and 1, we can notice that the SDR performance of the proposed scheme coincides with

    the outer bound and hence is optimal. Figs. 7, 8 and 9 show the SDR performance versus ρ

    for the fading channel with interference under matched noise levels and for r = 1, 2 and 1/2,

    respectively. As in the case of the AWGN channel, we remark that the proposed HDA schemes

Fig. 6. Performance of HDA Scheme 1 (r = 2) over the AWGN channel under matched noise levels for different correlation coefficients. Tandem scheme and outer bounds on SDR are plotted. The graph is made for P = 1, σ_S^2 = 1 and CSNR = 10 dB.

    outperform the tandem Costa and the linear schemes and perform close to the best outer bound.

    In contrast to the AWGN case, the proposed scheme never coincides with the outer bound for

    finite noise levels; this can be explained from the fact that the (generalized) Costa and linear

    scheme are not optimal for the fading case. The sub-optimality (assuming that the outer bound

    is tight) of the generalized Costa coding comes from the form of the auxiliary random variable.

    We choose the same form as the one used for AWGN channels (a form linear in S); it remains

    unclear if such auxiliary random variable is optimal for fading channels. Note that using the

    result in [35], one can easily show that our schemes are optimal for ρ = 0 in the limits of

    high and low noise levels. As a result, the auxiliary random variable used for the Costa coder

    is optimal in the noise level limits.

    Fig. 10 shows the SDR performance versus CSNR levels under mismatched noise levels. All

    schemes in Fig. 10 are designed for CSNR=10 dB, r = 1 and ρ = 0.7. The true CSNR varies

    between 0 and 35 dB. We observe that the proposed scheme is resilient to noise mismatch due

to its hybrid digital-analog nature. As the correlation coefficient decreases, the power

    allocated to the analog layer decreases. Hence, the SDR gap between the proposed and the

    tandem Costa scheme under mismatched noise levels decreases and the robustness (which is the

    trait of analog schemes) reduces.

Fig. 7. Performance of Scheme 1 (r = 1) over the fading channel under matched noise levels for different correlation coefficients. Tandem and linear schemes and outer bounds on SDR are plotted. The graph is made for P = 1, σ_S^2 = 1, CSNR = 10 dB and E[F^2] = 1.

Fig. 8. Performance of Scheme 1 (r = 2) over the fading channel under matched noise levels for different correlation coefficients. Tandem scheme and outer bounds on SDR are plotted. The graph is given for P = 1, σ_S^2 = 1, CSNR = 10 dB and E[F^2] = 1.

    V. JSCC FOR SOURCE-CHANNEL-STATE TRANSMISSION

    As an application of the joint source-channel coding problem examined in this paper we con-

    sider the transmission of analog source-channel-state pairs over a fading channel with Gaussian

    state interference. We establish inner and outer bounds on the source-interference distortion for

    the fading channel. The only difference between this problem and that examined in the previous

    sections is that the decoder is also interested in estimating the interference SN . For simplicity, we

    focus on the matched bandwidth case (i.e., K = N ); the unequal source-channel bandwidth case

    can be treated in a similar way as in Section IV. We also assume that the decoder has knowledge

    of the fading. We denote the distortion from reconstructing the source and the interference by

Fig. 9. Performance of Scheme 2 (r = 1/2) over the fading channel under matched noise levels for different ρ. Tandem scheme and outer bounds on SDR are plotted. The graph is given for P = 1, σ_S^2 = 1, CSNR = 10 dB and E[F^2] = 1.

Fig. 10. Performance of Scheme 1 (r = 1) over the fading channel under mismatched noise levels. Tandem scheme, analog and upper bounds on SDR are plotted. The system is designed for P = 1, σ_S^2 = 1, CSNR = 10 dB, ρ = 0.7 and E[F^2] = 1.

We denote the distortions from reconstructing the source and the interference by $D_v = \frac{1}{K}E[\|V^K - \hat{V}^K\|^2]$ and $D_s = \frac{1}{K}E[\|S^K - \hat{S}^K\|^2]$, respectively. For a given power constraint $P$, a rate $r$ and a Rayleigh fading channel, the distortion region is defined as the closure of all distortion pairs $(\tilde{D}_v, \tilde{D}_s)$ for which $(P, \tilde{D}_v, \tilde{D}_s)$ is achievable, where a power-distortion triple is achievable if for any $\delta_v, \delta_s > 0$, there exist sufficiently large integers $K$ and $N$ with $N/K = r$ and encoding and decoding functions satisfying (2), such that $D_v < \tilde{D}_v + \delta_v$ and $D_s < \tilde{D}_s + \delta_s$.
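These block distortions can also be estimated empirically from simulated blocks. Below is a minimal sketch (our own hypothetical helper, not part of the paper's scheme), assuming the source and interference blocks and their reconstructions are available as NumPy arrays; averaging over many blocks and fading realizations approximates the expectations above.

    import numpy as np

    def empirical_distortions(V, V_hat, S, S_hat):
        # Per-sample squared-error distortions for one block of length K,
        # matching D_v = (1/K) E[||V^K - V_hat^K||^2] and its counterpart for S.
        Dv = np.mean((np.asarray(V) - np.asarray(V_hat)) ** 2)
        Ds = np.mean((np.asarray(S) - np.asarray(S_hat)) ** 2)
        return Dv, Ds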

    A. Outer Bound

Lemma 4 For the matched bandwidth case, the outer bound on the distortion region $(D_v, D_s)$ can be expressed as follows:
\[
D_v \ge \frac{\mathrm{Var}(V|S)}{\exp\Big\{E_F\Big[\log \frac{\zeta P |f|^2 + \sigma_W^2}{\sigma_W^2}\Big]\Big\}}, \qquad
D_s \ge \frac{\sigma_S^2}{\exp\Big\{E_F\Big[\log \frac{|f|^2\big(P + \sigma_S^2 + 2\sqrt{(1-\zeta)P\sigma_S^2}\big) + \sigma_W^2}{\zeta P |f|^2 + \sigma_W^2}\Big]\Big\}} \tag{34}
\]
where $\mathrm{Var}(V|S) = \sigma_V^2(1 - \rho^2)$ is the variance of $V$ given $S$ and $0 \le \zeta \le 1$.

Proof: For the source distortion, we can write the following
\[
\begin{aligned}
\frac{K}{2}\log\frac{\sigma_V^2}{D_v}
&\overset{(a)}{\le} I(V^K; \hat{V}^K | F^K)
\overset{(b)}{\le} I(V^K; \hat{V}^K | F^K) + I(V^K; S^K | \hat{V}^K, F^K) \\
&= I(V^K; \hat{V}^K, S^K | F^K)
\overset{(c)}{=} I(V^K; S^K) + I(V^K; \hat{V}^K | S^K, F^K) \\
&\overset{(d)}{\le} \frac{K}{2}\log\frac{\sigma_V^2}{\mathrm{Var}(V|S)} + I(V^K; Y^K | S^K, F^K) \\
&= \frac{K}{2}\log\frac{\sigma_V^2}{\mathrm{Var}(V|S)} + h(Y^K | S^K, F^K) - h(W^K) \\
&\overset{(e)}{=} \frac{K}{2}\log\frac{\sigma_V^2}{\mathrm{Var}(V|S)} + \frac{K}{2}E_F\Big[\log\frac{\zeta P |f|^2 + \sigma_W^2}{\sigma_W^2}\Big]
\end{aligned} \tag{35}
\]
where (a) follows from the rate-distortion theorem, (b) follows from the non-negativity of mutual information, (c) follows from the chain rule of mutual information and the fact that $F^K$ is independent of $(V^K, S^K)$, (d) holds by the data processing inequality, and in (e) we used $h(Y^K|S^K, F^K) = \frac{K}{2}E_F[\log 2\pi e(\zeta P |f|^2 + \sigma_W^2)]$ for some $\zeta \in [0,1]$; this can be proved from the fact that $\frac{K}{2}\log 2\pi e\,\sigma_W^2 = h(W^K) \le h(Y^K|S^K, F^K) \le h(F^K X^K + W^K | F^K) = \frac{K}{2}E_F[\log 2\pi e(P |f|^2 + \sigma_W^2)]$. Hence, there exists a $\zeta \in [0,1]$ such that $h(Y^K|S^K, F^K) = \frac{K}{2}E_F[\log 2\pi e(\zeta P |f|^2 + \sigma_W^2)]$.

For the interference distortion, we have the following
\[
\begin{aligned}
\frac{K}{2}\log\frac{\sigma_S^2}{D_s}
&\overset{(a)}{\le} I(S^K; \hat{S}^K | F^K)
\overset{(b)}{\le} I(S^K; Y^K | F^K) = h(Y^K|F^K) - h(Y^K|S^K, F^K) \\
&\overset{(c)}{\le} \sup_{X \in \mathcal{B}} E_F\Big[\frac{K}{2}\log 2\pi e\big(|f|^2(P + \sigma_S^2 + 2E[SX]) + \sigma_W^2\big)\Big] - E_F\Big[\frac{K}{2}\log 2\pi e\big(\zeta P |f|^2 + \sigma_W^2\big)\Big] \\
&\overset{(d)}{=} E_F\Big[\frac{K}{2}\log\frac{|f|^2\big(P + \sigma_S^2 + 2\sqrt{(1-\zeta)P\sigma_S^2}\big) + \sigma_W^2}{\zeta P |f|^2 + \sigma_W^2}\Big]
\end{aligned} \tag{36}
\]
where (a) follows from the rate-distortion theorem, (b) follows from the data processing inequality for mutual information, in (c) the set $\mathcal{B} = \{X : h(Y^K|S^K, F^K) = E_F[\frac{K}{2}\log 2\pi e(\zeta P |f|^2 + \sigma_W^2)]\}$, and the inequality in (c) holds from the fact that the Gaussian distribution maximizes differential entropy and $h(Y^K|S^K, F^K) = \frac{K}{2}E_F[\log 2\pi e(\zeta P |f|^2 + \sigma_W^2)]$ (as used in (35)). Note that the supremum over $X$ in (c) is attained when $Y$, $S$ and $W$ are jointly Gaussian given $F$ (i.e., $X^*$ is Gaussian). Now, we represent $X^* = N_\zeta^* + X_\zeta^*$, where $N_\zeta^* \sim \mathcal{N}(0, \zeta P)$ is independent of $X_\zeta^* \sim \mathcal{N}(0, (1-\zeta)P)$. Note that $X_\zeta^*$ is a function of $S$. Using this form of $X^*$, $h(Y^K|S^K, F^K) = \frac{K}{2}E_F[\log 2\pi e(\zeta P |f|^2 + \sigma_W^2)]$ still holds (i.e., $X^* \in \mathcal{B}$) and $E[X^*S] = E[X_\zeta^* S]$. Maximizing over $X$ is equivalent to maximizing over $E[XS]$; using the Cauchy-Schwarz inequality $\big(E[X^*S] = E[X_\zeta^* S] \le \sqrt{E[(X_\zeta^*)^2]E[S^2]}\big)$ we get (d).
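The expectations over the fading gain in (34) have no simple closed form, but they are easy to evaluate numerically. The following is a minimal Monte Carlo sketch (our own code, not the paper's), assuming Rayleigh fading with $E[F^2] = 1$ so that $|F|^2$ is exponentially distributed with unit mean; sweeping $\zeta$ over $[0,1]$ traces out the outer bound on the $(D_v, D_s)$ region.

    import numpy as np

    def outer_bound_point(zeta, P=1.0, sigma2_V=1.0, sigma2_S=1.0, rho=0.8,
                          sigma2_W=0.1, n_mc=200000, seed=0):
        # Right-hand sides of (34) for one value of zeta, averaging over the
        # Rayleigh fading power |f|^2 ~ Exp(1).
        rng = np.random.default_rng(seed)
        f2 = rng.exponential(1.0, size=n_mc)
        var_V_given_S = sigma2_V * (1.0 - rho ** 2)
        log1 = np.log((zeta * P * f2 + sigma2_W) / sigma2_W)
        num = f2 * (P + sigma2_S + 2.0 * np.sqrt((1.0 - zeta) * P * sigma2_S)) + sigma2_W
        log2 = np.log(num / (zeta * P * f2 + sigma2_W))
        Dv_bound = var_V_given_S / np.exp(np.mean(log1))
        Ds_bound = sigma2_S / np.exp(np.mean(log2))
        return Dv_bound, Ds_bound

    # Example: trace the bound at CSNR = 10 dB (sigma2_W = P * E[F^2] / 10 = 0.1).
    region = [outer_bound_point(z) for z in np.linspace(0.0, 1.0, 21)]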

    B. Proposed Hybrid Coding Scheme

The proposed scheme is composed of three layers, as shown in Fig. 11. The first layer, which is purely analog, consumes an average power $P_a$ and outputs a linear combination of the source and the interference, $X_a^K = \sqrt{a_1}(\beta_{11}V^K + \beta_{12}S^K)$, where $\beta_{11}, \beta_{12} \in [-1, 1]$ and $a_1 = \frac{P_a}{\beta_{11}^2\sigma_V^2 + 2\beta_{11}\beta_{12}\rho\sigma_V\sigma_S + \beta_{12}^2\sigma_S^2}$ is a gain factor that enforces the power constraint $P_a$ (a small numerical sketch of this gain factor is given after the encoding steps below). The second layer employs a source-channel vector quantizer (VQ) on the interference; the output of this layer is $X_q^K = \mu(S^K + U_q^K)$, where $\mu > 0$ is a gain related to the power constraint and the samples in $U_q^K$ are zero-mean i.i.d. Gaussian with variance $Q$, independent of $V$ and $S$. A similar VQ encoder was used in [21] for the broadcast of bivariate sources and in [36] for the multiple access channel. In what follows, we outline the encoding process of the VQ:

• Codebook Generation: Generate a K-length i.i.d. Gaussian codebook $\mathcal{X}_q$ with $2^{KR_q}$ codewords, with $R_q$ defined later. Every codeword is generated following the random variable $X_q^K$; this codebook is revealed to both the encoder and the decoder.
• Encoding: The encoder searches for a codeword $X_q^K$ in the codebook that is jointly typical with $S^K$. In case of success, the transmitter sends $X_q^K$.
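As referenced above, the gain factors $a_1$ (and, analogously, $a_2$ for the third layer below) follow directly from the second-order statistics of the correlated pair $(V, S)$. A minimal sketch (our own helper with hypothetical example values, not part of the paper):

    import numpy as np

    def layer_gain(P_layer, b1, b2, sigma2_V, sigma2_S, rho):
        # Scales the combination b1*V + b2*S so that sqrt(a) * (b1*V + b2*S)
        # has average power exactly P_layer.
        sigma_V, sigma_S = np.sqrt(sigma2_V), np.sqrt(sigma2_S)
        return P_layer / (b1**2 * sigma2_V + 2*b1*b2*rho*sigma_V*sigma_S + b2**2 * sigma2_S)

    # Example (hypothetical power split and combination coefficients):
    a1 = layer_gain(0.3, 0.6, 0.4, 1.0, 1.0, 0.8)   # analog layer: Pa, beta_11, beta_12
    a2 = layer_gain(0.5, 0.5, -0.2, 1.0, 1.0, 0.8)  # Wyner-Ziv layer input: Pd, beta_21, beta_22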

[Fig. 11 block diagram: the source $V^K$ and interference $S^K$ are combined via $\beta_{11}$, $\beta_{12}$ and the gain $a_1$ into the analog layer output $X_a^K$; the VQ encoder maps $S^K$ to $X_q^K$; the combination $\tilde{X}_{wz}^K$ (via $\tilde{\beta}_{21}$, $\tilde{\beta}_{22}$) feeds the Wyner-Ziv encoder followed by the Costa encoder producing $X_d^K$; the layer outputs are summed to form the channel input $X^K$.]

Fig. 11. Encoder structure.

The last layer encodes a linear combination of $V^K$ and $S^K$, denoted by $\tilde{X}_{wz}^K$, using a Wyner-Ziv code with rate $R$ followed by a Costa coder. The Costa coder uses an average power of $P_d$ and treats $X_a^K$, $S^K$ and $X_q^K$ as known interference. The linear combination is $\tilde{X}_{wz}^K = \tilde{\beta}_{21}V^K + \tilde{\beta}_{22}S^K = \sqrt{a_2}(\beta_{21}V^K + \beta_{22}S^K)$, where $\beta_{21}, \beta_{22} \in [-1, 1]$ and $a_2 = \frac{P_d}{\beta_{21}^2\sigma_V^2 + 2\beta_{21}\beta_{22}\rho\sigma_V\sigma_S + \beta_{22}^2\sigma_S^2}$. The Wyner-Ziv encoder forms a random variable $T^K$ as follows:
\[
T^K = \alpha_{wz}\tilde{X}_{wz}^K + B^K \tag{37}
\]

where the samples in $B^K$ are zero-mean i.i.d. Gaussian, and the parameter $\alpha_{wz}$ and the variance of $B$ are defined later. The encoding process of the Wyner-Ziv code starts by generating a K-length i.i.d. Gaussian codebook $\mathcal{T}$ of size $2^{KI(T;\tilde{X}_{wz})}$ and randomly assigning the codewords into $2^{KR}$ bins, with $R$ defined later. For each realization $\tilde{X}_{wz}^K$, the Wyner-Ziv encoder searches for a codeword $T^K \in \mathcal{T}$ such that $(\tilde{X}_{wz}^K, T^K)$ are jointly typical. In case of success, the Wyner-Ziv encoder transmits the bin index of this codeword using Costa coding. The Costa coder, which treats $\tilde{S}^K = X_a^K + X_q^K + S^K$ as known interference, forms the following auxiliary random variable: $U_c^K = X_d^K + \alpha_c\tilde{S}^K$, where each sample in $X_d^K$ is $\mathcal{N}(0, P_d)$, independent of the source and the interference, and $0 \le \alpha_c \le 1$. The encoding process for the Costa coder can be described in a similar way as done before.

At the receiver side, as shown in Fig. 12, from the noisy received signal $Y^K$, the VQ decoder estimates $X_q^K$ by searching for a codeword $X_q^K \in \mathcal{X}_q$ that is jointly typical with the received signal $Y^K$ and $F^K$. Following the error analysis of [37], the error probability of decoding $X_q^K$ goes to zero by choosing the rate $R_q$ to satisfy $I(S;X_q) \le R_q \le I(X_q;Y,F)$, where
\[
\begin{aligned}
I(S;X_q) &= h(X_q) - h(X_q|S) = \frac{1}{2}\log\frac{\sigma_S^2 + Q}{Q} \\
I(X_q;Y,F) &= I(X_q;F) + I(X_q;Y|F) = h(Y|F) - h(Y|X_q,F) \\
&= E_F\bigg\{\frac{1}{2}\log 2\pi e\big(E[Y^2]\big) - \frac{1}{2}\log 2\pi e\Big(E[Y^2] - \frac{E[X_qY]^2}{E[X_q^2]}\Big)\bigg\}. 
\end{aligned} \tag{38}
\]
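A sanity check of the rate condition in (38) can be scripted once the scheme's second-order statistics are fixed. The sketch below is our own illustration (not the paper's code); EY2_of_f2 and EXqY_of_f2 are hypothetical caller-supplied functions returning $E[Y^2]$ and $E[X_qY]$ conditioned on the fading power $|f|^2$ for the chosen layer parameters.

    import numpy as np

    def vq_rate_bounds(Q, sigma2_S, EXq2, EY2_of_f2, EXqY_of_f2, f2_samples):
        # Lower bound: I(S; Xq) = 0.5 * log((sigma_S^2 + Q) / Q)   [nats].
        # Upper bound: Monte Carlo estimate of I(Xq; Y, F) from (38).
        lower = 0.5 * np.log((sigma2_S + Q) / Q)
        ey2 = EY2_of_f2(f2_samples)
        exqy = EXqY_of_f2(f2_samples)
        upper = np.mean(0.5 * np.log(ey2 / (ey2 - exqy**2 / EXq2)))
        return lower, upper

    # The VQ layer is feasible for a given Q whenever lower <= Rq <= upper.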

The variance $Q$ has to be chosen to satisfy the above rate constraint. Furthermore, to ensure that the power constraint is satisfied, we need $\mu$ to satisfy $P_a + \mu^2(\sigma_S^2 + Q) + 2\mu E[SX_a] + P_d \le P$. The Costa decoder then searches for a codeword $U_c^K$ that is jointly typical with $(Y^K, F^K)$. Since the received signal $Y^K$ and the codewords $X_q^K$ and $U_c^K$ are correlated with $\tilde{X}_{wz}^K$, an LMMSE estimate of $\tilde{X}_{wz}^K$, denoted by $\hat{\tilde{X}}_{wz}^K$, can be obtained based on $Y^K$ and the decoded codewords $X_q^K$ and $U_c^K$. Mathematically, the estimate is given by $\hat{\tilde{X}}_{wz} = \Gamma_a\Lambda_a^{-1}[X_q \; U_c \; Y]^T$, where $\Lambda_a$ is the covariance matrix of $[X_q \; U_c \; Y]$ and $\Gamma_a$ is the correlation vector between $\tilde{X}_{wz}$ and $[X_q \; U_c \; Y]$. The distortion in reconstructing $\tilde{X}_{wz}^K$ is then $D_a = E_F\big[P_d - \Gamma_a\Lambda_a^{-1}\Gamma_a^T\big]$. Moreover, the Wyner-Ziv decoder estimates the codeword $T^K$ by searching for a $T^K \in \mathcal{T}$ that is jointly typical with $\hat{\tilde{X}}_{wz}^K$. The error probability of decoding both codewords $T^K$ and $U_c^K$ vanishes as $K \to \infty$ if the coding rate of the Wyner-Ziv and the Costa coder is set to
\[
R = E_F\bigg[\frac{1}{2}\log\bigg(\frac{P_d\big[|f|^2(P_d + \sigma_{\tilde{S}}^2) + \sigma_W^2\big]}{P_d\,\sigma_{\tilde{S}}^2\,|f|^2(1-\alpha_c)^2 + \sigma_W^2(P_d + \alpha_c^2\sigma_{\tilde{S}}^2)}\bigg)\bigg] \tag{39}
\]

where $\sigma_{\tilde{S}}^2 = E[(X_a + X_q + S)^2]$. A better estimate of $\tilde{X}_{wz}^K$ can be obtained by using the codeword $T^K$ and $\hat{\tilde{X}}_{wz}^K$. The distortion in reconstructing $\tilde{X}_{wz}^K$ can be expressed as follows:
\[
\tilde{D} = \frac{D_a}{\exp\bigg\{E_F\bigg[\log\bigg(\frac{P_d\big[|f|^2(P_d + \sigma_{\tilde{S}}^2) + \sigma_W^2\big]}{P_d\,\sigma_{\tilde{S}}^2\,|f|^2(1-\alpha_c)^2 + \sigma_W^2(P_d + \alpha_c^2\sigma_{\tilde{S}}^2)}\bigg)\bigg]\bigg\}}. \tag{40}
\]
This distortion can be achieved using a linear MMSE estimate based on $T^K$, $X_q^K$ and $Y^K$ by choosing $\alpha_{wz} = \sqrt{1 - \tilde{D}/D_a}$ and $B \sim \mathcal{N}(0, \tilde{D})$ in (37).
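Both the rate in (39) and the refined distortion in (40) involve the same fading average; in fact, since the logarithm in (40) is exactly twice the one inside (39), we have $\tilde{D} = D_a e^{-2R}$. A minimal Monte Carlo sketch (our own helpers with hypothetical argument names; f2_samples are draws of the fading power $|f|^2$):

    import numpy as np

    def wz_costa_rate(Pd, sigma2_Stilde, sigma2_W, alpha_c, f2_samples):
        # Monte Carlo estimate of R in (39), in nats per channel use.
        num = Pd * (f2_samples * (Pd + sigma2_Stilde) + sigma2_W)
        den = (Pd * sigma2_Stilde * f2_samples * (1.0 - alpha_c)**2
               + sigma2_W * (Pd + alpha_c**2 * sigma2_Stilde))
        return np.mean(0.5 * np.log(num / den))

    def refined_wz_distortion(Da, Pd, sigma2_Stilde, sigma2_W, alpha_c, f2_samples):
        # D_tilde of (40), computed as Da * exp(-2R) with R from (39).
        R = wz_costa_rate(Pd, sigma2_Stilde, sigma2_W, alpha_c, f2_samples)
        return Da * np.exp(-2.0 * R)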

[Fig. 12 block diagram: from the received signal $Y^K$, the VQ decoder recovers $X_q^K$ and the Costa decoder recovers $U_c^K$; an LMMSE estimator forms $\hat{\tilde{X}}_{wz}^K$, the Wyner-Ziv decoder then recovers $T^K$, and a final LMMSE estimator outputs $(\hat{V}^K, \hat{S}^K)$.]

Fig. 12. Decoder structure.

After decoding $T^K$ and $X_q^K$, a linear MMSE estimator is used to reconstruct the source and the interference signals. As a result, the distortions in decoding $V^K$ and $S^K$ are given as follows:
\[
D_v = E_F\big[\sigma_V^2 - \Gamma_v\Lambda^{-1}\Gamma_v^T\big], \qquad D_s = E_F\big[\sigma_S^2 - \Gamma_s\Lambda^{-1}\Gamma_s^T\big] \tag{41}
\]
where $\Lambda$ is the covariance matrix of $[X_q \; T \; Y]$, $\Gamma_v$ is the correlation vector between $V$ and $[X_q \; T \; Y]$, and $\Gamma_s$ is the correlation vector between $S$ and $[X_q \; T \; Y]$.
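Numerically, (41) amounts to a per-fading-realization LMMSE computation averaged over fading draws. A minimal sketch (our own helper; the caller must supply $\Lambda$, $\Gamma_v$ and $\Gamma_s$ for each fading realization from the scheme's second-order statistics):

    import numpy as np

    def lmmse_distortion_pair(Lambda, Gamma_v, Gamma_s, sigma2_V, sigma2_S):
        # One fading realization of (41); average the returned pair over many
        # draws of f to obtain (Dv, Ds).
        Li = np.linalg.inv(Lambda)           # Lambda: 3x3 covariance of [Xq, T, Y]
        Dv = sigma2_V - Gamma_v @ Li @ Gamma_v
        Ds = sigma2_S - Gamma_s @ Li @ Gamma_s
        return Dv, Ds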

Remark 6 Using a linear combination of the source and the interference $\tilde{X}_{wz}^K$ instead of just the source $V^K$ as an input to the Wyner-Ziv encoder in Fig. 11 is shown to be beneficial in some parts of the source-interference distortion region. However, quantizing a linear combination of the source and the interference by the VQ encoder (instead of just $S^K$ as done in Fig. 11) does not seem to give any improvement.

Remark 7 For the AWGN channel with $\rho = 0$, using the source itself instead of $\tilde{X}_{wz}^K$ as input to the Wyner-Ziv encoder, shutting down the second layer and setting $\beta_{11} = 0$ in $X_a^K$ yields the best possible performance; the inner bound in this case coincides with the outer bound, and hence the scheme is optimal. This result is analogous to the optimality result on the rate-state-distortion tradeoff for the transmission of a finite discrete source over a Gaussian channel with state interference derived in [29].

    C. Numerical Results

We consider source-interference pairs transmitted over a Rayleigh fading channel ($E[F^2] = 1$) with Gaussian interference and power constraint $P = 1$; the CSNR level is set to 10 dB. For reference, we adapt the scheme of [29] to our scenario. Recall that the source and the interference are jointly Gaussian; hence $V^K = \frac{\rho\sigma_V}{\sigma_S}S^K + N_\rho^K$, where the samples in $N_\rho^K$ are i.i.d. Gaussian with variance $\sigma_{N_\rho}^2 = (1-\rho^2)\sigma_V^2$ and independent of $S^K$. Now if we quantize $N_\rho^K$ into digital data, the setup becomes similar to the one considered in [29]; hence the encoding is done by allocating a portion of the power, denoted by $P_s$, to transmit $S^K$, while the remaining power $P_d = P - P_s$ is used to communicate the digitized $N_\rho^K$ using the (generalized) Costa coder. The received signal of this scheme is given by $Y^K = F^K\big(\sqrt{P_s/\sigma_S^2}\,S^K + X_d^K + S^K\big) + W^K$, where $X_d^K$ denotes the output of the digital part that communicates $N_\rho^K$. An estimate of $S^K$ is obtained by applying an LMMSE estimator to the received signal; the distortion from reconstructing $V^K$ is equal to the sum of the distortions from estimating $\frac{\rho\sigma_V}{\sigma_S}S^K$ and $N_\rho^K$. Mathematically, the distortion region of this reference scheme can be expressed as follows:
\[
\begin{aligned}
D_s &= E_F\bigg[\sigma_S^2 - \frac{E[SY]^2}{E[Y^2]}\bigg] = E_F\bigg[\sigma_S^2 - \frac{|f|^2\big(\sqrt{P_s}\,\sigma_S + \sigma_S^2\big)^2}{|f|^2\big(P + \sigma_S^2 + 2\sqrt{P_s}\,\sigma_S\big) + \sigma_W^2}\bigg], \\
D_v &= \frac{\rho^2\sigma_V^2}{\sigma_S^2}D_s + \frac{\sigma_{N_\rho}^2}{\exp\bigg\{E_F\bigg[\log\bigg(\frac{P_d\big[|f|^2(P_d + \sigma_{\tilde{S}}^2) + \sigma_W^2\big]}{P_d\,\sigma_{\tilde{S}}^2\,|f|^2(1-\alpha_c)^2 + \sigma_W^2(P_d + \alpha_c^2\sigma_{\tilde{S}}^2)}\bigg)\bigg]\bigg\}}
\end{aligned} \tag{42}
\]
where $\sigma_{\tilde{S}}^2 = E\big[\big(\sqrt{P_s/\sigma_S^2}\,S + S\big)^2\big]$ and the parameter $\alpha_c$ is related to the Costa coder.
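For completeness, a minimal Monte Carlo sketch of (42) (our own code, with the Costa parameter $\alpha_c$ taken as a given design value and f2_samples denoting draws of $|f|^2$) is:

    import numpy as np

    def adapted_scheme_distortions(Ps, P, sigma2_V, sigma2_S, rho, sigma2_W,
                                   alpha_c, f2_samples):
        # Evaluates the distortion pair (42) of the adapted scheme of [29].
        Pd = P - Ps
        sS = np.sqrt(sigma2_S)
        Ds = np.mean(sigma2_S
                     - f2_samples * (np.sqrt(Ps) * sS + sigma2_S)**2
                     / (f2_samples * (P + sigma2_S + 2.0 * np.sqrt(Ps) * sS) + sigma2_W))
        sigma2_Stilde = (np.sqrt(Ps / sigma2_S) + 1.0)**2 * sigma2_S  # E[(sqrt(Ps/sigma_S^2)S + S)^2]
        num = Pd * (f2_samples * (Pd + sigma2_Stilde) + sigma2_W)
        den = (Pd * sigma2_Stilde * f2_samples * (1.0 - alpha_c)**2
               + sigma2_W * (Pd + alpha_c**2 * sigma2_Stilde))
        Dv = (rho**2 * sigma2_V / sigma2_S) * Ds \
             + (1.0 - rho**2) * sigma2_V / np.exp(np.mean(np.log(num / den)))
        return Dv, Ds

Sweeping $P_s$ from 0 to $P$ traces the trade-off curve of this reference scheme.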

To evaluate the performance, we plot the outer bound (given by (34)) and the inner bounds (the achievable distortion regions) of the proposed hybrid coding scheme (given by (41)) and of the adapted scheme of [29]. Fig. 13, which considers the AWGN channel, shows the distortion region of the source-interference pair for $\rho = 0.8$ and $\sigma_S^2 = 0.5$. We observe that the hybrid coding scheme performs very close to the outer bound and outperforms the scheme of [29]. Moreover, the use of the VQ layer is shown to be beneficial under certain system settings. Fig. 14, which considers the fading channel, shows the distortion region of the source-interference pair for $\rho = 0.8$ and $\sigma_S^2 = 1$. The hybrid coding scheme performs relatively close to the outer bound.


[Fig. 13 plot: $10\log_{10}(D_s)$ versus $10\log_{10}(D_v)$; curves: adapted scheme of [29], hybrid coding without the VQ layer, hybrid coding, and outer bound.]

Fig. 13. Distortion region for the hybrid coding scheme over the AWGN channel. This graph is made for $\sigma_V^2 = 1$, $P = 1$, $\sigma_S^2 = 0.5$ and $\rho = 0.8$.

[Fig. 14 plot: $10\log_{10}(D_s)$ versus $10\log_{10}(D_v)$; curves: adapted scheme of [29], hybrid coding scheme, and outer bound.]

Fig. 14. Distortion region for the hybrid coding scheme over the fading channel. This graph is made for $\sigma_V^2 = 1$, $P = 1$, $\sigma_S^2 = 1$, $\rho = 0.8$ and $E[F^2] = 1$.

    VI. SUMMARY AND CONCLUSIONS

In this paper, we considered the problem of reliable transmission of a Gaussian source over Rayleigh fading channels with correlated interference under unequal source-channel bandwidth. Inner and outer bounds on the system's distortion were derived. The outer bound was derived by assuming additional knowledge at the decoder side, while the inner bound was found by analyzing the achievable distortion region of the proposed hybrid coding scheme. Numerical results show that the proposed schemes perform close to the derived outer bound and are robust to channel noise mismatch. As an application of the proposed schemes, we derived inner and outer bounds on the source-channel-state distortion region for the fading interference channel; in this case, the receiver is interested in estimating both the source and the interference. Our setting contains several interesting limiting cases: in the absence of fading and/or correlation and for some source-channel bandwidths, it reduces to the scenarios considered in [20], [26], [29].

    ACKNOWLEDGMENT

    The authors sincerely thank the anonymous reviewers for their helpful comments.

    REFERENCES

[1] C. E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, vol. 27, pp. 379–423, 1948.
[2] T. A. Ramstad, “Shannon mappings for robust communication,” Telektronikk, vol. 98, no. 1, pp. 114–128, 2002.
[3] F. Hekland, P. A. Floor, and T. A. Ramstad, “Shannon-Kotel’nikov mappings in joint source-channel coding,” IEEE Trans. Commun., vol. 57, no. 1, pp. 94–105, Jan. 2009.
[4] J. (Karlsson) Kron and M. Skoglund, “Optimized low-delay source-channel-relay mappings,” IEEE Trans. Commun., vol. 58, no. 5, pp. 1397–1404, May 2010.
[5] P. A. Floor, A. N. Kim, N. Wernersson, T. Ramstad, M. Skoglund, and I. Balasingham, “Zero-delay joint source-channel coding for a bivariate Gaussian on a Gaussian MAC,” IEEE Trans. Commun., vol. 60, no. 10, pp. 3091–3102, Oct. 2012.
[6] Y. Hu, J. Garcia-Frias, and M. Lamarca, “Analog joint source channel coding using non-linear mappings and MMSE decoding,” IEEE Trans. Commun., vol. 59, no. 11, pp. 3016–3026, Nov. 2011.
[7] E. Akyol, K. Rose, and T. Ramstad, “Optimal mappings for joint source channel coding,” in Proc. IEEE Inform. Theory Workshop, Cairo, Egypt, 2010.
[8] E. Akyol, K. Rose, and K. Viswanatha, “On zero-delay source-channel coding,” IEEE Trans. Inform. Theory, in review.
[9] M. Kleiner and B. Rimoldi, “Asymptotically optimal joint source-channel coding with minimal delay,” in Proc. IEEE Global Telecommunications Conference, Honolulu, HI, 2009.
[10] X. Chen and E. Tuncel, “Zero-delay joint source-channel coding using hybrid digital-analog schemes in the Wyner-Ziv setting,” IEEE Trans. Commun., vol. 62, no. 2, pp. 726–735, Feb. 2014.
[11] A. Abou Saleh, F. Alajaji, and W.-Y. Chan, “Power-constrained bandwidth-reduction source-channel mappings for fading channels,” in Proc. 26th Biennial Symp. on Communications, Kingston, ON, May 2012.
[12] U. Mittal and N. Phamdo, “Hybrid digital analog (HDA) joint source channel codes for broadcasting and robust communications,” IEEE Trans. Inform. Theory, vol. 48, pp. 1082–1102, May 2002.
[13] M. Skoglund, N. Phamdo, and F. Alajaji, “Hybrid digital-analog source-channel coding for bandwidth compression/expansion,” IEEE Trans. Inform. Theory, vol. 52, no. 8, pp. 3757–3763, Aug. 2006.
[14] Y. Wang, F. Alajaji, and T. Linder, “Hybrid digital-analog coding with bandwidth compression for Gaussian source-channel pairs,” IEEE Trans. Commun., vol. 57, no. 4, pp. 997–1012, Apr. 2009.
[15] J. Nayak, E. Tuncel, and D. Gunduz, “Wyner-Ziv coding over broadcast channels: digital schemes,” IEEE Trans. Inform. Theory, vol. 56, no. 4, pp. 1782–1799, Apr. 2010.
[16] Y. Gao and E. Tuncel, “Wyner-Ziv coding over broadcast channels: hybrid digital/analog schemes,” IEEE Trans. Inform. Theory, vol. 57, no. 9, pp. 5660–5672, Sept. 2011.
[17] E. Koken and E. Tuncel, “Gaussian HDA coding with bandwidth expansion and side information at the decoder,” in Proc. IEEE Int. Symp. on Inform. Theory, Istanbul, Turkey, July 2013.
[18] M. Costa, “Writing on dirty paper,” IEEE Trans. Inform. Theory, vol. 29, no. 3, pp. 439–441, May 1983.
[19] M. P. Wilson, K. R. Narayanan, and G. Caire, “Joint source-channel coding with side information using hybrid digital analog codes,” IEEE Trans. Inform. Theory, vol. 56, no. 10, pp. 4922–4940, Oct. 2010.
[20] M. Varasteh and H. Behroozi, “Optimal HDA schemes for transmission of a Gaussian source over a Gaussian channel with bandwidth compression in the presence of an interference,” IEEE Trans. Signal Processing, vol. 60, no. 4, pp. 2081–2085, Apr. 2012.
[21] C. Tian, S. N. Diggavi, and S. Shamai, “The achievable distortion region of sending a bivariate Gaussian source on the Gaussian broadcast channel,” IEEE Trans. Inform. Theory, vol. 57, no. 10, pp. 6419–6427, Oct. 2011.
[22] A. Lapidoth and S. Tinguely, “Broadcasting correlated Gaussians,” IEEE Trans. Inform. Theory, vol. 56, no. 7, pp. 3057–3068, July 2010.
[23] H. Behroozi, F. Alajaji, and T. Linder, “On the performance of hybrid digital-analog coding for broadcasting correlated Gaussian sources,” IEEE Trans. Commun., vol. 59, no. 12, pp. 3335–3342, Dec. 2011.
[24] Y. Gao and E. Tuncel, “Separate source-channel coding for transmitting correlated Gaussian sources over degraded broadcast channels,” IEEE Trans. Inform. Theory, vol. 59, no. 6, pp. 3619–3634, June 2013.
[25] A. Abou Saleh, F. Alajaji, and W.-Y. Chan, “Hybrid digital-analog coding for interference broadcast channels,” in Proc. IEEE Int. Symp. on Inform. Theory, Istanbul, Turkey, July 2013.
[26] Y.-C. Huang and K. R. Narayanan, “Joint source-channel coding with correlated interference,” IEEE Trans. Commun., vol. 60, no. 5, pp. 1315–1327, May 2012.
[27] A. D. Wyner and J. Ziv, “The rate-distortion function for source coding with side information at the decoder,” IEEE Trans. Inform. Theory, vol. 22, pp. 1–10, Jan. 1976.
[28] A. Abou Saleh, W.-Y. Chan, and F. Alajaji, “Low and high-delay source-channel coding with bandwidth expansion and correlated interference,” in Proc. 13th Canadian Workshop on Information Theory, Toronto, ON, June 2013.
[29] A. Sutivong, M. Chiang, T. Cover, and Y.-H. Kim, “Channel capacity and state estimation for state-dependent Gaussian channels,” IEEE Trans. Inform. Theory, vol. 51, no. 4, pp. 1486–1495, Apr. 2005.
[30] B. Bandemer, C. Tian, and S. Shamai, “Gaussian state amplification with noisy state observations,” in Proc. IEEE Int. Symp. on Inform. Theory, Istanbul, Turkey, July 2013.
[31] A. Abou Saleh, F. Alajaji, and W.-Y. Chan, “Low-latency source-channel coding for fading channels with correlated interference,” IEEE Wireless Commun. Letters, vol. 3, no. 2, pp. 137–140, Apr. 2014.
[32] Y.-K. Chia, R. Soundararajan, and T. Weissman, “Estimation with a helper who knows the interference,” in Proc. IEEE Int. Symp. on Inform. Theory, Cambridge, MA, July 2012.
[33] K. M. Abadir and J. R. Magnus, Matrix Algebra. New York: Cambridge University Press, 2005.
[34] S. I. Gelfand and M. S. Pinsker, “Coding for channel with random parameters,” Probl. Inform. Contr., vol. 9, no. 1, pp. 19–31, 1980.
[35] W. Zhang, S. Kotagiri, and J. Laneman, “Writing on dirty paper with resizing and its application to quasi-static fading broadcast channels,” in Proc. IEEE Int. Symp. on Inform. Theory, Nice, France, June 2007.
[36] A. Lapidoth and S. Tinguely, “Sending a bivariate Gaussian over a Gaussian MAC,” IEEE Trans. Inform. Theory, vol. 56, no. 6, pp. 2714–2752, 2010.
[37] S. H. Lim, P. Minero, and Y.-H. Kim, “Lossy communication of correlated sources over multiple access channels,” in Proc. 48th Annual Allerton Conf. on Comm., Control and Computing, Monticello, IL, Sep. 2010.
