
1

Digital Communication Systems (ECS 452)

Asst. Prof. Dr. Prapun Suksompong (prapun@siit.tu.ac.th)

3 Discrete Memoryless Channel (DMC)

Office Hours: Check Google Calendar on the course website. Dr. Prapun's Office: 6th floor of Sirindhralai building, BKD

2

Digital Communication Systems (ECS 452)

Asst. Prof. Dr. Prapun Suksompong (prapun@siit.tu.ac.th)

3.1 DMC Models

3

Digital communication system

System considered back in CH2

System Under Consideration

4

Digital communication system

System considered in Section 3.1

System Under Consideration

Noisy Channel

5

Equivalent Channel

X: channel input

Y: channel output

x: 1111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000111111…

y: 1101101111011110111110111111101010011110111101111111100111111111111001111111011111001000000000101101…

[BSC_demo_image.m]

MATLAB

6

x_raw = imresize(imread('SIIT_25LOGO.png'),[150 150]);
x_raw = [rgb2gray(x_raw) > 128]; % convert to 0 and 1
n = prod(size(x_raw));
x = reshape(x_raw,1,n);          % convert to row vector

figure
x_img = reshape(x,size(x_raw));
imshow(x_img*255);

[BSC_demo_image.m]

7

MATLAB

8

%% Generating the channel input x
x = randsrc(1,n,[S_X;p_X]); % channel input

%% Applying the effect of the channel to create the channel output y
y = DMC_Channel_sim(x,S_X,S_Y,Q); % channel output

function y = DMC_Channel_sim(x,S_X,S_Y,Q)
%% Applying the effect of the channel to create the channel output y
y = zeros(size(x)); % preallocation
for k = 1:length(x)
    % Look at the channel input one by one. Choose the corresponding row
    % from the Q matrix to generate the channel output.
    y(k) = randsrc(1,1,[S_Y;Q(find(S_X == x(k)),:)]);
end

[DMC_Analysis_demo.m]

[DMC_Channel_sim.m]

Ex 3.3: BSC

9

>> BSC_demo
ans =
     1 0 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1
ans =
     1 1 1 1 1 1 1 1 1 0 1 1 0 1 0 1 1 1 1 1

Elapsed time is 0.134992 seconds.

%% Simulation parameters
% The number of symbols to be transmitted
n = 20;
% Channel Input
S_X = [0 1]; S_Y = [0 1];
p_X = [0.3 0.7];
% Channel Characteristics
p = 0.1; Q = [1-p p; p 1-p];

[BSC_demo.m]

p_X =
    0.3000    0.7000

p_X_sim =
    0.1500    0.8500

q =
    0.3400    0.6600

q_sim =
    0.1500    0.8500

Q =
    0.9000    0.1000
    0.1000    0.9000

Q_sim =
    0.6667    0.3333
    0.0588    0.9412

PE_sim =
    0.1000

PE_theretical =
    0.1000

Rel. freq. from the simulation

10

%% Statistical Analysis
% The probability values for the channel inputs
p_X                      % Theoretical probability
p_X_sim = hist(x,S_X)/n  % Relative frequencies from the simulation
% The probability values for the channel outputs
q = p_X*Q                % Theoretical probability
q_sim = hist(y,S_Y)/n    % Relative frequencies from the simulation
% The channel transition probabilities from the simulation
Q_sim = [];
for k = 1:length(S_X)
    I = find(x==S_X(k)); LI = length(I);
    rel_freq_Xk = LI/n;
    yc = y(I);
    cond_rel_freq = hist(yc,S_Y)/LI;
    Q_sim = [Q_sim; cond_rel_freq];
end
Q      % Theoretical probability
Q_sim  % Relative frequencies from the simulation

[DMC_Analysis_demo.m]

Ex 3.3: BSC

11

>> BSC_demo
ans =
     1 0 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1
ans =
     1 1 1 1 1 1 1 1 1 0 1 1 0 1 0 1 1 1 1 1

Elapsed time is 0.134992 seconds.

%% Simulation parameters
% The number of symbols to be transmitted
n = 20;
% Channel Input
S_X = [0 1]; S_Y = [0 1];
p_X = [0.3 0.7];
% Channel Characteristics
p = 0.1; Q = [1-p p; p 1-p];

[BSC_demo.m]

p_X =
    0.3000    0.7000

p_X_sim =
    0.1500    0.8500

q =
    0.3400    0.6600

q_sim =
    0.1500    0.8500

Q =
    0.9000    0.1000
    0.1000    0.9000

Q_sim =
    0.6667    0.3333
    0.0588    0.9412

PE_sim =
    0.1000

PE_theretical =
    0.1000

Because there are only 20 samples, we can’t expect the relative freq. from the simulation to match the specified or calculated probabilities.

Ex 3.3: BSC

12

%% Simulation parameters
% The number of symbols to be transmitted
n = 1e4;
% Channel Input
S_X = [0 1]; S_Y = [0 1];
p_X = [0.3 0.7];
% Channel Characteristics
p = 0.1; Q = [1-p p; p 1-p];

[BSC_demo.m]

>> BSC_demo

p_X =
    0.3000    0.7000

p_X_sim =
    0.3037    0.6963

q =
    0.3400    0.6600

q_sim =
    0.3407    0.6593

Elapsed time is 0.922728 seconds.

Q =
    0.9000    0.1000
    0.1000    0.9000

Q_sim =
    0.9078    0.0922
    0.0934    0.9066

PE_sim =
    0.0930

PE_theretical =
    0.1000

Another Noisy Channel

13

Equivalent Channel

X: channel input

Y: channel output

[DMC_demo_2.m]

x: TTTTTTTTTTTTTTTHTHTTTTTHTTTHTTTTTTTHTTTTHTTHTTTTTTTTTTTTTTTTHTTTHTTTHTHTTTTTTTTHHTTTTTTTTTHTTHTHTTTT

y: HEHHEHTTTHEETETHETHHHETTHEEHEEHEHTHHEHEETTHTHEETETETEEEEHHTHHETEHEETHTEEHEEHEHEEHHEHTTTEEHTTTHEHEEHH

Ex 3.10 & 3.12: DMC

14

p_X =
    0.2000    0.8000

p_X_sim =
    0.2000    0.8000

q =
    0.3400    0.3600    0.3000

q_sim =
    0.4000    0.3500    0.2500

Q =
    0.5000    0.2000    0.3000
    0.3000    0.4000    0.3000

Q_sim =
    0.7500         0    0.2500
    0.3125    0.4375    0.2500

>> sym(Q_sim)
ans =
[  3/4,    0, 1/4]
[ 5/16, 7/16, 1/4]

PE_sim =
    0.7500

>> DMC_demo
ans =
     1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 0 1 1 0 1
ans =
     1 3 2 2 1 2 1 2 2 3 1 1 1 3 1 3 2 3 1 2

(The first "ans" above is x; the second is y.)

[Channel diagram: X ∈ {0, 1}, Y ∈ {1, 2, 3} with Q = [0.5 0.2 0.3; 0.3 0.4 0.3].]

%% Simulation parameters
% The number of symbols to be transmitted
n = 20;
% General DMC
% Ex. 3.6 and 3.12 in lecture note
% Channel Input
S_X = [0 1]; S_Y = [1 2 3];
p_X = [0.2 0.8];
% Channel Characteristics
Q = [0.5 0.2 0.3; 0.3 0.4 0.3];

[DMC_demo.m]

p(0) = 0.2, p(1) = 0.8

Ex 3.10 & 3.13: DMC

15

>> p_X_sim * Q_sim
ans =
    0.4000    0.3500    0.2500

p_X =
    0.2000    0.8000

p_X_sim =
    0.2000    0.8000

q =
    0.3400    0.3600    0.3000

q_sim =
    0.4000    0.3500    0.2500

Q =
    0.5000    0.2000    0.3000
    0.3000    0.4000    0.3000

Q_sim =
    0.7500         0    0.2500
    0.3125    0.4375    0.2500

>> sym(Q_sim)
ans =
[  3/4,    0, 1/4]
[ 5/16, 7/16, 1/4]

PE_sim =
    0.7500

Observe that
1. the sum along any row of the matrix is 1;
2. q_sim = p_X_sim * Q_sim, i.e., the relationship q = p_X Q also holds for the relative frequencies obtained from the simulation.

Review: Evaluation of Probability from the Joint PMF Matrix

16

Consider two random variables X and Y.

Suppose their joint pmf matrix is

Find P[X + Y < 7].

Joint pmf matrix P_{X,Y} (rows: x ∈ {1, 3, 4, 6}; columns: y ∈ {2, 3, 4, 5, 6}):

 x\y    2     3     4     5     6
  1    0.1   0.1    0     0     0
  3    0.1    0     0    0.1    0
  4     0    0.1   0.2    0     0
  6     0     0     0     0    0.3

Step 1: Find the pairs (x, y) that satisfy the condition "x + y < 7".

One way to do this is to first construct the matrix of x + y values:

 x\y    2     3     4     5     6
  1     3     4     5     6     7
  3     5     6     7     8     9
  4     6     7     8     9    10
  6     8     9    10    11    12

Review: Evaluation of Probability from the Joint PMF Matrix

17

Consider two random variables X and Y.

Suppose their joint pmf matrix is

Find P[X + Y < 7].

(Same joint pmf matrix P_{X,Y} and x + y matrix as on the previous slide.)

Step 2: Add the corresponding probabilities from the joint pmf matrix:

P[X + Y < 7] = 0.1 + 0.1 + 0.1 = 0.3
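Below is a minimal MATLAB sketch of the two steps above (not one of the course scripts; the variable names are mine), using the joint pmf matrix from this example:

x = [1 3 4 6];  y = [2 3 4 5 6];      % supports of X and Y
P_XY = [0.1 0.1 0   0   0  ;
        0.1 0   0   0.1 0  ;
        0   0.1 0.2 0   0  ;
        0   0   0   0   0.3];         % joint pmf matrix (rows: x, columns: y)
[Y, X] = meshgrid(y, x);              % Step 1: build the matrix of x + y values
I = (X + Y < 7);                      %         pairs (x, y) satisfying x + y < 7
P = sum(P_XY(I))                      % Step 2: add the corresponding probabilities -> 0.3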

Review: Evaluation of Probability from the Joint PMF Matrix

18

Consider two random variables X and Y.

Suppose their joint pmf matrix is

Find

(Same joint pmf matrix P_{X,Y} as on the previous slides: rows x ∈ {1, 3, 4, 6}, columns y ∈ {2, 3, 4, 5, 6}.)

Review: Sum of two discrete RVs

19

Consider two random variables X and Y.

Suppose their joint pmf matrix is

Find P[X + Y = 7].

(Same joint pmf matrix P_{X,Y} and x + y matrix as above.)

P[X + Y = 7] = P_{X,Y}(4, 3) = 0.1

Summary: DMC

20

Quantities describing a DMC and their vector/matrix representations:

 Channel input alphabet S_X = {x_1, …, x_m}; channel output alphabet S_Y = {y_1, …, y_n}.

 P[X = x]           prior probabilities p(x)          row vector p = [p(x_1) … p(x_m)]
 P[Y = y | X = x]   transition probabilities Q(y|x)   matrix Q with [Q]_ij = Q(y_j | x_i)
 P[Y = y]           output probabilities q(y)         row vector q = [q(y_1) … q(y_n)] = p Q
 P[X = x, Y = y]    joint probabilities ("AND")       matrix P with [P]_ij = p_{X,Y}(x_i, y_j) = p(x_i) Q(y_j | x_i)

In particular, q(y_j) = Σ_{i=1}^{m} p(x_i) Q(y_j | x_i), i.e., q = p Q. The joint pmf matrix P is obtained by scaling row i of Q by p(x_i), i.e., P = diag(p) Q; put together, all values at corresponding positions in the P matrix give the joint pmf.

21

p_X_sim =
    0.2326    0.7674

q =
    0.4698    0.5302

q_sim =
    0.4672    0.5328

Q_sim =
    0.6928    0.3072
    0.3989    0.6011

[Channel diagram: BAC with Q(0|0) = 0.7, Q(1|0) = 0.3, Q(0|1) = 0.4, Q(1|1) = 0.6.]

S_X = [0 1]; S_Y = [0 1];
p_X = [0.5 0.5];
% Channel Characteristics
Q = [0.7 0.3; 0.4 0.6];
x_raw = imresize(imread('SIIT_25LOGO.png'),[150 150]);
x_raw = [rgb2gray(x_raw) > 128]; % convert to 0 and 1
x = reshape(x_raw,1,n);

[BAC_demo_image.m]

Ex 3.16: BAC

>> p = [0.5 0.5];
>> Q = [0.7 0.3; 0.4 0.6];
>> p*Q
ans =
    0.5500    0.4500
>> P = (diag(p))*Q
P =
    0.3500    0.1500
    0.2000    0.3000
>> sum(P)
ans =
    0.5500    0.4500

Ex 3.17: DMC

22

>> p = [0.2 0.8];
>> Q = [0.5 0.2 0.3; 0.3 0.4 0.3];
>> p*Q
ans =
    0.3400    0.3600    0.3000
>> P = (diag(p))*Q
P =
    0.1000    0.0400    0.0600
    0.2400    0.3200    0.2400
>> sum(P)
ans =
    0.3400    0.3600    0.3000

23

Digital Communication Systems (ECS 452)

Asst. Prof. Dr. Prapun Suksompong (prapun@siit.tu.ac.th)

3.2 Decoder and P(ℰ) for Naïve Decoder

24

[DMC_Analysis_demo.m]

%% Naive Decoder
x_hat = y;

%% Error Probability
PE_sim = 1-sum(x==x_hat)/n % Error probability from the simulation

% Calculation of the theoretical error probability
PC = 0;
for k = 1:length(S_X)
    t = S_X(k);
    i = find(S_Y == t);
    if length(i) == 1
        PC = PC+ p_X(k)*Q(k,i);
    end
end
PE_theretical = 1-PC

Formula derived in [3.23]
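In symbols (a restatement of the formula in [3.23] that the loop above implements, for the naïve decoder x̂ = y):

$$P(\mathcal{C}) = \sum_{x \in S_X \cap S_Y} p(x)\,Q(x \mid x), \qquad P(\mathcal{E}) = 1 - P(\mathcal{C}).$$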

[Ex. 3.24]: Naïve Decoder and BAC

25

p_X =
    0.5000    0.5000

p_X_sim =
    0.7000    0.3000

q =
    0.5500    0.4500

q_sim =
    0.6500    0.3500

Q =
    0.7000    0.3000
    0.4000    0.6000

Q_sim =
    0.7143    0.2857
    0.5000    0.5000

PE_sim =
    0.3500

PE_theretical =
    0.3500

>> BAC_demo
ans =
     0 0 0 1 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0
ans =
     0 0 1 1 0 0 0 1 1 1 0 0 1 0 0 0 0 0 1 0

(The first "ans" is x; the second is y.)

[Channel diagram: BAC with Q(0|0) = 0.7, Q(1|0) = 0.3, Q(0|1) = 0.4, Q(1|1) = 0.6.]

%% Simulation parameters
% The number of symbols to be transmitted
n = 20;
% Binary Asymmetric Channel (BAC)
% Ex 3.25 in lecture note (11.3 in [Z&T, 2010])
% Channel Input
S_X = [0 1]; S_Y = [0 1];
p_X = [0.5 0.5];
% Channel Characteristics
Q = [0.7 0.3; 0.4 0.6];

[BAC_demo.m]

p(0) = 0.5, p(1) = 0.5

720

26

p_X =
    0.5000    0.5000

p_X_sim =
    0.5043    0.4957

q =
    0.5500    0.4500

q_sim =
    0.5532    0.4468

Q =
    0.7000    0.3000
    0.4000    0.6000

Q_sim =
    0.7109    0.2891
    0.3928    0.6072

PE_sim =
    0.3405

PE_theretical =
    0.3500

[Channel diagram: BAC with Q(0|0) = 0.7, Q(1|0) = 0.3, Q(0|1) = 0.4, Q(1|1) = 0.6.]

%% Simulation parameters
% The number of symbols to be transmitted
n = 1e4;
% Binary Asymmetric Channel (BAC)
% Ex 3.24 in lecture note (11.3 in [Z&T, 2010])
% Channel Input
S_X = [0 1]; S_Y = [0 1];
p_X = [0.5 0.5];
% Channel Characteristics
Q = [0.7 0.3; 0.4 0.6];

p(0) = 0.5, p(1) = 0.5

[Images: the channel input x and the channel output y, each displayed as a 100 × 100 binary image.]

[BAC_demo.m]

[Ex. 3.24]: Naïve Decoder and BAC

[Ex. 3.16]:

27

p_X_sim =
    0.2326    0.7674

q =
    0.4698    0.5302

q_sim =
    0.4672    0.5328

Q =
    0.7000    0.3000
    0.4000    0.6000

Q_sim =
    0.6928    0.3072
    0.3989    0.6011

PE_sim =
    0.3776

PE_theretical =
    0.3767

[Channel diagram: BAC with Q(0|0) = 0.7, Q(1|0) = 0.3, Q(0|1) = 0.4, Q(1|1) = 0.6.]

S_X = [0 1]; S_Y = [0 1];
p_X = [0.5 0.5];
% Channel Characteristics
Q = [0.7 0.3; 0.4 0.6];
x_raw = imresize(imread('SIIT_25LOGO.png'),[150 150]);
x_raw = [rgb2gray(x_raw) > 128]; % convert to 0 and 1
x = reshape(x_raw,1,n);

[BAC_demo_image.m]

Naïve Decoder and BAC

[Ex. 3.25]: Naïve Decoder and DMC

28

p_X =
    0.2000    0.8000

p_X_sim =
    0.2000    0.8000

q =
    0.3400    0.3600    0.3000

q_sim =
    0.4000    0.3500    0.2500

Q =
    0.5000    0.2000    0.3000
    0.3000    0.4000    0.3000

Q_sim =
    0.7500         0    0.2500
    0.3125    0.4375    0.2500

PE_sim =
    0.7500

PE_theretical =
    0.7600

>> DMC_demo
ans =
     1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 0 1 1 0 1
ans =
     1 3 2 2 1 2 1 2 2 3 1 1 1 3 1 3 2 3 1 2

(The first "ans" is x; the second is y.)

[Channel diagram: X ∈ {0, 1}, Y ∈ {1, 2, 3} with Q = [0.5 0.2 0.3; 0.3 0.4 0.3].]

%% Simulation parameters
% The number of symbols to be transmitted
n = 20;
% General DMC
% Ex. 3.16 in lecture note
% Channel Input
S_X = [0 1]; S_Y = [1 2 3];
p_X = [0.2 0.8];
% Channel Characteristics
Q = [0.5 0.2 0.3; 0.3 0.4 0.3];

p(0) = 0.2, p(1) = 0.8

20 420

[Same samples as in Ex. 3.6]

[DMC_demo.m]

29

p_X =
    0.2000    0.8000

p_X_sim =
    0.2011    0.7989

q =
    0.3400    0.3600    0.3000

q_sim =
    0.3387    0.3607    0.3006

Q =
    0.5000    0.2000    0.3000
    0.3000    0.4000    0.3000

Q_sim =
    0.4943    0.1914    0.3143
    0.2995    0.4033    0.2972

PE_sim =
    0.7607

PE_theretical =
    0.7600

[Channel diagram: X ∈ {0, 1}, Y ∈ {1, 2, 3} with Q = [0.5 0.2 0.3; 0.3 0.4 0.3].]

%% Simulation parameters
% The number of symbols to be transmitted
n = 1e4;
% General DMC
% Ex. 3.16 in lecture note
% Channel Input
S_X = [0 1]; S_Y = [1 2 3];
p_X = [0.2 0.8];
% Channel Characteristics
Q = [0.5 0.2 0.3; 0.3 0.4 0.3];

p(0) = 0.2, p(1) = 0.8

[Images: the channel input x and the channel output y, each displayed as a 100 × 100 image.]

[DMC_demo.m]

[Ex. 3.25]: Naïve Decoder and DMC

System Model for Section 3.2-3.3

[Block diagram (Figure 11): Information Source → Source Encoder (remove redundancy) → Channel Encoder (add systematic redundancy) → Digital Modulator → Channel → Digital Demodulator → Channel Decoder (Detector) → Source Decoder → Destination. The Equivalent Channel runs from the channel input X to the channel output Y; the Receiver outputs the decoded value X̂ = x̂(Y).]

Want to minimize P[X̂ ≠ X].

[Block diagram as above, with the Equivalent Channel being the DMC of Ex. 3.12, 3.17, 3.25: X ∈ {0, 1}, Y ∈ {1, 2, 3}, Q = [0.5 0.2 0.3; 0.3 0.4 0.3].]

x: 10111110110111100101…
y: 21133122121231131311…

Receiver: X̂ = x̂(Y)

[Ex. 3.25]: Naïve Decoder

[Block diagram as above; the DMC takes X ∈ {0, 1} to Y ∈ {1, 2, 3} with Q = [0.5 0.2 0.3; 0.3 0.4 0.3].]

x: 10111110110111100101…
y: 21133122121231131311…

Naïve Decoder, Decoding Table: x̂(1) = 1, x̂(2) = 2, x̂(3) = 3 (i.e., x̂ = y).

x̂: 21133122121231131311…

P(ℰ) = P[X̂ ≠ X] = 0.76

[Ex. 3.26]: DIY Decoder

[Block diagram as above; the DMC takes X ∈ {0, 1} to Y ∈ {1, 2, 3} with Q = [0.5 0.2 0.3; 0.3 0.4 0.3].]

x: 10111110110111100101…
y: 21133122121231131311…

DIY Decoder, Decoding Table: x̂(1) = 0, x̂(2) = 1, x̂(3) = 0.

x̂: 10000011010100000000…

P(ℰ) = P[X̂ ≠ X] = 0.52

[Ex. 3.26]: DIY Decoder

34

>> DMC_decoder_DIY_demo
ans =
     1 0 1 1 1 1 1 0 1 1 0 1 1 1 1 0 0 1 0 1
ans =
     2 1 1 3 3 1 2 2 1 2 1 2 3 1 1 3 1 3 1 1
ans =
     1 0 0 0 0 0 1 1 0 1 0 1 0 0 0 0 0 0 0 0
PE_sim =
    0.5500
PE_theretical =
    0.5200
Elapsed time is 0.081161 seconds.

%% Simulation parameters
% The number of symbols to be transmitted
n = 20;
% General DMC
% Channel Input
S_X = [0 1]; S_Y = [1 2 3];
p_X = [0.2 0.8];
% Channel Characteristics
Q = [0.5 0.2 0.3; 0.3 0.4 0.3];

%% DIY Decoder
Decoder_Table = [0 1 0]; % The decoded values corresponding to the received Y

[DMC_decoder_DIY_demo.m]

(The three "ans" rows above are X, Y, and X̂, respectively.)

Decoding Table: x̂(1) = 0, x̂(2) = 1, x̂(3) = 0.

[Ex. 3.26]: DIY Decoder

35

%% DIY Decoder
Decoder_Table = [0 1 0]; % The decoded values corresponding to the received Y

% Decode according to the decoder table
x_hat = y; % preallocation
for k = 1:length(S_Y)
    I = (y==S_Y(k));
    x_hat(I) = Decoder_Table(k);
end

PE_sim = 1-sum(x==x_hat)/n % Error probability from the simulation

% Calculation of the theoretical error probability
PC = 0;
for k = 1:length(S_X)
    I = (Decoder_Table == S_X(k));
    q = Q(k,:);
    PC = PC+ p_X(k)*sum(q(I));
end
PE_theretical = 1-PC

[DMC_decoder_DIY_demo.m]

[Ex. 3.26]: DIY Decoder

[Block diagram as above; the DMC takes X ∈ {0, 1} to Y ∈ {1, 2, 3} with Q = [0.5 0.2 0.3; 0.3 0.4 0.3]. Decoding Table: x̂(1) = 0, x̂(2) = 1, x̂(3) = 0.]

P(ℰ) = P[X̂ ≠ X] = 0.52

[Images: the channel input X, the channel output Y, and the decoded output X̂, each displayed as a 100 × 100 image.]

[Ex. 3.26]: DIY Decoder

37

>> DMC_decoder_DIY_demo
PE_sim =
    0.5213
PE_theretical =
    0.5200
Elapsed time is 2.154024 seconds.

%% Simulation parameters
% The number of symbols to be transmitted
n = 1e4;
% General DMC
% Channel Input
S_X = [0 1]; S_Y = [1 2 3];
p_X = [0.2 0.8];
% Channel Characteristics
Q = [0.5 0.2 0.3; 0.3 0.4 0.3];

%% DIY Decoder
Decoder_Table = [0 1 0]; % The decoded values corresponding to the received Y

[Images: the channel input X, the channel output Y, and the decoded output X̂, each displayed as a 100 × 100 image.]

[DMC_decoder_DIY_demo.m]

P(ℰ) = P[X̂ ≠ X] = 0.52  (the images above are X, Y, and X̂)

Recipe for finding P(ℰ) of any decoder

38

 Use the P matrix. If unavailable, it can be found by scaling each row of the Q matrix by its corresponding p(x).
 Write the x̂(y) values on top of the y values of the P matrix.
 For each column y of the P matrix, circle the element whose corresponding x value is the same as x̂(y).
 P(C) = the sum of the circled probabilities.
 P(ℰ) = 1 − P(C).

[Ex. 3.27] For the DMC with X ∈ {0, 1}, Y ∈ {1, 2, 3}, p = [0.2 0.8], Q = [0.5 0.2 0.3; 0.3 0.4 0.3], and the decoder x̂(1) = 1, x̂(2) = 1, x̂(3) = 0:

 Q (x\y)    1     2     3        P (x\y)    1      2      3
    0      0.5   0.2   0.3          0      0.10   0.04   0.06
    1      0.3   0.4   0.3          1      0.24   0.32   0.24

 x̂(y):      1     1     0   (written above the columns y = 1, 2, 3 of P)

Circle 0.24 (y = 1), 0.32 (y = 2), 0.06 (y = 3):
P(C) = 0.24 + 0.32 + 0.06 = 0.62,  P(ℰ) = 1 − 0.62 = 0.38.

[3.28]
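Equivalently, in symbols (a restatement of the recipe above for a general decoding table x̂(·); terms with x̂(y) ∉ S_X contribute 0):

$$P(\mathcal{C}) = \sum_{y} P_{X,Y}\big(\hat{x}(y),\, y\big) = \sum_{y} p\big(\hat{x}(y)\big)\, Q\big(y \mid \hat{x}(y)\big), \qquad P(\mathcal{E}) = 1 - P(\mathcal{C}).$$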

39

Digital Communication Systems (ECS 452)

Asst. Prof. Dr. Prapun Suksompong (prapun@siit.tu.ac.th)

3.3 Optimal Decoder

40

% Channel Input
S_X = [0 1]; S_Y = [1 2 3];
%p_X = [0.6 0.4];
p_X = [0.2 0.8];
% Channel Characteristics
Q = [0.5 0.2 0.3; 0.3 0.4 0.3];

%% Construct all possible "reasonable" decoders
Decoder_Table_ALL = S_X';
for k = 2:length(S_Y);
    l = size(Decoder_Table_ALL,1);
    t = [];
    for i = 1:length(S_X)
        t = [t; [S_X(i)*ones(l,1) Decoder_Table_ALL]];
    end
    Decoder_Table_ALL = t;
end

Searching for the Optimal Detector

[DMC_decoder_ALL_demo.m]

41

%% Calculate the error probability for each of the decoders
PE_ALL = [];
for k = 1:size(Decoder_Table_ALL,1)
    Decoder_Table = Decoder_Table_ALL(k,:);
    PC = 0;
    for k = 1:length(S_X)
        I = (Decoder_Table == S_X(k));
        Q_row = Q(k,:);
        PC = PC + p_X(k)*sum(Q_row(I));
    end
    PE_theretical = 1-PC;
    PE_ALL = [PE_ALL; PE_theretical];
end

%% Display the results
[Decoder_Table_ALL PE_ALL]

%% Find the optimal detectors
Min_PE = min(PE_ALL)
I = [abs(PE_ALL-Min_PE)<1e-10];
Optimal_Detector = Decoder_Table_ALL(I,:)

Searching for the Optimal Detector

[DMC_decoder_ALL_demo.m]

Searching for the Optimal Detector

42

>> DMC_decoder_ALL_demo
ans =
         0         0         0    0.8000
         0         0    1.0000    0.6200
         0    1.0000         0    0.5200
         0    1.0000    1.0000    0.3400
    1.0000         0         0    0.6600
    1.0000         0    1.0000    0.4800
    1.0000    1.0000         0    0.3800
    1.0000    1.0000    1.0000    0.2000

Min_PE =
    0.2000

Optimal_Detector =
     1     1     1

(Columns: x̂(1), x̂(2), x̂(3), P(ℰ). The row [0 1 0] is the DIY decoder of Ex. 3.26; the row [1 1 0] is the decoder of Ex. 3.27.)

[DMC_decoder_ALL_demo.m]

Review: ECS315 (2019)

43

[http://www2.siit.tu.ac.th/prapun/ecs315/ECS315%206.1%20u6.pdf]

Guessing Game 1

44

There are 15 cards. Each has a number on it. Here are the 15 cards:

1 2 2 3 3 3 4 4 4 4 5 5 5 5 5

One card is randomly selected from the 15 cards.

You need to guess the number on the card.

You have to pay 1 Baht for each incorrect guess.

The game is to be repeated n = 10,000 times.

What should be your guess value?

45

close all; clear all;

n = 5; % number of times to play this game

D = [1 2 2 3 3 3 4 4 4 4 5 5 5 5 5];
X = D(randi(length(D),1,n));

if n <= 10
    X
end

g = 1
cost = sum(X ~= g)

if n > 1
    averageCostPerGame = cost/n
end

>> GuessingGame_4_1_1
X =
     3     5     1     2     5
g =
     1
cost =
     4
averageCostPerGame =
    0.8000

46

close all; clear all;

n = 5; % number of times to play this game

D = [1 2 2 3 3 3 4 4 4 4 5 5 5 5 5];
X = D(randi(length(D),1,n));

if n <= 10
    X
end

g = 3.3
cost = sum(X ~= g)

if n > 1
    averageCostPerGame = cost/n
end

>> GuessingGame_4_1_1
X =
     5     3     2     4     1
g =
    3.3000
cost =
     5
averageCostPerGame =
     1

47

close all; clear all;

n = 1e4; % number of times to play this game

D = [1 2 2 3 3 3 4 4 4 4 5 5 5 5 5];
X = D(randi(length(D),1,n));

if n <= 10
    X
end

g = ?
cost = sum(X ~= g)

if n > 1
    averageCostPerGame = cost/n
end

Guessing Game 1

48

[Plots: Average Cost Per Game (vertical axis, about 0.65 to 1) versus Guess value (horizontal axis); the average cost is smallest at the guess value 5.]

Guessing Game 1

49

Optimal Guess: The most-likely value
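A minimal MATLAB sketch (not one of the course scripts; the variable names are mine) that scans all candidate guesses and confirms that the most-likely value minimizes the average cost:

D = [1 2 2 3 3 3 4 4 4 4 5 5 5 5 5];
n = 1e4;                              % number of times to play this game
X = D(randi(length(D),1,n));          % the cards drawn in the n games
avgCost = zeros(1,5);
for g = 1:5
    avgCost(g) = sum(X ~= g)/n;       % average cost per game when always guessing g
end
[minCost, gOpt] = min(avgCost)        % gOpt = 5, the most-likely value (it appears 5/15 of the time)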

Summary: MAP vs. ML Decoders

50

MAP decoder
 The decoder is derived from the P matrix.
 Select (by circling) the maximum value in each column (for each value of y) of the P matrix.
 The corresponding x value is the value of x̂(y).

   x̂_MAP(y) = arg max_x p_{X,Y}(x, y)
            = arg max_x P[X = x | Y = y]   (the a posteriori probability)
            = arg max_x Q(y | x) p(x)      (likelihood × prior probability)

 The MAP decoder is optimal.

ML decoder
 The decoder is derived from the Q matrix.
 Select (by circling) the maximum value in each column (for each value of y) of the Q matrix.
 The corresponding x value is the value of x̂(y).

   x̂_ML(y) = arg max_x Q(y | x)            (the likelihood function)

 Optimal at least when p(x) is uniform (the channel inputs are equally likely).
 Can be derived without knowing the channel input probabilities.

Once the decoder (the decoding table) is derived, P(C) and P(ℰ) are calculated by adding the corresponding probabilities in the P matrix.
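As a quick check of this recipe, both decoding tables can be read off with MATLAB's column-wise max (a sketch consistent with the DMC_decoder_MAP_demo.m and DMC_decoder_ML_demo.m scripts shown later; the parameter values are those of the running example):

S_X = [0 1];  p_X = [0.2 0.8];
Q = [0.5 0.2 0.3; 0.3 0.4 0.3];
P = diag(p_X)*Q;                 % joint pmf matrix

[V, I] = max(P);                 % position of the maximum in each column of P
Decoder_Table_MAP = S_X(I)       % -> [1 1 1]

[V, I] = max(Q);                 % position of the maximum in each column of Q
Decoder_Table_ML = S_X(I)        % -> [0 1 0]  (column y = 3 is a tie; max picks the first row)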

Example: MAP Decoder [Ex. 3.46]

51

For the DMC with X ∈ {0, 1}, Y ∈ {1, 2, 3}, p = [0.2 0.8], Q = [0.5 0.2 0.3; 0.3 0.4 0.3]:

 Q (x\y)    1     2     3        P (x\y)    1      2      3
    0      0.5   0.2   0.3          0      0.10   0.04   0.06
    1      0.3   0.4   0.3          1      0.24   0.32   0.24

Selecting the maximum in each column of P gives x̂_MAP(1) = 1, x̂_MAP(2) = 1, x̂_MAP(3) = 1.

P(C) = 0.24 + 0.32 + 0.24 = 0.8,  P(ℰ) = 1 − P(C) = 0.2.

Example: ML Decoder [Ex. 3.47]

52

For the same DMC (p = [0.2 0.8], Q = [0.5 0.2 0.3; 0.3 0.4 0.3]):

 Q (x\y)    1     2     3        P (x\y)    1      2      3
    0      0.5   0.2   0.3          0      0.10   0.04   0.06
    1      0.3   0.4   0.3          1      0.24   0.32   0.24

Select the maximum in each column of Q. In column y = 3 there is a tie (0.3 = 0.3), so there are two ML decoders:

Sol 1: x̂_ML(1) = 0, x̂_ML(2) = 1, x̂_ML(3) = 0   [Same as Ex. 3.26]
       P(C) = 0.10 + 0.32 + 0.06 = 0.48,  P(ℰ) = 1 − P(C) = 0.52

Sol 2: x̂_ML(1) = 0, x̂_ML(2) = 1, x̂_ML(3) = 1
       P(C) = 0.10 + 0.32 + 0.24 = 0.66,  P(ℰ) = 1 − P(C) = 0.34

[Agree with the complete table in Ex. 3.32]

MAP Decoder

53

%% MAP Decoder
P = diag(p_X)*Q; % Weight the channel transition probability by the
                 % corresponding prior probability.
[V I] = max(P);  % For I, the default MATLAB behavior is that when there are
                 % multiple max, the index of the first one is returned.
Decoder_Table = S_X(I) % The decoded values corresponding to the received Y

%% Decode according to the decoder table
x_hat = y; % preallocation
for k = 1:length(S_Y)
    I = (y==S_Y(k));
    x_hat(I) = Decoder_Table(k);
end

PE_sim = 1-sum(x==x_hat)/n % Error probability from the simulation

%% Calculation of the theoretical error probability
PC = 0;
for k = 1:length(S_X)
    I = (Decoder_Table == S_X(k));
    Q_row = Q(k,:);
    PC = PC+ p_X(k)*sum(Q_row(I));
end
PE_theretical = 1-PC

[DMC_decoder_MAP_demo.m]

ML Decoder

54

%% ML Decoder
[V I] = max(Q);  % For I, the default MATLAB behavior is that when there are
                 % multiple max, the index of the first one is returned.
Decoder_Table = S_X(I) % The decoded values corresponding to the received Y

%% Decode according to the decoder table
x_hat = y; % preallocation
for k = 1:length(S_Y)
    I = (y==S_Y(k));
    x_hat(I) = Decoder_Table(k);
end

PE_sim = 1-sum(x==x_hat)/n % Error probability from the simulation

%% Calculation of the theoretical error probability
PC = 0;
for k = 1:length(S_X)
    I = (Decoder_Table == S_X(k));
    Q_row = Q(k,:);
    PC = PC+ p_X(k)*sum(Q_row(I));
end
PE_theretical = 1-PC

[DMC_decoder_ML_demo.m]

Different Decoders

55

  y    x̂naïve(y)   x̂DIY(y)   x̂MAP(y)   x̂ML(y)
  1        1           1          1         0
  2        2           1          1         1
  3        3           0          1         0

For the naïve decoder, x̂naïve(y) = y. Therefore, we can simply copy the values in the y-column into its decoding table. For the DIY decoder, the decoding table is provided.

(Alternatively, some equation may be given, in which case we can simply plug in each of the possible y values to get x̂DIY(y).)

Deriving the MAP and ML decoders follows almost the same recipe: select the max value in each column and read off the corresponding x-value. The difference is that the MAP decoder uses the P matrix while the ML decoder uses the Q matrix.

 Q (x\y)    1     2     3        P (x\y)    1      2      3
    0      0.5   0.2   0.3          0      0.10   0.04   0.06
    1      0.3   0.4   0.3          1      0.24   0.32   0.24

p(0) = 0.2, p(1) = 0.8

[Ex. 3.25] [Ex. 3.27] [Ex. 3.35] [Ex. 3.46]

Finding P(ℰ)

56

Once a decoder x̂(y) is defined, we can find its corresponding P(ℰ) easily from the P matrix:
 Write the x̂(y) values on top of the y values of the P matrix.
 For each column y in the P matrix, circle the element whose corresponding x value is the same as x̂(y).
 P(C) = the sum of the circled probabilities.
 P(ℰ) = 1 − P(C).

  y    x̂naïve(y)   x̂DIY(y)   x̂MAP(y)   x̂ML(y)
  1        1           1          1         0
  2        2           1          1         1
  3        3           0          1         0

P(ℰ)     0.76        0.38       0.20      0.52
      [Ex. 3.25]  [Ex. 3.27]  [Ex. 3.35]  [Ex. 3.46]

With P (x\y) =
    0      0.10   0.04   0.06
    1      0.24   0.32   0.24

 naïve (x̂ = [1 2 3]): circle 0.24 only (the decoded values 2 and 3 are not in S_X), so P(C) = 0.24 and P(ℰ) = 0.76.
 DIY (x̂ = [1 1 0]): circle 0.24, 0.32, 0.06, so P(C) = 0.62 and P(ℰ) = 0.38.
 MAP (x̂ = [1 1 1]): circle 0.24, 0.32, 0.24, so P(C) = 0.80 and P(ℰ) = 0.20.
 ML (x̂ = [0 1 0]): circle 0.10, 0.32, 0.06, so P(C) = 0.48 and P(ℰ) = 0.52.
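A quick numerical check of the table above (a sketch, not one of the course scripts; the variable names are mine):

S_X = [0 1];  p_X = [0.2 0.8];
Q = [0.5 0.2 0.3; 0.3 0.4 0.3];
P = diag(p_X)*Q;                                 % joint pmf matrix (rows: x = 0, 1; columns: y = 1, 2, 3)

tables = {[1 2 3], [1 1 0], [1 1 1], [0 1 0]};   % naive, DIY (Ex. 3.27), MAP, ML
PE = zeros(1, length(tables));
for t = 1:length(tables)
    x_hat = tables{t};
    PC = 0;
    for j = 1:size(P,2)                          % "circle" P(x_hat(j), j) if x_hat(j) is a valid input
        i = find(S_X == x_hat(j));
        if ~isempty(i)
            PC = PC + P(i, j);
        end
    end
    PE(t) = 1 - PC;
end
PE                                               % -> 0.7600  0.3800  0.2000  0.5200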

63

Digital Communication Systems (ECS 452)

Asst. Prof. Dr. Prapun Suksompong (prapun@siit.tu.ac.th)

3.4 Optimal Block Decoding for Communications Over BSC

System Model for Section 3.2-3.3

[Block diagram (Figure 11): Information Source → Source Encoder (remove redundancy) → Channel Encoder (add systematic redundancy) → Digital Modulator → Channel → Digital Demodulator → Channel Decoder (Detector) → Source Decoder → Destination, with the Equivalent Channel running from the channel input X to the channel output Y.]

System Model for Section 3.4

[Same block diagram; X: channel input, Y: channel output.]

Some results from Section 3.3-3.4

66

 The MAP decoder is optimal. (It minimizes P(ℰ).)

 The ML decoder is suboptimal. However, in many cases, it can be optimal (the same as the MAP decoder), for example, when the codewords are equally likely.

 The ML decoder is the same as the minimum-distance decoder when the crossover probability p of the BSC is < 0.5 (which is usually the case).

[Example 3.65]
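A one-line justification of the last claim (a brief reasoning step, not spelled out on the slide): for a BSC with crossover probability p used n times, the likelihood depends on the codeword x only through the Hamming distance d(x, y),

$$Q(\mathbf{y}\mid\mathbf{x}) = p^{d(\mathbf{x},\mathbf{y})}(1-p)^{\,n-d(\mathbf{x},\mathbf{y})} = (1-p)^{n}\left(\frac{p}{1-p}\right)^{d(\mathbf{x},\mathbf{y})},$$

and p/(1 − p) < 1 when p < 0.5, so maximizing the likelihood over the codewords is the same as minimizing d(x, y).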

Summary

67

 MAP decoder (optimal):
   x̂_MAP(y) = arg max_x P[X = x | Y = y] = arg max_x Q(y | x) p(x)

 ML decoder:
   x̂_ML(y) = arg max_x P[Y = y | X = x] = arg max_x Q(y | x)

 Minimum-distance decoder:
   x̂_min(y) = arg min_x d(x, y)

 For a BSC with crossover probability p used n times,
   Q(y | x) = p^d(x,y) (1 − p)^(n − d(x,y)).

 For a BSC, the ML decoder is the same as the MAP decoder when the codewords are equally likely. [Ex 3.54]

 For a BSC with p < 0.5, the ML decoder is the same as the minimum-distance decoder. [Example 3.65] [Ex 3.56]

68

Digital Communication Systems (ECS 452)

Asst. Prof. Dr. Prapun Suksompong (prapun@siit.tu.ac.th)

3.5 Repetition Code in Communications Over BSC

System Model for Section 3.2-3.3

[Block diagram (Figure 11), as before: the Equivalent Channel runs from the channel input X to the channel output Y, between the Channel Encoder and the Channel Decoder (Detector).]

System Model for Section 3.4

[Same block diagram; X: channel input, Y: channel output.]

System Model for Section 3.5

[Block diagram: Information Source → Source Encoder (remove redundancy) → Channel Encoder (add systematic redundancy) → Equivalent Channel (X: channel input, Y: channel output) → Channel Decoder (Detector) → Source Decoder → Destination.]

Channel Encoder

[3.61] Block Encoding

[The Channel Encoder maps each block of k (information) bits into a block of n (coded) bits X.]

(n, k) code.  Code rate = k/n.

Channel Encoder

[3.62] Block Encoding

[The Channel Encoder maps each block of k (information) bits into a block of n (coded) bits X.]

(n, k) code.  Code rate = k/n.

Example: Repetition Code

Channel Encoder

[Each 1-bit block is mapped into a 5-bit block X.]

(5, 1) code.  Code rate = 1/5.

Message bits 1 0 1 are encoded as 11111 00000 11111.

Channel Encoder

[3.61] Block Encoding

[The Channel Encoder maps each k-bit block S into an n-bit codeword X according to a codebook.]

Codebook: messages s⁽¹⁾, s⁽²⁾, …, s⁽ᴹ⁾ are assigned codewords x⁽¹⁾, x⁽²⁾, …, x⁽ᴹ⁾.

There are M = 2^k possible messages. Choose M = 2^k out of the 2^n possible n-bit blocks to be used as codewords.

[Figure 13]

Hypercube

75 https://www.youtube.com/watch?v=Q_B5GpsbSQw

Hypercube

76 https://www.youtube.com/watch?v=Q_B5GpsbSQw

n-bit space

77

n = 1

n = 2

n = 3

n = 4

Repetition Code

78

[From ECS315]

79

n = 5

Channel Encoder and Decoder

80

[Block diagram (Figure 14): the Channel Encoder (add systematic redundancy) and the Channel Decoder are wrapped around the equivalent channel from X to Y, which is a BSC with crossover probability p. The channel-encoder input is S and the channel-decoder output is Ŝ.]

Better Equivalent Channel

81

[Block diagram (Figure 14): grouping the Channel Encoder, the original Equivalent Channel, and the Channel Decoder together gives a Better Equivalent Channel from S to Ŝ.]

Review: Channel Encoder and Decoder

82

[Block diagram: the Channel Encoder (add systematic redundancy) maps each k-bit message (data block) s⁽¹⁾, …, s⁽ᴹ⁾ into an n-bit codeword x⁽¹⁾, …, x⁽ᴹ⁾ chosen from the 2^n possible n-bit blocks (M = 2^k); the codewords are sent, via the Digital Modulator and Demodulator, over a Binary Symmetric Channel with p < 0.5; the Channel Decoder recovers the k-bit message.]

Repetition Code

83

Original Equivalent Channel:
 BSC with crossover probability p.

New (and Better) Equivalent Channel:
 Use the repetition code with n = 5 at the transmitter.
 Use majority vote at the receiver.
 The result is a new BSC with crossover probability
   C(5,3) p³(1−p)² + C(5,4) p⁴(1−p) + C(5,5) p⁵ = 10 p³(1−p)² + 5 p⁴(1−p) + p⁵.

[Diagram (Figure 14): Repetition Code with n = 5 → BSC with crossover probability p (used 5 times) → Majority Vote; the cascade behaves as a single new BSC.]

Example: Repetition Code

84

With p = 0.1:

  n     new crossover probability
  1     p = 0.1
  3     C(3,2) p²(1−p) + C(3,3) p³ ≈ 0.0280
  5     C(5,3) p³(1−p)² + C(5,4) p⁴(1−p) + C(5,5) p⁵ ≈ 0.0086
  7     ≈ 0.0027
  9     ≈ 8.9092 × 10⁻⁴
  11    ≈ 2.9571 × 10⁻⁴

[Diagram (Figure 14), as on the previous slide: Repetition Code with n = 5 → BSC → Majority Vote. [ECS315]]
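A minimal MATLAB sketch (not one of the course scripts) that reproduces the table above: the new crossover probability is the probability that a majority of the n BSC uses are in error.

p = 0.1;
for n = 1:2:11
    k = ceil(n/2):n;                 % numbers of bit errors that defeat the majority vote
    terms = arrayfun(@(kk) nchoosek(n,kk) * p^kk * (1-p)^(n-kk), k);
    fprintf('n = %2d: new crossover probability = %.4e\n', n, sum(terms));
end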

MATLAB: calc. min dist. dec.

85

On the course website, a MATLAB script is provided.

It finds P(ℰ) of any binary block code.

Assumptions:
 BSC with p < 0.5.
 Codewords are equally likely.
 The (optimal) decoder used is the minimum distance decoder.

(MAP decoder is optimal; codewords equally likely ⇒ the ML decoder is optimal; BSC with p < 0.5 ⇒ the minimum distance decoder is the ML decoder, hence optimal.)

MATLAB

86

close all; clear all;

% ECS315 2019 Example 6.58
% ECS452 2019 Example 3.65
C = [0 0 0 0 0; 1 1 1 1 1]; % repetition code

p = (1/100);
PE_minDist(C,p)

>> PE_minDist_demo1
ans =
   9.8506e-06

Code C is defined by putting all its (valid) codewords as its rows. For repetition code, there are two codewords: 00..0 and 11..1.

Crossover probability of the binary symmetric channel.

MATLAB

87

function PE = PE_minDist(C,p)
% Function PE_minDist computes the error probability P(E) when code C
% is used for transmission over BSC with crossover probability p.
% Code C is defined by putting all its (valid) codewords as its rows.
M = size(C,1);
k = log2(M);
n = size(C,2);

% Generate all possible received vectors
Y = dec2bin(0:2^n-1)-'0';

% Normally, we need to construct an extended Q matrix. However, because
% each conditional probability in there is a decreasing function of the
% Hamming distance, we can work with the distances instead of the
% conditional probability. In particular, instead of selecting the max in
% each column of the Q matrix, we consider min distance in each column.
dmin = zeros(1,2^n);
for j = 1:(2^n)
    % for each received vector y,
    y = Y(j,:);
    % find the minimum distance (the distance from y to the closest
    % codeword)
    d = sum(mod(bsxfun(@plus,y,C),2),2);
    dmin(j) = min(d);
end

% From the distances, calculate the conditional probabilities.
% Note that we compute only the values that are to be selected (instead of
% calculating the whole Q first).
n1 = dmin; n0 = n-dmin;
Qmax = (p.^n1).*((1-p).^n0);
% Scale the conditional probabilities by the input probabilities and add
% the values. Note that we assume equally likely input.
PC = sum((1/M)*Qmax);
PE = 1-PC;
end

MATLAB

88

MATLAB

89

close all; clear all;

% ECS315 Example 6.58
% ECS452 Example 3.65
C = [0 0 0 0 0; 1 1 1 1 1];

syms p;
PE = PE_minDist(C,p)
pp = linspace(0,0.5,100);
PE = subs(PE,p,pp);
plot(pp,PE,'LineWidth',1.5)
xlabel('p')
ylabel('P(E)')
grid on

>> PE_minDist_demo2

PE =
(p - 1)^5 + 10*p^2*(p - 1)^3 - 5*p*(p - 1)^4 + 1

[Plot: P(E) versus p (both axes from 0 to 0.5) for the (5,1) repetition code with minimum-distance decoding.]

Example 3.66

90

Codewords: x⁽¹⁾ = 011, x⁽²⁾ = 100, equally likely (p(x) = 0.5 each); BSC with crossover probability p = 0.2 used n = 3 times.

d(x, y):

 x\y    000  001  010  011  100  101  110  111
 011     2    1    1    0    3    2    2    1
 100     1    2    2    3    0    1    1    2

Q(y|x) = p^d(x,y) (1 − p)^(n − d(x,y)):

 x\y    000    001    010    011    100    101    110    111
 011   0.032  0.128  0.128  0.512  0.008  0.032  0.032  0.128
 100   0.128  0.032  0.032  0.008  0.512  0.128  0.128  0.032

P = 0.5 Q:

 x\y    000    001    010    011    100    101    110    111
 011   0.016  0.064  0.064  0.256  0.004  0.016  0.016  0.064
 100   0.064  0.016  0.016  0.004  0.256  0.064  0.064  0.016

Selecting the maximum (closest codeword) in each column of P and adding gives

P(C) = 0.8960,  P(ℰ) = 0.1040.

[Ex 3.66]

Recommended