
exercise in the previous class

Consider the following code C.
- determine the weight distribution of C
- compute the "three" probabilities (p. 7), and draw a graph

1

    | 1 0 0 0 1 1 1 0 |
H = | 0 1 0 0 1 1 0 1 |
    | 0 0 1 0 1 0 1 1 |
    | 0 0 0 1 0 1 1 1 |

We want to construct a cyclic code with n = 7, k = 4, and m = 3.
- confirm that G(x) = x^3 + x^2 + 1 can be a generator polynomial
- encode 0110
- decide if 0001100 is a correct codeword or not

Answers: http://apal.naist.jp/~kaji/lecture/
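For the cyclic-code exercise above, everything reduces to polynomial multiplication and division over GF(2). The sketch below (Python; not part of the lecture, and the convention mapping bit strings such as 0110 to polynomials is my assumption) shows how each question can be checked mechanically:

def gf2_mod(a, b):
    """Remainder of a(x) / b(x) over GF(2); ints carry the coefficients (bit i = coeff of x^i)."""
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_mul(a, b):
    """Product a(x) * b(x) over GF(2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

G = 0b1101                               # G(x) = x^3 + x^2 + 1
print(gf2_mod((1 << 7) ^ 1, G) == 0)     # True: G(x) divides x^7 + 1, so it can generate a cyclic code with n = 7
c = gf2_mul(0b0110, G)                   # non-systematic encoding c(x) = d(x) G(x) of the data 0110
print(format(c, "07b"))                  # the resulting 7-bit codeword
print(gf2_mod(0b0001100, G) == 0)        # a word is a codeword iff its polynomial is divisible by G(x)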


today’s class

soft-decision decoding: make use of "more information" from the channel

convolutional codes: good for soft-decision decoding

Shannon's channel coding theorem

error correcting codes of the next generation

2


the channel and modulation

So far, we have considered "digital channels". At the physical layer, however, almost all channels are continuous.

digital channel = modulator + continuous channel + demodulator

3

[figure: digital channel = modulator + continuous (analogue) channel + demodulator; the bit sequence 0010 is modulated into a waveform and demodulated back to 0010]

a naive demodulator translates the waveform to 0 or 1, at the risk of possible errors


“more informative” demodulator

From the viewpoint of error correction, the waveform contains more information than the binary output of the demodulator.

demodulators with multi-level output can help error correction.

4

[figure: demodulator output levels across the signal range: definitely 0 / maybe 0 / maybe 1 / definitely 1]

to make use of this multi-level demodulator, the decoding algorithm must be able to handle multi-level inputs


hard-decision vs. soft-decision

hard-decision decoding:
- the input to the decoder is binary (0 or 1)
- the decoding algorithms discussed so far are of the hard-decision type

soft-decision decoding:
- the input to the decoder can have three or more levels
- the "check matrix and syndrome" approach does not work
- more powerful, BUT more complicated

5


formalization of soft-decision decoding

outputs of the demodulator: 0+ (definitely 0), 0- (maybe 0), 1- (maybe 1), 1+ (definitely 1)

code C = {00000, 01011, 10101, 11110}
for a received vector 0- 0+ 1+ 0- 1-, find the codeword which minimizes the penalty.

6

received         0+   0-   1-   1+
penalty of "0"    0    1    2    3
penalty of "1"    3    2    1    0

(hard-decision ... penalty = Hamming distance)

r  = 0- 0+ 1+ 0- 1-
c0 = 0  0  0  0  0    penalty = 1 + 0 + 3 + 1 + 2 = 7
c2 = 1  0  1  0  1    penalty = 2 + 0 + 0 + 1 + 1 = 4
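With the penalty table above, soft-decision decoding is a minimization over codewords. A minimal Python sketch of the exhaustive search for this five-bit example (the names are mine, not from the lecture):

PENALTY = {"0+": (0, 3), "0-": (1, 2), "1-": (2, 1), "1+": (3, 0)}   # symbol -> (penalty of "0", penalty of "1")

def penalty(codeword, received):
    """Sum of the per-position penalties of the codeword bits against the received symbols."""
    return sum(PENALTY[r][int(c)] for c, r in zip(codeword, received))

C = ["00000", "01011", "10101", "11110"]
r = ["0-", "0+", "1+", "0-", "1-"]
for c in C:
    print(c, penalty(c, r))                      # 00000 -> 7, 01011 -> 10, 10101 -> 4, 11110 -> 9
print(min(C, key=lambda c: penalty(c, r)))       # 10101: the codeword with the minimum penalty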


algorithms for soft-decision decoding

We have just formalized the problem... but how can we solve it?
- by exhaustive search? ... not practical for codes with many codewords
- by matrix operation? ... yet another formalization which is difficult to solve
- by approximation? ... yes, this is one practical approach

anyway... design special codes whose soft-decision decoding is not "too difficult":

convolutional codes

7


convolutional codes

the codes we have studied so far... block codes:
- a block of k-bit data is encoded to a codeword of length n
- the encoding is done independently from block to block

convolutional codes:
- encoding is done in a bit-by-bit manner
- previous inputs are stored in shift registers in the encoder, and affect future encoding

8

[figure: encoder block diagram: the input data feeds shift registers and combinatorial logic, which produces the encoder outputs]


encoding of a convolutional code

- at the beginning, the contents of the registers are all 0
- when a data bit is given, the encoder outputs several bits, and the contents of the registers are shifted by one bit
- after encoding, give 0s until all registers hold 0

9

[figure: example encoder with shift registers r3 r2 r1]

encoder example: constraint length = 3 (= # of registers); the output is constrained by the three previous input bits


encoding example (1)

to encode 1101...

10

registers (r3 r2 r1)   input   output
0 0 0                  1       11
0 0 1                  1       10
0 1 1                  0       01
1 1 0                  1       10

give additional 0s to push out the 1s remaining in the registers...


encoding example (2)

11

registers (r3 r2 r1)   input   output
1 0 1                  0       00
0 1 0                  0       00
1 0 0                  0       01

the complete output is 11 10 01 10 00 00 01
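The register wiring is not fully legible in this copy of the slides, but the example output 11 10 01 10 00 00 01 for input 1101 is reproduced by a rate-1/2 encoder whose output pair is (u, u xor r1 xor r3); treat that wiring as an inference from the example, not as the slides' definition. A Python sketch:

def conv_encode(data_bits):
    """Rate-1/2 encoder, constraint length 3; r1 holds the most recent input bit.
    Three 0s follow the data bits to flush the registers back to the all-zero state."""
    r1 = r2 = r3 = 0
    out = []
    for u in data_bits + [0, 0, 0]:
        out.append(f"{u}{u ^ r1 ^ r3}")   # two output bits per input bit
        r1, r2, r3 = u, r1, r2            # shift the registers by one bit
    return " ".join(out)

print(conv_encode([1, 1, 0, 1]))          # 11 10 01 10 00 00 01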


encoder as a finite-state machine

constraint length k: the encoder has 2^k internal states

12

[figure: a 2-register encoder (r2 r1) and its state-transition diagram over the states s0, s1, s2, s3, with each edge labeled input/output]

the encoder is a finite-state machine which:
- has initial state s0
- makes one transition for each input bit
- returns to s0 after all data bits are provided

internal state = (r2, r1): s0 = (0, 0), s1 = (0, 1), s2 = (1, 0), s3 = (1, 1)
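The edge labels of the state diagram are only partly legible here; they are consistent with the common rate-1/2 encoder with generator polynomials (7, 5) in octal, so the sketch below assumes that wiring (an assumption, not the slide's stated definition) to enumerate the 2^k = 4 states and their transitions:

def fsm_table():
    """State = (r2, r1); assumed outputs (u ^ r1 ^ r2, u ^ r2), i.e. octal (7, 5)."""
    table = {}
    for r2 in (0, 1):
        for r1 in (0, 1):
            for u in (0, 1):
                table[((r2, r1), u)] = ((r1, u), (u ^ r1 ^ r2, u ^ r2))   # (next state, output pair)
    return table

for (state, u), (nstate, out) in sorted(fsm_table().items()):
    print(f"{state} --{u}/{out[0]}{out[1]}--> {nstate}")
# e.g. (0, 0) --0/00--> (0, 0) and (0, 0) --1/11--> (0, 1): the self-loop at s0 and the edge s0 -> s1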


at the receiver’s end

the receiver knows...
- the definition of the encoder (finite-state machine)
- that the encoder starts and ends at the state s0
- the transmitted sequence, which can be corrupted by errors

13

[figure: the encoder sends 01001..., the receiver observes 01100...: errors introduced by the channel]

to correct errors = to estimate the "real" transitions of the encoder ... an estimation problem on a hidden Markov model (HMM)


trellis diagram

a trellis diagram is obtained by expanding the transitions of the encoder along the time axis

14

possible encoding sequences (of length 5) = the set of paths connecting s0,0 and s5,0

the transmitted sequence = the path most likely given the received sequence = the path with the minimum penalty

[figure: the state diagram (s0, s1, s2) expanded into a trellis over time steps 0 to 5, from s0,0 to s5,0]


Viterbi algorithm

given a received sequence...
- the demodulator defines penalties for the symbols at each position
- the penalties are assigned to the edges of the trellis diagram
- find the path with the minimum penalty using a good algorithm

Viterbi algorithm:
- the Dijkstra algorithm for HMMs
- recursive breadth-first search

15

Andrew Viterbi (1935-)

[figure: two paths arriving at a state: one with accumulated penalty pA and edge penalty qA, the other with pB and qB]

the minimum penalty of this state is min(pA + qA, pB + qB)
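A compact Python sketch of this recursion, reusing the encoder wiring inferred in the encoding example; plain Hamming-distance penalties are used for brevity, but a soft-decision penalty table plugs in without changing the structure. For simplicity the surviving path is stored at each state instead of backtracking pointers:

def step(state, u):
    """One encoder transition; state = (r1, r2, r3) as in the encoding example."""
    r1, r2, r3 = state
    return (u, r1, r2), (u, u ^ r1 ^ r3)          # (next state, output pair)

def viterbi(received):
    """received: list of 2-bit tuples.  Finds the minimum-penalty path that starts
    and ends in the all-zero state; returns (decoded data bits, total penalty)."""
    best = {(0, 0, 0): (0, [])}                   # state -> (penalty so far, survivor path)
    for pair in received:
        nxt = {}
        for state, (pen, bits) in best.items():
            for u in (0, 1):
                ns, out = step(state, u)
                p = pen + (out[0] != pair[0]) + (out[1] != pair[1])
                if ns not in nxt or p < nxt[ns][0]:
                    nxt[ns] = (p, bits + [u])     # keep only the cheaper of the merging paths
        best = nxt
    pen, bits = best[(0, 0, 0)]                   # the encoder must end in the all-zero state
    return bits[:-3], pen                         # drop the three flushing 0s

rx = [(1, 1), (1, 0), (0, 1), (1, 1), (0, 0), (0, 0), (0, 1)]   # the example output with one bit flipped
print(viterbi(rx))                                # ([1, 1, 0, 1], 1): 1101 is recovered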


soft-decision decoding for convolutional codes

the complexity of the Viterbi algorithm ≈ the size of the trellis diagram

for convolutional codes (with constraint length k):
- the size of the trellis ≈ 2^k × data length ... manageable
- we can extract 100% of the performance of the code

for block codes:
- the size of the trellis ≈ 2^(data length) ... too large
- it is difficult to extract the full performance

16

[figure: performance comparison: block codes give good performance but are difficult to use; convolutional codes give moderate performance with their full power available]


summary of convolutional codes

advantages:
- the encoder is realized by shift registers (as for cyclic codes)
- soft-decision decoding is practically realizable

disadvantage:
- no good algorithm for constructing good codes
- code design = the wiring of the register outputs, found in a "trial-and-error" manner with computers

17


channel capacity

mutual information I(X; Y):
- X and Y: the input and output of the channel
- the average information about X given by Y
- depends on the statistical behavior of the input X

channel capacity = max I(X; Y)... the maximum performance achievable by the channel

18

[figure: X → channel → Y]


computation of the channel capacity

the channel capacity of the BSC with bit error probability p: if the input is X ∈ {0, 1} with P(0) = q, P(1) = 1 - q, then

19

I(X;Y) = p log p + (1 - p) log(1 - p)
         - (p + q - 2pq) log(p + q - 2pq)
         - (1 - p - q + 2pq) log(1 - p - q + 2pq)

the maximum of I(X; Y) is attained at q = 0.5, giving

C = max I(X;Y) = p log p + (1 - p) log(1 - p) + 1

[figure: plot of C against p: C = 1 at p = 0 and p = 1, and C = 0 at p = 0.5]
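A short numerical check of these formulas in Python (base-2 logarithms); the grid search over q is only there to confirm that q = 0.5 attains the maximum:

from math import log2

def h(x):
    """-x log2 x, with the convention h(0) = 0."""
    return -x * log2(x) if x > 0 else 0.0

def mutual_information(p, q):
    y0 = p + q - 2 * p * q                          # P(Y = 0)
    return h(y0) + h(1 - y0) - h(p) - h(1 - p)      # H(Y) - H(Y|X), the formula above

p = 0.1
best_q = max((i / 1000 for i in range(1001)), key=lambda q: mutual_information(p, q))
C = 1 + p * log2(p) + (1 - p) * log2(1 - p)         # the closed form at q = 0.5
print(best_q, mutual_information(p, 0.5), C)        # 0.5  0.531...  0.531...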


Shannon’s channel coding theorem

code rate R = log2(# of codewords) / (code length)

(= k / n for an (n, k) binary linear code)

Shannon's channel coding theorem: consider a communication channel with capacity C;
- for any rate R ≤ C, there exists a code of rate R with which the error probability at the receiver → 0
- no such code exists if the code rate R > C

The theorem says that such a code exists "somewhere"... but how can we get there?

20


investigation towards the Shannon limit

by the mid 1990s ... the "combine several codes" approach

concatenated codes:
- apply multiple error correcting codes sequentially
- typically: Reed-Solomon + convolutional codes

21

[figure: encoder 1 (outer code, RS code) → encoder 2 (inner code, conv. code) → channel → decoder 2 → decoder 1]


product code

product code: a 2D code, with component codes applied to the rows and to the columns to give a more powerful code

22

[figure: a 3×3 array of data bits, extended with parities of code 1 on the rows and parities of code 2 on the columns; decoding of code 1 and decoding of code 2 are applied in sequence]

possible problem: in the decoding of code 1, the received information is not considered


idea for the breakthrough

let the decoder see two inputs: the result of the previous stage + the received information

feed back the decoding result of code 1, and try to decode code 2

exchange the decoding results between the two decoders iteratively

23

[figure: the same product-code array as on p. 22, with the two decoders exchanging their decoding results]


the iterative decoding

idea: product code + iterative decoding

24

[figure: the received sequence feeds the decoder for code 1 and the decoder for code 2, which exchange their results to produce the decoding result]

the decoder is modified to have two inputs:
- two soft-value inputs and one soft-value output
- trellis-based maximum a-posteriori decoding

the result of one decoder helps the other decoder: the more iterations, the more reliable the result


Turbo code: encoding

Turbo code:
- add two sets of parities for one set of data bits
- use convolutional codes for simplicity of decoding

25

[figure: the data bits enter encoder 1 directly and encoder 2 through an interleaver (bit reorder); the coded sequence = data bits + parity of code 1 + parity of code 2]


Turbo code: decoding

Each decoder sends its estimate to the other decoder. Experiments show that...
- the code length must be sufficiently long (> thousands of bits)
- the # of iterations can be small (≈ 10 to 20 iterations)

26

[figure: the received sequence feeds decoder 1 and decoder 2, which exchange estimates through the interleaver I and the de-interleaver I^-1 to produce the decoding result]


performance of Turbo codes

The performance is much better than that of other known codes.

There is some "saturation" of the performance improvement (the error floor).

27

C. Berrou et al.: Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes, ICC 93, pp. 1064-1070, 1993.


LDPC code

The study of Turbo codes revealed that "the length is the power":
- don't bother too much about mathematical properties
- pursue long and easily decodable codes

LDPC code (Low-Density Parity-Check code): a linear block code with a very sparse check matrix
- almost all components are 0, with a small # of 1s
- discovered by Gallager in 1962, but forgotten for many years
- rediscovered by MacKay in 1999

28

Robert Gallager (1931-), David MacKay (1967-)


the decoding of LDPC codes

An LDPC code is a usual linear block code, but its sparse check matrix allows the belief-propagation algorithm to work efficiently and effectively.

29

    | 1 1 1 0 1 0 0 |
H = | 1 0 1 1 0 1 0 |
    | 0 1 1 1 0 0 1 |

[figure: the Tanner graph of H; pi = prob. that bit i is 0, qi = 1 - pi = prob. that bit i is 1]

p3 = p1 p4 p6 + q1 q4 p6 + q1 p4 q6 + p1 q4 q6
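The formula for p3 is the probability that the other bits of the check x1 + x3 + x4 + x6 = 0 (row 2 of H) contain an even number of 1s. A small Python check, together with the standard product identity that belief propagation uses to evaluate such messages cheaply (the numeric values are arbitrary examples):

def check_to_bit(probs_zero):
    """Message P(target bit = 0) from one check node, given P(xi = 0) of the other
    bits of the check: (1 + prod(2 pi - 1)) / 2, the probability of an even number of 1s."""
    prod = 1.0
    for p in probs_zero:
        prod *= 2 * p - 1
    return (1 + prod) / 2

p1, p4, p6 = 0.9, 0.8, 0.6
q1, q4, q6 = 1 - p1, 1 - p4, 1 - p6
direct = p1*p4*p6 + q1*q4*p6 + q1*p4*q6 + p1*q4*q6   # the formula on this slide
print(direct, check_to_bit([p1, p4, p6]))            # both 0.548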


LDPC vs. Turbo codes

performance:
- both codes show excellent performance
- the decoding complexities are "almost linear"
- LDPC shows a milder error-floor phenomenon

realization: O(n) encoder for Turbo, O(n^2) encoder for LDPC

code design: LDPC has more varieties and strategies

30


summary

soft-decision decoding: retrieve more information from the channel

convolutional codes: good for soft-decision decoding

Shannon's channel coding theorem: what we can and cannot do

error correcting codes of the next generation

31


exercise

give a proof of the discussion on p. 19

32