
DIGITAL COMMUNICATION

Error - Correction

A.J. Han Vinck

Position of Error Control Coding

[Block diagrams: in an uncoded system, k input bits pass through signal generator → channel → detector, yielding k output bits. In a coded system (ECC coding / coded modulation), the k input bits enter a coded signal generator, n bits are sent over the channel, and a detector/decoder returns the k output bits.]

Encoding

• Replace a message of k information bits by a unique n-bit word, called a code word

• The collection of 2^k code words is called a CODE

Error control code with rate k/n

[Block diagram: a k-bit message enters the channel encoder, which transmits a code word of length n; the receiver processes the received word in the channel decoder to produce a message estimate. There are 2^k code words of length n; the code book contains all of them.]

A pictorial view

[Picture: of all 2^n vectors of length n, only 2^k are code words.]

decoder

• Compare received word with all possible codewords

example

code words:  0 0 0 0 0    0 1 0 1 1    1 0 1 0 1    1 1 1 1 0

received:    0 0 0 1 1

difference:  0 0 0 1 1    0 1 0 0 0    1 0 1 1 0    1 1 1 0 1

best guess:  0 1 0 1 1  (only 1 difference)
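The brute-force decoding rule above can be sketched in a few lines of Python (a minimal illustration using the code words and received word of this example; the function name is my own):

```python
# Minimum-distance ("most likely") decoding by exhaustive comparison.
code_words = ["00000", "01011", "10101", "11110"]
received = "00011"

def hamming_distance(x, y):
    # number of positions in which the two words differ
    return sum(a != b for a, b in zip(x, y))

# compare the received word with all code words and decode
# the one with the minimum number of differences
best = min(code_words, key=lambda c: hamming_distance(c, received))
print(best)  # "01011": only 1 difference
```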

we have some problems

• Mapping from information to code words
  – generation of code words (mutually far apart)
  – storing of the code book (2^k code words, length n)

• Decoding
  – compare a received word with all possible code words

Definitions

• Hamming distance between x and y is

  dH := d(x, y), the # of positions where xi ≠ yi

• The minimum distance of a code C is
  – dmin = min { d(x, y) | x ∈ C, y ∈ C, x ≠ y }

• Hamming weight of a vector x is

  – w(x) := d(x, 0), the # of positions where xi ≠ 0

example

• Hamming distance d( 1001, 0111) = 3

• Minimum distance (101, 011, 110) = 2

• Hamming weight w(0110101) = 4
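The three definitions translate directly into code; a small Python sketch that reproduces the example values (function names are my own):

```python
def hamming_distance(x, y):
    # number of positions where x and y differ
    return sum(a != b for a, b in zip(x, y))

def hamming_weight(x):
    # distance to the all-zero word = number of nonzero positions
    return hamming_distance(x, "0" * len(x))

def minimum_distance(code):
    # smallest pairwise distance over all distinct code word pairs
    return min(hamming_distance(x, y)
               for i, x in enumerate(code) for y in code[i + 1:])

print(hamming_distance("1001", "0111"))         # 3
print(minimum_distance(["101", "011", "110"]))  # 2
print(hamming_weight("0110101"))                # 4
```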

Hamming was a famous scientist at Bell Labs and the inventor of the Hamming code.

Performance

A code with minimum distance dmin is capable of correcting t errors if

dmin ≥ 2t + 1.

Proof: If at most t errors occur, the received word differs from the transmitted code word in at most t positions, while, since dmin ≥ 2t + 1, it differs from every incorrect code word in at least t + 1 positions.

[Picture: code words A and B at distance 2t+1; a received word with t differences from A still has at least t+1 differences from B.]

LINEAR CODES

Binary codes are called linear iff

the component-wise modulo-2 sum of two code words is again a code word.

Consequently, the all-zero word is a code word.

LINEAR CODE GENERATOR

The code words are

– linear combinations of the rows of a binary generator matrix G with dimensions k × n
– G must have rank k!

Example: Consider k = 3, n = 6.

                       1 0 0 1 1 0
  generator matrix G = 1 1 0 0 1 1
                       1 0 1 1 0 1

  (1,0,1) G = ( 0, 0, 1, 0, 1, 1 )
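Encoding x → xG over GF(2) is just the modulo-2 sum of the rows of G selected by the 1s in x. A minimal Python sketch with the k = 3, n = 6 matrix above (vectors as 0/1 lists; the helper name is my own):

```python
# The k = 3, n = 6 generator matrix from the example.
G = [
    [1, 0, 0, 1, 1, 0],
    [1, 1, 0, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
]

def encode(x, G):
    # xG over GF(2): XOR together the rows of G selected by x
    c = [0] * len(G[0])
    for xi, row in zip(x, G):
        if xi:
            c = [a ^ b for a, b in zip(c, row)]  # modulo-2 addition
    return c

print(encode([1, 0, 1], G))  # [0, 0, 1, 0, 1, 1]
```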

Systematic codes

Let in general the matrix G be written as

                      1 0 0 | 1 1 0
  G = [ Ik P ];   G = 0 1 0 | 1 0 1
                      0 0 1 | 0 1 1

k = 3, n = 6. The code generated is

– linear, systematic
– has minimum distance 3
– the efficiency of the code is 3/6

Example (optimum)

• Single parity check code: dmin = 2, k = n − 1

                   1 0 ... 0 | 1
  G = [ In-1 P ] = 0 1 ... 0 | 1
                       ...
                   0 0 ... 1 | 1

All code words have even weight!

Example (optimum)

• Repetition code: dmin = n, k = 1

• G = [ 1 1 1 ] (here n = 3)

Equivalent codes

• Any linear code generator can be brought into "systematic form"

• Gsys = [ Ik | P ], with Ik the k × k identity and P a k × (n − k) matrix,

obtained from the non-systematic form by elementary row operations and, if necessary, elementary column operations.

Note: the elementary operations have an inverse.

Homework: give an example for k = 4 and n = 7.
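The reduction to systematic form can be sketched as Gauss-Jordan elimination over GF(2). This is only an illustration (it assumes G has full rank k; the function name is my own); applied to the earlier k = 3, n = 6 generator it yields the systematic [ I3 | P ] form:

```python
def to_systematic(G):
    # Bring a full-rank generator matrix to [I_k | P] using elementary
    # row operations over GF(2), plus column swaps when a pivot is missing.
    G = [row[:] for row in G]            # work on a copy
    k, n = len(G), len(G[0])
    for col in range(k):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(col, k) if G[r][col] == 1), None)
        if pivot is None:
            # elementary column operation: swap in a usable column
            for c in range(col + 1, n):
                if any(G[r][c] for r in range(col, k)):
                    for row in G:
                        row[col], row[c] = row[c], row[col]
                    break
            pivot = next(r for r in range(col, k) if G[r][col] == 1)
        G[col], G[pivot] = G[pivot], G[col]
        # clear the column in all other rows (modulo-2 row additions)
        for r in range(k):
            if r != col and G[r][col] == 1:
                G[r] = [a ^ b for a, b in zip(G[r], G[col])]
    return G

G = [[1, 0, 0, 1, 1, 0],
     [1, 1, 0, 0, 1, 1],
     [1, 0, 1, 1, 0, 1]]
for row in to_systematic(G):
    print(row)
```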

Bounds on minimum distance (Hamming)

• Linear codes have a systematic equivalent G
  – minimum Hamming weight ≤ n − k + 1
    (Singleton bound)

• # code words × # correctable error patterns ≤ 2^n

• Homework: show that Hamming codes satisfy the bound with equality!

  M · Σ_{i=0}^{t} C(n, i) ≤ 2^n ;   t ≈ pn, p ≤ 1/2

  R = (log2 M)/n ≤ 1 − h(p)

Bounds on minimum distance (Gilbert)

• Start: select a code word from the 2^n possible words

• 1. Remove all words at distance < dmin from the selected code word

• 2. Select one of the remaining words as the next code word
• 3. Go to 1, unless no possibilities are left.

• RESULT:

• Homework: show that (log M)/n ≥ 1 − h(2p) for dmin − 1 = 2t ≈ 2pn; p < ¼

  M · Σ_{i=0}^{dmin−1} C(n, i) ≥ 2^n
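The greedy selection procedure can be sketched directly in Python (illustrative only; enumerating all 2^n words is of course impractical for interesting n, and the function name is my own):

```python
from itertools import product

def gilbert_code(n, d_min):
    # Greedy Gilbert construction: repeatedly pick a remaining word
    # and remove everything at distance < d_min from it.
    remaining = [tuple(w) for w in product((0, 1), repeat=n)]
    code = []
    while remaining:
        c = remaining[0]                    # select next code word
        code.append(c)
        # remove all words at distance < d_min from the selected word
        remaining = [w for w in remaining
                     if sum(a != b for a, b in zip(w, c)) >= d_min]
    return code

code = gilbert_code(5, 3)
print(len(code))  # number of code words found, all at mutual distance >= 3
```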

plot

[Plot: rate R = log2 M / n versus p ≈ t/n, between 0 and 1, showing the Hamming upper bound 1 − h(p), the Gilbert lower bound 1 − h(2p), and the Singleton bound.]

Property

The set of distances

from all code words to the all zero code word

is the same as to any other code word.

Proof:

d( x, y ) = d( x ⊕ x, y ⊕ x ) = d( 0, z ), where z = y ⊕ x;

by linearity z is also a code word.

Thus!

the determination of the minimum distance of a code is equivalent to the determination of the minimum Hamming weight of the nonzero code words.

The complexity of this operation is proportional to the # of code words.

example

• Consider the code words

– 00000
– 01101
– 10011
– 11110

Homework: Determine the minimum distance

Linear code generator

I(X) represents the k bit info vector ( i0, i1, ..., ik-1 )

g(X) is a binary polynomial of degree ( n-k ) THEN:

the code vector C of length n can be described by 

C(X) = I(X) g(X) all operations modulo-2.

EX: k = 4, n = 7 and g(X) = 1 + X + X^3

• For the information vector (1,0,1,0):

C(X) = (1 + X^2)(1 + X + X^3) = 1 + X + X^2 + X^5  ↔  (1,1,1,0,0,1,0).

• The encoding procedure in (k × n) matrix form:

      1 1 0 1 0 0 0
G =   0 1 1 0 1 0 0        c = I * G
      0 0 1 1 0 1 0
      0 0 0 1 1 0 1
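The encoding C(X) = I(X) g(X), all operations modulo 2, can be sketched as a short polynomial multiplication in Python (coefficient lists, lowest degree first; the function name is my own, and the values reproduce the example above):

```python
def poly_mul_mod2(a, b):
    # multiply two binary polynomials, coefficients modulo 2
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] ^= ai & bj          # modulo-2 arithmetic
    return c

g = [1, 1, 0, 1]       # g(X) = 1 + X + X^3
info = [1, 0, 1, 0]    # I(X) = 1 + X^2 for information vector (1,0,1,0)
print(poly_mul_mod2(info, g))  # [1, 1, 1, 0, 0, 1, 0]
```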

Implementation with a shift-register

The following shift register can be used for g(X) = 1 + X + X^3:

[Diagram: shift register implementing multiplication by g(X), with taps given by the coefficients of g(X); the information bits ik-1 ... i2 i1 i0 are shifted in.]

Homework:

give a description of the shift control to obtain the result

Some remarks

• Generators for different k and n

– are constructed using mathematics
– are listed in many text books

What remains is the decoding!

Hamming codes

• Minimum distance 3
• Construction
  – G = [ Ik | P ], where the rows of P are all m-tuples of Hamming weight > 1
  – where m = n − k, so that k = 2^m − 1 − m

• Check that the minimum distance is 3!
• Give the efficiency of the code

Example k = 4, n = 7

      1 0 0 0 | 1 1 0
G =   0 1 0 0 | 1 0 1
      0 0 1 0 | 0 1 1
      0 0 0 1 | 1 1 1

(m = 3: n = 2^3 − 1 = 7, k = 7 − 3 = 4)

Syndrome decoding

Let G = [ Ik P ], then construct

       P
HT =
       In-k

For all code words c = xG:  c HT = x G HT = 0

Hence, for a received noisy vector

( c ⊕ n ) HT = c HT ⊕ n HT
             = n HT
             =: S

example

      1 0 0  1 1 0
G =   0 1 0  1 0 1
      0 0 1  0 1 1

       1 1 0
       1 0 1
HT =   0 1 1
       1 0 0
       0 1 0
       0 0 1

x = 1 0 1
c = 1 0 1 1 0 1
c HT = 0 0 0

n = 0 1 0 0 0 0
c ⊕ n = 1 1 1 1 0 1
[c ⊕ n] HT = S = 1 0 1

Obvious fast decoder: precalculate at the receiver the syndromes of all correctable error patterns.

In system form

[Diagram: the received word c ⊕ n enters a "calculate syndrome" block computing [c ⊕ n] HT = S; S is matched against the precalculated syndromes to select an error estimate n*; the output is c ⊕ n ⊕ n*. When n = n*, n ⊕ n* = 0 and the code word c is recovered.]

Homework: choose parameters that can be implemented

Reed Solomon Codes (CD, DVD)

• Structure: a code word consists of k information symbols and n − k check symbols, each symbol m bits wide.

Properties: minimum distance = n − k + 1 (symbols)

length n = 2^m − 1 (symbols)

General remarks

• The general problem is the decoding

– RS codes can be decoded using
  • Euclid's algorithm
  • the Berlekamp-Massey algorithm

Why error correction?

• Systems with errors can be made almost error free

– CD and DVD would not work without RS codes

Why error correction?

• In ARQ systems, system collapse can be postponed!

[Plot: throughput versus channel error probability (0 to 1). An uncoded ARQ system starts at 100% throughput and collapses as the error probability grows; a coded system starts at k/n % but sustains its throughput much longer.]

For Additive White Gaussian Noise Channels

• Error probability p ~ e^(−Es/N0)
  – where Es is the energy per transmitted symbol
  – N0 the one-sided noise power spectral density

• For an uncoded system p ~ e^(−Eb/N0)

• For a coded system with minimum distance d
  – nEs = kEb and thus pc ~ e^(−d (k/n) Eb/N0)

• CONCLUSION: make Coding gain = d (k/n) > 1
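A quick numeric check of this conclusion, using the (n, k) = (7, 4) Hamming code with d = 3 as an example (the Eb/N0 value is an arbitrary illustration):

```python
import math

# Coding gain d * k/n for the (7, 4) Hamming code with d = 3.
n, k, d = 7, 4, 3
gain = d * k / n
print(gain)                  # about 1.71 > 1, so coding helps

# Effect on the error-probability exponents at, say, Eb/N0 = 4:
EbN0 = 4.0
p_uncoded = math.exp(-EbN0)
p_coded = math.exp(-gain * EbN0)
print(p_uncoded > p_coded)   # True: the coded exponent is larger
```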