

EE140 Introduction to Communication Systems

Lecture 23

Instructor: Prof. Xiliang Luo

ShanghaiTech University, Spring 2018

Architecture of a (Digital) Communication System


[Block diagram] Transmitter: Source → A/D converter → Source encoder → Channel encoder → Modulator → Channel. Noise enters at the channel. Receiver: Detector → Channel decoder → Source decoder → D/A converter → User. The A/D and D/A converters are absent if the source is digital.


Contents

• History of coding theory

• Hat puzzle

• Introduction to coding theory

• Linear block code introduction


Coding in Communication Systems

• Source coding (data compression): to reduce the space required for the data stream.

• Encryption: to encrypt information for security purposes.

• Channel coding (error control): to encode a signal so that errors that occur can be detected and possibly corrected. Protection is achieved by adding redundancy.


Beginning of Information Theory

• C. E. Shannon, "A Mathematical Theory of Communication," Bell Syst. Tech. Journal, vol. 27, pp. 379-423, 623-656, 1948.


1950s

• Binary linear block codes:
– Hamming codes
– Golay codes
– Reed-Muller codes


1960s

• Non-binary linear block codes:
– BCH codes
– RS codes


1969

• Soft decoding

• 1969-1977: Mariner missions to Mars


1960s

• 1963 LDPC codes

• 1968 convolutional codes


1970s

• Viterbi decoder: maximum-likelihood decoding on a trellis of 2^m states, where m is the encoder memory order


1970s

• Concatenated codes


1980s

• 1989 Galileo mission to Jupiter


1990s

• Soft decoding

• GSM, CDMA (IS-95)


1993

• Turbo codes

• C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," IEEE International Conference on Communications (ICC '93), vol. 2, pp. 1064-1070, Geneva, May 1993.


1996

• LDPC codes

• Robert G. Gallager (1963)

• David J. C. MacKay and Radford M. Neal, "Near Shannon Limit Performance of Low Density Parity Check Codes," Electronics Letters, July 1996.


2000s

• Polar codes
– Linear block error-correcting codes developed by Erdal Arikan
– The first codes with an explicit construction to provably achieve the channel capacity of symmetric binary-input, discrete, memoryless channels.
– Encoding and decoding complexity O(N log N), practical for many applications.


Contents

• History of coding theory

• Hat puzzle

• Introduction to coding theory

• Linear block code introduction


Channel Coding Example

• Hat puzzle (Ebert's version)
– 3 players
– Each wears a red or blue hat, assigned with equal probability
– Each player can see the others' hat colors, but not his/her own
– Each player must either guess the color of his/her own hat or skip

• Winning rule
– At least one player has to guess
– All of those who guess must be correct

• How to maximize the chance of winning? (A simulation sketch follows.)
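One well-known strategy (not stated on the slide; the sketch below is illustrative): a player who sees two matching hats guesses the opposite color, and a player who sees two different hats passes. A brute-force check over all eight equally likely hat patterns, in Python:

import itertools

def play(hats):
    # Each player sees the other two hats; guess the opposite color if they
    # match, otherwise pass (None).
    guesses = []
    for i in range(3):
        others = [hats[j] for j in range(3) if j != i]
        guesses.append(1 - others[0] if others[0] == others[1] else None)
    # Win iff at least one player guesses and every guess is correct.
    return any(g is not None for g in guesses) and all(
        g == hats[i] for i, g in enumerate(guesses) if g is not None)

wins = sum(play(h) for h in itertools.product((0, 1), repeat=3))
print(wins / 8)  # 0.75: the team loses only on the all-red and all-blue patterns

The two losing patterns are exactly the codewords of the length-3 repetition code, which hints at the connection between this puzzle and channel coding.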


Contents

• History of coding theory

• Hat puzzle

• Introduction to coding theory

• Linear block code introduction


Binary Symmetric Channel

• The simplest, most popular, and occasionally appropriate channel model is the binary symmetric channel (BSC): each transmitted bit is flipped independently with a fixed crossover probability.

• Transition diagram (not reproduced here)
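A minimal NumPy sketch of a BSC (illustrative; the crossover probability p = 0.1 is an assumed value, not from the slide):

import numpy as np

rng = np.random.default_rng(0)

def bsc(bits, p):
    # Flip each bit independently with probability p.
    return bits ^ (rng.random(bits.shape) < p)

x = rng.integers(0, 2, size=100_000)
y = bsc(x, p=0.1)
print(np.mean(x != y))  # empirical crossover rate, close to 0.1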


Binary Erasure Channel

• Binary erasure channel (models hard-disk failures and punctured codes): each transmitted bit is either received correctly or erased, and the receiver knows which positions were erased.

• Transition diagram (not reproduced here)
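A matching sketch of the BEC, marking erased positions with a sentinel value (the erasure probability 0.3 is an assumed value):

import numpy as np

rng = np.random.default_rng(0)
ERASED = -1  # sentinel: the receiver knows which positions were lost

def bec(bits, eps):
    # Erase each bit independently with probability eps.
    out = bits.copy()
    out[rng.random(bits.shape) < eps] = ERASED
    return out

x = rng.integers(0, 2, size=10)
print(bec(x, eps=0.3))  # surviving bits are always correct; erasures are marked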


AWGN Channel

• We usually consider the noise to be additive: y_i = x_i + n_i, where n_i is white Gaussian noise.

– Binary input
– Real input
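A sketch of the binary-input AWGN channel with a +1/-1 mapping (the noise level sigma = 0.5 is an assumed value):

import numpy as np

rng = np.random.default_rng(0)

def awgn(x, sigma):
    # y_i = x_i + n_i with n_i ~ N(0, sigma^2), i.i.d.
    return x + rng.normal(0.0, sigma, size=x.shape)

bits = rng.integers(0, 2, size=100_000)
x = 1 - 2 * bits              # map bit 0 -> +1, bit 1 -> -1 (binary input)
y = awgn(x, sigma=0.5)
print(np.mean((y < 0) != (bits == 1)))  # hard-decision bit error rate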


Error Control Classification

• The error control problem can be classified in several ways.

– Type of errors: how much clustering: random, burst, or catastrophic

– Type of modulator output: digital (“hard”) vs. analog (“soft”)

– Type of error control coding: detection vs. correction

– Type of codes: block vs. convolutional

– Type of decoder: algebraic vs. probabilistic

• The first two classifications characterize the channel; they are used to select a coding scheme along the last three.


Types of Errors: Noise Characteristics

• Random: independent noise symbols, perhaps i.i.d. or Bernoulli. Each noise event affects isolated symbols.

• Burst: a noise event that causes a contiguous sequence of unreliable symbols. Example: errors occur in clusters of 10-100 bits. Countermeasure: interleaving.

• Catastrophic: the channel becomes unusable for a period of time comparable to or longer than a data packet. Example: a communication channel that is unusable for 9 hours each year (solar storm?) but otherwise noiseless. Countermeasure: retransmission.


Types of Error Protection

• Error detection
– Goal: avoid accepting faulty data.
– Lost data may be unfortunate; wrong data may be disastrous.
– Solution: checksums are included in messages (packets, frames, sectors). With high probability the checksums are not valid when any part of the message is altered.

• (Forward) error correction (FEC or ECC)
– Use redundancy in the encoded message to estimate from the received data (the senseword) what message was actually sent.
– The best estimate is usually the "closest" message. The optimal estimate is the message that is most probable given what is received (MAP).

• Error correction is more complex than error detection.


Types of Error Protecting Codes

• Block codes
– Data is blocked into k-vectors of information digits, then encoded into n-digit codewords (n > k) by adding redundant check digits.

– There is no memory between blocks. The encoding of each data block is independent of past and future blocks.

– An encoding in which the information digits appear unchanged in the codewords is called systematic.


Types of Error Protecting Codes

• Convolutional codes
– Time-invariant encoding scheme: each n-bit codeword block depends on the current k information digits and on the past m information blocks.
– The parameter m is called the memory order. The constraint length is m + 1; this is the number of bits that the decoder must consider.
– For the rate-1/2 convolutional code in the lecture's example (encoder diagram not reproduced here), k = 1 and n = 2.
– Exercise: determine the error correction ability and a decoding method for the above convolutional code. (A generic encoder sketch follows.)
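Since the lecture's encoder diagram is not reproduced here, the sketch below uses the common (7, 5)-octal rate-1/2 encoder with memory order m = 2 as an assumed stand-in, not the lecture's specific code:

def conv_encode(bits, g1=0b111, g2=0b101, m=2):
    # Shift register of m+1 bits; each input bit yields two output bits,
    # the parities of the register masked by the two generator polynomials.
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << (m + 1)) - 1)
        out.append(bin(state & g1).count("1") % 2)  # first generator taps
        out.append(bin(state & g2).count("1") % 2)  # second generator taps
    return out

print(conv_encode([1, 0, 1, 1]))  # rate 1/2: two output bits per input bit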

Contents

• History of coding theory

• Hat puzzle

• Introduction to coding theory

• Linear block code introduction


Alphabets

• Definition: An alphabet is a discrete (usually finite) set of symbols.

• Examples:
– {0, 1} is the binary alphabet
– {-1, 0, 1} is the ternary alphabet
– {00, 01, ..., FF} is the alphabet of 8-bit symbols (used in codes for CDs, DVDs, and most hard disk drives)

• A channel alphabet symbol may be
– an indivisible transmission unit, e.g., one point from a signal constellation, or
– a sequence of modulation symbols encoded into a coding alphabet symbol

• The alphabets encountered in this class usually have 2 symbols.


Linear Block Code

• Definition: A block code of block length n over an alphabet χ is a non-empty set of n-tuples of symbols from χ.
– C = {c_1, ..., c_M}, where each codeword c_i = (c_i,1, ..., c_i,n)
– The n-tuples of the code are called codewords.

• Rate:
– channel alphabet: q symbols
– information symbols: k-tuples
– number of codewords: M
– code length: n
– rate R = (1/n) log_q M
– an (n, k) code has M = q^k codewords, so R = k/n


Systematic Encoder

• An encoder is called systematic if it copies the message symbols unchanged into consecutive locations in the codeword.

• Codewords are of the form (m, p) or (p, m), where m is the vector of message symbols and p is the vector of redundant (check) symbols.

• Almost all codes use systematic encoders. Exception: Reed-Muller codes.


Block Codes: Examples

• C = {00010110}
– Block length 8, M = 1, rate = (1/8) log_2 1 = 0.
– Codes with rate 0 are called useless codes.
– Could be used for error rate analysis or byte sync.

• C = {00, 01, 10, 11}
– Block length 2, M = 4, rate = (1/2) log_2 4 = 1.
– No redundancy. Can neither correct nor detect errors.

• C = {000, 111}
– Block length 3, M = 2, rate = (1/3) log_2 2 ≈ 0.333.
– Repetition code; can correct one bit error.
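A few lines verifying these rates directly from the definition R = (1/n) log_2 M:

from math import log2

def rate(code):
    n = len(next(iter(code)))   # block length
    return log2(len(code)) / n  # (1/n) log2(M)

print(rate({"00010110"}))              # 0.0   (useless code)
print(rate({"00", "01", "10", "11"}))  # 1.0   (no redundancy)
print(rate({"000", "111"}))            # ~0.333 (repetition code)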


Hamming Distance

• The Hamming distance between n-tuples is the number of components in which the n-tuples differ:

  d_H(x, y) = Σ_{i=1}^{n} d(x_i, y_i), where d(x_i, y_i) = 1 if x_i ≠ y_i, and 0 if x_i = y_i

– d_H(x, y) ≥ 0, with equality if and only if x = y (nonnegativity)
– d_H(x, y) = d_H(y, x) (symmetry)
– d_H(x, z) ≤ d_H(x, y) + d_H(y, z) (triangle inequality)
– Hamming distance is a coarse or pessimistic measure of difference.

• Other useful distances in error control coding:
– The Lee distance, distance on a circle, is applicable to phase-shift coding.
– The Euclidean distance is used with sensewords in R^n.
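The definition translates directly into code:

def hamming_distance(x, y):
    # Number of positions in which two equal-length tuples differ.
    assert len(x) == len(y)
    return sum(xi != yi for xi, yi in zip(x, y))

print(hamming_distance("10110", "11010"))  # 2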


Minimum Distance

• The minimum (Hamming) distance d* of a block code is the minimum distance between any two distinct codewords:

  d* = min{ d_H(x, y) : x, y are codewords and x ≠ y }

• Properties of minimum distance:
– d* ≥ 1, since the Hamming distance between distinct codewords is a positive integer.
– d* ≤ n if the code has two or more codewords.
– d* = n + 1 or d* = ∞ for the useless code with only one codeword. (This is a convention, not a theorem.)
– d*(C_1) ≥ d*(C_2) if C_1 ⊆ C_2: smaller codes have larger (or equal) minimum distance.

• The minimum distance of a code determines both error-detecting ability and error-correcting ability.
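A brute-force computation of d* over all pairs of distinct codewords (fine for small codes):

from itertools import combinations

def minimum_distance(code):
    # d* = smallest pairwise Hamming distance in the code.
    return min(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(code, 2))

print(minimum_distance({"000", "111"}))            # 3 (repetition code)
print(minimum_distance({"00", "01", "10", "11"}))  # 1 (no redundancy)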


Error-detecting Ability

• Suppose that a block code is used for error detection only.
– If the received n-tuple is not a codeword, a detectable error has occurred.
– If the received n-tuple is a codeword but not the transmitted codeword, an error has occurred that cannot be detected.

• Theorem: The guaranteed error-detecting ability is d* - 1.

• The error-detecting ability is a worst-case measure of the code.


Error-correcting Ability

• We may think of a block code as a set of vectors in an n-dimensional space. (The slide's geometric figure of codeword "spheres" is not reproduced here.)

• Nearest-neighbor decoding: ĉ = argmin{ d_H(y, c) : c is a codeword }
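A direct implementation of nearest-neighbor decoding by exhaustive search:

def nn_decode(senseword, code):
    # Return the codeword closest to the senseword in Hamming distance.
    return min(code, key=lambda c: sum(a != b for a, b in zip(senseword, c)))

print(nn_decode("101", {"000", "111"}))  # '111': the single error is corrected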


Error-correcting Ability (cont'd)

• Theorem: Using nearest-neighbor decoding, errors of weight t can be corrected if and only if 2t < d*. (For Hamming distance, equivalently, 2t + 1 ≤ d*.)

• Proof idea: The "spheres" of radius t surrounding the codewords do not overlap; otherwise, there would be two codewords at most 2t apart. Therefore, when at most t errors occur, the decoder can unambiguously decide which codeword was sent.


Decoding Outcomes

• A particular block code can have more than one decoder, depending on
– the type of errors expected or observed
– the error rate
– the computational power available at the decoder

• Suppose that codeword c is transmitted, senseword y is received, and ĉ is the decoder's output. Three possible outcomes:
– ĉ = c: decoder success. Successful correction (including no errors).
– ĉ = ?: decoder failure. Uncorrectable error detected, no decision (not too bad).
– ĉ ≠ c: decoder error. Miscorrection (very bad).


Decoding Outcomes (cont'd)

• The decoder cannot distinguish the outcome ĉ = c from ĉ ≠ c.

• However, it can assign probabilities to the possibilities; the more bit errors corrected, the higher the probability that the estimate is wrong.
– P{ĉ = ?} = Pued: error detected but not corrected (decoder failure)
– P{ĉ ≠ c} = Pmc: miscorrection (decoder error)
– Pue: undetectable error (error detection only)

• Definition: A complete decoder is a decoder that decodes every senseword to some codeword; i.e., the decoder never fails (to make hard decisions).
– For a complete decoder, Pued = 0.


Decoding Outcomes (cont'd)

• Definition: A bounded-distance decoder corrects all errors of weight ≤ t but no errors of weight > t. More than t errors results in decoder failure or decoder error.

• For a fixed code, if we reduce t, then decoder failure becomes more common while decoder error becomes less likely.


Example: Repetition Code

• Transmit a single information symbol n times.

• For the binary alphabet, there are two codewords, i.e., C = {00...0, 11...1}.
– Rate: 1/n
– Minimum distance: d* = n
– Error detection: up to n - 1 symbol errors
– P{undetected error} = p^n on a BSC with crossover probability p (all n symbols flipped)
– Error correction: up to (n - 1)/2 symbol errors
– P{miscorrection} = P{more than n/2 errors} (for the binary alphabet)

• For the binary alphabet, any two complementary codewords could be used.
– Example: C' = {01010101, 10101010} = {55, AA} in hex. (A sketch follows.)
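A minimal encode/decode sketch for the binary repetition code (majority-vote decoding):

def rep_encode(bit, n):
    return [bit] * n           # transmit the information bit n times

def rep_decode(senseword):
    # Majority vote corrects up to (n-1)//2 bit errors.
    return int(sum(senseword) > len(senseword) / 2)

c = rep_encode(1, 5)
c[0] ^= 1; c[3] ^= 1           # inject two bit errors
print(rep_decode(c))           # 1: corrected, since 2 <= (5-1)//2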


Example: Parity-check Codes

• Append one check bit to k data bits so that all codewords have the same overall parity, either even or odd.

• Even-parity codewords are defined by a single parity-check equation:

  c_1 ⊕ c_2 ⊕ ... ⊕ c_n = 0 (mod 2),

  where ⊕ denotes the exclusive-or operation.

• Encoder: p = m_1 ⊕ m_2 ⊕ ... ⊕ m_k

• Note: Any bit can be considered to be the check bit, because it can be computed from the other n - 1 bits: c_j = Σ_{i≠j} c_i (mod 2). (A sketch follows.)
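An even-parity encoder and checker (a sketch; the variable names are mine, not the slide's):

from functools import reduce
from operator import xor

def parity_encode(data_bits):
    # Append one check bit so the overall parity is even.
    return data_bits + [reduce(xor, data_bits)]

def parity_ok(senseword):
    # The single parity-check equation: the XOR of all bits must be 0.
    return reduce(xor, senseword) == 0

c = parity_encode([1, 0, 1, 1])
print(c, parity_ok(c))  # [1, 0, 1, 1, 1] True
c[2] ^= 1               # any single bit error flips the overall parity
print(parity_ok(c))     # False: detected, but the error cannot be located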


Example: Product Codes

• Arrange data bits in a two-dimensional array. Append parity check bits at the end of each row and column.

• Error detection and correction
– Single error: failure of one row equation and one column equation; their intersection locates the error (see the sketch below).
– Double error: can be detected (two rows or two columns, or both, have the wrong parity) but cannot be corrected.
– Exercise: what fraction of errors is not detected?
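A sketch of single-error correction in a product code (array shapes and names are my choice for illustration):

import numpy as np

def product_encode(data):
    # Append an even-parity bit to each row, then to each column.
    rows = np.hstack([data, data.sum(axis=1, keepdims=True) % 2])
    return np.vstack([rows, rows.sum(axis=0, keepdims=True) % 2])

def correct_single_error(cw):
    bad_rows = np.flatnonzero(cw.sum(axis=1) % 2)  # rows with odd parity
    bad_cols = np.flatnonzero(cw.sum(axis=0) % 2)  # columns with odd parity
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        cw[bad_rows[0], bad_cols[0]] ^= 1          # flip the located bit
    return cw

cw = product_encode(np.array([[1, 0, 1], [0, 1, 1]]))
cw[1, 2] ^= 1                       # inject a single bit error
print(correct_single_error(cw))     # the error is located and corrected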


Thanks for your kind attention!

Questions?
