2004/12/20
國立台灣海洋大學 National Taiwan Ocean University
Digital Communications, Fall 2004
吳家琪, Assistant Professor
Lecture 11: Performance of Convolutional Codes
Master's Program, Department of Communications and Navigation Engineering
Announcement
Course webpage: http://dcstl.cge.ntou.edu.tw/DCSTL/Web/dicomm.htm
Reading Assignment: Chapter 8
Homework 7 due today
Summary of Encoding Convolutional Codes
Convolutional codes are useful for real-time applications because they can be continuously encoded and decoded.
We can represent convolutional codes as generators, block diagrams, state diagrams, and trellis diagrams.
We want to design convolutional codes to maximize free distance while maintaining non-catastrophic performance.
Decoding of Convolutional Codes
Let Cm be the set of allowable code sequences of length m.
Not all sequences in {0,1}^m are allowable code sequences!
Each code sequence c ∈ Cm can be represented by a unique path through the trellis diagram.
What is the probability that the code sequence c is sent and the binary sequence y is received?
Pr(y|c) = p^dH(y,c) (1-p)^(m - dH(y,c))
where p is the crossover probability of the BSC induced by the modulation.
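As a small sketch of the likelihood formula above (function names are my own, not from the lecture):

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit sequences."""
    return sum(x != y for x, y in zip(a, b))

def likelihood(y, c, p):
    """Pr(y | c) over a BSC with crossover probability p:
    p^d * (1-p)^(m-d), where d = dH(y, c) and m = len(c)."""
    d = hamming(y, c)
    m = len(c)
    return p**d * (1 - p)**(m - d)
```

For example, with p = 0.1 and one differing bit out of four, the likelihood is 0.1 * 0.9^3 = 0.0729.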
Decoding Rule for Convolutional Codes
Maximum Likelihood Decoding Rule:
max_{c∈Cm} Pr(y|c) ⇒ min_{c∈Cm} dH(y, c)
Choose the code sequence through the trellis which has the smallest Hamming distance to the received sequence!
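The rule can be illustrated by brute force over a toy codebook (a real decoder searches the trellis instead; this exhaustive version is only for intuition):

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit sequences."""
    return sum(x != y for x, y in zip(a, b))

def ml_decode(y, codebook):
    """Pick the codeword minimizing Hamming distance to y.
    For a BSC with p < 1/2 this is equivalent to maximizing Pr(y|c)."""
    return min(codebook, key=lambda c: hamming(y, c))
```

This exhaustive search is exponential in the message length, which is exactly why the Viterbi algorithm on the next slide matters.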
The Viterbi Algorithm
The Viterbi Algorithm (Viterbi, 1967) is a clever way of implementing Maximum Likelihood decoding.
Computer scientists will recognize the Viterbi Algorithm as an example of a CS technique called "dynamic programming."
Reference: G. D. Forney, "The Viterbi Algorithm," Proceedings of the IEEE, 1973.
Chips are available from many manufacturers which implement the Viterbi Algorithm for K
Implementation of Viterbi Decoder
Complexity is proportional to the number of states, which increases exponentially with constraint length K: 2^K
Very suited to parallel implementation:
Each state has two transitions into it.
Each state has two transitions out of it.
Each node must compute two path metrics, add them to the previous metric, and compare.
Much analysis has gone into optimizing implementations of this "butterfly" calculation.
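A minimal hard-decision Viterbi decoder can make the butterfly structure concrete. This sketch assumes the standard K=3, rate-1/2 code with generators (7,5) octal as a worked example (the lecture does not pin down a specific code here); a real implementation would use a fixed traceback depth rather than storing full paths:

```python
def conv_encode(bits):
    """Rate-1/2 encoder, K=3, generators g1 = 111, g2 = 101 (octal 7, 5)."""
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metric = [0, INF, INF, INF]        # start in the all-zeros state
    path = [[], [], [], []]
    for t in range(n_bits):
        r0, r1 = received[2 * t], received[2 * t + 1]
        new_metric = [INF] * 4
        new_path = [None] * 4
        for s in range(4):              # each state: two transitions out
            if metric[s] == INF:
                continue
            s1, s2 = s >> 1, s & 1
            for u in (0, 1):
                o0, o1 = u ^ s1 ^ s2, u ^ s2   # branch outputs
                ns = (u << 1) | s1             # next state
                m = metric[s] + (o0 != r0) + (o1 != r1)
                if m < new_metric[ns]:         # add-compare-select
                    new_metric[ns] = m
                    new_path[ns] = path[s] + [u]
        metric, path = new_metric, new_path
    best = min(range(4), key=lambda s: metric[s])
    return path[best]
```

With a zero tail appended (as in the zero-padding slide below), the surviving path ends in the all-zeros state, and the code can correct up to two channel errors since dfree = 5 for this code.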
Continuous Operation
When continuous operation is desired, the decoder will automatically synchronize with the transmitted signal without knowing the starting state.
Optimal decoding requires waiting until all bits are received to trace back the path.
In practice, it is usually safe to assume that all paths have merged after approximately 5K time intervals; there are diminishing returns after a delay of 5K.
Frame Operation of Convolutional Codes
Frequently, we desire to transfer short (e.g., 192-bit) frames with convolutional codes.
When we do this, we must find a way to terminate the code trellis:
Truncation
Zero-padding
Tail-biting
Note that the trellis code is serving as a 'block' code in this application.
Trellis Termination: Zero Padding
Add K-1 0's to the end of the data sequence to force the trellis back to the all-zeros state.
Performance is good: both the starting and ending states are now known by the decoder.
Wastes bits in a short frame.
Performance of Convolutional Codes
When the decoder chooses a path through the trellis which diverges from the correct path, this is called an "error event."
The probability that an error event begins during the current time interval is the "first-event error probability" Pe.
The minimum Hamming distance separating any two distinct paths through the trellis is called the "free distance" dfree.
Calculation of Error Event Probability
What is the pairwise probability P2(d) of choosing a path at distance d from the correct path?
Calculation of First-Event Error Probability
An upper (union) bound:
Pe ≤ Σ_{d≥dfree} ad P2(d)
where ad = number of paths at distance d
A good approximation:
Pe ≈ a_dfree P2(dfree)
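A sketch of this approximation for hard-decision decoding, using the standard BSC pairwise error formula (an error event occurs when more than half of the d differing positions are flipped); dfree = 5 and a_dfree = 1 are illustrative values assumed here, matching the example T(D) later in the lecture:

```python
from math import comb

def pairwise_error(d, p):
    """Hard-decision P2(d) on a BSC with crossover probability p (odd d)."""
    assert d % 2 == 1   # even d needs an extra tie-splitting term
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

# Pe ~= a_dfree * P2(dfree), with assumed dfree = 5, a_dfree = 1
p = 0.01
Pe = 1 * pairwise_error(5, p)
```

For p = 0.01 this gives Pe on the order of 1e-5, dominated by the k = 3 term.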
Evaluating Error Probability Using the Transfer Function Bound
Transfer function: T(D)
T(D) is a polynomial in D. The coefficients of T(D) are the weights ad which we need to determine.
Using the Chernoff bound, the transfer function leads to an easy-to-evaluate expression for Pe:
Pe < T(D) evaluated at D = 2√(p(1-p))
Finding T(D) from State Diagram
Break the all-0's state in two, creating a starting state and a terminating state.
Re-label every output 1 as a D.
ad is the number of distinct paths leading from the starting state to the terminating state while generating the function D^d.
Example of State Diagram
T(D) = D^5 + 2D^6 + 4D^7 + …
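This series of weights doubles with each increment of d, which matches the textbook closed form T(D) = D^5/(1-2D) for the standard K=3 (7,5) code; I am assuming that code is the example here. A quick sketch generating the coefficients from the recurrence:

```python
def series_coeffs(n):
    """First n weight coefficients a_d (d = 5, 6, 7, ...) of
    T(D) = D^5 / (1 - 2D), via the recurrence a_d = 2*a_{d-1}, a_5 = 1."""
    coeffs = [1]
    for _ in range(n - 1):
        coeffs.append(2 * coeffs[-1])
    return coeffs
```

The first three coefficients reproduce the expansion on the slide: 1, 2, 4.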
Performance Example for Convolutional Code
Suppose we use a rate 1/2, constraint length 3 convolutional code with BPSK modulation. Find the value of Eb/N0 required for a first-event error probability of Pe ≤ 10^-5.
Performance of r=1/2 Convolutional Codes with Hard Decisions
Bit error probability for rate 1/2 convolutional coding using hard-decision decoding
Performance of r=1/3 Convolutional Codes with Hard Decisions
Bit error probability for rate 1/3 convolutional coding using hard-decision decoding
Soft-Decision Decoding
Basic Idea: Use unprocessed decision statistics rather than making "hard" decisions on individual bits.
Simple Example of Soft-Decision Decoding
Assume that BPSK modulation is used with a rate (5,1) repetition code (i.e., there are two codewords, (00000) and (11111)).
Suppose that the codeword c = (11111) is sent, and the following five decision statistics are received:
Z = {1.0, 1.0, -0.01, -0.02, -0.01}
A hard-decision decoder will look at the individual bits and decide that Y = (11000) are the bits received. Since Y is nearer in Hamming distance to (00000) than to (11111), the wrong codeword (00000) will be chosen.
Simple Example of Soft-Decision Decoding
A soft-decision decoder will recognize that the first two bits {1.0, 1.0} were clearly received, while the next three are not very strong (e.g., maybe the channel was in a deep fade), and will therefore place less weight on them. A soft-decision decoder will choose the correct codeword (11111).
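The repetition-code example can be reproduced directly: the soft decoder correlates the raw statistics with the BPSK-mapped codewords, while the hard decoder first slices each statistic to a bit (names in this sketch are my own):

```python
Z = [1.0, 1.0, -0.01, -0.02, -0.01]   # received decision statistics
codewords = [[0]*5, [1]*5]            # (5,1) repetition code

def correlation(z, c):
    """Correlate z with the BPSK mapping of c (bit 0 -> -1, bit 1 -> +1)."""
    return sum(zi * (2*ci - 1) for zi, ci in zip(z, c))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

hard_bits = [1 if z > 0 else 0 for z in Z]                      # Y = (11000)
hard_choice = min(codewords, key=lambda c: hamming(hard_bits, c))
soft_choice = max(codewords, key=lambda c: correlation(Z, c))
```

The correlation with (11111) is 1.96 versus -1.96 for (00000), so the soft decoder picks the transmitted codeword even though the hard decoder does not.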
Soft-Decision Decoding of Convolutional Codes
Modification of the Viterbi Algorithm for Soft Decisions
The trellis is labeled with transmitted code vectors (shown for BPSK here).
The Viterbi Algorithm operates just as before, except that the metric is now squared Euclidean distance.
Performance of Soft-Decision Decoding
Pairwise Error Probability:
P2(d) = Q(√(de²/(2N0))), where de² = 4 Eb r dH (for BPSK), so P2(d) = Q(√(2 r dH Eb/N0))
Error Event Probability:
Pe ≤ Σ_{d≥dfree} ad P2(d)
Soft decisions typically result in about 2 dB more coding gain.
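A sketch of the soft-decision pairwise error probability, writing the Q-function via the complementary error function; the relation P2(d) = Q(√(2·r·d·Eb/N0)) follows from de² = 4·Eb·r·dH as stated above:

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def pairwise_error_soft(d, rate, ebno):
    """Soft-decision pairwise error probability for BPSK on AWGN:
    P2(d) = Q(sqrt(de^2 / (2*N0))) with de^2 = 4*Eb*rate*d,
    i.e. Q(sqrt(2*rate*d*ebno)) where ebno = Eb/N0 in linear units."""
    return Q(sqrt(2 * rate * d * ebno))
```

Summing a_d * P2(d) over d ≥ dfree (with the weights a_d from T(D)) then gives the error event bound on this slide.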
Performance of r=1/2 Convolutional Codes with Soft Decisions
Bit error probability for rate 1/2 convolutional coding using soft-decision decoding
Performance of r=1/3 Convolutional Codes with Soft Decisions
Bit error probability for rate 1/3 convolutional coding using soft-decision decoding
Convolutional Interleavers
Also called a cross-interleaver
Corresponding Deinterleaver
Punctured Convolutional Codes
Puncturing: delete a parity symbol
(n,k) code -> (n-1,k) code
Bits may be deleted from the streams out of the generators:
produces higher-rate codes
oftentimes alternating bits are punctured
Decoding of punctured codes is easy:
punctured bits count zero towards every path's metric
Rate-Compatible Punctured Codes:
a family of different-rate codes that are all generated from the same trellis
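A minimal sketch of puncturing and depuncturing, assuming a hypothetical pattern that keeps three of every four coded bits (turning a rate-1/2 mother code into rate 2/3); on depuncturing, the deleted positions become erasures that contribute zero to every branch metric:

```python
PATTERN = [1, 1, 1, 0]   # hypothetical puncturing pattern: keep, keep, keep, delete

def puncture(coded):
    """Drop the coded bits whose pattern position is 0."""
    return [b for i, b in enumerate(coded) if PATTERN[i % len(PATTERN)]]

def depuncture(punctured, n_coded):
    """Reinsert erasures (None) where bits were deleted; the Viterbi
    decoder scores an erasure as zero in every branch metric."""
    it = iter(punctured)
    return [next(it) if PATTERN[i % len(PATTERN)] else None
            for i in range(n_coded)]
```

The decoder itself is unchanged; only the branch metric computation needs to skip the None positions.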
Practical Examples of Convolutional Codes
NASA uses a standard r=1/2, K=7 convolutional code.
IS-54/136 TDMA Cellular Standard uses a r=1/2, K=6 convolutional code.
GSM Cellular Standard uses a r=1/2, K=5 convolutional code.
IS-95 CDMA Cellular Standard uses a r=1/2, K=9 convolutional code for the forward channel and a r=1/3, K=9 convolutional code for the reverse channel.
The Galileo space probe used a constraint length 15 convolutional code.
Performance Gap for Convolutional Codes
Motivation for Turbo Codes
Random codes are known to be good for sufficiently long block lengths (known from information theory).
However, random codes have no structure and are hard to decode.
Most known codes with structure do not have as good performance as theory predicts is possible.
Goal: Develop codes with enough structure to decode them, but which "appear" random.
Turbo Codes
Proposed by a team of French researchers with an interest in VLSI design:
Berrou, Glavieux, and Thitimajshima, "Near Shannon Limit Error Correcting Coding and Decoding: Turbo Codes," IEEE Trans. Comm., October 1996.
Parallel concatenation of convolutional codes is used to give the codes structure so they can be decoded.
Pseudorandom interleaving is used to give the codes performance which approaches that of random coding.
Parallel Concatenated Codes
Instead of concatenating in serial, codes can also be concatenated in parallel.
The original turbo code is a parallel concatenation of two recursive systematic convolutional (RSC) codes.
Systematic: one of the outputs is the input.
Example RSC Convolutional Code
Encoding of Turbo Codes
Two recursive systematic convolutional codes are concatenated in parallel (rather than in series).
Each data bit generates two separate sets of check bits (one for each code).
The data is interleaved according to a pseudorandom pattern prior to generating the second set of output bits, so that the two sets of check bits are independently generated.
It may be possible to decode even if some of the check bits are dropped (this is called puncturing).
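The encoding steps above can be sketched as follows; the RSC generator polynomials (feedback 1+D+D², feedforward 1+D², i.e. octal (7,5)) are assumed here as a small illustrative choice, not the code from the lecture's figure:

```python
def rsc_parity(bits):
    """Parity stream of a recursive systematic convolutional encoder
    with feedback 1+D+D^2 and feedforward 1+D^2 (assumed generators)."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        fb = u ^ s1 ^ s2          # feedback bit entering the register
        parity.append(fb ^ s2)    # feedforward taps 1 and D^2
        s1, s2 = fb, s1
    return parity

def turbo_encode(bits, perm):
    """Rate-1/3 parallel concatenation: each data bit is sent systematically
    plus one parity bit from each RSC, the second RSC fed the interleaved data."""
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in perm])
    out = []
    for u, a, b in zip(bits, p1, p2):
        out += [u, a, b]
    return out
```

Puncturing alternate parity bits from this output would raise the rate from 1/3 to 1/2, as noted above.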
Encoder for Basic Rate 1/3 Turbo Code
Pseudo-random Interleaving
The coding dilemma:
Shannon showed that large block-length random codes achieve channel capacity.
However, codes must have structure that permits decoding with reasonable complexity.
Codes with structure don't perform as well as random codes.
"Almost all codes are good, except those that we can think of."
Pseudo-random Interleaving
Solution:
Make the code appear random, while maintaining enough structure to permit decoding.
This is the purpose of the pseudo-random interleaver.
Turbo codes possess random-like properties.
However, since the interleaving pattern is known, decoding is possible.
Iterative Decoding
There is one decoder for each elementary encoder.
Each decoder estimates the a posteriori probability (APP) of each data bit.
The APPs are used as a priori information by the other decoder.
Decoding continues for a set number of iterations.
Performance generally improves from iteration to iteration, but follows a law of diminishing returns.
Iterative Decoding
Classes of Decoding Algorithms
Requirements:
Accept soft inputs in the form of a priori probabilities or log-likelihood ratios.
Produce APP estimates of the data.
"Soft-Input Soft-Output"
MAP (symbol-by-symbol): Maximum A Posteriori
SOVA: Soft Output Viterbi Algorithm
Decision Metrics
The Viterbi Algorithm can operate using any additive metric.
We have seen two different metrics:
Hamming distance measures the difference between sequences.
Euclidean distance measures the difference between signals in Gaussian noise.
We will want to use an even more general metric: log-likelihood ratios.
Log-likelihood Ratios
Let d be a bit whose value we want to determine. Then
L(d) = ln( Pr[d=1] / Pr[d=0] )
is the log-likelihood ratio (LLR).
Interpretation:
Pr[d=1] = 0.5 ⇒ L(d) = 0
Pr[d=1] = 1 ⇒ L(d) approaches infinity
Pr[d=1] = 0 ⇒ L(d) approaches minus infinity
Relationship to Probability
We can convert from an LLR directly to a probability:
Pr[d=1] = e^L(d) / (1 + e^L(d))
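The two conversions are inverses of each other, which a short sketch makes easy to check (function names are my own):

```python
from math import log, exp

def llr(p1):
    """L(d) = ln(Pr[d=1] / Pr[d=0]) for a bit with Pr[d=1] = p1."""
    return log(p1 / (1 - p1))

def prob_from_llr(L):
    """Invert the LLR: Pr[d=1] = e^L / (1 + e^L)."""
    return exp(L) / (1 + exp(L))
```

Note the interpretation from the previous slide: p1 = 0.5 gives L = 0, and L grows without bound as p1 approaches 1.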
Setup of Turbo Decoding Problem
So far we have been looking at only the LLRs of individual bits.
The constraints applied by convolutional codes allow us to use the structure of the decoder as an additional source of knowledge.
The result is that the output will contain additional extrinsic information.
The sum of the a priori information and the extrinsic information will be the a priori information for the next stage of decoding.
A Typical Decoder Stage
Decoding Algorithm
The conventional Viterbi algorithm could use the LLR as a metric (since LLRs add).
We would try to find the path which maximized the LLR metric.
However, the conventional Viterbi decoding algorithm produces hard-decision outputs:
it cannot produce information for the next decoding stage.
We need to find a decoding algorithm that accepts LLRs for both input and output:
Soft-Input Soft-Output (SISO)
PCCC: Number of Iterations
PCCC: Frame Size
PCCC: Code Rate (with Various Code Generators)
0.7 dB more coding gain from rate 1/2 to rate 1/3
PCCC: Code Generator (with Various Frame Sizes)
PCCC, code rate 1/2; 10 iterations for N=200 and N=1000; 18 iterations for N=16000; random interleaver
Implementation Challenges
Complexity:
the iterative decoding algorithm is more complex than the Viterbi algorithm, but shorter constraint lengths may be used
Fixed-Point DSP:
the optimal decoding algorithm is sensitive to numerical precision
the simplified "Soft Output Viterbi Algorithm" is less sensitive
Adaptation:
there is a rich set of performance tradeoffs between code rate, decoding latency, and complexity
Fading Channels:
the decoder requires an estimate of the instantaneous SNR
performance is less well understood in fading
Power Efficiency of Coding Standards