ECE 307 INFORMATION THEORY AND CODING
LDPC ENCODING
A project report submitted for internal assessment
under the guidance of
S. Revathi
by
Name Reg. No.
Lokesh Jindal 11BEC1043
Bhagwat Singh 11BEC1070
Devanshu Goenka 11BEC1099
Gurpartap Singh 11BEC1124
May, 2014
List of contents
1. Chapter 1. Introduction
2. Chapter 2. LDPC Codes
3. Chapter 3. Coding
4. Chapter 4. Results
5. Chapter 5. Industrial Applications
List of figures
1. Communication System Block Diagram
2. Length 4 Cycle
3. Length 6 Cycle
4. LDPC System Overview
5. Flowchart to create the Parity-Check matrix (H)
6. Hamming encoding output for input bits [1 0 1 1]
7. LDPC encoding output for input bits [1 0 1 1]
Chapter 1. Introduction
Robert Gallager originally proposed Low-Density Parity-Check (LDPC) codes in 1962 as a class of channel codes, but implementing them required a large amount of computing power, owing to the high complexity and memory requirements of the encoding/decoding operations, so they were largely forgotten. A few years after turbo codes made their appearance, David MacKay rediscovered LDPC codes and showed that they, too, were capable of approaching the Shannon limit using iterative decoding techniques.
An LDPC code is a linear block code characterised by a very sparse parity-check matrix. That is, the parity-check matrix has a very low concentration of 1's in it, hence the name "low-density parity-check" code. This sparseness is what can lead to excellent performance in terms of bit error rates.
1.1 Digital Communication
Digital communication is a fundamental requirement of the modern world. Many current analog transmission systems, such as cable TV, are converting to digital. Digital transmission allows content to be dynamic and introduces features that were impossible over an analog system.
Figure 1: Communication System Block Diagram
Figure 1 shows a model of a communication system. A digital message originates from the source.
These digital signals are then passed through a source encoder. The source encoder removes redundancy from the message, much the same way computer file compression operates. Following source encoding, the signal is passed through the channel encoder, which adds controlled redundancy to the signal; the signal is then modulated and transmitted over the channel. The reverse process occurs in the receiver.
Chapter 2. LDPC Codes
LDPC codes are a class of linear block codes that approach Shannon's channel capacity limit. They are characterised by the sparseness of ones in the parity-check matrix. This low number of ones allows for a large minimum distance of the code, resulting in improved performance. Although proposed in the early 1960's, it is only recently that these codes have emerged as a promising area of research in achieving channel capacity, in part because of the large amount of processing power required to simulate them. As with any coding scheme, larger-blocklength codes provide better performance but require more computing power.
Performance of a code is measured through its bit error rate (BER) versus signal-to-noise ratio (Eb/N0) in dB. The curve of a good code shows a dramatic drop in BER as the SNR improves; the best codes have a cliff drop at an SNR only slightly higher than the Shannon limit.
2.1 Parity-Check Matrix:
LDPC codes are classified into two classes: regular and irregular codes. Regular codes are those in which a constant number wC of 1's is distributed throughout each column and a constant number wR of 1's appears in each row. For a chosen column weight wC, the row weight is determined as wR = N*wC/(N-k), where N is the blocklength of the code and k is the message length. Irregular codes are those that do not belong to this set (they do not maintain a consistent row weight).
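The weight relation above can be illustrated with a short Python check. The small (N=6, k=3) matrix below is a toy example chosen for illustration only; it is not one of the codes used in this report:

```python
# Check regularity of a parity-check matrix: constant column weight wC
# and constant row weight wR = N * wC / (N - k).

def weights(H):
    """Return (column_weights, row_weights) of a 0/1 matrix H."""
    col_w = [sum(row[j] for row in H) for j in range(len(H[0]))]
    row_w = [sum(row) for row in H]
    return col_w, row_w

# A small N=6, k=3 example with wC = 2, so wR = 6*2/(6-3) = 4.
H = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 1],
]
col_w, row_w = weights(H)
print(col_w)  # every column weight is 2 -> regular
print(row_w)  # every row weight is 4 = N*wC/(N-k)
```

If either list were non-constant, the code would be irregular.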
2.1.1 Methods of generation:
In the 1960's, Gallager established the existence of the class of LDPC codes, but provided no insight into how to generate the parity-check matrix (also known as the 'H' matrix). Many methods of generation have since been proposed by various researchers. Several methods include:
• Random Generation subject to constraints
• Density Evolution
• Finite Geometry
When generating the parity-check matrix, there are several key concerns to examine, such as minimum distance, cycle length and linear independence.
2.2 Minimum Distance of LDPC Codes
The minimum distance is a property of any coding scheme. Ideally it should be as large as possible, but there is a practical limit on how large it can be. LDPC codes pose a particular problem when calculating the minimum distance efficiently, as an effective LDPC code requires a rather large blocklength. Using random generation it is very difficult to specify the minimum distance as a design parameter; instead, the minimum distance becomes a property of the generated code.
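For small codes, the minimum distance can be measured after the fact by brute force over all codewords. A Python sketch (written here for illustration, applied to the G matrix that appears later in Chapter 3):

```python
from itertools import product

def min_distance(G):
    """Minimum Hamming weight over all non-zero codewords m*G (mod 2).
    Brute force over all 2^k messages, so feasible only for small k."""
    k, n = len(G), len(G[0])
    best = n
    for m in product([0, 1], repeat=k):
        if not any(m):
            continue                      # skip the all-zero message
        c = [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(c))
    return best

# The (10, 4) generator matrix used later in Chapter 3.
G = [[1, 0, 0, 1, 1, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 1, 0, 1, 0, 0],
     [0, 0, 1, 1, 1, 0, 0, 0, 1, 0],
     [0, 1, 0, 1, 1, 0, 0, 0, 0, 1]]
print(min_distance(G))  # 4
```

For realistic LDPC blocklengths this enumeration is infeasible, which is exactly the difficulty described above.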
2.3 Cycle Length of LDPC Codes
Using a Tanner graph it is possible to define the minimum cycle length of a code: it is the minimum number of edges travelled from one check node back to the same check node. Length-4 and length-6 cycles, with the corresponding parity-check matrix configurations, are shown in Figures 2 and 3 respectively. It has been shown that the existence of these cycles degrades performance during the iterative decoding process. Therefore, when generating the parity-check matrix, the minimum permitted cycle length must be determined. It is possible to control the minimum cycle length during generation; however, computational complexity and time increase exponentially with each increase in minimum cycle length.
Figure 2: Length 4 Cycle Figure 3: Length 6 Cycle
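In terms of the parity-check matrix, a length-4 cycle exists exactly when some pair of rows shares 1's in two (or more) column positions, which gives a simple programmatic test. A Python check of this condition (the matrices below are illustrative examples, not the report's codes):

```python
from itertools import combinations

def has_length4_cycle(H):
    """A length-4 cycle exists iff some pair of rows of H has
    1-entries in at least two common column positions."""
    for r1, r2 in combinations(H, 2):
        shared = sum(a & b for a, b in zip(r1, r2))
        if shared >= 2:
            return True
    return False

# Rows 1 and 2 overlap in columns 0 and 1 -> a length-4 cycle.
H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]
# No pair of rows overlaps in more than one column -> no length-4 cycle.
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 1]]
print(has_length4_cycle(H_bad))   # True
print(has_length4_cycle(H_good))  # False
```

This is the check that the generation algorithm of Section 2.6 must apply while placing 1's.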
2.4 Linear Independence
The generator matrix G is defined such that:
c = G^T m
where
c = [c1, c2, ..., cN]^T - codeword
m = [m1, m2, ..., mk]^T - message word
G - k by N generator matrix
In order to guarantee the existence of such a matrix G, the linear independence of all rows of the parity-check matrix must be assured. In practical random generation, this becomes very difficult.
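Linear independence over GF(2) can be checked with a rank computation on the parity-check matrix. A minimal Python sketch (the 3x4 matrix below is an illustrative example):

```python
def gf2_rank(H):
    """Row rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    rows = [r[:] for r in H]
    rank, n = 0, len(H[0])
    for col in range(n):
        # find a pivot row with a 1 in this column
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # clear this column in every other row (XOR = addition mod 2)
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

H = [[1, 0, 1, 1],
     [0, 1, 1, 0],
     [1, 1, 0, 0]]
print(gf2_rank(H))  # 3: all rows independent, so a valid G exists
```

If the rank is less than the number of rows, some parity checks are redundant and the construction of G described in Section 2.7 fails.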
2.5 LDPC System Overview
Figure 4: LDPC System overview
Where,
m - Message
c - Codeword
x – Modulated signal
n – AWGN noise
y – Received signal
ĉ – Estimated codeword
m̂ – Estimated message
2.6 Generation of Parity-Check Matrix:
The method used for generating the H matrix in this project was random generation with constraints. The generation routine takes four input parameters:
• N - Block/Codeword Length
• k - Message Bits
• 𝑤𝐶 - Column Weight (no. of 1’s per column)
• reltol - Tolerance Variable used to control regularity
The row weight (wR) is computed as wR = N*wC/(N-k). To guarantee that wR is a whole number, the value is rounded up if it has a fractional part, which sets the maximum allowed number of 1's per row. To allow sufficiently fast computation of the H matrix, only cycles of length 4 are avoided in the algorithm. The algorithm for generation of the matrix is shown in Figure 5 below.
Figure 5: Flowchart to create the Parity-Check matrix (H)
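The flowchart above can be sketched in code. The Python routine below is an illustrative re-implementation of the idea (random placement of 1's under column-weight, row-weight and length-4-cycle constraints, with restarts on dead ends), not a transcription of the report's MATLAB:

```python
import random

def make_H(N, k, wc, max_tries=1000, seed=0):
    """Randomly build an (N-k) x N parity-check matrix with column
    weight wc, a row-weight cap of ceil(N*wc/(N-k)), and no length-4
    cycles (no two rows may share 1's in more than one column)."""
    M = N - k
    wr = -(-N * wc // M)          # ceil(N*wc/(N-k)): max 1's per row
    rng = random.Random(seed)
    for _ in range(max_tries):    # restart whole matrix on a dead end
        H = [[0] * N for _ in range(M)]
        ok = True
        for col in range(N):
            placed = []           # rows already given a 1 in this column
            for _ in range(wc):
                # candidate rows: under the row-weight cap and sharing
                # no earlier column with any row already placed here
                cands = [r for r in range(M)
                         if r not in placed and sum(H[r]) < wr
                         and all(sum(a & b for a, b in zip(H[r], H[p])) == 0
                                 for p in placed)]
                if not cands:
                    ok = False
                    break
                r = rng.choice(cands)
                H[r][col] = 1
                placed.append(r)
            if not ok:
                break
        if ok:
            return H
    raise RuntimeError("could not satisfy the constraints; retry or relax them")

H = make_H(10, 4, 2)              # N=10, k=4, wc=2 -> a 6 x 10 matrix
print(len(H), len(H[0]))          # 6 rows (N-k), 10 columns (N)
```

As noted in Section 2.3, enforcing larger minimum cycle lengths would make the candidate test (and hence each trial) much more expensive.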
2.7 Encoding:
Practical encoding of LDPC codes can be difficult to implement. In simulation, however, encoding can be done via matrix multiplication, since the memory of most personal computers can handle these operations even with rather large blocklengths. In Section 2.4 it was established that we can compute the codeword c using:
c = G^T m
Now we determine how to generate the matrix G. To determine the relationship of the parity bits to the H matrix, we use the following definition of the syndrome, similar to that of the Hamming code. We define a complete set of successful parity checks as:
Hc = 0
where:
c = [c1, c2, ..., cN]^T - codeword
H - (N-k) by N parity-check matrix
The location of the parity bits in the codeword is arbitrary, therefore we form our codeword such that:
c = [p : m]^T
where:
m = [m1, m2, ..., mk]^T - message word
p = [p1, p2, ..., p(N-k)]^T - parity bits
Therefore:
H [p : m]^T = 0
H can be partitioned as:
H = [X : Y]
where:
X - (N-k) by (N-k) sub-matrix
Y - (N-k) by k sub-matrix
From this we find:
Xp + Ym = 0
Using modulo-2 arithmetic (in which -Y = Y), we can solve for p as:
p = X^-1 Y m
Then we solve for c as:
c = [(X^-1 Y)^T : I]^T m
where I is the k by k identity matrix, and we define G as:
G = [(X^-1 Y)^T : I]
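The derivation above can be exercised numerically. The Python sketch below solves X p = Y m over GF(2) and is applied to the H matrix that is built in Chapter 3 (there H = [I : P'], so X = I and p = Y m directly); it is an illustration, not the report's MATLAB code:

```python
def gf2_solve(X, b):
    """Solve X p = b over GF(2) by Gaussian elimination.
    Assumes X is square and invertible over GF(2)."""
    n = len(X)
    A = [X[i][:] + [b[i]] for i in range(n)]   # augmented matrix [X | b]
    for col in range(n):
        pivot = next(i for i in range(col, n) if A[i][col])
        A[col], A[pivot] = A[pivot], A[col]
        for i in range(n):
            if i != col and A[i][col]:
                A[i] = [x ^ y for x, y in zip(A[i], A[col])]
    return [A[i][n] for i in range(n)]

def encode(H, m, k):
    """Encode message m against H = [X : Y]: solve X p = Y m (mod 2),
    then return the codeword c = [p : m]."""
    M = len(H)                                 # M = N - k parity checks
    X = [row[:M] for row in H]
    Y = [row[M:] for row in H]
    Ym = [sum(y * mj for y, mj in zip(row, m)) % 2 for row in Y]
    p = gf2_solve(X, Ym)
    return p + list(m)

# Rebuild the H = [I : P'] matrix used in Chapter 3 (where G = [P : I]).
P = [[1, 0, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1],
     [0, 0, 1, 1, 1, 0],
     [0, 1, 0, 1, 1, 0]]
H = [[1 if i == j else 0 for j in range(6)] + [P[r][i] for r in range(4)]
     for i in range(6)]
print(encode(H, [1, 0, 1, 1], 4))  # [1, 1, 1, 1, 1, 0, 1, 0, 1, 1]
```

The returned codeword satisfies every parity check, i.e. H c = 0 (mod 2), as required by the definition of the syndrome above.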
Chapter 3. Coding
The LDPC encoding code is written in MATLAB, and its result is compared with Hamming encoding. The generator matrix (G) is taken from "Communication Systems" by Simon Haykin:
G = [1 0 0 1 1 0 1 0 0 0
     0 0 0 1 1 1 0 1 0 0
     0 0 1 1 1 0 0 0 1 0
     0 1 0 1 1 0 0 0 0 1]
The code below handles both Hamming and LDPC encoding with the same G matrix.
% Generator Matrix
G = [1 0 0 1 1 0 1 0 0 0; 0 0 0 1 1 1 0 1 0 0; 0 0 1 1 1 0 0 0 1 0; 0 1 0 1 1 0 0 0 0 1];
Gt=G.';
p = G(1:4,1:6); %parity sub-matrix, because G=[p:I]
% Make Identity Matrix
I = eye(6);
H = [I,p.']; %Parity-Check Matrix
Ht= H.';
%Choose LDPC or Hamming Encoding
F = menu('Choose an Encoding method','1.Hamming Code','2.LDPC');
if F==1
%choose your choice either encoding or decoding
B= menu('Choose your Choice', '1.Encoding', '2.Decoding');
if B==1
%Hamming Encoding
m=input('enter message(4 bits) string: \n');
b=[m]*[p]; %Parity bits
for i=1:length(b)
if rem(b(i),2)==1;
b(i)=1;
else b(i)=0;
end
end
Encoded_Message=[b,m] %Encoded Message
end
if B==2
%Hamming Decoding
code=input('enter received message (10 bits) string: \n');
s=[code]*Ht; %Syndrome
for i=1:length(s)
if rem(s(i),2)==1;
s(i)=1;
else s(i)=0;
end
end
%Error detection and correction using Syndrome
if s==[0 0 0 0 0 0] %no error
Error_Free_Received_msg=code;
Decoded_msg=code(7:10);
end
for i=1:10
if s==H(1:6,i).' %syndrome matches column i: error at ith bit
code(i)=xor(code(i),1);
end
end
Error_Free_Received_msg=code; %Received bits after error correction
Decoded_msg=code(7:10) %Decoded Message
end
end
if F==2
%choose your choice - encoding or decoding
B= menu('Choose your Choice', '1.Encoding', '2.Decoding');
if B==1
%LDPC Encoding
m=input('enter message(4-bit) string: \n');
mt=m.';
Encoded_Message=Gt*mt;
for i=1:length(Encoded_Message)
if rem(Encoded_Message(i),2)==1;
Encoded_Message(i)=1;
else Encoded_Message(i)=0;
end
end
Encoded_Message.'
end
if B==2
%LDPC Decoding
y=input('enter received message(10 bits) string: \n');
Decoded_Message=[G*y.'].' %Decoded message
sum=0;
for i=1:length(Decoded_Message)
sum=sum+Decoded_Message(i);
end
%set threshold value
th=sum/length(Decoded_Message)
for i=1:length(Decoded_Message)
if Decoded_Message(i)>=th;
Decoded_Message(i)=1;
else Decoded_Message(i)=0;
end
end
Decoded_Message %Decoded message
end
end
Chapter 4. Results
In this project we obtained the same encoded output when simulating the LDPC algorithm and the Hamming algorithm with the same G matrix.
Snapshots of the output are shown below for Hamming encoding and LDPC encoding in Figures 6 and 7 respectively.
Figure 6: Hamming encoding output for input bits [1 0 1 1]
Figure 7: LDPC encoding output for input bits [1 0 1 1]
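The agreement of the two encoders can also be reproduced outside MATLAB. The following Python sketch (an illustration, independent of the MATLAB listing) repeats both computations for the same input bits:

```python
# Hamming-style: parity bits b = m*P (mod 2), codeword [b, m].
# LDPC-style:    codeword c = m*G (mod 2) (the transpose of G'*m').
G = [[1, 0, 0, 1, 1, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 1, 0, 1, 0, 0],
     [0, 0, 1, 1, 1, 0, 0, 0, 1, 0],
     [0, 1, 0, 1, 1, 0, 0, 0, 0, 1]]
m = [1, 0, 1, 1]

P = [row[:6] for row in G]   # G = [P : I], so the first 6 columns form P
b = [sum(m[i] * P[i][j] for i in range(4)) % 2 for j in range(6)]
hamming = b + m

ldpc = [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(10)]

print(hamming)  # [1, 1, 1, 1, 1, 0, 1, 0, 1, 1]
print(ldpc)     # identical: since G = [P : I], m*G = [m*P : m]
```

The two outputs coincide because G has the systematic form [P : I], so multiplying m by G reproduces the parity bits m*P followed by the message itself.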
Chapter 5. Industrial Applications
LDPC codes are chosen for their excellent error-correcting performance, and their sparse, structured properties are exploited to reduce computation and memory requirements. Despite continuous progress in battery capacity, minimising energy dissipation is still one of the key objectives in the design of most sensor devices.
In particular, transmission energy is a significant component of the overall energy budget of a wireless sensor. LDPC codes can be used to protect the transmitted information against channel errors, thus allowing for a lower transmission energy. The energy saved at the transmitter depends on the coding gain of the selected code: the more powerful the code, the larger the energy saving. However, a decoder is required at the receiver side to reconstruct the original information.
Some of the industrial applications are listed below:
• LDPC codes over the Galois field GF(q) are used for both single-input single-output (SISO) and multiple-input multiple-output (MIMO) fading channels with higher-order modulations
• Non-binary LDPC codes are used for small-packet transmission in vehicle communications
• On-the-fly configurable codes for ACM/VCM applications
• LDPC codes are used in digital video broadcasting
• LDPC codes are also applied to the wiretap channel
• These codes are also used in telephone communication
• LDPC codes are used in video conferencing