CS252 Graduate Computer Architecture Lecture 23 Memory Technology (Con’t) Error Correction Codes April 21 st , 2010 John Kubiatowicz Electrical Engineering and Computer Sciences University of California, Berkeley http://www.eecs.berkeley.edu/~kubitron/ cs252


Page 1:

CS252 Graduate Computer Architecture

Lecture 23

Memory Technology (Con’t)
Error Correction Codes

April 21st, 2010

John Kubiatowicz

Electrical Engineering and Computer Sciences

University of California, Berkeley

http://www.eecs.berkeley.edu/~kubitron/cs252

Page 2:

4/21/2010 cs252-S10, Lecture 23 2

Main Memory Background

• Performance of Main Memory:

– Latency: Cache Miss Penalty

» Access Time: time between the request and when the word arrives

» Cycle Time: time between requests

– Bandwidth: I/O & Large Block Miss Penalty (L2)

• Main Memory is DRAM: Dynamic Random Access Memory
– Dynamic: needs to be refreshed periodically (8 ms, 1% of time)

– Addresses divided into 2 halves (Memory as a 2D matrix):

» RAS or Row Address Strobe

» CAS or Column Address Strobe

• Cache uses SRAM: Static Random Access Memory
– No refresh (6 transistors/bit vs. 1 transistor/bit)
– Size ratio DRAM/SRAM: 4-8×; Cost & Cycle time ratio SRAM/DRAM: 8-16×

Page 3:

DRAM Architecture

[Figure: DRAM architecture — N+M address bits split between a row address decoder (N bits selecting one of 2^N word lines) and a column decoder with sense amplifiers (M bits selecting among 2^M bit lines); each memory cell stores one bit, and data leaves through pin D.]

• Bits stored in 2-dimensional arrays on chip

• Modern chips have around 4 logical banks on each chip

– each logical bank physically implemented as many smaller arrays

Page 4:

1-T Memory Cell (DRAM)

• Write:
– 1. Drive bit line
– 2. Select row

• Read:
– 1. Precharge bit line to Vdd/2
– 2. Select row
– 3. Cell and bit line share charges
» Very small voltage change on the bit line
– 4. Sense (fancy sense amp)
» Can detect changes of ~1 million electrons
– 5. Write: restore the value

• Refresh
– 1. Just do a dummy read to every cell

[Figure: 1-T DRAM cell — an access transistor gated by the row-select line connects the storage capacitor to the bit line.]

Page 5:

DRAM Capacitors: more capacitance in a small area

• Trench capacitors:
– Logic ABOVE capacitor
– Gain in surface area of capacitor
– Better scaling properties
– Better planarization

• Stacked capacitors– Logic BELOW capacitor

– Gain in surface area of capacitor

– 2-dim cross-section quite small

Page 6:

DRAM Operation: Three Steps

• Precharge
– charges bit lines to known value, required before next row access

• Row access (RAS)
– decode row address, enable addressed row (often multiple Kb in row)
– bit lines share charge with storage cell
– small change in voltage detected by sense amplifiers, which latch whole row of bits
– sense amplifiers drive bit lines full rail to recharge storage cells

• Column access (CAS)
– decode column address to select small number of sense amplifier latches (4, 8, 16, or 32 bits depending on DRAM package)
– on read, send latched bits out to chip pins
– on write, change sense amplifier latches, which then charge storage cells to required value
– can perform multiple column accesses on same row without another row access (burst mode)

Page 7:

DRAM Read Timing (Example)

[Figure: read-timing waveforms for a 256K x 8 DRAM (9 multiplexed row/column address pins; control pins RAS_L, CAS_L, WE_L, OE_L; data pins D tri-stated High-Z between transfers). The waveform shows the row address latched on RAS_L, the column address latched on CAS_L, the read access time, the output-enable delay, and the overall DRAM read cycle time.]

• Every DRAM access begins at the assertion of RAS_L
• 2 ways to read: early or late v. CAS
– Early read cycle: OE_L asserted before CAS_L
– Late read cycle: OE_L asserted after CAS_L

Page 8:

• DRAM (Read/Write) Cycle Time >> DRAM (Read/Write) Access Time

– 2:1; why?

• DRAM (Read/Write) Cycle Time:
– How frequently can you initiate an access?
– Analogy: A little kid can only ask his father for money on Saturday

• DRAM (Read/Write) Access Time:
– How quickly will you get what you want once you initiate an access?
– Analogy: As soon as he asks, his father will give him the money

• DRAM Bandwidth Limitation analogy:
– What happens if he runs out of money on Wednesday?

[Figure: timeline marking Access Time (request to data) as a portion of the longer Cycle Time (request to next request).]

Main Memory Performance

Page 9:

Increasing Bandwidth - Interleaving

Access Pattern without Interleaving:
– CPU issues Start Access for D1 and must wait until D1 is available before Start Access for D2

Access Pattern with 4-way Interleaving:
– CPU cycles through Memory Banks 0-3: Access Bank 0, Access Bank 1, Access Bank 2, Access Bank 3, then we can access Bank 0 again — the bank access times overlap
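The round-robin overlap described above can be sketched in a few lines; the 4-bank count and the low-order modulo mapping are illustrative assumptions, not details from the slides:

```python
# Sketch of low-order (word) interleaving: consecutive word addresses
# map to consecutive banks, so sequential accesses overlap in time.
NUM_BANKS = 4  # illustrative choice

def bank_of(word_addr: int) -> int:
    """Bank that services this word address (low-order interleaving)."""
    return word_addr % NUM_BANKS

def bank_offset(word_addr: int) -> int:
    """Row within the selected bank."""
    return word_addr // NUM_BANKS

# Sequential words hit banks 0,1,2,3,0,... so a new access can start
# while earlier banks are still busy with their cycle time.
print([bank_of(a) for a in range(8)])  # -> [0, 1, 2, 3, 0, 1, 2, 3]
```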

Page 10:

Main Memory Performance

• Simple:
– CPU, Cache, Bus, Memory same width (32 bits)

• Interleaved:
– CPU, Cache, Bus 1 word; Memory N Modules (4 Modules); example is word interleaved

• Wide:
– CPU/Mux 1 word; Mux/Cache, Bus, Memory N words (Alpha: 64 bits & 256 bits)

Page 11:

Quest for DRAM Performance

1. Fast Page mode
– Add timing signals that allow repeated accesses to the row buffer without another row access time
– Such a buffer comes naturally, as each array will buffer 1024 to 2048 bits for each access

2. Synchronous DRAM (SDRAM)
– Add a clock signal to DRAM interface, so that repeated transfers do not bear the overhead of synchronizing with the DRAM controller

3. Double Data Rate (DDR SDRAM)
– Transfer data on both the rising edge and falling edge of the DRAM clock signal, doubling the peak data rate
– DDR2 lowers power by dropping the voltage from 2.5 to 1.8 volts + offers higher clock rates: up to 400 MHz
– DDR3 drops to 1.5 volts + higher clock rates: up to 800 MHz

• Improved Bandwidth, not Latency

Page 12:

Fast Memory Systems: DRAM specific

• Multiple CAS accesses: several names (page mode)

– Extended Data Out (EDO): 30% faster in page mode

• Newer DRAMs to address gap; what will they cost, will they survive?

– RAMBUS: startup company; reinvented DRAM interface

» Each Chip a module vs. slice of memory

» Short bus between CPU and chips

» Does own refresh

» Variable amount of data returned

» 1 byte / 2 ns (500 MB/s per chip)

– Synchronous DRAM: 2 banks on chip, a clock signal to DRAM, transfer synchronous to system clock (66 - 150 MHz)

» DDR DRAM: Two transfers per clock (on rising and falling edge)

– Intel claims FB-DIMM is the next big thing

» Stands for “Fully-Buffered DIMM (Dual-Inline Memory Module)”

» Same basic technology as DDR, but utilizes a serial “daisy-chain” channel between different memory components.

Page 13:

Fast Page Mode Operation

• Regular DRAM Organization:
– N rows x N columns x M-bit
– Read & Write M-bit at a time
– Each M-bit access requires a RAS/CAS cycle

• Fast Page Mode DRAM
– N x M “SRAM” to save a row

• After a row is read into the register
– Only CAS is needed to access other M-bit blocks on that row
– RAS_L remains asserted while CAS_L is toggled

[Figure: DRAM array (N rows x N cols) feeding an N x M “SRAM” row register selected by the column address to produce the M-bit output. Timing: one row address with RAS_L held asserted, then successive column addresses on CAS_L deliver the 1st, 2nd, 3rd, and 4th M-bit accesses.]

Page 14:

SDRAM timing (Single Data Rate)

• Micron 128M-bit DRAM (using the 2 Meg x 16 bit x 4 bank version)
– Row (12 bits), bank (2 bits), column (9 bits)

[Figure: SDRAM command timing — RAS to a new bank, CAS followed by its CAS latency, a burst READ, and a precharge.]

Page 15:

Double-Data Rate (DDR2) DRAM

[Figure: DDR2 timing from a Micron 256Mb DDR2 SDRAM datasheet — Row activate, Column read, Precharge, then the next Row; data is transferred on both edges of a 200MHz clock for a 400Mb/s data rate per pin.]

Page 16:

DDR vs DDR2 vs DDR3

• All about increasing the rate at the pins

• Not an improvement in latency

– In fact, latency can sometimes be worse

• Internal banks often consumed for increased bandwidth

Page 17:

DRAM Packaging

• DIMM (Dual Inline Memory Module) contains multiple chips arranged in “ranks”

• Each rank has clock/control/address signals connected in parallel (sometimes need buffers to drive signals to all chips), and data pins work together to return wide word

– e.g., a rank could implement a 64-bit data bus using 16x4-bit chips, or a 64-bit data bus using 8x8-bit chips.

• A modern DIMM usually has one or two ranks (occasionally 4 if high capacity)

– A rank will contain the same number of banks as each constituent chip (e.g., 4-8)

[Figure: DRAM chip interface — ~12 multiplexed row/column address lines, ~7 clock and control signals, and a 4b/8b/16b/32b data bus.]

Page 18:

DRAM Channel

[Figure: a memory controller drives a shared command/address bus and a 64-bit data bus to two ranks; each rank is four x16 chips (each chip containing multiple banks), so the four chips together supply the 64-bit data bus.]

Page 19:

DRAM name based on Peak Chip Transfers / Sec
DIMM name based on Peak DIMM MBytes / Sec

Standard | Clock Rate (MHz) | M transfers/second | DRAM Name | MBytes/s/DIMM | DIMM Name
DDR      |  133 |  266 | DDR266    |  2128 | PC2100
DDR      |  150 |  300 | DDR300    |  2400 | PC2400
DDR      |  200 |  400 | DDR400    |  3200 | PC3200
DDR2     |  266 |  533 | DDR2-533  |  4264 | PC4300
DDR2     |  333 |  667 | DDR2-667  |  5336 | PC5300
DDR2     |  400 |  800 | DDR2-800  |  6400 | PC6400
DDR3     |  533 | 1066 | DDR3-1066 |  8528 | PC8500
DDR3     |  666 | 1333 | DDR3-1333 | 10664 | PC10700
DDR3     |  800 | 1600 | DDR3-1600 | 12800 | PC12800

(transfers/s = clock rate x 2; MBytes/s per DIMM = transfers/s x 8 bytes)
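The naming arithmetic in the table (transfers are double the clock, and a 64-bit DIMM moves 8 bytes per transfer) can be checked with a tiny helper; `peak_rates` is just the table's rule written out, not an official formula:

```python
def peak_rates(clock_mhz: int):
    """Naming rule from the table: 2 transfers per clock, 8-byte DIMM bus."""
    mtransfers = clock_mhz * 2      # double data rate: both clock edges
    dimm_mbytes = mtransfers * 8    # 64-bit DIMM bus = 8 bytes per transfer
    return mtransfers, dimm_mbytes

# The DDR400 / PC3200 row of the table:
print(peak_rates(200))  # -> (400, 3200)
```

Note the marketing names round: DDR2-533 yields 4264 MB/s but is sold as PC4300.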

Page 20:

FB-DIMM Memories

• Uses Commodity DRAMs with special controller on actual DIMM board

• Connection is in a serial form:

[Figure: controller connected to a daisy-chain of FB-DIMMs over serial links, contrasted with a regular DIMM organization on a parallel bus.]

Page 21:

FLASH Memory

• Like a normal transistor but:
– Has a floating gate that can hold charge
– To write: raise or lower wordline high enough to cause charges to tunnel
– To read: turn on wordline as if normal transistor
» presence of charge changes threshold and thus measured current

• Two varieties:
– NAND: denser, must be read and written in blocks
– NOR: much less dense, fast to read and write

Samsung 2007: 16GB NAND Flash

Page 22:

Tunneling Magnetic Junction (MRAM)

• Tunneling Magnetic Junction RAM (TMJ-RAM)
– Speed of SRAM, density of DRAM, non-volatile (no refresh)
– “Spintronics”: combination of quantum spin and electronics
– Same technology used in high-density disk drives

Page 23:

Phase Change memory (IBM, Samsung, Intel)

• Phase Change Memory (called PRAM or PCM)
– Chalcogenide material can change from amorphous to crystalline state with application of heat
– Two states have very different resistive properties
– Similar to material used in CD-RW process

• Exciting alternative to FLASH
– Higher speed
– May be easy to integrate with CMOS processes

Page 24:

Error Correction Codes (ECC)

• Memory systems generate errors (accidentally flipped bits)
– DRAMs store very little charge per bit

– “Soft” errors occur occasionally when cells are struck by alpha particles or other environmental upsets.

– Less frequently, “hard” errors can occur when chips permanently fail.

– Problem gets worse as memories get denser and larger

• Where is “perfect” memory required?– servers, spacecraft/military computers, ebay, …

• Memories are protected against failures with ECCs

• Extra bits are added to each data-word– used to detect and/or correct faults in the memory system

– in general, each possible data word value is mapped to a unique “code word”. A fault changes a valid code word to an invalid one - which can be detected.

Page 25:

ECC Approach: Redundancy

• Approach: Redundancy
– Add extra information so that we can recover from errors
– Can we do better than just create complete copies?

• Block Codes: Data Coded in blocks
– k data bits coded into n encoded bits
– Measure of overhead: Rate of Code: k/n
– Often called an (n,k) code
– Consider data as vectors in GF(2) [i.e. vectors of bits]

• Code Space is set of all 2^n vectors, Data space set of 2^k vectors
– Encoding function: C = f(d)
– Decoding function: d = f(C’)
– Not all possible code vectors, C, are valid!

Page 26:

General Idea: Code Vector Space

[Figure: data vector v0 mapped by the encoding function, C0 = f(v0), into the code space; valid code words are separated by the code distance (Hamming distance).]

• Not every vector in the code space is valid
• Hamming Distance (d):
– Minimum number of bit flips to turn one code word into another

• Number of errors that we can detect: (d-1)
• Number of errors that we can fix: ½(d-1)

Page 27:

Some Code Types

• Linear Codes:
Code is generated by G and is in the null-space of H: C = G·d, H·C = 0
– (n,k) code: Data space 2^k, Code space 2^n
– (n,k,d) code: specify distance d as well

• Random code:
– Need to both identify errors and correct them
– Distance d ⇒ correct ½(d-1) errors

• Erasure code:
– Can correct errors if we know which bits/symbols are bad
– Example: RAID codes, where “symbols” are blocks of disk
– Distance d ⇒ correct (d-1) errors

• Error detection code:
– Distance d ⇒ detect (d-1) errors

• Hamming Codes
– d = 3 ⇒ Columns nonzero, distinct
– d = 4 ⇒ Columns nonzero, distinct, odd-weight

• Binary Golay code: based on quadratic residues mod 23
– Binary code: [24, 12, 8] and [23, 12, 7]
– Often used in space-based schemes, can correct 3 errors

Page 28:

Hamming Bound, symbols in GF(2)

• Consider an (n,k) code with distance d
– How do n, k, and d relate to one another?

• First question: How big are spheres?
– For distance d, spheres are of radius ½(d-1)
» i.e. all errors with weight ½(d-1) or less must fit within the sphere
– Thus, size of sphere is at least 1 + Num(1-bit err) + Num(2-bit err) + … + Num(½(d-1)-bit err):

  Size ≥ Σ_{e=0}^{½(d-1)} C(n,e)

• Hamming bound reflects bin-packing of spheres:
– need 2^k of these spheres within the code space:

  2^k · Σ_{e=0}^{½(d-1)} C(n,e) ≤ 2^n

– For d=3: 2^k · (n+1) ≤ 2^n
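The bound is easy to check numerically. This helper (an illustrative sketch, the name `hamming_bound_ok` is mine) evaluates the sphere-packing inequality directly:

```python
from math import comb

def hamming_bound_ok(n: int, k: int, d: int) -> bool:
    """Check 2^k * sum_{e=0..t} C(n,e) <= 2^n, where t = (d-1)//2."""
    t = (d - 1) // 2
    sphere = sum(comb(n, e) for e in range(t + 1))   # sphere volume
    return (2 ** k) * sphere <= 2 ** n

print(hamming_bound_ok(7, 4, 3))  # (7,4) Hamming code: 16*(1+7) = 128 = 2^7
print(hamming_bound_ok(7, 5, 3))  # one more data bit would overpack the space
```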

Page 29:

4/26/2010 cs252-S10, Lecture 24 29

How to Generate code words?

• Consider a linear code. Need a Generator Matrix.
– Let vi be the data value (k bits), Ci be the resulting code (n bits):

  Ci = G·vi     (G must be an n×k matrix)

• Are there 2^k unique code values?
– Only if the k columns of G are linearly independent!

• Of course, need some way of decoding as well:

  vi = fd(Ci’)

– Is this linear??? Why or why not?

• A code is systematic if the data is directly encoded within the code words.
– Means the Generator has the form: G = [ I ; P ] (identity stacked on a parity portion P)
– Can always turn a non-systematic code into a systematic one (row ops)

• But – What is distance of code? Not obvious!

Page 30:

Implicitly Defining Codes by Check Matrix

• Consider a parity-check matrix H ((n-k)×n)
– Define valid code words Ci as those that give a zero syndrome (the null space of H):

  Si = H·Ci = 0

– Size of null space? (null-rank of H) = k if there are (n-k) linearly independent columns in H

• Suppose we transmit code word C with error:
– Model this as a vector E which flips selected bits of C to get R (received):

  R = C ⊕ E

– Consider what happens when we multiply by H:

  S = H·R = H·(C ⊕ E) = H·C ⊕ H·E = H·E

• What is distance of code?
– Code has distance d if no sum of d-1 or fewer columns yields 0
– I.e. no error vectors, E, of weight < d have zero syndromes
– So – code design is designing the H matrix

Page 31:

How to relate G and H (Binary Codes)

• Defining H makes it easy to understand the distance of the code, but hard to generate the code (H defines the code implicitly!)

• However, let H be of the following form:

  H = [ P | I ]    P is (n-k)×k, I is (n-k)×(n-k)  ⇒  H is (n-k)×n

• Then, G can be of the following form (maximal code size):

  G = [ I ; P ]    P is (n-k)×k, I is k×k  ⇒  G is n×k

• Notice: G generates values in the null-space of H and has k independent columns, so it generates 2^k unique values:

  Si = H·G·vi = [ P | I ]·[ I ; P ]·vi = (P ⊕ P)·vi = 0

Page 32:

Simple example (Parity, d=2)

• Parity code (8-bits): H is a single row of nine 1s, and G appends one parity bit (the XOR of all eight data bits) to the data:

  H = [ 1 1 1 1 1 1 1 1 1 ]

  G = [ 1 1 1 1 1 1 1 1      ← parity row
        1 0 0 0 0 0 0 0
        0 1 0 0 0 0 0 0
        0 0 1 0 0 0 0 0
        0 0 0 1 0 0 0 0
        0 0 0 0 1 0 0 0
        0 0 0 0 0 1 0 0
        0 0 0 0 0 0 1 0
        0 0 0 0 0 0 0 1 ]

[Figure: encoder XORs data bits v7…v0 to produce check bit c8; decoder XORs received bits C8…C0 to produce syndrome s0.]

• Note: Complexity of logic depends on number of 1s in row!
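The parity example above reduces to two XOR folds, which a short sketch makes concrete (function names are mine, not from the slides):

```python
from functools import reduce

def parity_encode(bits):
    """Append one check bit = XOR of the 8 data bits (G's parity row)."""
    return bits + [reduce(lambda a, b: a ^ b, bits)]

def parity_syndrome(codeword):
    """H is a single row of 1s: syndrome s0 = XOR of all 9 received bits."""
    return reduce(lambda a, b: a ^ b, codeword)

data = [1, 0, 1, 1, 0, 0, 1, 0]
cw = parity_encode(data)
assert parity_syndrome(cw) == 0   # valid code word: zero syndrome
cw[3] ^= 1                        # single bit flip
assert parity_syndrome(cw) == 1   # detected — but d=2, so not locatable
```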

Page 33:

Simple example: Repetition (voting, d=3)

• Repetition code (1-bit): each data bit v0 is copied three times (C0, C1, C2); a single error is outvoted by the two good copies:

  G = [ 1 ; 1 ; 1 ]

  H = [ 1 0 1
        0 1 1 ]

• Positives: simple

• Negatives:
– Expensive: only 33% of code word is data
– Not packed in Hamming-bound sense (only d=3). Could get much more efficient coding by encoding multiple bits at a time

Page 34:

Simple Example: Hamming Code (d=3)

• Binary Hamming code meets Hamming bound

• Recall bound for d=3:  2^k · (n+1) ≤ 2^n

• So, rearranging:  2^c ≥ c + k + 1, where c = n - k check bits

• Thus, for:
– c=3 check bits, k ≤ 4
– c=4 check bits, k ≤ 11, use k=8?
– c=5 check bits, k ≤ 26, use k=16?
– c=6 check bits, k ≤ 57, use k=32?
– c=7 check bits, k ≤ 120, use k=64?

• H matrix consists of all unique, non-zero columns
– There are 2^c - 1 such vectors, c used for parity, so 2^c - c - 1 remain for data

• Example (7,4) code:

  H = [ 1 0 0 0 1 1 1
        0 1 0 1 0 1 1
        0 0 1 1 1 0 1 ]

  G = [ 0 1 1 1
        1 0 1 1
        1 1 0 1
        1 0 0 0
        0 1 0 0
        0 0 1 0
        0 0 0 1 ]
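The (7,4) matrices above can be exercised directly. The decode loop below is a sketch of standard syndrome decoding (the syndrome of a single-bit error equals the corresponding column of H), not code from the lecture:

```python
# H rows and G rows exactly as on the slide; all arithmetic is mod 2.
H = [[1,0,0,0,1,1,1],
     [0,1,0,1,0,1,1],
     [0,0,1,1,1,0,1]]
G = [[0,1,1,1],
     [1,0,1,1],
     [1,1,0,1],
     [1,0,0,0],
     [0,1,0,0],
     [0,0,1,0],
     [0,0,0,1]]

def matvec(M, v):
    """Matrix-vector product over GF(2)."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

def encode(data4):
    return matvec(G, data4)                  # C = G·v

def correct(received):
    """Single-error correction: syndrome matches the flipped bit's column."""
    s = matvec(H, received)
    if any(s):
        cols = [[H[r][c] for r in range(3)] for c in range(7)]
        received = received[:]
        received[cols.index(s)] ^= 1         # flip the located bit back
    return received

cw = encode([1, 0, 1, 1])
assert matvec(H, cw) == [0, 0, 0]            # valid code word: zero syndrome
bad = cw[:]; bad[5] ^= 1                     # inject a single-bit error
assert correct(bad) == cw                    # syndrome locates and fixes it
```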

Page 35:

Example, d=4 code (SEC-DED)

• Design H with:
– All columns non-zero, odd-weight, distinct
» Note that odd-weight refers to Hamming weight, i.e. the number of ones

• Why does this generate d=4?
– Any single bit error will generate a distinct, non-zero value
– Any double error will generate a distinct, non-zero value
» Why? Add together two distinct columns, get distinct result
– Any triple error will generate a non-zero value
» Why? Add together three odd-weight values, get an odd-weight value
– So: need four errors before indistinguishable from a code word

• Because d=4:
– Can correct 1 error (Single Error Correction, i.e. SEC)
– Can detect 2 errors (Double Error Detection, i.e. DED)

• Example:
– Note: log size of nullspace will be (columns – rank) = 4, so:
» Rank = 4, since rows independent, 4 cols independent
» Clearly, 8 bits in code word
» Thus: (8,4) code

  [ S3 ]   [ 1 0 0 0 1 1 1 0 ]   [ C7 ]
  [ S2 ] = [ 0 1 0 0 1 1 0 1 ] · [ ⋮  ]
  [ S1 ]   [ 0 0 1 0 1 0 1 1 ]   [ C0 ]
  [ S0 ]   [ 0 0 0 1 0 1 1 1 ]
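The SEC-DED decode rule falls out of the odd-weight-column property: a single error gives an odd-weight syndrome (one column of H), while a double error gives a non-zero even-weight syndrome. A sketch using the slide's H (the `classify` helper is mine):

```python
# The slide's SEC-DED check matrix: columns non-zero, distinct, odd weight.
H = [[1,0,0,0,1,1,1,0],
     [0,1,0,0,1,1,0,1],
     [0,0,1,0,1,0,1,1],
     [0,0,0,1,0,1,1,1]]

def syndrome(word):
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def classify(word):
    """d=4 rule: zero syndrome -> ok; odd-weight syndrome -> single
    (correctable) error; non-zero even-weight syndrome -> double error."""
    s = syndrome(word)
    if not any(s):
        return "ok"
    return "single" if sum(s) % 2 == 1 else "double"

cw = [0] * 8                        # the all-zero word is a valid code word
one = cw[:]; one[4] ^= 1
two = cw[:]; two[4] ^= 1; two[6] ^= 1
assert classify(cw) == "ok"
assert classify(one) == "single"    # syndrome = an odd-weight column of H
assert classify(two) == "double"    # XOR of two odd columns: even, non-zero
```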

Page 36:

Tweaks:

• No reason we cannot make the code shorter than required

• Suppose n-k=8 bits of parity. What is max code size (n) for d=4?

– Maximum number of unique, odd-weight columns: 2^7 = 128

– So, n = 128. But, then k = n – (n – k) = 120. Weird!

– Just throw out columns of high weight and make (72, 64) code!

• Circuit optimization: if throwing out column vectors, pick the ones of highest weight (number of 1 bits) to simplify the circuit

• But – shortened codes like this might have d > 4 in some special directions

– Example: Kaneda paper, catches failures of groups of 4 bits

– Good for catching chip failures when DRAM has groups of 4 bits

• What about EVENODD code?– Can be used to handle two erasures

– What about two dead DRAMs? Yes, if you can really know they are dead

Page 37:

How to correct errors?

• Consider a parity-check matrix H ((n-k)×n)
– Compute the following syndrome Si given the received element Ci’:

  Si = H·Ci’ = H·E

• Suppose that two correctable error vectors E1 and E2 produce the same syndrome:

  H·E1 = H·E2  ⇒  H·(E1 ⊕ E2) = 0  ⇒  E1 ⊕ E2 is a code word, so it has d or more bits set

• But, since both E1 and E2 have ≤ ½(d-1) bits set, E1 ⊕ E2 has ≤ d-1 bits set – so this conclusion cannot be true!

• So, the syndrome is a unique indicator of correctable error vectors

Page 38:

Page 39:

Galois Field

• Definition: Field: a complete group of elements with:
– Addition, subtraction, multiplication, division
– Completely closed under these operations
– Every element has an additive inverse
– Every element except zero has a multiplicative inverse

• Examples:
– Real numbers
– Binary, called GF(2) – Galois Field with base 2
» Values 0, 1. Addition/subtraction: use xor. Multiplicative inverse of 1 is 1
– Prime field, GF(p) – Galois Field with base p
» Values 0 … p-1
» Addition/subtraction/multiplication: modulo p
» Multiplicative Inverse: every value except 0 has an inverse
» Example: GF(5): 1·1 ≡ 1 mod 5, 2·3 ≡ 1 mod 5, 4·4 ≡ 1 mod 5
– General Galois Field: GF(p^m) – base p (prime!), dimension m
» Values are vectors of elements of GF(p) of dimension m
» Add/subtract: vector addition/subtraction
» Multiply/divide: more complex
» Just like real numbers but finite!
» Common for computer algorithms: GF(2^m)

Page 40:

Specific Example: Galois Fields GF(2^n)

• Consider polynomials whose coefficients come from GF(2).

• Each term of the form x^n is either present or absent.

• Examples: 0, 1, x, x^2, and x^7 + x^6 + 1
  = 1·x^7 + 1·x^6 + 0·x^5 + 0·x^4 + 0·x^3 + 0·x^2 + 0·x^1 + 1·x^0

• With addition and multiplication these form a “ring” (not quite a field – still missing division):

• “Add”: XOR each element individually with no carry:

    x^4 + x^3       + x + 1
  + x^4       + x^2 + x
  =       x^3 + x^2     + 1

• “Multiply”: multiplying by x is like shifting to the left:

  (x^2 + x + 1)·(x + 1) = (x^3 + x^2 + x) + (x^2 + x + 1) = x^3 + 1
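Representing each polynomial as a bit mask (bit i holds the coefficient of x^i) makes both operations one-liners; this carry-less shift-and-XOR multiply is a standard sketch of the idea, not code from the slides:

```python
def gf2_add(a: int, b: int) -> int:
    """Polynomials as bit masks (bit i = coefficient of x^i); add = XOR."""
    return a ^ b

def gf2_mul(a: int, b: int) -> int:
    """Carry-less multiply: shift-and-XOR instead of shift-and-add."""
    result = 0
    while b:
        if b & 1:
            result ^= a   # add (XOR) the shifted copy of a
        a <<= 1           # multiply a by x
        b >>= 1
    return result

# The slide's two examples:
assert gf2_add(0b11011, 0b10110) == 0b01101  # (x^4+x^3+x+1)+(x^4+x^2+x) = x^3+x^2+1
assert gf2_mul(0b111, 0b11) == 0b1001        # (x^2+x+1)(x+1) = x^3+1
```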

Page 41:

So what about division (mod)?

• (x^4 + x^2) ÷ x = x^3 + x, with remainder 0

• (x^4 + x^2 + 1) ÷ (x + 1) = x^3 + x^2, with remainder 1:

    x^4 + 0x^3 + x^2 + 0x + 1   divided by (x + 1)
  ⊕ x^4 +  x^3                  [x^3 · (x+1)]
  =        x^3 + x^2 + 0x + 1
  ⊕        x^3 + x^2            [x^2 · (x+1)]
  =                    0x + 1   degree < divisor: stop
  → quotient x^3 + x^2, remainder 1
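The long division above is the same shift-and-XOR loop run until the dividend's degree drops below the divisor's; a small sketch using the bit-mask representation (the function name is mine):

```python
def gf2_divmod(dividend: int, divisor: int):
    """Long division of GF(2) polynomials (bit i = coeff of x^i)."""
    q = 0
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen  # align leading terms
        q |= 1 << shift                       # record quotient term x^shift
        dividend ^= divisor << shift          # XOR replaces subtraction
    return q, dividend                        # (quotient, remainder)

# The slide's two examples:
assert gf2_divmod(0b10100, 0b10) == (0b1010, 0)  # (x^4+x^2)/x = x^3+x, rem 0
assert gf2_divmod(0b10101, 0b11) == (0b1100, 1)  # /(x+1) = x^3+x^2, rem 1
```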

Page 42:

Producing Galois Fields

• These polynomials form a Galois (finite) field if we take the results of this multiplication modulo a prime polynomial p(x)
– A prime polynomial cannot be written as a product of two non-trivial polynomials q(x)r(x)
– For any degree, there exists at least one prime polynomial.
– With it we can form GF(2^n)

• Every Galois field has a primitive element, α, such that all non-zero elements of the field can be expressed as a power of α
– Certain choices of p(x) make the simple polynomial x the primitive element. These polynomials are called primitive

• For example, x^4 + x + 1 is primitive. So α = x is a primitive element and successive powers of α will generate all non-zero elements of GF(16).

• Example on next slide.


Page 43:

Galois Fields with primitive x^4 + x + 1

  α^0  = 1
  α^1  = x
  α^2  = x^2
  α^3  = x^3
  α^4  = x + 1
  α^5  = x^2 + x
  α^6  = x^3 + x^2
  α^7  = x^3 + x + 1
  α^8  = x^2 + 1
  α^9  = x^3 + x
  α^10 = x^2 + x + 1
  α^11 = x^3 + x^2 + x
  α^12 = x^3 + x^2 + x + 1
  α^13 = x^3 + x^2 + 1
  α^14 = x^3 + 1
  α^15 = 1

• Primitive element α = x in GF(2^n); for example:

  α^4 = x^4 mod (x^4 + x + 1) = x^4 xor (x^4 + x + 1) = x + 1

• In general finding primitive polynomials is difficult. Most people just look them up in a table, such as the one on the next slide.
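The whole table can be regenerated with the shift-and-XOR hardware rule from the next slide: multiply by α = x (shift left) and, whenever degree 4 appears, XOR with the coefficients of p(x). A small sketch:

```python
P = 0b10011   # x^4 + x + 1, the primitive polynomial from the slide

def next_power(v: int) -> int:
    """Multiply by alpha = x: shift left, reduce mod p(x) if degree hits 4."""
    v <<= 1
    if v & 0b10000:   # x^4 term present -> subtract (XOR) p(x)
        v ^= P
    return v

powers, v = [], 1
for _ in range(15):
    powers.append(v)        # powers[k] is alpha^k as a bit mask
    v = next_power(v)

assert len(set(powers)) == 15        # all 15 non-zero elements of GF(16)
assert powers[4] == 0b0011           # alpha^4 = x + 1, as on the slide
assert next_power(powers[-1]) == 1   # alpha^15 wraps back around to 1
```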

Page 44:

Primitive Polynomials

  x^2 + x + 1
  x^3 + x + 1
  x^4 + x + 1
  x^5 + x^2 + 1
  x^6 + x + 1
  x^7 + x^3 + 1
  x^8 + x^4 + x^3 + x^2 + 1
  x^9 + x^4 + 1
  x^10 + x^3 + 1
  x^11 + x^2 + 1
  x^12 + x^6 + x^4 + x + 1
  x^13 + x^4 + x^3 + x + 1
  x^14 + x^10 + x^6 + x + 1
  x^15 + x + 1
  x^16 + x^12 + x^3 + x + 1
  x^17 + x^3 + 1
  x^18 + x^7 + 1
  x^19 + x^5 + x^2 + x + 1
  x^20 + x^3 + 1
  x^21 + x^2 + 1
  x^22 + x + 1
  x^23 + x^5 + 1
  x^24 + x^7 + x^2 + x + 1
  x^25 + x^3 + 1
  x^26 + x^6 + x^2 + x + 1
  x^27 + x^5 + x^2 + x + 1
  x^28 + x^3 + 1
  x^29 + x + 1
  x^30 + x^6 + x^4 + x + 1
  x^31 + x^3 + 1
  x^32 + x^7 + x^6 + x^2 + 1

Galois Field Hardware

• Multiplication by x ⇔ shift left
• Taking the result mod p(x) ⇔ XOR-ing with the coefficients of p(x) when the most significant coefficient is 1
• Obtaining all 2^n - 1 non-zero elements by evaluating x^k for k = 1, …, 2^n - 1 ⇔ shifting and XOR-ing 2^n - 1 times

Page 45:

Reed-Solomon Codes

• Galois field codes: code words consist of symbols
– Rather than bits

• Reed-Solomon codes:
– Based on polynomials in GF(2^k) (i.e. k-bit symbols)
– Data as coefficients, code space as values of polynomial:
– P(x) = a0 + a1·x + … + ak-1·x^(k-1)
– Coded: P(0), P(1), P(2), …, P(n-1)
– Can recover polynomial as long as we get any k of the n values

• Properties: can choose number of check symbols
– Reed-Solomon codes are “maximum distance separable” (MDS)
– Can add d symbols for distance d+1 code
– Often used in “erasure code” mode: as long as no more than n-k coded symbols are erased, can recover data

• Side note: Multiplication by a constant a in GF(2^k) can be represented by a k×k bit matrix
– Decompose the unknown vector into k bits: x = x0 + 2·x1 + … + 2^(k-1)·xk-1
– Each column is the result of multiplying a by 2^i
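The any-k-of-n recovery property can be sketched with Lagrange interpolation. For brevity this sketch works over the prime field GF(17) rather than the GF(2^k) the slides use (the recovery argument is identical); all names and the example values are illustrative:

```python
# Erasure-mode Reed-Solomon sketch over GF(17): encode k=3 data symbols
# as evaluations of a degree-(k-1) polynomial at n=5 points, then recover
# from any 3 surviving points.
P = 17
coeffs = [5, 11, 3]                    # data symbols a0, a1, a2

def poly_eval(cs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(cs)) % P

code = [(x, poly_eval(coeffs, x)) for x in range(5)]   # n = 5 code symbols

def recover(points, x_target):
    """Lagrange interpolation: rebuild P(x_target) from any k points."""
    total = 0
    for xi, yi in points:
        term = yi
        for xj, _ in points:
            if xj != xi:
                # multiply by (x_target - xj) / (xi - xj) mod P
                term = term * (x_target - xj) * pow(xi - xj, -1, P) % P
        total = (total + term) % P
    return total

survivors = [code[0], code[2], code[4]]   # two of five symbols erased
assert all(recover(survivors, x) == y for x, y in code)  # word rebuilt
```

The modular inverse via `pow(base, -1, P)` requires Python 3.8+.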

Page 46:

Reed-Solomon Codes (con’t)

• Reed-Solomon codes (non-systematic):
– Data as coefficients, code space as values of the polynomial
– Coded: the polynomial evaluated at successive points, via a Vandermonde generator matrix; e.g. for five data symbols a0…a4 evaluated at seven points:

  [ C1 ]   [ 1^4 1^3 1^2 1 1 ]
  [ C2 ]   [ 2^4 2^3 2^2 2 1 ]   [ a4 ]
  [ C3 ]   [ 3^4 3^3 3^2 3 1 ]   [ a3 ]
  [ C4 ] = [ 4^4 4^3 4^2 4 1 ] · [ a2 ]
  [ C5 ]   [ 5^4 5^3 5^2 5 1 ]   [ a1 ]
  [ C6 ]   [ 6^4 6^3 6^2 6 1 ]   [ a0 ]
  [ C7 ]   [ 7^4 7^3 7^2 7 1 ]

• Called Vandermonde Matrix: maximum rank

• Different representation (this H’ and G not related):

  H’ = [ 1^0 2^0 3^0 4^0 5^0 6^0 7^0 ]   [ 1 1 1 1 1 1 1 ]
       [ 1^1 2^1 3^1 4^1 5^1 6^1 7^1 ] = [ 1 2 3 4 5 6 7 ]

– Clear that all combinations of two or fewer columns are independent ⇒ d=3
– Very easy to pick whatever d you happen to want: add more rows

• Fast, Systematic version of Reed-Solomon:
– Cauchy Reed-Solomon, others

Page 47:

Aside: Why erasure coding? High Durability/overhead ratio!

• Exploit law of large numbers for durability!
• 6 month repair, Fraction of Blocks Lost Per Year (FBLPY):
– Replication: 0.03
– Fragmentation: 10^-35

[Figure: FBLPY vs. repair time, comparing replication against fragment-based erasure coding.]

Page 48:

Statistical Advantage of Fragments

• Latency and standard deviation reduced:
– Memory-less latency model
– Rate ½ code with 32 total fragments

[Figure: “Time to Coalesce vs. Fragments Requested (TI5000)” — latency (0-180) against objects requested (16-31); both the latency and its spread drop as more fragments are requested.]

Page 49:

Conclusion

• Main memory is Dense, Slow

– Cycle time > Access time!

• Techniques to optimize memory– Wider Memory

– Interleaved Memory: for sequential or independent accesses

– Avoiding bank conflicts: SW & HW

– DRAM specific optimizations: page mode & Specialty DRAM

• ECC: add redundancy to correct for errors
– (n,k,d): n code bits, k data bits, distance d

– Linear codes: code vectors computed by linear transformation

• Erasure code: after identifying “erasures”, can correct

• Reed-Solomon codes
– Based on GF(p^n), often GF(2^n)

– Easy to get distance d+1 code with d extra symbols

– Often used in erasure mode