
Page 1:

CS252 Graduate Computer Architecture

Lecture 22

I/O Introduction

November 16th, 2003

Prof. John Kubiatowicz

http://www.cs.berkeley.edu/~kubitron/courses/cs252-F03

Page 2:

• Motivation:
– DRAM is dense ⇒ signals are easily disturbed
– High capacity ⇒ higher probability of failure

• Approach: Redundancy
– Add extra information so that we can recover from errors
– Can we do better than just creating complete copies?

• Block Codes: data coded in blocks
– k data bits coded into n encoded bits
– Measure of overhead: rate of code = k/n
– Often called an (n,k) code
– Consider data as vectors in GF(2) [i.e., vectors of bits]

• Code space is the set of all 2^n vectors; data space is the set of 2^k vectors
– Encoding function: C = f(d)
– Decoding function: d = f(C′)
– Not all possible code vectors, C, are valid!

• Systematic Codes: original data appears within coded data

Review: Error Correction

Page 3:

• Not every vector in the code space is valid
• Hamming Distance (d): minimum number of bit flips to turn one code word into another
• Number of errors that we can detect: (d − 1)
• Number of errors that we can fix: ⌊(d − 1)/2⌋ (see the sketch at the end of this slide)

General Idea: Code Vector Space
(figure: a data word d0 in the data space maps to a code word C0 = f(d0) in the code space; valid code words are separated by the code distance, i.e., the Hamming distance)
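A minimal sketch (illustrative only; the bit strings are arbitrary examples, not code words from the lecture) of Hamming distance and the detect/correct bounds it implies:

def hamming_distance(a: str, b: str) -> int:
    # Number of bit positions in which two equal-length words differ
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1001001", "1010001"))   # 2: the words differ in two positions

d = 3                                  # minimum distance of the whole code
print("detect up to", d - 1)           # 2 errors detectable
print("correct up to", (d - 1) // 2)   # 1 error correctable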

Page 4:

Review: Code Types
• Linear Codes:
– Code is generated by G and in null-space of H
• Hamming Codes: design the H matrix
– d = 3 ⇒ columns nonzero, distinct
– d = 4 ⇒ columns nonzero, distinct, odd-weight

• Reed-Solomon codes:
– Based on polynomials in GF(2^k) (i.e., k-bit symbols)
– Data as coefficients, code space as values of the polynomial:
– P(x) = a0 + a1·x + … + a_{k−1}·x^{k−1}
– Coded: P(0), P(1), P(2), …, P(n−1)
– Can recover the polynomial as long as we get any k of the n values
– Alternatively: as long as no more than n − k coded symbols are erased, we can recover the data (see the sketch at the end of this slide)

• Side note: multiplication by a constant a in GF(2^k) can be represented by a k×k matrix applied to x
– Decompose the unknown vector into k bits: x = x0 + 2·x1 + … + 2^{k−1}·x_{k−1}
– Each column of the matrix is the result of multiplying a by 2^i

(equations: C = G·d and S = H·C; the code word is generated by G, and the syndrome S is computed with H)
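An illustrative sketch of the Reed-Solomon idea above (data as polynomial coefficients, code as evaluations, any k of n points recover the data). To keep the arithmetic simple it works over the prime field GF(257) rather than GF(2^k); all names and values are hypothetical.

P = 257  # a prime, so every nonzero element has a multiplicative inverse

def encode(data, n):
    # data = [a0, a1, ..., a_{k-1}]; coded symbols are P(0), P(1), ..., P(n-1)
    return [sum(a * pow(x, i, P) for i, a in enumerate(data)) % P for x in range(n)]

def mul_by_x_minus(poly, c):
    # Multiply a coefficient list (lowest degree first) by (x - c), mod P
    out = [0] * (len(poly) + 1)
    for i, a in enumerate(poly):
        out[i] = (out[i] - a * c) % P
        out[i + 1] = (out[i + 1] + a) % P
    return out

def recover(points, k):
    # Lagrange interpolation from any k surviving (x, y) pairs back to a0..a_{k-1}
    coeffs = [0] * k
    for j in range(k):
        xj, yj = points[j]
        basis, denom = [1], 1
        for m in range(k):
            if m != j:
                xm = points[m][0]
                basis = mul_by_x_minus(basis, xm)
                denom = denom * (xj - xm) % P
        scale = yj * pow(denom, P - 2, P) % P      # division via Fermat inverse
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * b) % P
    return coeffs

data = [7, 42, 3]                                  # k = 3 data symbols
code = encode(data, n=6)                           # n = 6, survives any 3 erasures
survivors = [(x, code[x]) for x in (1, 4, 5)]      # any k of the n coded symbols
assert recover(survivors, k=3) == data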

Page 5:

Motivation: Who Cares About I/O?

• CPU performance: improves ~60% per year
• I/O system performance is limited by mechanical delays (disk I/O): improves < 10% per year (I/Os per sec or MB per sec)

• Amdahl's Law: system speed-up limited by the slowest part!

10% IO & 10x CPU => 5x Performance (lose 50%)
10% IO & 100x CPU => 10x Performance (lose 90%)
(worked out below)

• I/O bottleneck: diminishing fraction of time in CPU; diminishing value of faster CPUs
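Working the slide's numbers through Amdahl's Law (f_IO and S_CPU are my shorthand for the fraction of time in I/O and the CPU speedup):

\mathrm{Speedup} = \frac{1}{f_{IO} + \frac{1 - f_{IO}}{S_{CPU}}},\qquad
\frac{1}{0.1 + 0.9/10} \approx 5.3\times,\qquad
\frac{1}{0.1 + 0.9/100} \approx 9.2\times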

Page 6:

I/O Systems
(figure: the processor and its cache connect over a memory–I/O bus to main memory and to several I/O controllers; the controllers drive disks, graphics, and the network, and signal the processor with interrupts)

Page 7:

Technology Trends

Disk capacity now doubles every 18 months; before 1990, every 36 months.

• Today: Processing Power Doubles Every 18 months

• Today: Memory Size Doubles Every 18 months(4X/3yr)

• Today: Disk Capacity Doubles Every 18 months

• Disk Positioning Rate (Seek + Rotate) Doubles Every Ten Years!

The I/O GAP

Page 8:

Storage Technology Drivers

• Driven by the prevailing computing paradigm
– 1950s: migration from batch to on-line processing
– 1990s: migration to ubiquitous computing
» computers in phones, books, cars, video cameras, …
» nationwide fiber-optic network with wireless tails
• Effects on storage industry:
– Embedded storage: smaller, cheaper, more reliable, lower power
– Data utilities: high capacity, hierarchically managed storage

Page 9:

Historical Perspective
• 1956 IBM Ramac — early 1970s Winchester
– Developed for mainframe computers, proprietary interfaces
– Steady shrink in form factor: 27 in. to 14 in.
• 1970s developments
– 5.25 inch floppy disk form factor (microcode into mainframe)
– Early emergence of industry-standard disk interfaces
» ST506, SASI, SMD, ESDI
• Early 1980s
– PCs and first-generation workstations
• Mid 1980s
– Client/server computing
– Centralized storage on file server
» accelerates disk downsizing: 8 inch to 5.25 inch
– Mass-market disk drives become a reality
» industry standards: SCSI, IPI, IDE
» 5.25 inch drives for standalone PCs; end of proprietary interfaces

Page 10:

Disk History

(figure: data density in Mbit/sq. in. and capacity of the unit shown, in megabytes)
1973: 1.7 Mbit/sq. in., 140 MBytes
1979: 7.7 Mbit/sq. in., 2,300 MBytes
source: New York Times, 2/23/98, page C3, “Makers of disk drives crowd even more data into even smaller spaces”

Page 11:

Historical Perspective

• Late 1980s/Early 1990s:
– Laptops, notebooks, (palmtops)
– 3.5 inch, 2.5 inch, (1.8 inch form factors)
– Form factor plus capacity drives the market, not so much performance
» Recently bandwidth improving at 40%/year
– Challenged by DRAM, flash RAM in PCMCIA cards
» still expensive, Intel promises but doesn’t deliver
» unattractive MBytes per cubic inch
– Optical disk fails on performance (e.g., NeXT) but finds a niche (CD-ROM)

Page 12:

Disk History

1989: 63 Mbit/sq. in., 60,000 MBytes
1997: 1450 Mbit/sq. in., 2300 MBytes
1997: 3090 Mbit/sq. in., 8100 MBytes
source: New York Times, 2/23/98, page C3, “Makers of disk drives crowd even more data into even smaller spaces”

Page 13:

MBits per square inch: DRAM as % of Disk over time
(chart, 1974-1998, 0% to 50%: DRAM vs. disk areal density at three points in time: 0.2 v. 1.7 Mb/si, 9 v. 22 Mb/si, 470 v. 3000 Mb/si)
source: New York Times, 2/23/98, page C3, “Makers of disk drives crowd even more data into even smaller spaces”

Page 14:

Disk Performance Model / Trends
• Capacity: +100%/year (2X / 1.0 yrs)
• Transfer rate (BW): +40%/year (2X / 2.0 yrs)
• Rotation + seek time: −8%/year (1/2 in 10 yrs)
• MB/$: >100%/year (2X / <1.5 yrs); fewer chips + areal density

Page 15:

Photo of Disk Head, Arm, Actuator
(photo: spindle with 12 platters; the actuator swings the arm, which carries the head over the platter surface)

Page 16:

Nano-layered Disk Heads
• The special sensitivity of the disk head comes from the “Giant Magneto-Resistive effect” (GMR)
• IBM is the leader in this technology
– Same technology as the TMJ-RAM breakthrough described in an earlier class

(figure: head cross-section; label: coil for writing)

Page 17:

Disk Device Terminology

• Several platters, with information recorded magnetically on both surfaces (usually)
• The actuator moves the heads (one per surface, at the end of the arm) to a track (“seek”); a surface is selected, we wait for the sector to rotate under the head, then read or write
– “Cylinder”: all tracks under the heads
• Bits are recorded in tracks, which in turn are divided into sectors (e.g., 512 bytes)

(figure: platter with outer track, inner track, and sector; actuator, arm, and head)

Page 18:

Disk Performance Example

Disk Latency = Queuing Time + Seek Time + Rotation Time + Xfer Time + Ctrl Time

Order-of-magnitude times for 4 KB transfers:
Seek: 12 ms or less
Rotate: 4.2 ms @ 7200 RPM (half a revolution: 0.5 / (7200/60) s); 8.3 ms @ 3600 RPM
Xfer: 1 ms @ 7200 RPM (2 ms @ 3600 RPM)
Ctrl: 2 ms (big variation)

Disk Latency = Queuing Time + (12 + 4.2 + 1 + 2) ms = QT + 19.2 ms
Average Service Time = 19.2 ms

Page 19:

Disk Time Example

• Disk Parameters:
– Transfer size is 8 KB
– Advertised average seek is 12 ms
– Disk spins at 7200 RPM
– Transfer rate is 4 MB/sec
• Controller overhead is 2 ms
• Assume that the disk is idle, so there is no queuing delay
• What is the average disk access time for a sector?
– Avg seek + avg rotational delay + transfer time + controller overhead
– 12 ms + 0.5/(7200 RPM/60) + 8 KB / 4 MB/s + 2 ms
– 12 + 4.2 + 2 + 2 ≈ 20 ms (see the sketch below)
• Advertised seek time assumes no locality; in practice seeks are typically 1/4 to 1/3 of the advertised time, so 20 ms => 12 ms
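A small sketch of the same arithmetic (the function and parameter names are mine, not from the lecture):

def disk_access_ms(seek_ms, rpm, xfer_kb, rate_mb_s, ctrl_ms):
    rotate_ms = 0.5 / (rpm / 60.0) * 1000.0            # half a revolution on average
    xfer_ms = xfer_kb / 1024.0 / rate_mb_s * 1000.0    # time to transfer the request
    return seek_ms + rotate_ms + xfer_ms + ctrl_ms

# 12 ms seek, 7200 RPM, 8 KB transfer at 4 MB/s, 2 ms controller overhead
print(disk_access_ms(12, 7200, 8, 4, 2))               # about 20 ms, as above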

Page 20:

State of the Art: Ultrastar 72ZX
– 73.4 GB, 3.5 inch disk
– 2¢/MB
– 10,000 RPM; 3 ms = 1/2 rotation
– 11 platters, 22 surfaces
– 15,110 cylinders
– 7 Gbit/sq. in. areal density
– 17 watts (idle)
– 0.1 ms controller time
– 5.3 ms avg. seek
– 50 to 29 MB/s (internal)
source: www.ibm.com; www.pricewatch.com; 2/14/00

Latency = Queuing Time + Controller Time + Seek Time + Rotation Time + Size / Bandwidth
(the first four terms are per access; Size / Bandwidth is per byte)
(figure: sector, track, cylinder, head, platter, arm, track buffer)

Page 21:

CS 252 Administrivia
• Upcoming schedule of project events in CS 252
– People who have not talked to me about their projects should do so soon!
– Oral Presentations: Tue/Wed 12/9-10
– Final projects due: Friday 12/12?

• Final topics:
– Some queuing theory
– Multiprocessing

• Final Midterm: 12/1 (Monday after Thanksgiving)

• Alternative: 12/3?

Page 22:

Alternative Data Storage Technologies: Early 1990s

Technology              Cap (MB)   BPI      TPI    BPI*TPI (Million)   Data Xfer (KByte/s)   Access Time
Conventional Tape:
  Cartridge (.25")        150      12000     104         1.2                   92            minutes
  IBM 3490 (.5")          800      22860      38         0.9                 3000            seconds
Helical Scan Tape:
  Video (8mm)            4600      43200    1638          71                  492            45 secs
  DAT (4mm)              1300      61000    1870         114                  183            20 secs
Magnetic & Optical Disk:
  Hard Disk (5.25")      1200      33528    1880          63                 3000            18 ms
  IBM 3390 (10.5")       3800      27940    2235          62                 4250            20 ms
  Sony MO (5.25")         640      24130   18796         454                   88            100 ms

Page 23:

Tape vs. Disk

• Longitudinal tape uses same technology as hard disk; tracks its density improvements

• Disk head flies above surface, tape head lies on surface

• Disk fixed, tape removable

• Inherent cost-performance based on geometries:
– fixed rotating platters with gaps (random access, limited area, 1 media / reader)
– vs. removable long strips wound on a spool (sequential access, "unlimited" length, multiple media / reader)

• New technology trend: Helical Scan (VCR, camcorder, DAT) spins the head at an angle to the tape to improve density

Page 24:

Current Drawbacks to Tape
• Tape wear out:
– Helical: 100s of passes; longitudinal: 1000s of passes
• Head wear out:
– 2000 hours for helical

• Both must be accounted for in economic / reliability model

• Long rewind, eject, load, spin-up times; not inherent, just no need in marketplace (so far)

• Designed for archival

Page 25:

Automated Cartridge System

STC 4400

6000 x 0.8 GB 3490 tapes = 5 TBytes in 1992 $500,000 O.E.M. Price

6000 x 10 GB D3 tapes = 60 TBytes in 1998

Library of Congress: all information in the world; in 1992, ASCII of all books = 30 TB

(figure: the STC 4400 library cabinet is roughly 8 feet by 10 feet)

Page 26:

Relative Cost of Storage Technology — Late 1995/Early 1996

Magnetic Disks
  5.25"   9.1 GB    $2129       $0.23/MB
                    $1985       $0.22/MB
  3.5"    4.3 GB    $1199       $0.27/MB
                    $999        $0.23/MB
  2.5"    514 MB    $299        $0.58/MB
          1.1 GB    $345        $0.33/MB

Optical Disks
  5.25"   4.6 GB    $1695+199   $0.41/MB
                    $1499+189   $0.39/MB

PCMCIA Cards
  Static RAM   4.0 MB    $700    $175/MB
  Flash RAM    40.0 MB   $1300   $32/MB
               175 MB    $3600   $20.50/MB

Page 27:

Manufacturing Advantages of Disk Arrays
(figure: conventional disk product families require 4 different disk designs (14", 10", 5.25", 3.5") spanning the low end to the high end; a disk array needs only 1 disk design, the 3.5" drive)

Page 28:

Replace Small # of Large Disks with Large # of Small Disks! (1988 Disks)

                  IBM 3390 (K)     IBM 3.5" 0061     x70
Data Capacity     20 GBytes        320 MBytes        23 GBytes
Volume            97 cu. ft.       0.1 cu. ft.       11 cu. ft.
Power             3 KW             11 W              1 KW
Data Rate         15 MB/s          1.5 MB/s          120 MB/s
I/O Rate          600 I/Os/s       55 I/Os/s         3900 I/Os/s
MTTF              250 KHrs         50 KHrs           ??? Hrs
Cost              $250K            $2K               $150K

Disk arrays have the potential for large data and I/O rates, high MB per cu. ft., and high MB per KW; but reliability?

Page 29:

Array Reliability

• Reliability of N disks = Reliability of 1 Disk ÷ N

50,000 Hours ÷ 70 disks = 700 hours

Disk system MTTF: Drops from 6 years to 1 month!

• Arrays (without redundancy) too unreliable to be useful!
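Spelling out the arithmetic (assuming independent, exponentially distributed disk lifetimes, so the array's MTTF is the single-disk MTTF divided by the number of disks):

\mathrm{MTTF}_{\mathrm{array}} = \frac{\mathrm{MTTF}_{\mathrm{disk}}}{N}
= \frac{50{,}000\ \mathrm{hours}}{70} \approx 714\ \mathrm{hours} \approx 1\ \mathrm{month}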

Hot spares support reconstruction in parallel with access: very high media availability can be achieved

Page 30:

Redundant Arrays of Disks

• Files are "striped" across multiple spindles
• Redundancy yields high data availability
– Disks will fail
– Contents are reconstructed from data redundantly stored in the array
» Capacity penalty to store it
» Bandwidth penalty to update
• Techniques:
– Mirroring/Shadowing (high capacity cost)
– Horizontal Hamming Codes (overkill)
– Parity & Reed-Solomon Codes
– Failure Prediction (no capacity overhead!): VaxSimPlus — technique is controversial

Page 31:

Redundant Arrays of Disks
RAID 1: Disk Mirroring/Shadowing

• Each disk is fully duplicated onto its "shadow": very high availability can be achieved

• Bandwidth sacrifice on write: Logical write = two physical writes

• Reads may be optimized

• Most expensive solution: 100% capacity overhead

Targeted for high-I/O-rate, high-availability environments
(figure: each disk and its shadow form a recovery group)

Page 32:

Redundant Arrays of Disks
RAID 3: Parity Disk

(figure: a logical record, 10010011 …, is striped across the data disks as physical records — 10010011, 11001101, 10010011, 00110000 — and a dedicated parity disk P stores the parity of each stripe)

• Parity is computed across the recovery group to protect against hard disk failures (see the sketch below)
– 33% capacity cost for parity in this configuration
– wider arrays reduce capacity costs, but decrease expected availability and increase reconstruction time
• Arms logically synchronized, spindles rotationally synchronized
– logically a single high-capacity, high-transfer-rate disk

Targeted for high bandwidth applications: Scientific, Image Processing
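An illustrative sketch (the bit patterns come from the figure above; the function name is mine) of how the parity stripe is built and how a lost stripe unit is rebuilt:

from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR of equal-length blocks: used to compute parity and to rebuild
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"\x93", b"\xcd", b"\x93", b"\x30"]   # 10010011, 11001101, 10010011, 00110000
parity = xor_blocks(data)

# Suppose disk 2 fails: XOR the surviving data blocks with the parity to rebuild it
rebuilt = xor_blocks(data[:2] + data[3:] + [parity])
assert rebuilt == data[2]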

Page 33:

Redundant Arrays of Disks RAID 5+: High I/O Rate Parity

A logical write becomes four physical I/Os
Independent writes are possible because of interleaved parity
Reed-Solomon codes ("Q") for protection during reconstruction

D0 D1 D2 D3 P

D4 D5 D6 P D7

D8 D9 P D10 D11

D12 P D13 D14 D15

P D16 D17 D18 D19

D20 D21 D22 D23 P

. . .
(figure: disk columns run left to right; logical disk addresses increase down the array; each row of blocks is a stripe, and each block is a stripe unit)

Targeted for mixed applications

Page 34:

Problems of Disk Arrays: Small Writes

(figure: RAID-5 small write: new data D0′ replaces D0 in the stripe D0 D1 D2 D3 P; (1) read old data D0, (2) read old parity P, XOR old data with new data and XOR the result into the old parity to form P′, then (3) write D0′ and (4) write P′)

RAID-5: Small Write Algorithm

1 Logical Write = 2 Physical Reads + 2 Physical Writes
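A minimal sketch of the parity update in the small-write algorithm (names are mine): the new parity is simply old parity XOR old data XOR new data, so only the changed block and the parity block are touched.

def raid5_small_write(old_data: bytes, new_data: bytes, old_parity: bytes):
    # Reads of old_data and old_parity have already happened (steps 1 and 2)
    new_parity = bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))
    return new_data, new_parity          # the two physical writes (steps 3 and 4)

d0_new, p_new = raid5_small_write(b"\x93", b"\x5c", b"\x2a")
# The other data blocks in the stripe (D1, D2, D3) are never read or written.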

Page 35:

Subsystem Organization
(figure: host → host adapter (manages the interface to the host, DMA) → array controller (control, buffering, parity logic) → several single-board disk controllers (physical device control), often piggy-backed in small-format devices)

• striping software off-loaded from host to array controller
• no application modifications
• no reduction of host performance

Page 36:

System Availability: Orthogonal RAIDs

(figure: an array controller fans out to several string controllers, each driving a string of disks; redundancy groups are laid out orthogonally across the strings)

Data Recovery Group: unit of data redundancy

Redundant Support Components: fans, power supplies, controller, cables

End to End Data Integrity: internal parity protected data paths

Page 37:

System-Level Availability

(figure: a fully dual-redundant configuration: two hosts, each connected to two I/O controllers, which connect to two array controllers; recovery groups span the disk strings)

Goal: No Single Points of Failure

With duplicated paths, higher performance can be obtained when there are no failures.

Page 38:

OceanStore: Global-Scale Persistent Storage

Page 39:

Utility-based Infrastructure
(figure: a confederation of providers: Pac Bell, Sprint, IBM, AT&T, Canadian OceanStore)
• Service provided by a confederation of companies
– Monthly fee paid to one service provider
– Companies buy and sell capacity from each other

Page 40:

Important P2P Technology (Decentralized Object Location and Routing)
(figure: a DOLR overlay routes requests to objects named by GUIDs, e.g., GUID1 and GUID2, regardless of where their replicas live)

Page 41:

The Path of an OceanStore Update

(figure: clients submit updates to the inner-ring servers; results propagate down multicast trees to the second-tier caches)

Page 42:

Archival Dissemination of Fragments

Page 43:

Aside: Why erasure coding? High durability/overhead ratio!

• Exploit the law of large numbers for durability!
• With a 6-month repair time, FBLPY:
– Replication: 0.03
– Fragmentation: 10^-35

Fraction Blocks Lost Per Year (FBLPY)

Page 44:

The Dissemination Process: Achieving Failure Independence
(figure: introspection pipeline: network monitoring and human input feed a model builder; its model drives a set creator, which produces independent sets of servers; the inner ring then disseminates fragments to those sets)
• Anti-correlation analysis/models: mutual information, human input, data mining
• Independent set generation
• Effective dissemination

Page 45:

• OceanStore concepts applied to tape-less backup
– Self-replicating, self-repairing, self-managing
– No need for actual tape in the system

» (Although could be there to keep with tradition)

The Berkeley PetaByte Archival Service

Page 46:

But: What about queue time?

Or: why nonlinear response

Response time = Queue + Device Service time

(figure: response time (ms) vs. throughput/utilization (% of total bandwidth): response time stays flat at low utilization and rises steeply as utilization approaches 100%)
(figure: request path Proc → Queue → IOC → Device; metrics: response time, throughput)

Queueing latency grows as Tser × u/(1 − u), where u = utilization
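The nonlinearity comes entirely from the u/(1 − u) factor; a few sample utilizations show how quickly the queueing term blows up:

\frac{u}{1-u}:\qquad u=0.5 \Rightarrow 1,\quad u=0.8 \Rightarrow 4,\quad
u=0.9 \Rightarrow 9,\quad u=0.95 \Rightarrow 19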

Page 47:

• Queueing theory applies to long-term, steady-state behavior: arrival rate = departure rate
• Little's Law: mean number of tasks in system = arrival rate × mean response time (see below)
– Observed by many; Little was the first to prove it
– Simple interpretation: you should see the same number of tasks in the queue when entering as when leaving
• Applies to any system in equilibrium, as long as nothing in the black box is creating or destroying tasks

(figure: arrivals enter a "black box" queueing system and depart at the same average rate)

Introduction to Queueing Theory
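Little's Law in symbols, checked against the numbers from the disk example a few slides ahead:

N_{\mathrm{sys}} = \lambda \times T_{\mathrm{sys}}, \qquad
\lambda = 10\ \mathrm{requests/s},\ T_{\mathrm{sys}} = 25\ \mathrm{ms}
\;\Rightarrow\; N_{\mathrm{sys}} = 10 \times 0.025 = 0.25\ \mathrm{tasks}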

Page 48:

• Server spends a variable amount of time with customers
– Weighted mean: m1 = (f1×T1 + f2×T2 + … + fn×Tn)/F = Σ p(T)×T
– Variance: σ² = (f1×T1² + f2×T2² + … + fn×Tn²)/F − m1² = Σ p(T)×T² − m1²
– Squared coefficient of variance: C = σ²/m1² (see the sketch at the end of this slide)
» unitless measure, unlike the variance (100 ms² vs. 0.1 s²)
• Exponential distribution, C = 1: most short relative to average, a few others long; 90% < 2.3 × average, 63% < average
• Hypoexponential distribution, C < 1: most close to average; C = 0.5 ⇒ 90% < 2.0 × average, only 57% < average
• Hyperexponential distribution, C > 1: further from average; C = 2.0 ⇒ 90% < 2.8 × average, 69% < average

A Little Queuing Theory: Use of Random Distributions
(figure: system = Proc → IOC → Device, with a queue in front of the server; the service-time distributions above are drawn relative to their average)
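A small sketch (the function name and the example histogram are mine) of the definitions above: weighted mean, variance, and squared coefficient of variance from a histogram of service times.

def service_stats(times, freqs):
    F = sum(freqs)
    m1 = sum(f * t for f, t in zip(freqs, times)) / F                # weighted mean
    var = sum(f * t * t for f, t in zip(freqs, times)) / F - m1**2   # variance
    C = var / m1**2                                                  # unitless
    return m1, var, C

# e.g., 80% of requests take 10 ms and 20% take 60 ms
print(service_stats([10.0, 60.0], [8, 2]))    # m1 = 20.0, var = 400.0, C = 1.0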

Page 49:

• Disk response times have C ≈ 1.5 (the majority of seeks are shorter than average)
• Yet we usually pick C = 1.0 for simplicity
– Memoryless, exponential distribution
– Many complex systems are well described by a memoryless distribution!
• Another useful value is the average time you must wait for the server to complete its current task: m1(z)
– Called the "average residual wait time"
– Not just 1/2 × m1, because that doesn't capture the variance
– Can derive m1(z) = 1/2 × m1 × (1 + C)
– No variance: C = 0 ⇒ m1(z) = 1/2 × m1
– Exponential: C = 1 ⇒ m1(z) = m1

A Little Queuing Theory: Variable Service Time

(figure: Proc → IOC → Device with queue and server; the service-time distribution is shown with its average)

Page 50:

• Calculating the average wait time in queue, Tq:
– All customers in line must complete; average time per customer: m1 = Tser = 1/μ
– If something is at the server, it takes m1(z) to complete on average
» Chance the server is busy = u = λ/μ; average delay is u × m1(z)

Tq = u × m1(z) + Lq × Tser
Tq = u × m1(z) + λ × Tq × Tser      (Little's Law: Lq = λ × Tq)
Tq = u × m1(z) + u × Tq              (definition of utilization: u = λ × Tser)
Tq × (1 − u) = m1(z) × u
Tq = m1(z) × u/(1 − u) = Tser × {1/2 × (1 + C)} × u/(1 − u)

Notation:
λ      average number of arriving customers/second
Tser   average time to service a customer
u      server utilization (0..1): u = λ × Tser
Tq     average time/customer in queue
Lq     average length of queue: Lq = λ × Tq
m1(z)  average residual wait time = Tser × {1/2 × (1 + C)}

A Little Queuing Theory: Average Wait Time

Page 51:

A Little Queuing Theory: M/G/1 and M/M/1

• Assumptions so far:
– System in equilibrium
– Times between two successive arrivals in line are random
– Server can start on the next customer immediately after the prior one finishes
– No limit to the queue; works First-In-First-Out
– Afterward, all customers in line must complete; each takes avg Tser

• Described as "memoryless" or Markovian request arrival (M, for C = 1, exponentially random), General service distribution (no restrictions), 1 server: M/G/1 queue
• When service times have C = 1: M/M/1 queue (see the sketch below)
Tq = Tser × u × (1 + C) / (2 × (1 − u)) = Tser × u / (1 − u)

Tser   average time to service a customer
u      server utilization (0..1): u = λ × Tser
Tq     average time/customer in queue
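A small helper (the function name is mine) for the formula above; with C = 1 it reduces to the M/M/1 case, and the two calls reproduce the worked examples on the next two slides.

def time_in_queue(arrival_rate, t_ser, C=1.0):
    u = arrival_rate * t_ser                    # server utilization; must be < 1
    return t_ser * u * (1 + C) / (2 * (1 - u))

print(time_in_queue(10, 0.020, C=1.0))   # disk example: 0.005 s = 5 ms
print(time_in_queue(0.03, 25, C=0.0))    # deterministic memory example: 37.5 cycles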

Page 52:

A Little Queuing Theory: An Example

• processor sends 10 x 8KB disk I/Os per second, requests & service exponentially distrib., avg. disk service = 20 ms

• On average, how utilized is the disk?– What is the number of requests in the queue?– What is the average time spent in the queue?– What is the average response time for a disk request?

• Notation:
λ      average number of arriving customers/second = 10
Tser   average time to service a customer = 20 ms (0.02 s)
u      server utilization (0..1): u = λ × Tser = 10/s × 0.02 s = 0.2
Tq     average time/customer in queue = Tser × u / (1 − u) = 20 × 0.2/(1 − 0.2) = 20 × 0.25 = 5 ms (0.005 s)
Tsys   average time/customer in system: Tsys = Tq + Tser = 25 ms
Lq     average length of queue: Lq = λ × Tq = 10/s × 0.005 s = 0.05 requests in queue
Lsys   average number of tasks in system: Lsys = λ × Tsys = 10/s × 0.025 s = 0.25

Page 53:

A Little Queuing Theory:Yet Another Example

• Processor accesses memory over a "network"
• DRAM service properties
– speed = 9 cycles + 2 cycles/word
– With 8-word cache lines: Tser = 25 cycles, μ = 1/25 = 0.04 ops/cycle
– Deterministic service time! (C = 0)
• Processor behavior
– CPI = 1, 40% memory, 7.5% cache misses
– Rate: λ = 1 inst/cycle × 0.4 × 0.075 = 0.03 ops/cycle
• Notation:
λ      average number of arriving customers/cycle = 0.03
Tser   average time to service a customer = 25 cycles
u      server utilization (0..1): u = λ × Tser = 0.03 × 25 = 0.75
Tq     average time/customer in queue = Tser × u × (1 + C) / (2 × (1 − u)) = (25 × 0.75 × 1/2)/(1 − 0.75) = 37.5 cycles
Tsys   average time/customer in system: Tsys = Tq + Tser = 62.5 cycles
Lq     average length of queue: Lq = λ × Tq = 0.03 × 37.5 = 1.125 requests in queue
Lsys   average number of tasks in system: Lsys = λ × Tsys = 0.03 × 62.5 = 1.875

Page 54:

Summary
• Disk industry growing rapidly, improves:
– bandwidth 40%/yr,
– areal density 60%/year; $/MB improving even faster?
• Access time = queue + controller + seek + rotate + transfer
• Advertised average seek time benchmark is much greater than the average seek time in practice
• Response time vs. bandwidth tradeoffs
• Queueing theory: W = Tser × u × (1 + C) / (2 × (1 − u)), or (C = 1): W = Tser × u / (1 − u)
• Value of faster response time:
– 0.7 sec off response time saves 4.9 sec and 2.0 sec (70%) total time per transaction ⇒ greater productivity
– everyone gets more done with faster response, but a novice with fast response = an expert with slow