
Page 1: CSCE 212 Chapter 8

Storage, Networks, and Other Peripherals

Instructor: Jason D. Bakos

Page 2: Magnetic Disk Storage: Terminology

• Magnetic disk storage is nonvolatile
• 1-4 platters, 2 recordable surfaces each
• 5,400 – 15,000 RPM
• 10,000 – 50,000 tracks per surface
• Each track has 100 to 500 sectors
• Sectors store 512 bytes (smallest unit read or written)
• Cylinder: all the tracks currently under the heads
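As a rough illustration of these parameters, a sketch in C computing per-drive capacity from midrange values in the list above; all of the chosen values are assumptions within the stated ranges, not figures from the slides:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical midrange values chosen from the ranges above. */
        long surfaces          = 3 * 2;    /* 3 platters, 2 surfaces each */
        long tracks_per_surf   = 30000;
        long sectors_per_track = 300;
        long bytes_per_sector  = 512;

        long long capacity = (long long)surfaces * tracks_per_surf
                           * sectors_per_track * bytes_per_sector;

        /* ~27.6 GB for these assumed values */
        printf("capacity = %lld bytes (~%.1f GB)\n", capacity, capacity / 1e9);
        return 0;
    }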

Page 3: Accessing Data

• To read:
  – Seek to the track
    • Average seek times are advertised as 3 ms to 14 ms, but are usually much better in practice due to locality and OS scheduling
  – Rotate to the sector
    • On average, the disk must rotate ½ of the way around the track
  – Transfer the data
    • 30 – 40 MB/sec from the disk surface, 320 MB/sec from the disk cache
  – Controller time / overhead

• Average disk access time (ADAT) = seek time + rotational latency + transfer time + controller time
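A quick sketch of this formula in C with assumed values (6 ms average seek, 10,000 RPM, one 512-byte sector at 40 MB/sec, 0.2 ms controller overhead); none of these specific numbers come from the slides:

    #include <stdio.h>

    int main(void) {
        double seek_ms       = 6.0;      /* assumed average seek time    */
        double rpm           = 10000.0;  /* assumed spindle speed        */
        double transfer_MBps = 40.0;     /* sustained rate from the disk */
        double controller_ms = 0.2;      /* assumed controller overhead  */
        double sector_bytes  = 512.0;    /* one sector, per the slides   */

        /* On average the platter rotates half a revolution. */
        double rotational_ms = 0.5 / (rpm / 60.0) * 1000.0;

        /* Time to move one sector at the sustained transfer rate. */
        double transfer_ms = sector_bytes / (transfer_MBps * 1e6) * 1000.0;

        double adat = seek_ms + rotational_ms + transfer_ms + controller_ms;
        printf("ADAT = %.2f ms\n", adat);  /* about 9.21 ms here */
        return 0;
    }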

Page 4: Reliability

• Mean time to failure (MTTF)
• Mean time to repair (MTTR)
• Mean time between failures (MTBF) = MTTF + MTTR
• Availability = MTTF / (MTTF + MTTR)
• Fault: failure of a component

• To increase MTTF:
  – Fault avoidance
  – Fault tolerance
  – Fault forecasting
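A minimal sketch of the availability formula, using assumed figures (MTTF of 1,000,000 hours and MTTR of 24 hours; neither number is from the slides):

    #include <stdio.h>

    int main(void) {
        double mttf_hours = 1e6;   /* assumed mean time to failure */
        double mttr_hours = 24.0;  /* assumed mean time to repair  */

        double availability = mttf_hours / (mttf_hours + mttr_hours);

        /* ~0.999976, i.e. roughly "four nines" of availability */
        printf("availability = %.6f\n", availability);
        return 0;
    }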

Page 5: RAID

• Redundant Arrays of Inexpensive Disks
  – Advantages:
    • parallelism
    • cost
    • floor space (smaller disks have higher density)
  – Disadvantage:
    • adding disks to the array increases the likelihood of individual disk failures
  – Solution:
    • add data redundancy to recover from disk failures
    • small disks are inexpensive, so this reliability is less expensive relative to buying fewer, larger disks

Page 6: RAID

• RAID 0
  – No redundancy
  – Stripe data across disks (see the striping sketch after this list)
  – Appears as one disk
• RAID 1
  – Mirroring
  – Highest cost
• RAID 2
  – Error control codes
  – No longer used
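A minimal sketch (the layout is assumed, not specified on the slides) of how RAID 0 striping maps a logical block number to a disk and an offset within that disk:

    #include <stdio.h>

    /* RAID 0: logical block i lives on disk (i mod N), at offset (i / N). */
    static void raid0_map(long block, int ndisks, int *disk, long *offset) {
        *disk   = (int)(block % ndisks);
        *offset = block / ndisks;
    }

    int main(void) {
        for (long b = 0; b < 8; b++) {
            int disk;
            long offset;
            raid0_map(b, 4, &disk, &offset);  /* assume a 4-disk array */
            printf("block %ld -> disk %d, offset %ld\n", b, disk, offset);
        }
        return 0;
    }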

Page 7: RAID

• RAID 3
  – Bit-interleaved parity
  – Add one parity disk for each protection group
  – All disks must be accessed to determine missing data
  – Writes access all disks for striping, then recalculate the parity
  – Example (see the recovery sketch after this list):
    • 1 0 1 0
    • 1 X 1 0
    • X = 1 xor 1 xor 0
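A minimal sketch of the recovery step: XOR the surviving positions (data plus parity) to regenerate the lost one, matching the slide's example where disk 1 fails:

    #include <stdio.h>

    /* Recover the lost position by XORing all surviving positions. */
    static int recover(const int *bits, int n, int lost) {
        int x = 0;
        for (int i = 0; i < n; i++)
            if (i != lost)
                x ^= bits[i];
        return x;
    }

    int main(void) {
        /* The slide's stripe: data bits 1 0 1 plus parity bit 0. */
        int stripe[4] = {1, 0, 1, 0};
        printf("recovered bit = %d\n", recover(stripe, 4, 1));  /* prints 0 */
        return 0;
    }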

Page 8: RAID

• RAID 4
  – Block-interleaved parity
  – Stripe blocks instead of bits
  – Small writes only need to access one data disk and update the parity
  – Only flip the parity bits that correspond to changed data bits (see the small-write sketch after this list)
  – Example:
    • 0011 1010 1101 0100
    • 0011 XXXX 1101 0100
    • XXXX = 0011 xor 1101 xor 0100
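A minimal sketch of the small-write update: the new parity is old parity xor old data xor new data, so only the target data disk and the parity disk are touched. The old values come from the slide's stripe; the new data value 0110 is an assumption for illustration:

    #include <stdio.h>

    /* RAID 4 small write: new parity = old parity ^ old data ^ new data. */
    static unsigned update_parity(unsigned old_parity,
                                  unsigned old_data,
                                  unsigned new_data) {
        return old_parity ^ old_data ^ new_data;
    }

    int main(void) {
        unsigned old_parity = 0x4;  /* 0100, from the slide's stripe    */
        unsigned old_data   = 0xA;  /* 1010, block being overwritten    */
        unsigned new_data   = 0x6;  /* 0110, assumed new block contents */

        /* Prints 0x8 (1000): only changed bit positions flip in parity. */
        printf("new parity = 0x%X\n",
               update_parity(old_parity, old_data, new_data));
        return 0;
    }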

Page 9: RAID

• RAID 5
  – Distributed block-interleaved parity
  – In RAID 4, the parity disk is a bottleneck
  – Solution: alternate the parity blocks across the disks (see the layout sketch after this list)
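A minimal sketch of one common rotated-parity placement (the exact rotation here is an assumption; implementations vary): in stripe s of an N-disk array, the parity block sits on disk N - 1 - (s mod N):

    #include <stdio.h>

    /* One possible RAID 5 parity placement (assumed rotation). */
    static int parity_disk(int stripe, int ndisks) {
        return ndisks - 1 - (stripe % ndisks);
    }

    int main(void) {
        int ndisks = 5;  /* assumed array size */
        for (int s = 0; s < 5; s++)
            printf("stripe %d: parity on disk %d\n", s, parity_disk(s, ndisks));
        return 0;
    }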

Page 10: RAID

• RAID 6
  – P + Q redundancy
  – Add a second check disk to be able to recover from two failures
• RAID 1 and RAID 5 are the most popular
• RAID weakness:
  – Assumes disk failures are not correlated

Page 11: Buses

• Shared communication link

• Advantages:
  – Inexpensive, simple, extensible

• Disadvantages:
  – Capacity is shared; contention creates a bottleneck
  – Not scalable
  – Length and signaling-rate limitations (skew, noise)

[Figure: three devices (Device 1, Device 2, Device 3) attached to a shared bus]

Page 12: Model Computer System

Page 13: Buses

• A typical bus has:
  – data lines
    • DataIn, DataOut, Address
  – control lines
    • bus request, type of transfer

• Buses are either:
  – synchronous
    • includes a shared clock signal as one of the control signals
    • devices communicate with a protocol defined relative to the clock
    • example: send the address in cycle 1; data is valid in cycle 4
    • used in fast, short-haul buses
  – asynchronous
    • not clocked or synchronized
    • requires a handshaking protocol (see the sketch after this list)
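A sketch of the classic asynchronous read handshake, using the ReadReq/Ack/DataRdy signal names from the timing diagram on the next slide. The step ordering follows the usual textbook protocol; this C program only prints the transitions rather than driving real wires:

    #include <stdio.h>

    static void step(const char *who, const char *action) {
        printf("%-9s: %s\n", who, action);
    }

    int main(void) {
        /* Asynchronous read of one word from memory over the bus. */
        step("initiator", "raise ReadReq, drive address onto Data lines");
        step("memory",    "latch the address, raise Ack");
        step("initiator", "see Ack: release ReadReq and the Data lines");
        step("memory",    "see ReadReq low: lower Ack");
        step("memory",    "when data is ready: drive Data, raise DataRdy");
        step("initiator", "see DataRdy: latch the data, raise Ack");
        step("memory",    "see Ack: lower DataRdy, release the Data lines");
        step("initiator", "see DataRdy low: lower Ack (bus is idle again)");
        return 0;
    }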

Page 14: Bus Types

[Timing diagrams: an asynchronous (peripheral bus) handshake using the ReadReq, Data, Ack, and DataRdy signals, and a synchronous (system bus) transfer using Read, Data, and Clock]

Page 15: Model Computer System

Page 16: Model Computer System

• Example computer system:
  – northbridge: fast interconnect for memory and video
  – southbridge: slower interconnect for I/O devices

Page 17: I/O

• I/O schemes:
  – Memory-mapped I/O
    • Controlled by the CPU
    • I/O devices act as segments of memory
    • The CPU reads and writes "registers" on the I/O device
    • Example: a status register (done, error) and a data register (see the polling sketch after this list)
  – DMA (Direct Memory Access)
    • Controlled by the I/O device
    • A DMA controller transfers data directly between the I/O device and memory
    • The DMA controller can become a bus master
    • The DMA controller is initialized by the processor and the I/O device
  – Polling
    • The CPU regularly checks the status of an I/O device
  – Interrupts
    • The I/O device forces the CPU into the operating system to service an I/O request
    • Interrupts have priorities (for simultaneous interrupts and preemption)
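A minimal sketch combining memory-mapped I/O with polling. The register addresses and bit layout are invented for illustration (a real device's datasheet defines them); the pattern of spinning on a volatile status register is the point:

    #include <stdint.h>

    /* Assumed device registers mapped into the address space. */
    #define STATUS_REG ((volatile uint32_t *)0xFFFF0000u)  /* assumed address */
    #define DATA_REG   ((volatile uint32_t *)0xFFFF0004u)  /* assumed address */

    #define STATUS_DONE  0x1u  /* assumed "done" bit  */
    #define STATUS_ERROR 0x2u  /* assumed "error" bit */

    /* Poll the status register until the device has data, then read it.
       Returns -1 if the device reports an error. */
    int32_t read_device(void) {
        uint32_t status;
        do {                       /* the CPU spins, checking the device */
            status = *STATUS_REG;
            if (status & STATUS_ERROR)
                return -1;
        } while (!(status & STATUS_DONE));
        return (int32_t)*DATA_REG; /* on many devices this also clears DONE */
    }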

Page 18: I/O Performance Measures

• Transaction processing
  – Database applications (e.g., banking systems)
  – Mostly concerned with I/O rate and response time
  – Benchmarks created by the Transaction Processing Council (TPC):
    • TPC-C: complex queries
    • TPC-H: ad hoc queries
    • TPC-R: standard queries (prior knowledge allowed)
    • TPC-W: simulates web-based queries

• File system
  – MakeDir, Copy, ScanDir, ReadAll, Make
  – SPECFS (NFS performance)

• Web I/O
  – SPECWeb (web server benchmark)

• I/O performance estimation: simulation or queueing theory

Page 19: Example Problem

• The CPU can sustain 3 billion instructions per second and averages 100,000 OS instructions per I/O operation
• The memory backplane bus can sustain a transfer rate of 1000 MB/sec
• SCSI Ultra320 controllers have a transfer rate of 320 MB/sec and accommodate up to 7 disks each
• Disk drives have a read/write bandwidth of 75 MB/sec and an average seek plus rotational latency of 6 ms

• If the workload consists of 64 KB sequential reads and the user program requires 200,000 instructions per I/O operation, what is the maximum I/O rate, and how many disks and SCSI controllers are needed (ignoring disk conflicts)?
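A sketch of one way to work the numbers (treat it as a check of the approach, not the official solution). The CPU limits the rate to 3×10^9 / (200,000 + 100,000) = 10,000 I/Os per second, and each 64 KB read occupies a disk for about 6 ms + 64 KB / 75 MB/sec ≈ 6.8 ms:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Given values from the problem statement. */
        double cpu_ips        = 3e9;            /* CPU instructions/sec   */
        double instr_per_io   = 200e3 + 100e3;  /* user + OS instructions */
        double bus_MBps       = 1000.0;         /* backplane bus          */
        int    disks_per_scsi = 7;
        double disk_MBps      = 75.0;
        double seek_rot_s     = 6e-3;
        double io_MB          = 64.0 / 1024.0;  /* 64 KB per read         */

        /* CPU-limited I/O rate: 10,000 I/Os per second. */
        double max_ios = cpu_ips / instr_per_io;

        /* The bus could carry 1000/0.0625 = 16,000 I/Os/sec, so it
           does not bind here. */
        double bus_ios = bus_MBps / io_MB;
        if (bus_ios < max_ios) max_ios = bus_ios;

        /* One read occupies a disk for seek+rotation plus transfer time. */
        double per_disk_ios = 1.0 / (seek_rot_s + io_MB / disk_MBps);
        int disks = (int)ceil(max_ios / per_disk_ios);       /* 69 disks */

        /* Each controller's 7 disks move ~64 MB/s, well under 320 MB/s,
           so the 7-disk limit per controller is what binds. */
        int controllers = (disks + disks_per_scsi - 1) / disks_per_scsi;

        printf("max I/O rate = %.0f I/Os/sec\n", max_ios);   /* 10,000   */
        printf("disks        = %d (%.0f I/Os/sec each)\n",
               disks, per_disk_ios);                         /* 69, ~146 */
        printf("controllers  = %d\n", controllers);          /* 10       */
        return 0;
    }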