
The Rise of Open Storage



My presentation to the Nexenta European User Conference, May 20th 2011, Amsterdam


Page 1: The Rise of Open Storage
Page 2: The Rise of Open Storage

The Rise of Open Storage

…and the benefits of storage virtualisation


Page 3: The Rise of Open Storage

Agenda

• Early Storage
• Storage Arrays – the last 20 years
• The move to standardised hardware
• It’s all about the software
• Parallels with Server Virtualisation
• Storage virtualisation and hardware independence
• Future speculation

Page 4: The Rise of Open Storage

Early Storage

• Pioneered by IBM – the IBM 350 Disk Storage Unit
  – Released in 1956
  – 1.52m x 1.72m x 0.74m
  – 50 magnetic disks
  – 5MB capacity
  – 600ms access time

IBM 350 Disk Storage Unit (image courtesy of IBM Archives)

Page 5: The Rise of Open Storage

Early Storage

• “Winchester” drives
  – Named after the 30-30 rifle
  – Released in 1973
  – Smaller & lighter
  – 70MB capacity
  – 25ms access time

IBM 3340 DASF (image courtesy of IBM Archives)

Page 6: The Rise of Open Storage

Early Storage

• Large, monolithic “refrigerator” units
• No hardware recovery
• Slow & expensive
• Cumbersome CKD format
• Each LUN/volume still a physical disk

IBM 3380 Model CJ2 (image courtesy of IBM Archives)

Page 7: The Rise of Open Storage

Early Storage

• Seagate ST-506
  – Released in 1980
  – 5MB capacity
  – 5.25” form factor
  – No onboard controller
  – Adopted for the IBM PC

Over 24 years, for the same capacity, drive size shrank roughly 800-fold

Page 8: The Rise of Open Storage

Early Storage

• Disk drives today
  – 3TB+ capacity
  – Integrated controllers
  – Small form factor (2.5”)
  – 6Gb/s interfaces
  – Very high reliability
  – Low cost per GB

Drives are now Commodity Components

Page 9: The Rise of Open Storage

Early Storage


50 years of development…

…from cargo to pocket!

Page 10: The Rise of Open Storage

Storage – The Last 20 Years

• EMC set the standard
  – Symmetrix released 1990
  – Integrated Cache Disk Array
  – Dedicated hardware components
  – RAID-1
  – Replication (SRDF in 1994)
  – Support for non-mainframe (1995)

Page 11: The Rise of Open Storage

• Integrated storage arrays separated control and management from the host
• Custom hardware design
• More functionality pushed to the array
  – Cache I/O
  – Replication
  – Snapshots
  – Logical LUNs

Storage – The Last 20 Years

Page 12: The Rise of Open Storage

• Rapid development in features
• Many vendors – IBM, Hitachi, HP
• New product categories
  – Midrange/modular (e.g. CLARiiON)
  – NFS Appliances – Filers
  – De-duplication devices

Storage – The Last 20 Years

Page 13: The Rise of Open Storage

Storage – The Last 20 Years

• Storage has Centralised
  – Storage Area Networks
  – Started with ESCON & SCSI
  – Fibre Channel (1997 onwards)
  – NAS (early 1990s)
  – iSCSI (1999 onwards)
  – FCoE

Page 14: The Rise of Open Storage

The Move to Standardisation

• Hardware components have become more reliable
• More features moved into software (see the sketch below)
  – RAID
  – Replication
• Some bespoke features remaining in silicon
  – 3PAR dedicated ASIC
  – Hitachi VSP virtual processors
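As a toy illustration of RAID and replication living in software rather than dedicated silicon, here is a minimal Python sketch of RAID-1 style mirroring of writes across two backing files. It is not any vendor's implementation; the file paths and block size are assumptions made for the example.

# Minimal, illustrative sketch of software RAID-1 (mirroring).
# Not a real product's code; paths and block size are invented.
BLOCK_SIZE = 4096  # assumed block size for this example

class Raid1Mirror:
    def __init__(self, path_a, path_b):
        # The two "devices" are ordinary files that must already exist
        # and be pre-sized; a real implementation would use block devices.
        self.devices = [open(path_a, "r+b"), open(path_b, "r+b")]

    def write_block(self, block_no, data):
        # The software RAID layer duplicates every write to all members.
        assert len(data) == BLOCK_SIZE
        for dev in self.devices:
            dev.seek(block_no * BLOCK_SIZE)
            dev.write(data)
            dev.flush()

    def read_block(self, block_no):
        # Reads can be served from any healthy member; the first is used here.
        dev = self.devices[0]
        dev.seek(block_no * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE)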


Page 15: The Rise of Open Storage

The Move to Standardisation

• Reduced Cost
  – Cheaper components
  – No custom design
  – Reusable across generations
• Higher Margins

Page 16: The Rise of Open Storage

The Move to Standardisation

• New breed of products
  – EMC VMAX
  – Hitachi VSP
  – HP P9500
• New Companies
  – Compellent
  – 3PAR
  – Lefthand
  – Equallogic
  – Isilon
  – IBRIX
• It’s no surprise that these companies have been acquired for their software assets

Page 17: The Rise of Open Storage

It’s all About Software

• Storage arrays look like servers
  – Common components
  – Generic physical layer
• Independence from hardware allows:
  – Reduced cost
  – Design hardware to meet requirements
  – Quicker to market with new hardware
  – More scalability
  – Quicker/easier upgrade path
  – Deliver new features without hardware upgrade

Page 18: The Rise of Open Storage

It’s all About Software

• Many vendors have produced VSAs
  – Netapp – simulator (not strictly a VSA), Lefthand/HP, Gluster, Falconstor, Openfiler, OPEN-E, StorMagic, NexentaStor, Sun Amber Road
• Most of these run exactly the same codebase as the physical storage device
• As long as reliability & availability are met, the hardware is no longer significant

Page 19: The Rise of Open Storage

Parallels with Server Virtualisation

• Server virtualisation was successful due to the power of Intel processors & Linux
• Enabled x86 hardware to be used for Windows and Open Systems workloads
• The Windows platform almost needs one server per application, driving the need for consolidation
• Wave 1 of server virtualisation reduced costs and improved hardware utilisation – the consolidation phase
• Wave 2 implemented mobility features: vMotion, Storage vMotion, HA, DRS

Page 20: The Rise of Open Storage

Parallels with Server Virtualisation

• Virtualisation enables disparate operating systems to be supported on the same hardware

• Workload can be balanced to meet demand

• Hardware can be added/removed non-disruptively – transparent upgrade

• Server virtualisation has enabled high scalability


Page 21: The Rise of Open Storage

• VSAs show closely coupled hardware/software is no longer required

• Software can be developed and released independently
  – Feature release not dependent on hardware

Storage Virtualisation & Hardware Independence

Page 22: The Rise of Open Storage

Storage Virtualisation & Hardware Independence

• Hardware can be designed to meet performance, availability & throughput, leveraging server hardware development
  – Branches with smaller hardware
  – Core data centres with bigger arrays
  – Both using the same features/functionality

Page 23: The Rise of Open Storage

Future Speculation

• LUN virtualisation, rather than array virtualisation, is the key to the future
• LUNs must be individually addressable
• Ability to move a LUN between physical infrastructures (see the sketch below)
  – LUN owned/managed by an array
  – Transparent migration, failover
  – Increased availability
• Delivers data mobility – an absolute requirement as data quantities increase (especially PB+ arrays)
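To make the idea concrete, here is a minimal sketch (my illustration, not an existing product's API) of a registry that addresses each LUN by a location-independent identifier and lets the owning array change without the host-facing address changing; the class and array names are assumptions.

# Illustrative sketch only: a location-independent LUN registry.
# Identifiers, class names and array names are invented for this example.
import uuid

class LunRegistry:
    def __init__(self):
        # Maps a globally unique LUN id to the array that currently owns it.
        self.owner = {}

    def create_lun(self, array_id):
        lun_id = str(uuid.uuid4())      # address is independent of location
        self.owner[lun_id] = array_id
        return lun_id

    def locate(self, lun_id):
        # Hosts resolve the LUN id at I/O time, not at provisioning time.
        return self.owner[lun_id]

    def migrate(self, lun_id, target_array):
        # Once data has been copied, ownership flips; the LUN id seen by
        # hosts never changes, so the move is transparent to them.
        self.owner[lun_id] = target_array

registry = LunRegistry()
lun = registry.create_lun("array-branch-01")
registry.migrate(lun, "array-core-01")   # failover or rebalancing
print(registry.locate(lun))              # same LUN id, new physical home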


Page 24: The Rise of Open Storage

Future Speculation

• New addressing schema necessary
• Remove the restrictions of Fibre Channel
  – Address a LUN independently of physical location
  – Allow a LUN to move around the infrastructure
  – Allow a LUN to be addressed through multiple locations (see the sketch below)
  – More granular sub-object access
• Better load balancing
• Better mobility
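Building on the previous sketch, this hypothetical illustration shows one location-independent LUN address resolving to several concurrent access paths, with a trivial round-robin choice for load balancing; the LUN id and path names are invented for the example.

# Hypothetical illustration: one LUN address can resolve to several
# access paths, enabling load balancing and mobility.
from itertools import cycle

class MultiPathDirectory:
    def __init__(self):
        self.paths = {}   # LUN id -> list of currently valid access paths
        self._rr = {}     # round-robin iterator per LUN

    def publish(self, lun_id, access_paths):
        # Paths can be added or withdrawn as the LUN moves around.
        self.paths[lun_id] = list(access_paths)
        self._rr[lun_id] = cycle(self.paths[lun_id])

    def next_path(self, lun_id):
        # Trivial load-balancing policy: rotate across all published paths.
        return next(self._rr[lun_id])

directory = MultiPathDirectory()
directory.publish("lun-0042", ["array-a:port-1", "array-b:port-3"])
print(directory.next_path("lun-0042"))   # array-a:port-1
print(directory.next_path("lun-0042"))   # array-b:port-3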


Page 25: The Rise of Open Storage

Future Speculation

• VMFS volumes are LUNs – which are binary objects
• VMFS divides into VMDKs for independent access
• The virtual machine becomes the object to move around the infrastructure
• Sub-LUN access and locking enables a read & write everywhere approach (see the sketch below)
• Storage and Virtualisation will be inextricably linked to each other
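As a rough illustration of sub-LUN locking (not VMware's actual VMFS locking mechanism), the sketch below shows per-object locks inside a shared container, so different hosts can work on different VMDK-like objects in the same LUN concurrently; the object and host names are assumptions.

# Rough illustration of sub-LUN locking: hosts lock individual objects
# inside a shared container rather than the whole LUN.
# Not VMware's on-disk locking; names are invented for the example.
class SharedContainer:
    def __init__(self, objects):
        # None means unlocked; otherwise records the host holding the lock.
        self.locks = {name: None for name in objects}

    def acquire(self, obj, host):
        if self.locks[obj] is None:
            self.locks[obj] = host     # only this object is locked,
            return True                # the rest of the LUN stays accessible
        return False

    def release(self, obj, host):
        if self.locks[obj] == host:
            self.locks[obj] = None

datastore = SharedContainer(["vm1.vmdk", "vm2.vmdk"])
print(datastore.acquire("vm1.vmdk", "host-a"))  # True
print(datastore.acquire("vm2.vmdk", "host-b"))  # True – concurrent access
print(datastore.acquire("vm1.vmdk", "host-b"))  # False – already locked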


Page 26: The Rise of Open Storage


Questions?