OpenNebula TechDay Boston 2015 - Future of Information Storage with ISS SuperCore and Ceph


Page 1

Future of Information Storage with ISS SuperCore and Ceph

Intelligent Systems Services Inc.

Alex Gorbachev, President

Neal Purchase, Solutions Architect

Cindy Markee, Director of Technology Sales

Email: [email protected]

Page 2

Page 3

Data Storage Challenges

• Always. Need. More. GB → TB → PB → EB

• RAID rebuild times now run to days per disk

• Limits on SAN array expansion (once you hit the physical limits you have to buy more arrays)

• Cost. Cost. Cost.

• Have to choose: Speed? Cost? Size?

• Upgrades mean downtime

Page 4

Intro to Ceph

• Created as a Ph.D. thesis by Sage Weil in 2007

• A method for storing data that does not depend on parity calculations, controllers, or lookup tables

• Pseudo Random Data Distribution

• CRUSH – Controlled Replication Under Scalable Hashing

• Getting too weird?
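
To make it less weird, here is a minimal Python sketch of the two-step mapping Ceph uses. This is not Ceph's real code: the 12-disk layout is invented, and a simple hash ranking stands in for the actual CRUSH algorithm. An object's name hashes to a placement group (PG), and the PG deterministically maps to a set of disks.

    import hashlib

    PG_COUNT = 128                            # placement groups in the pool
    OSDS = ["osd.%d" % i for i in range(12)]  # hypothetical 12-disk cluster

    def stable_hash(s):
        # Deterministic hash: identical on every client and server.
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

    def object_to_pg(obj_name):
        return stable_hash(obj_name) % PG_COUNT

    def pg_to_osds(pg, replicas=3):
        # Stand-in for CRUSH: rank disks by a per-PG hash, take the top N.
        ranked = sorted(OSDS, key=lambda osd: stable_hash("%d:%s" % (pg, osd)))
        return ranked[:replicas]

    pg = object_to_pg("invoice-2015-04.pdf")
    print(pg, pg_to_osds(pg))   # every client computes the same answer

No parity math, no controller, no lookup table: any machine that knows the cluster layout can compute where any object lives.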

Page 5

Hotel with a Billion Rooms

Page 6

Pseudo Random Hashing

[Diagram: objects hashed across three storage nodes named Abel, Baker, and Charlie]

• Data location is computed from the data itself

• Secondary (tertiary, etc.) locations are computed on the fly
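
A toy version of the slide's picture, assuming just the three nodes named above: rank the nodes by a hash of the object's name, and the ranking itself is the placement. The first entry holds the primary copy, and the rest are the replica locations, all computed on demand with no directory to consult.

    import hashlib

    NODES = ["Abel", "Baker", "Charlie"]   # node names from the slide

    def placement(obj_name):
        # Rank every node by hash(object, node); the first is the primary
        # copy, the rest are secondary, tertiary, and so on.
        score = lambda node: hashlib.sha256(
            ("%s@%s" % (obj_name, node)).encode()).hexdigest()
        return sorted(NODES, key=score)

    print(placement("room-key-42"))   # same ordering on every machine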

Page 7

Ceph RADOS

• Reliable Autonomic Distributed Object Store

• Block device store without practical limits on size, scalability, or performance
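
As a sketch of what talking to RADOS looks like in practice, using the python-rados bindings that ship with Ceph (the conffile path and the pool name here are assumptions; substitute your own):

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("rbd")   # any existing pool works
        ioctx.write_full("hello-object", b"stored and replicated by RADOS")
        print(ioctx.read("hello-object"))
        ioctx.close()
    finally:
        cluster.shutdown()

The client library computes placement itself, so reads and writes go straight to the right storage nodes.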

Page 8

Why is this not everywhere?

• Focus on performance, features, scalability

• No SAN-like interface

• Native drivers are not yet mainstream

• Cloud-oriented rather than enterprise-oriented

• But RADOS is a mature, well-performing technology, so we developed…

Page 9

[Architecture diagram: a client network and fabric connects virtualization servers and standalone servers, via iSCSI, Fibre Channel, NFS, and CIFS/SMB, to the SuperCore Delivery Cluster; behind it sit the SuperCore Active Storage Nodes, the SuperCore Control-Monitor, and SuperCore Backup]

Page 10

Basic Node Architecture

Page 11

Control and Management Interface

Page 12

Why we think this is AWESOME…

Page 13

Resilience and Survival

• Replaces outdated RAID technology with distributed storage logic that (unlike RAID) allows virtually limitless expansion and fast, parallel rebuilds

• Core and delivery EACH have separate redundancy layers

• Self-healing logic locates and eliminates faults on the fly (RAID systems can take one or two hits before data is irreversibly corrupted, while SuperCore can take many hits in different areas and remain functional)
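
The rebuild claim can be illustrated with a toy continuation of the earlier hashing sketch (node and object names invented): when one node dies, every surviving node re-runs the same placement function and picks up a share of the lost copies, so recovery is a cluster-wide, many-to-many copy rather than RAID's one-new-disk bottleneck.

    import hashlib

    def placement(obj, nodes, replicas=2):
        score = lambda n: hashlib.sha256(("%s@%s" % (obj, n)).encode()).hexdigest()
        return sorted(nodes, key=score)[:replicas]

    nodes = ["node%d" % i for i in range(6)]
    objs = ["obj%d" % i for i in range(10000)]

    before = {o: placement(o, nodes) for o in objs}
    survivors = [n for n in nodes if n != "node3"]
    after = {o: placement(o, survivors) for o in objs}

    moved = sum(1 for o in objs if before[o] != after[o])
    print("%.0f%% of objects need a new copy" % (100.0 * moved / len(objs)))
    # Roughly 2/6 of objects are touched, and their new copies are
    # spread evenly across all five survivors.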

Page 14

Replication and Optimization

• Ability to set placement rules for storage: where data goes and how it is spread out to best protect against failures (rack, row, data center, city, continent...); see the placement sketch after this list

• Highly available flash caching, configurable individually for every volume, alongside centralized tiered caching

• Performance increases as the cluster grows, unlike traditional systems
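
The placement sketch promised in the first bullet above, in the same toy Python style (the rack names and layout are invented; Ceph expresses this with CRUSH rules rather than Python): pick each replica from a different rack, so losing an entire rack never loses every copy of an object.

    import hashlib

    RACKS = {
        "rack-a": ["osd.0", "osd.1", "osd.2"],
        "rack-b": ["osd.3", "osd.4", "osd.5"],
        "rack-c": ["osd.6", "osd.7", "osd.8"],
    }

    def h(s):
        return hashlib.sha256(s.encode()).hexdigest()

    def place(obj, replicas=3):
        chosen = []
        # Rank racks per object, then take one disk from each rack in
        # turn; swap "rack" for row, data center, or city to change the
        # failure domain.
        for rack in sorted(RACKS, key=lambda r: h(obj + "|" + r))[:replicas]:
            disks = sorted(RACKS[rack], key=lambda d: h(obj + "|" + d))
            chosen.append(disks[0])
        return chosen

    print(place("customer-db/block-17"))   # three disks, three different racks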

Page 15

Economy

• Different client systems can share the same storage hardware while maintaining consistent performance and separation of resources

• Ability to control storage overhead to meet capacity and performance demands while maintaining the required degree of protection

• On-demand (thin) provisioning can reduce overall storage consumption by up to 40%
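
A sketch of what thin provisioning looks like with the python-rbd bindings (the pool and image names are assumptions): the image is created instantly at its full nominal size, but the cluster allocates space only as blocks are actually written.

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")

    rbd.RBD().create(ioctx, "thin-volume", 1 << 40)   # 1 TiB, allocated lazily
    with rbd.Image(ioctx, "thin-volume") as image:
        image.write(b"first real data", 0)            # space is used only now

    ioctx.close()
    cluster.shutdown()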

Page 16

Questions?