Red Hat Storage 2014 - Product Overview


Red Hat Storage: software-defined storage solutions for file, block, and object use cases.


RED HAT STORAGE: LIBERATE YOUR INFORMATION

Marcel Hergaarden

Solution Architect, Red Hat

Tuesday, October 28, 2014

Agenda

● Red Hat Storage and Inktank Ceph

● Software Defined concept

● Setup Hierarchy

● Storage Topology Types

● Storage for OpenStack

● RHS 3.0 New Features

● Inktank Ceph introduction

Inktank Ceph

● April 2014: Red Hat acquires Inktank, the company behind Ceph

Future Red Hat Storage: 2 Flavours

● Red Hat Storage – Gluster edition

Mostly used for file-based storage purposes; can also be used as a virtualization store or as object storage

● Red Hat Storage – Ceph Edition

Positioned as the de facto storage platform for OpenStack; Ceph offers block- and object-based access

Red Hat Storage Positioning

● FILE: Red Hat Storage is the best scale-out NAS; Ceph file access is not yet available

● OBJECT: Red Hat Storage offers Swift-based file + object access; Ceph is the best object store

● BLOCK: Red Hat Storage exposes block only through an API (libqemu); Ceph block is kernel-supported and exposed

What does Software-Defined Storage mean?

● RHS is a software solution, not an appliance with disks

● Open software-defined storage

● Stable scale-out storage platform

● Runs on-premise, in private and in public clouds

[Diagram: scale-out storage architecture on top of persistent data stores, deployed physically (standard x86 systems, scale-out NAS solutions), virtually (including idle or legacy resources), or in the cloud (EBS-backed).]

Red Hat Storage Server: Software Defined Storage Platform

Continuous Storage Platform

Converged Compute and Storage

[Diagram: the open, software-defined storage platform (with Inktank Ceph Enterprise) exposes data services such as file services, open object APIs, and block I/O to enterprise mobility, cloud applications, big data workloads, and enterprise applications, increasing data, application, and infrastructure agility; underneath, converged compute and storage run on the scale-out storage architecture across physical, virtual, and cloud persistent data stores.]

[Diagram: the datacenter fabric, with software-defined/based compute (virtualization), storage, networking, and environmental facilities.]

Cornerstone of the modern SOFTWARE-DEFINED DATACENTER

Red Hat Storage design philosophy

● Runs on x86 commodity hardware systems

● Agnostic deployment (on-premise, virtual, cloud)

● Provides a single-namespace storage capacity

● Elastic storage pool: grow or shrink online as needed

● Linear scaling, both scale-up and scale-out

● Designed to tolerate hardware failures of individual components

Scale-out Software-Defined Architecture

[Diagram: servers (CPU/MEM) with directly attached disks of mixed sizes (1 TB, 4 TB) form a single global namespace; scale up capacity within a node, scale out performance, capacity, and availability across nodes.]

● Global namespace: aggregates CPU, memory, and network capacity

● Deploys on RHEL-supported servers and directly connected storage

● Scales out linearly: add performance and capacity as needed

New data problems

Volume, variety, scale, portability

[Chart: business data growth estimates for 2014; standard growth is compounded by virtualization, mobile computing, big data, social networks, the Internet of Things, and cloud computing (2013: 50%, 2014: 100%).]

What happens in an Internet minute?

The Challenge

[Chart: exponential growth of data versus a flat IT storage budget, 2010 to 2020.]

● Existing systems don't scale and are not built or optimized for unstructured data

● Increasing cost and complexity

● Need to invest in new platforms ahead of time

Red Hat Storage Setup Topology

[Diagram: two nodes, each running the RHS operating system and contributing a brick (Brick #1, Brick #2); clients connect over protocols such as SMB 2.0.]

Red Hat Storage: Distributed Volume

[Diagram: a mount point presents one distributed volume built from bricks server1:/exp1 and server2:/exp1; FILE 1, FILE 2, and FILE 3 are each placed on exactly one brick.]

Red Hat Storage: Replicated Volume

[Diagram: a mount point presents one replicated volume built from bricks server1:/exp1 and server2:/exp1; FILE 1 and FILE 2 are written to both bricks.]

Red Hat Storage: Distributed Replicated Volume

[Diagram: a mount point presents a distributed volume composed of two replicated sub-volumes: Replicated Volume 0 (server1:/exp1 mirrored to server2:/exp2) and Replicated Volume 1 (server3:/exp3 mirrored to server4:/exp4); FILE 1 and FILE 2 are distributed across the replica pairs and mirrored within each pair.]
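For concreteness, here is a minimal sketch (not from the original deck) of how the three volume layouts above could be created with the standard gluster CLI, driven from Python; server names, brick paths, and volume names are illustrative, and a four-node trusted pool is assumed:

```python
# Sketch: creating the three Gluster volume layouts shown above.
# Assumes a trusted pool of server1..server4 with the brick
# directories already formatted and mounted; names are illustrative.
import subprocess

def gluster(*args: str) -> None:
    """Run a gluster CLI command non-interactively; raise on failure."""
    subprocess.run(["gluster", "--mode=script", *args], check=True)

# Distributed: each file lands on exactly one of the bricks.
gluster("volume", "create", "dist-vol",
        "server1:/bricks/dist", "server2:/bricks/dist")

# Replicated: every file is written to both bricks (replica 2).
gluster("volume", "create", "repl-vol", "replica", "2",
        "server1:/bricks/repl", "server2:/bricks/repl")

# Distributed-replicated: bricks pair up in listed order, so
# (server1, server2) form Replicated Volume 0 and (server3, server4)
# form Replicated Volume 1; files are distributed across the pairs.
gluster("volume", "create", "distrepl-vol", "replica", "2",
        "server1:/bricks/dr", "server2:/bricks/dr",
        "server3:/bricks/dr", "server4:/bricks/dr")

for vol in ("dist-vol", "repl-vol", "distrepl-vol"):
    gluster("volume", "start", vol)
```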

Featured use-cases

● Scalable storage library: into petabyte scale

● VM store for RHEV (Red Hat Enterprise Virtualization)

● Target store for backup and archiving (CommVault)

● Storage infrastructure for OpenStack: Cinder, Glance & Swift

● Storage for file services and/or data archives

● Storage for (very) large files, including big data purposes

● Storage for multimedia purposes

● Windows support: file services and Active Directory

Target store for CommVault Simpana

CommVault Simpana data streams benefits

Red Hat Storage inside OpenStack

Red Hat Storage 3.0

New key features in Red Hat Storage 3.0

Enhanced data protection: snapshots of Gluster volumes

● Consistent point-in-time copies of data

● Helps improve the disaster-recovery use case

● Create multiple consistent point-in-time copies during a day

● Roll back within minutes to the last snapshot in case of a virus attack, admin error, etc.

● Doesn't replace backup/recovery but enhances it (see the sketch after this list)
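As a hedged illustration of the snapshot workflow, the operations are exposed through the gluster CLI; a minimal Python sketch, assuming a started volume named repl-vol (all names illustrative):

```python
# Sketch of RHS 3.0 volume snapshots via the gluster CLI.
# Assumes a started volume "repl-vol"; snapshot names are illustrative.
import subprocess

def gluster(*args: str) -> None:
    subprocess.run(["gluster", "--mode=script", *args], check=True)

# Take a consistent point-in-time snapshot of the volume.
gluster("snapshot", "create", "repl-vol-snap1", "repl-vol")

# Show what can be rolled back to.
gluster("snapshot", "list")

# Roll back after e.g. a virus attack or admin error:
# the volume must be stopped before a restore.
gluster("volume", "stop", "repl-vol")
gluster("snapshot", "restore", "repl-vol-snap1")
gluster("volume", "start", "repl-vol")
```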

Cluster Monitoring

● Nagios-based RHS cluster health and performance information

● Three different deployment options:

● Nagios web front-end, standalone

● Agent-only, for integration into existing Nagios environments

● As an RHS Console plugin


Other important enhancements in RHS 3.0: Deep Hadoop Integration

● HDFS-compatible filesystem eliminates the overhead of data movement

● Flexibility at each phase of your processing pipeline

● For data scientists, programmers, and business analysts

[Diagram: a pipeline on Red Hat Storage over commodity hardware: load data from any source (POSIX); pre-process, if necessary, with any Linux tool or application (grep, sed, awk, find, python, etc.); analyze with Apache Hadoop (MapReduce/Pig/Hive/HBase, etc.) over HDFS; post-process, if necessary, with the same Linux tools (POSIX); export data to any destination (POSIX).]

Other important enhancements in RHS 3.0

Enhanced capacity
● Up to 60 disks per RHS node => lower TCO
● Up to ~205 TB net usable capacity per node
● Cluster size up to 120 nodes (was 64 nodes)

Maintainability
● Non-disruptive upgrades

New package delivery options
● Red Hat Storage Starter Pack SKU

Other important enhancements in RHS 3.0

Brick resource changes

SSDs as bricks
● SSDs are now officially supported for use as a brick component

SAN resources
● SAN disk resources may be used as bricks (architecture review required)

Red Hat Storage Gluster edition console: simplified management

● Intuitive user interface

● Manages massive scale-out

● Installation and configuration

● Volume management

● On-premise and public cloud

● Integrates with RHEV-M

Inktank Ceph Enterprise 1.2

Key Themes in Inktank Ceph Enterprise v1.2

Enterprise readiness
● RADOS management
● User quotas
● RHEL 7 support

Lower TCO
● Erasure coding (a toy sketch follows below)

Performance
● Primary OSD affinity
● Cache tiering
● Key/value OSD backend
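To make the erasure-coding TCO point concrete, here is a toy sketch (not Ceph's actual implementation, which uses pluggable backends such as jerasure): k data chunks plus m coding chunks survive the loss of any m chunks, at far lower raw-capacity overhead than full replication.

```python
# Toy erasure-coding demo: k=2 data chunks + m=1 XOR parity chunk.
# Any single lost chunk is recoverable at 1.5x raw capacity, versus
# the 3x consumed by triple replication. Illustration only.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

obj = b"abcdefgh"
d1, d2 = obj[:4], obj[4:]        # split the object into 2 data chunks
parity = xor_bytes(d1, d2)       # compute 1 coding (parity) chunk

# Simulate losing d1: rebuild it from the surviving chunk and parity.
recovered = xor_bytes(d2, parity)
assert recovered == d1 and recovered + d2 == obj
```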

Intro to Ceph Storage

Ceph RADOS

RADOS: A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors

Reliable Autonomous Distributed Object Store

Ceph LIBRADOS

LIBRADOS: A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)


A library to access RADOS (see the Python sketch below)
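For example, the Python binding (python-rados) gives direct object access; a minimal sketch, assuming a reachable cluster configured in /etc/ceph/ceph.conf and an existing pool named "data":

```python
# Minimal librados sketch using the python-rados binding.
# Assumes /etc/ceph/ceph.conf, valid client credentials, and a pool "data".
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")       # I/O context for one pool
    try:
        ioctx.write_full("greeting", b"Hello RADOS")   # store an object
        print(ioctx.read("greeting"))                  # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```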

Ceph Unified Storage

RGW: A web services gateway for object storage, compatible with S3 and Swift

RBD: A reliable, fully distributed block device with cloud platform integration

CEPHFS: A distributed file system with POSIX semantics and scale-out metadata management

[Diagram: apps consume RGW, hosts/VMs consume RBD, and clients consume CephFS; all three layer on LIBRADOS and the RADOS store.]
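As a hedged example of the block path, an RBD image can be created and exposed through the in-kernel driver with the rbd CLI; a minimal Python sketch (pool and image names are illustrative):

```python
# Sketch of the RBD block path via the rbd CLI.
# Assumes a default "rbd" pool and client credentials; names illustrative.
import subprocess

def rbd(*args: str) -> None:
    subprocess.run(["rbd", *args], check=True)

# Create a 4 GB image (size is given in MB) in the "rbd" pool.
rbd("create", "demo-image", "--size", "4096", "--pool", "rbd")

# Map it through the kernel RBD driver; it shows up as a /dev/rbd* device.
rbd("map", "demo-image", "--pool", "rbd")
```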

Ceph Object Storage Daemons

[Diagram: each object storage daemon (OSD) owns one disk and sits on a local filesystem (btrfs, xfs, or ext4); a small set of monitors (M) runs alongside the OSDs.]

Ceph RADOS cluster

[Diagram: an application talks directly to the RADOS cluster: many OSD nodes plus a small, odd number of monitors (M).]

Ceph RADOS Components

OSDs:

10s to 10,000s in a cluster

One per disk (or one per SSD, RAID group…)

Serve stored objects to clients

Intelligently peer for replication and recovery

Monitors:

Maintain cluster membership and state

Provide consensus for distributed decision-making

Deployed as a small, odd number

Do not serve stored objects to clients

Ceph CRUSH algorithm: Dynamic Data Placement

CRUSH (see the toy sketch after this list):

Pseudo-random placement algorithm

Fast calculation, no lookup

Repeatable, deterministic

Statistically uniform distribution

Stable mapping

Limited data migration on change

Rule-based configuration

Infrastructure topology aware

Adjustable replication

Weighting
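The following toy sketch (not the real CRUSH code) captures the flavor of such a placement function: rendezvous (highest-random-weight) hashing is deterministic and repeatable, needs no lookup table, spreads objects statistically uniformly, and remaps few objects when the OSD set changes.

```python
# Toy CRUSH-flavoured placement (NOT the actual CRUSH algorithm):
# rendezvous hashing shares the properties listed above.
import hashlib

def place(obj: str, osds: list[str], replicas: int = 3) -> list[str]:
    """Pick `replicas` OSDs for an object by pure calculation."""
    def score(osd: str) -> int:
        digest = hashlib.sha1(f"{obj}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(8)]
print(place("my-object", osds))              # repeatable across runs
print(place("my-object", osds + ["osd.8"]))  # adding an OSD moves little data
```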

Ceph Unified Storage

● OBJECT STORAGE: equivalent to Amazon S3

● BLOCK STORAGE: equivalent to Amazon EBS

● FILE SYSTEM: not yet enterprise-supported

Ceph with OpenStack

[Diagram: OpenStack consumes Ceph through its APIs: Keystone and Swift talk to the Ceph Object Gateway (RGW), while Cinder, Glance, and Nova use the Ceph Block Device (RBD) through the Qemu/KVM hypervisor; both paths are backed by the Ceph storage cluster (RADOS).]

Ceph as Cloud Storage

[Diagram: a web application's app servers talk S3/Swift to a scaled-out row of Ceph Object Gateways (RGW), backed by the Ceph storage cluster (RADOS).]
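A hedged sketch of that S3 path from an app server, using the classic boto library against an RGW endpoint (host, credentials, and bucket name are illustrative):

```python
# Minimal S3-over-RGW sketch with boto; endpoint and creds illustrative.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    host="rgw.example.com",      # the RGW endpoint, not AWS itself
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket("demo-bucket")
key = bucket.new_key("hello.txt")
key.set_contents_from_string("Hello RGW")    # write an object
print(key.get_contents_as_string())          # read it back
```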

Ceph Cloud Storage including DR

[Diagram: two independent sites, US-EAST and EU-WEST, each with its own web application, app server, Ceph Object Gateway (RGW), and Ceph storage cluster, providing disaster recovery across regions.]

Ceph Web Scale Applications

[Diagram: app servers behind a web application talk the native protocol (librados) directly to the Ceph storage cluster (RADOS), with no gateway in between.]

Ceph Cold Storage

[Diagram: the application writes to a replicated cache pool, which fronts an erasure-coded backing pool inside the Ceph storage cluster.]
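Such a tier is wired up with the ceph CLI; a minimal sketch, assuming an existing erasure-coded pool "cold-ec" and replicated pool "hot-rep" (both names illustrative):

```python
# Sketch of Ceph cache tiering via the ceph CLI.
# Assumes pools "cold-ec" (erasure-coded) and "hot-rep" (replicated).
import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

# Attach the replicated pool as a cache tier in front of the EC pool.
ceph("osd", "tier", "add", "cold-ec", "hot-rep")

# Absorb writes in the cache and flush/evict cold data to the backing pool.
ceph("osd", "tier", "cache-mode", "hot-rep", "writeback")

# Route client I/O aimed at the backing pool through the cache tier.
ceph("osd", "tier", "set-overlay", "cold-ec", "hot-rep")
```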

Ceph management: Calamari

Hands-on Red Hat Storage workshop

Red Hat Storage testing on Amazon Web Services (AWS)

https://engage.redhat.com/aws-test-drive-201308271223

THANK YOU

“...A DISRUPTIVE AND UNSTOPPABLE FORCE.”

– IDC REPORT
