NL VMUG UserCon – March 19 2015
Goodbye SAN huggers, Hello Virtual SAN and Virtual Volumes
vSAN & VVOL Deepdive

Cormac Hogan, Solution Architect
Rawlinson Rivera, Principal Architect
Duncan Epping, Chief Technologist
We need to transform storage the way we transformed compute

Fast and Simple Provisioning | Application-Centric Control | Automated Self-Tuning
Customers Face Several Challenges with Storage Today

Device-centric Silos
✖ Static classes of service
✖ Rigid provisioning
✖ Lack of granular control
✖ Frequent data migrations

Complex Processes (VI Admin / Storage Admin / App Admin)
✖ Time consuming processes
✖ Lack of automation
✖ Slow reaction to requests

Specialized, Expensive HW
✖ Not commodity
✖ Low utilization
✖ Overprovisioning
A New Approach is Needed: Software-Defined Storage

From Storage Today to Software-Defined Storage:

New Control Plane – From Hardware-centric to App-centric
• Policy-driven automation
• Common across arrays
• Dynamic control

New Data Plane – From Specialized to Industry Standard Hardware
• Server SAN
• Flash accelerated
• Distributed
The hypervisor is best positioned

Why the Hypervisor:
• Over 70% of x86 server workloads are virtualized(1)
• It's inherently app-aware
• Sits directly in the I/O path
• Has a global view of underlying storage resources
• It's hardware agnostic

(1) Gartner Market Trends: x86 Server Virtualization, Worldwide, 2013
VMware Software-Defined Storage – The Control Plane
Storage Policy-Based Management Delivers App-centric Storage Automation

Policy dimensions: Capacity, Performance, Availability, Data Services
Example policy: 2 failures to tolerate; reserve thick 10 GB; reserve 200 IOPS; snapshot; replication; deduplication

• Intelligent placement
• Fine control of services at the VM level
• Automation at scale through policy
• Extensibility to storage arrays through vSphere Virtual Volumes
The VMware Software-Defined Storage Vision
Transforming Storage the Way Server Virtualization Transformed Compute

VMware® vSphere® Storage Policy-Based Mgmt
• App-centric storage automation
• Common mgmt across heterogeneous arrays

VMware® Virtual SAN™
• Hyper-converged architecture
• Data persistence delivered from the hypervisor
VMware Virtual SAN

• Storage scale-out architecture built into the hypervisor
• Aggregates locally attached storage from each ESXi host in a cluster
• Dynamic capacity and performance scalability
• Flash-optimized storage solution
• Fully integrated with vSphere and interoperable: vMotion, DRS, HA, VDP, VR …
• VM-centric data operations
VMware Software-Defined Storage – Data Plane

Virtual SAN – Why now?
• Greater CPU core densities
• Server-side flash
• Flash interface standards (i.e. NVMe)

Benefits
• Brings data closer to compute
• Granular, elastic scale-out
• Server-side economics

Hyper-converged: compute and storage from a single x86 platform
Enterprise-Class Scale and Performance

                  Virtual SAN 5.5   Virtual SAN 6.0 Hybrid   Virtual SAN 6.0 All-Flash
Hosts / Cluster   32                64                       64
IOPS / Host       20K               40K                      90K
VMs / Host        100               200                      200
VMs / Cluster     3200              6400                     6400
Max VMDK Size     2TB               62TB                     62TB
Virtual SAN Implementation Details

Virtual SAN requires:
– Minimum of 3 hosts in a cluster configuration
– All 3 hosts must contribute storage
– Locally attached devices
  • Flash-based devices (SSD)
  • Magnetic disks (HDD)
  • Max of 35 capacity devices per host; with 4TB drives in a 64-host cluster, that's almost 9 petabytes!
– Network connectivity
  • 10GbE Ethernet (preferred)
  • 1GbE Ethernet
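As a quick sanity check on the capacity claim above, a back-of-envelope calculation using the quoted limits (35 capacity devices per host, 4TB drives, 64 hosts per cluster):

```python
# Raw Virtual SAN capacity at the quoted maximums:
# 35 capacity devices per host, 4 TB per drive, 64 hosts per cluster.
devices_per_host = 35
tb_per_device = 4
hosts_per_cluster = 64

raw_tb = devices_per_host * tb_per_device * hosts_per_cluster
print(raw_tb)         # 8960 TB
print(raw_tb / 1024)  # 8.75 binary petabytes: "almost 9 PB"
```

Note this is raw capacity before replication overhead; usable capacity depends on the failures-to-tolerate policy applied to the objects.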
Virtual SAN All-Flash

• Flash-based devices used for caching as well as capacity
• Cost-effective all-flash 2-tier model:
  o Cache is 100% write: uses write-intensive, higher grade flash-based devices
  o Persistent storage: can leverage lower cost, read-intensive flash-based devices
• Very high IOPS: up to 90K IOPS/host
• Consistent performance with sub-millisecond latencies
Virtual SAN Hybrid

• Software-defined storage built into vSphere
• Runs on any standard x86 server
• Pools magnetic disks into a shared datastore
• Managed through per-VM storage policies
• Delivers high performance through flash acceleration
• 2x more IOPS with VSAN Hybrid (6.0): up to 40K IOPS / host
• Highly resilient: zero data loss in the event of hardware failures
• Deeply integrated with the VMware stack
3 ways to deploy Virtual SAN

VMware EVO:RAIL HCIA
• Maximum simplicity
• Prebuilt Hyper-Converged Infrastructure Appliance (HCIA)
• EVO:RAIL Rapid Deployment and Configuration Engine
• Single SKU for hardware, software, and SnS
• Transformational user experience

Virtual SAN built from HCL components
• Maximum choice
• Go through the HCL to find a certified combination
• Assemble hardware
• Procure many SKUs: hardware platform, hardware SnS, software + SnS, possible services

Virtual SAN Ready Node
• Manufacturer specifies a hardware configuration from HCL components
• Hardware is prebuilt; customer installs and configures vSphere & Virtual SAN
• Procure: hardware platform, hardware SnS, software + SnS, possible services
HW Considerations – Check the VMware Compatibility Guide

• Any server on the VMware Compatibility Guide
• SAS/SATA/PCIe SSD
• SAS/NL-SAS/SATA HDD
• SAS/SATA controllers
• vSphere edition
Support for Blade-only Direct Attached JBODs (2015 & 2016)

Blade servers connected to storage blades over SAS (direct attach, compute:storage 1:1)

• Manage disks in enclosures
• Enables Virtual SAN to scale on blade servers by adding more storage to blade servers with few or no local disks
• Flash acceleration provided on the server or in the subsystem
• Supported on both VSAN 5.5 and 6.0
• Examples:
  – IBM Flex SEN with x240 Blade Series
  – Dell FX2 with 12G controllers
Yes… really simple!

Virtual SAN is a cluster-level feature, similar to vSphere DRS and vSphere HA

Deployed, configured and managed from vCenter through the vSphere Web Client
– Radically simple

• Configure a VMkernel interface for Virtual SAN
• Enable Virtual SAN by clicking Turn On
Define a policy first…

• Virtual SAN currently surfaces five unique storage capabilities to vCenter Server
• What If APIs
And if you configured it correctly, things are easy!

All VM provisioning operations include access to VM Storage Policies
It is just a datastore like any other, only cooler

Virtual SAN understands the capabilities in the VM Storage Policy, so the VM can be provisioned accordingly
Number of Failures to Tolerate

Number of failures to tolerate
– Defines the number of host, disk or network failures a storage object can tolerate. For "n" failures tolerated, "n+1" copies of the object are created and "2n+1" hosts contributing storage are required
[Figure: Virtual SAN policy "Number of failures to tolerate = 1": a RAID-1 mirror of the vmdk on esxi-01 and esxi-02 (~50% of I/O each) with a witness on esxi-03, connected over the vsan network]
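The "n+1 copies, 2n+1 hosts" rule above can be sketched as a small helper. This is illustrative only: the minimum witness count shown here follows from the quorum math, while the exact witness placement per object is decided by Virtual SAN itself.

```python
# Object layout implied by "Number of failures to tolerate" (FTT = n):
# n+1 replicas, 2n+1 hosts contributing storage; the hosts that do not
# hold a replica carry witness components for quorum.
def vsan_ftt_layout(n):
    replicas = n + 1
    hosts_required = 2 * n + 1
    witnesses = hosts_required - replicas  # minimum witnesses for quorum
    return replicas, witnesses, hosts_required

print(vsan_ftt_layout(1))  # (2, 1, 3): the raid-1 + witness layout in the figure
```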
Number of Disk Stripes Per Object

Number of disk stripes per object
– The number of HDDs across which each replica of a storage object is distributed. Higher values may result in better performance.
[Figure: VSAN policy "Number of failures to tolerate = 1" + "Stripe Width = 2": a RAID-1 mirror of two RAID-0 stripes (stripe-1a/stripe-1b and stripe-2a/stripe-2b) spread across esxi-01 through esxi-04, plus a witness, connected over the vsan network]
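Combining stripe width with failures to tolerate multiplies out as follows. This is a minimal sketch of the component arithmetic, not the actual placement algorithm:

```python
# Minimum data components per object: each of the FTT+1 mirror replicas
# is itself a RAID-0 stripe across `stripe_width` capacity devices.
def data_components(ftt, stripe_width):
    return (ftt + 1) * stripe_width

print(data_components(1, 2))  # 4: stripe-1a/1b mirrored by stripe-2a/2b
```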
Fault Domains: increasing availability through awareness

Configure through the Web Client / Host Profiles / RVC. Create fault domains to increase availability.

Four defined fault domains:
FD1 = esxi-01, esxi-02 (Rack 1)
FD2 = esxi-03, esxi-04 (Rack 2)
FD3 = esxi-05, esxi-06 (Rack 3)
FD4 = esxi-07, esxi-08 (Rack 4)

To protect against one rack failure, only 2 replicas are required: a RAID-1 mirror of the vmdk placed in separate fault domains, plus a witness, connected over the vsan network
VSAN FS: Performance Snapshots and Clones

• Virtual SAN 6.0 uses a new on-disk format (VirstoFS)
• Virtual SAN 5.5 snapshots were based on vmfsSparse
• vsanSparse-based snapshots deliver performance comparable to native SAN snapshots
  – vsanSparse takes advantage of the new on-disk format's writing and extended caching capabilities to deliver efficient performance
  – Redirect-on-Write mechanism
Virtual SAN 6 Delivers New High Performance Snapshots

• New redirect-on-write snapshots
• Greater snapshot depth (up to 32 snapshots per object)
• Minimal performance degradation
  – As low as 2% from base
[Chart: Snapshot Performance: % degradation (0 to 10) vs. snapshot depth (1 to 31), showing <2% impact]
Virtual SAN Partners… just to name a few
vSphere Virtual Volumes: A More Efficient Operational Model For External Storage
vSphere Virtual Volumes: Management & Integration Framework for External Storage

The Basics
• Virtualizes SAN and NAS devices
• Virtual disks are natively represented on arrays
• Enables VM-granular storage operations using array-based data services
• Storage Policy-Based Management enables automated consumption at scale
• Supports existing storage I/O protocols (FC, iSCSI, NFS)
• Industry-wide initiative supported by major storage vendors
• Included with vSphere Standard
vSphere Virtual Volumes Architecture

[Figure: the VI Admin defines storage policies in vSphere; the Storage Admin manages the SAN / NAS array. The array's Vendor Provider (VASA) forms the control path and publishes capabilities (access, capacity, snapshot, replication, deduplication, QoS) for the virtual datastore; the data path runs through a Protocol Endpoint (PE) to the VVOLs]
vSphere Virtual Volumes

• Virtual Volumes
  – Virtual machine objects stored natively on the array
  – No filesystem on-disk formatting required
• There are five types of Virtual Volumes:
  – CONFIG – vmx, nvram, log files, etc.
  – DATA – VMDKs
  – MEM – snapshots
  – SWAP – swap files
  – Other – vendor solution specific

[Screenshot: vSphere Web Client view]
Protocol Endpoints

Protocol Endpoints
• Access points that enable communication between ESXi hosts and storage array systems
  – Part of the physical storage fabric
  – Created by storage administrators

Scope of Protocol Endpoints
• Compatible with all SAN and NAS protocols: iSCSI, NFS v3, FC, FCoE
• A Protocol Endpoint can support any one of the protocols at a given time
• Existing multipath policies and NFS topology requirements can be applied to the PE
Why Protocol Endpoints?

• Today, there are different types of logical management constructs to store VMDKs/objects:
  – NFS mount points
  – IP- or block-based datastores
• Datastores serve two purposes today:
  – Endpoints – receive SCSI or NFS read/write commands
  – Storage containers – hold metadata and data files for a large number of VMs
• Differences between Protocol Endpoints and datastores:
  – A PE no longer stores VMDKs; it is only the access point (SCSI: proxy LUN; NFS: mount point), one entity on the fabric, with 1 VVOL per VMDK on the storage system
  – You won't need as many datastores or mount points as before
• Certain offload operations will be done via VASA; others will be done using the standard protocol commands

datastore = protocol endpoint + storage container
Storage Container

[Figure: the vSphere Web Client shows a datastore; the storage management UI shows a storage container]

• What does the vSphere Admin see?
• Why are we still creating datastores in this new model?
• What do the Storage Admins see?
• How are the storage containers set up?
Storage Containers

• Logical storage constructs for grouping of virtual volumes
• Typically defined and set up by storage administrators on the array in order to define:
  – Storage capacity allocations and restrictions
    • Capacity is based on physical storage capacity
    • Logically partition or isolate VMs with diverse storage needs and requirements
  – Storage policy settings based on data service capabilities
• Minimum one storage container per array
• Maximum depends on the array
vSphere APIs for Storage Awareness (VASA)

[Figure: the vSphere Admin sees storage policies and a datastore in the vSphere Web Client; the Storage Admin sees a storage container and its storage capabilities in the storage management UI, backing the virtual volumes of the virtual machines]
VASA Provider (VP)

• Software component developed by storage array vendors
• ESX and vCenter Server connect to the VASA Provider
• Provides storage awareness services
• A single VASA Provider can manage multiple arrays
• A VASA Provider can be implemented within the array's management server or firmware
• Responsible for creating Virtual Volumes
Storage Capabilities and VM Storage Policies

• Storage capabilities are array-based features and data service specifications, advertised as capabilities, that describe the storage requirements a storage array can satisfy
• Storage capabilities define what an array can offer to storage containers, as opposed to what the VM requires
• An array's storage capabilities are advertised to vSphere through the Vendor Provider and the VASA APIs
• In vSphere, storage capabilities are consumed via VM Storage Policy constructs
• VM Storage Policies are a component of the vSphere Storage Policy-Based Management (SPBM) framework
Storage Policy Based Management (SPBM) – VM Policies
Storage Policy Based Management (SPBM)
VM Provisioning Workflow

vSphere Admin:
1. Create virtual machines
2. Assign a VM Storage Policy
3. Choose a suitable datastore

Under the covers:
• Provisioning operations are translated into VASA API calls and offloaded to the array, which creates the individual virtual volumes (CONFIG, DATA, SWAP) on a storage container that matches the capabilities defined in the VM Storage Policies
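The capability-matching step of this workflow can be sketched as follows. This is a hypothetical illustration, not the VASA API: the function name `compatible` and the capability keys are made up for the example.

```python
# A VM Storage Policy is a set of required capabilities; a storage
# container is compatible when its published capabilities satisfy
# every requirement in the policy.
policy = {"snapshot": True, "replication": True}

containers = {
    "gold":   {"snapshot": True, "replication": True, "dedup": True},
    "bronze": {"snapshot": True, "replication": False},
}

def compatible(policy, capabilities):
    # every required capability must be published with the required value
    return all(capabilities.get(k) == v for k, v in policy.items())

matches = [name for name, caps in containers.items() if compatible(policy, caps)]
print(matches)  # ['gold']
```

SPBM then surfaces only the compatible targets during provisioning, which is why the admin's step 3 above is simply "choose a suitable datastore".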
Snapshots

• A snapshot is a point-in-time, copy-on-write image of a Virtual Volume with a different ID from the original
• Virtual Volumes snapshots are useful for creating a quiesced copy for backup or archival purposes, creating a test-and-rollback environment for applications, instantly provisioning application images, and so on
• Two types of snapshots are supported:
  – Managed snapshots – managed by ESX (vSphere)
    • A maximum of 32 snapshots are supported for fast clones
  – Unmanaged snapshots – managed by the storage array
    • The maximum number of snapshots is dictated by the storage array
vSphere Virtual Volumes Supported Features

Supported vSphere features:
• SPBM
• Thin provisioning
• Linked clones
• Native snapshots
• Protocols: NFS3, iSCSI, FC, FCoE
• View Storage Accelerator (CBRC)
• vMotion
• SvMotion
• DRS
• XvMotion
• vSphere SDK (VC APIs)
• VADP/VDP
• View
• vRealize Operations
• vRealize Automation
• Stateless / Host Profiles

Policy dimensions: capacity, availability, performance, data protection, security
Published capabilities: snapshot, replication, dedupe, encryption
The Benefits of vSphere Virtual Volumes
A More Efficient Operational Model For External Storage

Improves Resource Utilization
• Increase capacity utilization
• Eliminate overprovisioning
• Reduce management overhead

Simplifies Storage Operations
• Eliminate inefficient handoffs between VI and Storage Admins
• Faster storage provisioning through automation
• Simplified change management through flexible consumption
• Self-service provisioning via cloud automation tools

Simplifies Delivery of Service Levels
• Leverage native array-based capabilities
• Fine control at the VM level
• Dynamic configuration on the fly
• Ensure compliance through policy enforcement using automation
vSphere Virtual Volumes Is An Industry-wide Initiative

• 29 partners in the program
• Multiple ready at GA
• Unique capabilities
• And many more…
Use Cases and Case Studies
Customer Case Study: GDF Suez Energie Nederland

Energy company: production, maintenance and distribution of electricity, natural gas and renewable energy; 147,000 employees; 6 DCs in NL

Challenge
• Limited space in the 3 decentralized DCs
• Outdated physical servers
• New solution needed for rapid deployment

Solution
• Virtual SAN implemented on HP servers (rack-mounted & blades)

Results
• Management: 'SAN' management in the same interface
• Low cost: optimal results through efficient reuse of available resources
• Deployment: fast deployment of measurement systems for the rotation speeds of generators and wind turbines

"We can now quickly and easily expand the capacity at a power plant, at considerably lower cost than with a 'traditional' SAN solution. It is now nothing more than adding a server or extra disks."
Rene Helweg, Virtualisation Specialist
Customer Case Study: GDF Suez Energie Nederland

Challenge
• The SAN at the failover location is EOL
• No new vendors or extra support contracts desired
• Outdated dial-in facilities for external suppliers

Solution
• Virtual SAN implemented on HP servers
• VMware Horizon View implemented

Results
• VDI: VDI based on Horizon View & micro-zoning for external suppliers
• Easy to arrange access for external suppliers
• Budget-friendly: a full failover location for mission critical systems at low cost

"Our failover location can be kept running with a minimal configuration, with only the most critical systems permanently present in that environment. In the event of a calamity we can simply add extra disks to deploy the remaining VMs."
Rene Helweg, Virtualisation Specialist

Energy company: production, maintenance and distribution of electricity, natural gas and renewable energy; 147,000 employees; 6 DCs in NL