Best Practices for All-Flash Arrays with VMware vSphere

Vaughn Stewart, VP, Enterprise Architect, Pure Storage
Cody Hosterman, Technical Director, Pure Storage

SER2355BE  #VMworld #SER2355BE

VMworld 2017 Content: Not for publication or distribution


Page 1: Best Practices for All-Flash Arrays with VMware vSphere

Page 2:

© 2017 PURE STORAGE INC.

VMworld Community Session

AGENDA

⎯ Settings, configuration options, etc., designed for every all-flash array regardless of vendor

⎯ Our philosophy…

⎯ What you need to consider

⎯ What you don’t need to think about

⎯ …and importantly, why

Page 3:

BEST PRACTICE BASICS

o VMware is the expert on general vSphere storage settings

o Storage vendors are experts on vSphere with their arrays

SIMPLICITY IS ALWAYS THE GOAL

o Adopt VVols or use large datastores – be sure to consider backup and restore times

o Limit use of RDMs – consider VVols when you can

o Avoid jumbo frames for iSCSI & NFS – inconsistent, marginal gains with added complexity

Page 4:

PERFORMANCE CONFIGURATION: VMFS, QUEUING, ETC.

Page 5:

MULTIPATHING—ROUND ROBIN

Set devices to use Round Robin

⎯ Default configuration is usually Most Recently Used—which is less than ideal

⎯ Maximizes performance for array devices by using all available paths
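A minimal sketch of setting this from the ESXi shell on an already-presented device (the naa identifier is a placeholder):

```shell
# Set the Path Selection Policy for one device to Round Robin
esxcli storage nmp device set -d naa.624a9370xxxxxxxxxxxxxxxx -P VMW_PSP_RR
```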

Page 6:

I/O OPERATIONS LIMIT

Set the I/O Operations Limit to ‘1’ (or sufficiently low)—default is 1,000

⎯ How often ESXi switches logical paths for a given device

⎯ Some vendors recommend somewhat different values (1–20 or so); follow their recommendation

Why change this?

⎯ Performance (NOT a panacea though…), more important for iSCSI

⎯ Path Failover time (reduces from 10’s of seconds to a few seconds normally)

⎯ I/O Balance

⎯ Most Recently Used: 100% imbalance on array ports

⎯ Round Robin, IO=1,000: ~20–30% imbalance on array ports

⎯ Round Robin, IO=1: ~1% imbalance on array ports
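Changing the limit on an already-claimed Round Robin device is one esxcli command; a sketch (device identifier is a placeholder):

```shell
# Make an existing Round Robin device switch paths after every I/O
esxcli storage nmp psp roundrobin deviceconfig set -d naa.624a9370xxxxxxxxxxxxxxxx -t iops -I 1
```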

Page 7:

SATP RULES

Best option? SATP rule

⎯ Always set the SATP rule first thing, PRIOR to provisioning any storage to an ESXi host

⎯ Set via SSH or PowerCLI or Host Profiles

Do this once on every host!

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "VENDOR" -M "MODEL" -P "VMW_PSP_RR" -O "iops=1"

SSH
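The slide shows only the SSH form; the same rule can be added through PowerCLI's esxcli v2 interface. A sketch, assuming $esx holds a connected VMHost object and VENDOR/MODEL are placeholders for your array's SCSI inquiry strings (argument names follow the esxcli long options and may vary by PowerCLI version):

```powershell
# Add the SATP claim rule via the esxcli v2 interface
$esxcli = Get-EsxCli -VMHost $esx -V2
$ruleArgs = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
$ruleArgs.satp = "VMW_SATP_ALUA"
$ruleArgs.vendor = "VENDOR"
$ruleArgs.model = "MODEL"
$ruleArgs.psp = "VMW_PSP_RR"
$ruleArgs.pspoption = "iops=1"
$esxcli.storage.nmp.satp.rule.add.Invoke($ruleArgs)
```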

Page 8:

COMMON QUESTIONS

What about:

⎯ Volume Size?

⎯ Volume Count?

⎯ VM/VMFS Density?

AFAs don’t necessarily have hard requirements here…

…though, there are considerations…

Page 9:

VMFS SCALE ENHANCEMENTS

Hardware Assisted Locking replaces traditional SCSI Reservations

• SCSI Reservations decreased cluster performance by locking out I/O during metadata changes

• Hardware Assisted Locking locks only the metadata it needs

• Permits simultaneous access and does not lock out I/O

• Also known as Atomic Test and Set (ATS)

Page 10:

WHAT IS A QUEUE DEPTH LIMIT?

A queue is a line, and a queue depth limit is how “wide” that line is. Essentially, how many “things” can be allowed through at once.

Example:

One grocery clerk can help one customer at a time (queue depth limit of 1). So, if there are two customers, one must wait for the first to finish (added latency).

If there are two clerks (queue depth limit of 2), two customers can be helped at a time and neither has to wait (no added latency).

Page 11:

WHAT IS A QUEUE DEPTH LIMIT?

In terms of storage, a queue depth limit has many names:

• Outstanding I/Os

• Concurrent threads

• In-flight I/Os

If the queue depth limit is 32, 32 I/Os can be processed at once; the 33rd must wait, and that wait adds latency.

Page 12:

QUEUE LIMITS, QUEUE LIMITS EVERYWHERE

• Storage Array Queue Depth Limit

• Device Queue Depth Limit

• vSCSI Adapter Queue Depth Limit

• Virtual Disk Queue Depth Limit

Page 13:

ESXi IS DESIGNED TO PROVIDE SOME LEVEL OF FAIRNESS.

Page 14:

STORAGE ARRAY QUEUE LIMITS: WORK FROM THE BOTTOM

This is really the first consideration.

If a volume or a target on an array has a low limit, there is no point in increasing anything above it.

For storage arrays with per-volume or per-target limits, volume/target parallelization is the key—not ESXi tuning. If the array does not have this limit, move on.

Page 15:

HBA DEVICE QUEUE DEPTH LIMIT

This is an HBA setting that controls how many outstanding I/Os can be queued on a particular device

Values are configured via esxcli

Changing requires a reboot of ESXi

https://kb.vmware.com/kb/1267

Depending on your HBA vendor, the default value varies:

Type           | Default Value | Value Name
QLogic         | 64            | ql2xmaxqdepth
Brocade        | 32            | bfa_lun_queue_depth
Emulex         | 32            | lpfc0_lun_queue_depth
Cisco UCS      | 32            | fnic_max_qdepth
Software iSCSI | 128           | iscsivmk_LunQDepth
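These values are set as driver module parameters. A hedged sketch for a QLogic native FC driver—the module name and value are illustrative; check `esxcli system module list` for your driver (a reboot is required afterwards):

```shell
# Set the per-device queue depth for the QLogic native FC driver
esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=64"
```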

Page 16:

DISK.SCHEDNUMREQOUTSTANDING

“No of outstanding IOs with competing worlds”

• Controls the number of active I/Os issued to a device when there is more than one VM

DSNRO can be set to a maximum of:

• 6.0 and earlier: 256

• 6.5 and on: Whatever the HBA Device Queue Depth Limit is

[root@esxi-01:~] esxcli storage core device list -d naa.624a937076a1e05df05441ba000253a3
naa.624a937076a1e05df05441ba000253a3 (naa.624a937076a1e05df05441ba000253a3)
   Vendor: PURE
   Model: FlashArray
   Revision: 8888
   SCSI Level: 6
   Is Pseudo: false
   Status: on
   <…>
   Device Max Queue Depth: 128
   No of outstanding IOs with competing worlds: 128
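DSNRO is a per-device setting, changed with esxcli as well; a sketch (device identifier and value are placeholders):

```shell
# Set 'No of outstanding IOs with competing worlds' for one device
esxcli storage core device set -d naa.624a937076a1e05df05441ba000253a3 -O 64
```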

Page 17:

DSNRO VS HBA DEVICE QUEUE DEPTH LIMIT

The actual effective device queue limit will be the minimum of:

• HBA Device Queue Depth Limit

• DSNRO

Page 18:

PARAVIRTUAL SCSI ADAPTER

For high-performance workloads, Paravirtual SCSI adapter is always best

Better CPU efficiency with heavy workloads

Higher default and maximum queue depths

VMware Tools includes the drivers

https://kb.vmware.com/kb/1010398

*A few slots are reserved for the virtualization layer, so actual available queue slots are slightly lower

Setting                          | Value
Default Adapter Queue Depth      | 256
Maximum Adapter Queue Depth      | 1,024
Default Virtual Disk Queue Depth | 64
Maximum Virtual Disk Queue Depth | 256

Page 19:

PVSCSI CONTINUED…

• Simply switching to PVSCSI virtual adapters, or only increasing its limits, is unlikely to improve performance on its own

• Device queue depth limit is usually 32 (min of HBA and DSNRO)

Otherwise, you are just moving the bottleneck from the guest to the ESXi kernel

Page 20:

QUEUE METRICS

DQLEN–this is the configured queue depth limit for the volume. This value is identified by looking at the configured HBA queue depth limit, and DSNRO

ACTV–this is the number of slots currently in use by the workload going to the volume. This value will never exceed DQLEN

QUED–this value is populated if the workload exceeds what DQLEN allows. If ACTV = DQLEN, anything over and beyond that will be queued to wait to be sent to the volume.

If QUED is non-zero, latency is added and reported in KAVG

Page 21:

SHOULD I CHANGE QUEUE DEPTH LIMITS WITH MY AFA?

IN GENERAL: NO

Page 22:

Little's Law: The long-term average number of customers in a stable system, L, is equal to the long-term average effective arrival rate, λ, multiplied by the average time a customer spends in the system, W.

L = λW

Page 23:

LITTLE’S LAW IN ACTION

Let’s use our grocery store analogy:

If one customer takes 1 minute to check out (get through the line) and there is one clerk, a store can serve 60 customers in an hour.

If there are two clerks, the store can serve 120 customers in an hour.
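The grocery-store numbers above follow directly from Little's Law; a quick sketch of the arithmetic (times in minutes):

```shell
# Little's Law: L = lambda * W, so throughput (lambda) = L / W.
# W = 1 minute of service time per customer; L = number of clerks (queue depth).
one_clerk=$(( 1 * 60 / 1 ))   # customers served per hour with one clerk
two_clerks=$(( 2 * 60 / 1 ))  # customers served per hour with two clerks
echo "$one_clerk $two_clerks"
```

Doubling the queue depth (clerks) doubles throughput only while there are customers waiting to fill the extra slot.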

Page 24:

SOME QUICK MATH…

Let’s suppose the latency is 0.5 ms:

• 1 second = 1,000 ms

• 1,000 ms / 0.5 ms per I/O = 2,000 I/Os per second per queue slot

• 2,000 × 32 outstanding I/Os = 64,000 IOPS

With this latency, we would expect 64,000 IOPS maximum per datastore, per host.
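A quick sketch of that arithmetic, working in microseconds to keep the math integral:

```shell
# 0.5 ms per I/O, expressed in microseconds
service_us=500
per_slot=$(( 1000000 / service_us ))  # I/Os per second through one queue slot
max_iops=$(( per_slot * 32 ))         # with a queue depth limit (DQLEN) of 32
echo "$per_slot $max_iops"
```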

Page 25:

DO I NEED TO CHANGE THIS STUFF?

Depends. Usually no.

YES IF: you see host-introduced latency and/or need more available throughput or IOPS—then increase the queue depth limits.

NO BECAUSE: Most workloads are distributed across VMs, hosts and/or volumes (parallel queues)

NO BECAUSE: Low-latency arrays (AFAs) are less likely to need changes—they empty out the queue (i.e. service the I/Os) very fast—more IOPS with lower limits

Page 26:

vSPHERE PERFORMANCE FEATURES

Page 27:

FLASHARRAY CONSISTENT SUB-MS LATENCY

Page 28:

STORAGE I/O CONTROL

Storage I/O Control?

Latency checks are based on device latency (array + SAN), so with AFAs it is unlikely to be invoked

Page 29:

STORAGE DRS

Storage DRS performance moves?

Storage DRS includes kernel latency (ESXi queue + SAN + array—aka VM Observed Latency, aka GAVG), so it can be helpful when queue depth limits are exceeded

Page 30:

vSPHERE IOPS LIMITS

IOPS limits in vSphere may be worth a look if you are opening up queues, to give more important VMs priority

In ESXi 6.5 they can be set via storage policy

In ESXi 6.0 and earlier set per-VM manually:
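A hedged PowerCLI sketch for the per-VM route—the VM and disk names are placeholders, and the parameter names may vary by PowerCLI version:

```powershell
# Cap one virtual disk of a VM at 1,000 IOPS
$vm = Get-VM -Name "SQL-VM01"
$disk = Get-HardDisk -VM $vm -Name "Hard disk 1"
$vm | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -Disk $disk -DiskLimitIOPerSecond 1000
```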

Page 31:

I/O LATENCY IMPACTS CPU & MEMORY

Page 32:

MYTH BUSTING: I DON’T NEED THE PERFORMANCE OF AN AFA

The common view is all-flash storage accelerates applications – this is true

Most VMs are ‘Tier-2’ and thus most think they don’t need the performance of all-flash storage – this is a misunderstanding of the impact I/O latency has on Hypervisor CPU & Memory

Storage I/O is orders of magnitude slower than CPU processing

All-flash allows data to be processed faster, freeing CPU cycles to process more data; the memory associated with each I/O is also released sooner.

Bottom Line: Reducing I/O Latency allows for…

1. More DB transactions per CPU (faster applications)

2. More VMs per CPU (reduced infrastructure costs)

Page 33:

STORAGE EFFICIENCY: REDUCING THE TOTAL COST PER VM

Page 34:

DATA REDUCTION: A COLLECTION OF TECHNOLOGIES & IMPLEMENTATIONS

DEDUPLICATION:

• Inline or Background

• Nominal difference in final results

• Relates to performance impact relative to architecture design

• What Matters to Results

• Global vs Volume/Disk Grp

• One copy vs multiple copies

• Global is Better i.e. Inter-Vol XCOPY

• Granularity of Block Size

• 16KB / 8KB / 4KB / 512B

• Smaller is Better

THIN PROVISIONING:

• Thin to Start is Lowest Common Denominator

• What Matters to Results

• Override Host Thick Provisioning

• Zero & Pattern Removal

• VM Filesystem and Datastore UNMAP

• Remove deleted data from storage layer

• Found only with SCSI-Based Datastores & VVols

COMPRESSION:

• Inline or Background

• All inline is somewhat light-weight in order to preserve performance

• What Matters to Results

• Single vs Multiple algorithms

• CPU resource impact

• Align algorithm to data type

BOTTOM LINE: The more technologies, the better the results

Page 35:

YOUR STORAGE IS STORING DELETED DATA

Dead Space accumulation occurs in two places:

⎯ Datastore: On the VMFS after a VM has been deleted or moved.

⎯ In-VM: Inside a virtual machine’s virtual disk when a file has been deleted or moved.

[Diagram: virtual machines on a VMFS volume and the resulting array usage—dead space on the VMFS after deleting a VM, and dead space in a virtual disk after deleting data inside a VM]

Page 36:

ESXi 5.5 & 6.0: MANUAL UNMAP OF VMFS DATASTORES

Page 37:

RUNNING MANUAL VMFS UNMAP (ESXi 5.5 AND 6.0)

UNMAP can be executed in a variety of ways:

SSH into ESXi:

esxcli storage vmfs unmap -l <VMFS> -n <block count>

PowerCLI:

$esxcli = Get-EsxCli -VMHost $esx -V2

$unmapargs = $esxcli.storage.vmfs.unmap.CreateArgs()

$unmapargs.volumelabel = $datastore.Name

$unmapargs.reclaimunit = $blockcount

$esxcli.storage.vmfs.unmap.Invoke($unmapargs)

Web Client plugins

vRealize Orchestrator

Page 38:

ESXi 6.5: AUTOMATIC UNMAP OF VMFS DATASTORES

Page 39:

AUTOMATIC UNMAP IN VMFS 6 (NEW IN VSPHERE 6.5)

In vSphere 6.5 with VMFS 6, VMware (re)introduces Automatic UNMAP

Per-datastore setting

All attached ESXi hosts run a background crawler and asynchronously issue UNMAP

• Can take 12+ hours to reclaim

• Must have powered-on VMs
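The per-datastore setting can be inspected or changed from the ESXi shell; a sketch against ESXi 6.5 (the datastore name is a placeholder, and option names may vary by build):

```shell
# Check and set the automatic space-reclamation priority on a VMFS 6 datastore
esxcli storage vmfs reclaim config get -l MyDatastore
esxcli storage vmfs reclaim config set -l MyDatastore -p low
```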

Page 40:

ESXi 6.0 & 6.5: AUTOMATIC VM FILESYSTEM UNMAP

Page 41:

EnableBlockDelete Setting

ESXi can also automatically reclaim the array-level dead space after the virtual disk has been shrunk.

Turn on EnableBlockDelete to complete the end-to-end UNMAP

This is not really required in ESXi 6.5+
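EnableBlockDelete is an advanced host setting; a sketch of turning it on from the ESXi shell:

```shell
# Enable end-to-end guest UNMAP passthrough on VMFS (0 = off, 1 = on)
esxcli system settings advanced set -i 1 -o /VMFS3/EnableBlockDelete
```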

Page 42:

VM FILESYSTEM UNMAP IN VSPHERE 6.X

[Diagram: VMFS volume, virtual machines, and array usage—create a VM, fill it with data, delete data inside the VM, then issue UNMAP from the guest; ESXi shrinks the VMDK and the dead space is reclaimed from the array]

Requires:

• ESXi 6.0+

• VM Hardware version 11

• Thin virtual disk

• EnableBlockDelete

Page 43:

VM FILESYSTEM UNMAP: SHRINK THE VMDK

Page 44:

VM FILESYSTEM UNMAP RESULTS

Customer with 6.25 TB of thin VMs

Reduced to 1.25 TB

5:1 Data Reduction

Customer enables VM Filesystem UNMAP

Reduced to 740 GB

8.4:1 (unreported) Data Reduction

Page 45:

WINDOWS TIP: VM FILESYSTEM UNMAP

Two options:

1. Scheduled UNMAP via Disk Optimizer
   – Weekly is the default
   – Can be daily or monthly

2. Automatic UNMAP—issue UNMAP when a file is deleted or moved

WINDOWS (2012 R2, 2016, 8, 10)
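Option 1 can also be triggered on demand from PowerShell inside the guest; a sketch (the drive letter is a placeholder):

```powershell
# Send TRIM/UNMAP for all free space on a thin-provisioned volume
Optimize-Volume -DriveLetter D -ReTrim -Verbose
```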

Page 46:

LINUX TIP: VM FILESYSTEM UNMAP

A few options:

1. Scheduled UNMAP via fstrim and cron (recommended for existing VMs)
   – Easiest to implement across existing deployed VMs
   – Leverage /etc/cron.d & an automation tool (Puppet)
   – Non-disruptive compared to mount options

2. Automatic UNMAP via the filesystem ‘discard’ mount option (recommended for VM templates)

   pureuser@ubuntu:/mnt$ sudo mount /dev/sdd /mnt/unmaptest -o discard

LINUX (EXT4, XFS, ETC.)
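A sketch of the cron route—the schedule and mount point are illustrative:

```shell
# /etc/cron.d/fstrim — trim the filesystem weekly, Sunday 02:00
0 2 * * 0 root /sbin/fstrim /mnt/unmaptest
```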

Page 47:

ESXi TIP: ONLY UNMAP WITH 6.5 PATCH 1 (FIXES ALIGNMENT TOLERANCE ISSUE)

VMFS requires all UNMAPs sent from a guest to be aligned to 1 MB boundaries

Not all UNMAPs sent from guests are aligned, so UNMAP was blocked and nothing was reclaimed

This changes in ESXi 6.5 P1:

• Passes along any UNMAPs that are aligned

• Mis-aligned UNMAPs are instead zeroed out

Provides:

• 100 % reclamation on the FlashArray

• Much higher percentage of space returned on VMFS

• Allows fstrim in Linux

• Supports all allocation unit sizes in Windows

Not an issue for AFAs with zero removal or deduplication

Page 48:

ESXi 6.0 AND 6.5 DIFFERENCES

                                   | ESXi 6.0             | ESXi 6.5                        | ESXi 6.5 P1
SCSI Version                       | 2                    | 6                               | 6
Guest OS Support                   | Manual, Windows only | Manual Windows, Automatic Linux | Manual and Automatic, Windows and Linux
Works with Change Block Tracking?  | No                   | Yes                             | Yes
Virtual Disk Type Supported        | Thin                 | Thin                            | Thin

Page 49:

UNMAP: THE MORE YOU KNOW

Datastore & VM filesystem UNMAP reduce storage consumption

UNMAP savings are often unreported by arrays

With deduplicated data, UNMAP only removes metadata pointers—no physical capacity is freed

UNMAP will remove unique data that has been deleted

Page 50:

WAIT A MINUTE, AREN’T THIN VMDKS SLOW?

Page 51:

THIN vs. THICK: APPLIES TO AFAs… NOT NECESSARILY TO HYBRID OR DISK STORAGE PLATFORMS

Historically Thin VMDKs traded off performance for efficiency

⎯ VMFS paused I/O as new sectors were allocated to the VMDK and formatted

⎯ Performance differences can be observed when performance benchmarking unstructured data in a guest filesystem

Most performance-sensitive applications pre-allocate space (e.g. SQL Server, Oracle Database) and perform optimally on thin VMDKs

Page 52:

VIRTUAL VOLUMES

Page 53:

VIRTUAL VOLUMES

Virtual Volumes provide per-VMDK granularity on the underlying array

• Array-based snapshots

• Policy-driven

• No VMFS in the way

• No more VMFS UNMAP!

Therefore:

1. Can apply array-features and/or VMware features at a per-VMDK granularity (QoS, replication, etc.)

2. Opens up data mobility: a VVol is a volume, so it can be shared with or copied to anything

Page 54:

Virtual Volumes and Performance

Does using virtual volumes bypass all of this queue stuff?

No more datastore = no more DSNRO. Right?

No.

VVols share a queue depth limit from the protocol endpoint they are bound to

Understand the performance limits of a single volume (PE) on your array

The default queue depth limit of PEs is 128, but can go up to 4,096

*Quick math at 0.5 ms: 8,192,000 IOPS per PE per host!

Need more than one PE?

Page 55:

IN CONCLUSION

A BIG THANK YOU to you, the VMworld community, for voting for this session

Please complete a survey for this session – SER2355BU

Follow us online:

Cody:

@CodyHosterman

http://CodyHosterman.com

Vaughn:

@vStewed

http://VaughnStewart.com
