

Implementing Virtual Provisioning on EMC CLARiiON and Celerra with

VMware Infrastructure Applied Technology

Abstract

This white paper provides a detailed description of the technical aspects and benefits of deploying VMware Infrastructure version 3.5 on EMC® Celerra® and CLARiiON® devices using Virtual Provisioning™.

February 2009


Copyright © 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com

All other trademarks used herein are the property of their respective owners.

Part Number h6131


Table of Contents

Executive summary
Introduction
    Audience
    Terminology
Overview
    CLARiiON Virtual Provisioning
        Requirements
    Celerra Virtual Provisioning
        Celerra NFS Virtual Provisioning
        Celerra iSCSI Virtual Provisioning
        Requirements
Considerations for VMware Infrastructure with Virtual Provisioning
    VMFS, NFS, and RDM considerations with Virtual Provisioning
        Device visibility and access
        VMware File System datastore on thin devices
        NFS datastore on virtually provisioned file systems
        Raw device mapping volumes on thin devices
    Virtual machine considerations on thin devices
        Creating VMware virtual machines on thin devices
        Impact of guest operating system activities on thin pool utilization
        Exhaustion of oversubscribed datastores
        Nondisruptive expansion of virtual disk on thin devices
        Expansion of a virtual datastore on thin devices
    vCenter server considerations with Virtual Provisioning
        Cloning virtual machines using VMware Infrastructure Client
        Cloning virtual machines using VMware vCenter Converter
        VMware VMotion, DRS, HA, and Virtual Provisioning
        Cold migration and Virtual Provisioning
        Hot migration using Storage VMotion and Virtual Provisioning
    Considerations for storage-based features on thin devices
        CLARiiON
        Celerra
Performance considerations
    CLARiiON
    Celerra
CLARiiON Virtual Provisioning management
    Thin pool management
    Thin pool monitoring
Celerra Virtual Provisioning management
    Thin file system and storage pool management
    Thin file system and storage pool monitoring
Exhaustion of oversubscribed pools
Conclusion
References


Executive summary The EMC® CLARiiON® and Celerra® array technologies continue to evolve to deliver enhanced product capabilities: improved effective storage capacity utilization, optimized performance, increased protection and security, greater flexibility in interoperability support, and ease of use. The Virtual Provisioning™ feature in these products is aimed at improving storage utilization and optimizing performance delivery.

Virtual Provisioning, generally known in the industry as “thin provisioning,” enables organizations to improve ease of use and increase capacity utilization for certain applications and workloads. The implementation of Virtual Provisioning for CLARiiON and Celerra storage arrays directly addresses improvements in storage infrastructure utilization, as well as associated operational requirements and efficiencies.

One of the biggest challenges for storage administrators is provisioning storage for new applications. Administrators typically allocate space based on anticipated future growth of applications. This is done to reduce future operational tasks, such as incrementally increasing storage allocations or adding discrete blocks of storage as existing space is consumed. This approach results in overprovisioning (allocating more physical storage than an application will need for a long time) and incurring a higher cost than is necessary. Overprovisioning also leads to increased power, cooling, and floor space requirements. Even with the most careful planning, it may be necessary to provision additional storage in the future, which could potentially require an application outage.

A second layer of storage overprovisioning occurs when server and application administrators overallocate storage for their environment. The operating system sees the space as completely allocated, although only a fraction of the allocated space is actually used. For example, a file system created on a 100 GB LUN can hold a number of different files whose total size cannot exceed 100 GB. But if the file system initially contains only two user files of 10 GB each, just 20 GB of the 100 GB of allocated storage is actually in use. The remaining 80 GB of unused space cannot be reallocated to a different application, since the file system logically “owns” the entire 100 GB.

EMC Virtual Provisioning addresses both of these issues. It allows more storage to be presented to an application than is physically available. More importantly, Virtual Provisioning allocates physical storage only when the storage is actually written to. This allows more flexibility in predicting future growth, reduces the initial cost of provisioning storage to an application, eliminates the waste that occurs as a result of overprovisioning, and eliminates the need to use further resources for subsequent storage allocations.

Consolidating and optimizing IT resource utilization to manage and reduce IT cost has become a key focus for many enterprises that invest in and rely heavily on technologies supporting their business goals. Virtualization technologies, such as VMware for server virtualization, have been gaining rapid adoption. Storage virtualization technology, such as the Virtual Provisioning feature offered in EMC storage systems, complements VMware technologies to deliver the full cost benefits of an effective IT virtualization implementation.

Introduction This white paper addresses the considerations for deploying VMware Infrastructure version 3.x environments on thinly provisioned devices. An understanding of the principles presented here will allow the reader to deploy VMware Infrastructure environments with Virtual Provisioning in the most effective manner.


Audience This white paper is intended for storage architects, and server and VMware administrators responsible for deploying VMware Infrastructure on the CLARiiON CX4 family using FLARE® release 28 and Celerra using the DART operating environment release 5.5 (or later) with Virtual Provisioning.

Terminology Table 1. Protocol specification terms

Term Description

Fibre Channel (FC) A high-speed networking protocol primarily used in storage area networks. The word Fibre is used as a generic term that can indicate copper or optical implementations of Fibre Channel products.

Internet Small Computer System Interface (iSCSI)

A protocol that enables transport of block data over IP networks and transfers data by carrying SCSI commands over IP networks.

Network File System (NFS)

A distributed file system that allows systems on the network to share remote file systems by allowing the systems to share a single copy of a directory.

Table 2. Basic CLARiiON array and Virtual Provisioning terms

Term Description

CLARiiON LUN Logical subdivisions of RAID groups in a CLARiiON storage system.

LUN Migration A CLARiiON feature that dynamically migrates data to another LUN or metaLUN without disrupting running applications.

MetaLUNs A collection of traditional LUNs that are striped or concatenated together and presented to a host as a single LUN. Additional LUNs can be added dynamically, allowing metaLUNs to be expanded on the fly.

MirrorView™ Software designed for disaster recovery solutions by mirroring local production data to a remote disaster recovery site. It offers two products: MirrorView/Synchronous and MirrorView/Asynchronous.

RAID groups One or more disks grouped together under a unique identifier in a CLARiiON storage system.

Storage pool A general term used to describe RAID groups and thin pools. In the Navisphere® Manager GUI, the storage pool node contains RAID groups and thin pool nodes.

SAN Copy™ Data mobility software that runs on the CLARiiON.

SnapView™ Software used to create replicas of a source LUN. These point-in-time replicas can be pointer-based snapshots or full binary copies called clones or BCVs.

Thin LUN A logical unit of storage where physical space allocated on the storage system may be less than the user capacity seen by the host server.

Thin pool A group of disk drives used specifically by thin LUNs. There may be zero or more thin pools on a system. Disks may be a member of no more than one thin pool. Disks that are in a thin pool cannot also be in a RAID group.

Table 3. Basic Celerra array and Virtual Provisioning terms

Term Description

Automatic File System Extension

Configurable Celerra file system feature that automatically extends a file system created or extended with AVM when the high water mark (HWM) is reached.

Automatic Volume Management (AVM)

Feature of the Celerra Network Server that creates and manages volumes automatically. AVM organizes volumes into storage pools that can be allocated to file systems.


Term Description

Celerra Logical Unit Number (LUN)

Identifying number of a SCSI or iSCSI object in Celerra that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. (The LUN is actually an ID for the logical unit, but the term is often used to refer to the logical unit itself.)

Celerra Replicator™ A Celerra service that produces a read-only, point-in-time copy of a source file system or iSCSI LUN. This point-in-time copy can be either on the same Celerra system (local replication), or on another Celerra system (remote replication). The service periodically updates the copy, making it consistent with the source file system or iSCSI LUN.

Disk Volume A physical storage unit as exported from the storage array. All other volume types are created from disk volumes.

File system Method of cataloging and managing the files and directories on a storage system.

High water mark (HWM) Trigger point at which the Celerra Network Server performs one or more actions, such as extending a file system, as directed by the related feature's software/parameter settings.

iSCSI snapshot A point-in-time copy of an iSCSI LUN.

Persistent Block Reservation (PBR) Technique of reserving an adequate number of blocks in a file system to support the creation of a logical unit of a specified size. The blocks are reserved for the logical unit whether or not they are in use.

Regular iSCSI LUN iSCSI LUN that uses Persistent Block Reservation (PBR) to ensure that the file system has sufficient space for all data that might be written to the LUN.

Snapshot Logical Unit (SLU)

An iSCSI snapshot promoted to logical unit status and configurable as a disk device through an iSCSI initiator.

SnapSure™ On a Celerra system, a feature that provides read-only point-in-time copies, also known as checkpoints, of a file system.

Storage pool Automatic Volume Management (AVM), a Celerra feature, organizes available disk volumes into groupings called storage pools. Storage pools are used to allocate available storage to Celerra file systems. Storage pools can be created automatically by AVM or manually by the user.

Table 4. Related VMware Infrastructure terms

Term Description

Cluster A collection of ESX hosts and associated virtual machines that share resources and a management interface.

Cold Migration Cold migration provides the ability to migrate a virtual machine from one physical ESX host to another, and/or from one storage system to another, with application service interruption (the virtual machine is powered off during the move).

Data center The primary organizational structure used in vCenter Server, which contains hosts and virtual machines.

Datastore A special logical container, analogous to a file system, that hides the specifics of each storage device and provides a uniform model for storing virtual machine files. Depending on the type of storage used, ESX datastores can have a VMFS or NFS file system format.

ESX (formerly named ESX Server)

VMware’s high-end server product that installs directly on the physical hardware and therefore offers the best performance.

Guest operating system An operating system that runs on a virtual machine.

Network File System (NFS)

File system on a NAS storage device. ESX 3.x supports NFS version 3 over TCP/IP. ESX can access a designated NFS volume located on an NFS server. ESX mounts the NFS volume and uses it for its storage needs. NFS is one of the two datastore formats that are available with ESX (the other is VMFS).


Term Description

Raw device mapping (RDM)

A raw device mapping volume consists of a pointer in a .vmdk file and a physical raw device. The pointer in the .vmdk file points to the physical raw device. The .vmdk file resides on a VMFS volume, which must reside on shared storage.

Storage VMotion Technology that allows users to migrate a virtual machine from one storage system to another while the virtual machine is up and running.

Templates A way to import virtual machines and store them as templates that can be deployed at a later time to create new virtual machines.

VMware Infrastructure (VI) Client

A VMware application that provides an interface for data center management and virtual machine access. VI Client is one of the available methods for accessing a VMware virtual data center.

Virtual disks Disks presented to a virtual machine from a VMFS volume.

Virtual machine A virtualized x86 PC environment on which a guest operating system and associated application software can run. Multiple virtual machines can operate on the same physical machine concurrently.

vCenter Server (formerly named Virtual Center)

A VMware Infrastructure management product that manages and provides valuable services for virtual machines and underlying virtualization platforms from a central, secure location.

VMotion VMotion technology provides the ability to migrate a running virtual machine from one physical ESX host to another without application service interruption.

VMware File System (VMFS)

A clustered file system that stores virtual disks and other files that are used by virtual machines. VMFS is one of the two datastore formats that are available with ESX (the other is NFS).

Overview This section gives an overview of CLARiiON and Celerra Virtual Provisioning and outlines the requirements for enabling the Virtual Provisioning feature of CLARiiON and Celerra storage systems.

CLARiiON Virtual Provisioning CLARiiON thin LUNs are logical LUNs that can be used in many of the same ways that traditional CLARiiON LUNs are used. Unlike traditional CLARiiON LUNs, thin LUNs do not need to have physical storage completely allocated at the time the LUN is created and presented to a host. A thin LUN is not usable until it has been bound to a shared storage pool known as a thin pool. Multiple thin LUNs may be bound to any given thin pool. The thin pool is composed of disks that provide the actual physical storage to support the thin LUN allocations.

When a thin LUN is created, 2 GB of physical storage is mapped from the thin pool to the LUN. When a write to a thin LUN requires more storage than the 2 GB allocated up front, the CLARiiON’s mapping service allocates additional storage to the thin LUN from the thin pool; it allocates the amount of storage needed for the write in 8 KB chunks (extents), optimally packed. This approach reduces the amount of storage that is actually consumed.

When a read is performed on a thin LUN, the data read is retrieved from the appropriate disk in the thin pool to which the thin LUN is associated. If for some reason a read is performed against an unallocated portion of the thin device, zeroes are returned to the reading process.

When more physical data storage is required to service existing or future thin devices, the thin pool can be expanded by adding additional disks to existing thin pools dynamically (without a system outage). A thin pool can be expanded when it is approaching full storage allocations, and new thin LUNs can be created and associated with existing thin pools.
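The allocate-on-write behavior described above can be sketched as a small Python model. This is an illustrative simplification, not EMC code: the `ThinLUN` class and its bookkeeping are invented for this sketch and only mimic the externally visible behavior (2 GB mapped at creation, writes drawing pool space in 8 KB extents, reads of unallocated regions returning zeros).

```python
EXTENT = 8 * 1024            # 8 KB allocation unit (extent)
INITIAL_MAP = 2 * 1024**3    # 2 GB mapped from the thin pool at LUN creation

class ThinLUN:
    """Toy model of a thin LUN's consumption of its thin pool."""

    def __init__(self, user_capacity):
        self.user_capacity = user_capacity  # capacity seen by the host
        self.extents = {}                   # extent index -> backing bytes
        self.pool_consumed = INITIAL_MAP    # mapped up front at creation

    def write(self, offset, data):
        for i, byte in enumerate(data):
            ext, pos = divmod(offset + i, EXTENT)
            if ext not in self.extents:
                self.extents[ext] = bytearray(EXTENT)
                # Extents beyond the initial 2 GB mapping draw new pool space.
                if (ext + 1) * EXTENT > INITIAL_MAP:
                    self.pool_consumed += EXTENT
            self.extents[ext][pos] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            ext, pos = divmod(offset + i, EXTENT)
            out.append(self.extents[ext][pos] if ext in self.extents else 0)
        return bytes(out)
```

In this model, a write inside the first 2 GB of a 500 GB thin LUN leaves the consumed capacity unchanged, a write beyond it consumes one additional 8 KB extent, and a read of a never-written region returns zeros.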

The following figure depicts the relationships between thin LUNs and their associated thin pools. There are six LUNs associated with thin pool A and three LUNs associated with thin pool B.


Figure 1. Thin LUNs and thin pools containing disk drives

Requirements CX4 models running FLARE release 28.5 support the Virtual Provisioning feature. The CLARiiON Virtual Provisioning Enabler must be purchased and installed on the storage system to create and manage thin pools and thin LUNs. Please check the latest FLARE 28.5 release notes available on Powerlink® for additional information.

Celerra Virtual Provisioning Celerra Virtual Provisioning technology is available for NFS and CIFS file systems, and for iSCSI LUNs. Celerra Virtual Provisioning allows you to allocate storage based on actual usage rather than on long-term usage projections. Although it appears to users that the maximum amount of storage has been allocated to them, in reality a much smaller amount of storage has been allocated, and more storage will be allocated when they actually need it.

Celerra NFS Virtual Provisioning Virtual Provisioning is a feature that can be enabled on the Celerra NFS file system; this feature must be used with the Automatic File System Extension feature. When configuring these features, the user must select values for the maximum size parameter and the high water mark (HWM) parameter. The Celerra Control Station will extend the file system when needed depending on the values of these parameters.

Automatic File System Extension guarantees that the file system usage (measured by the ratio of used space to allocated space) will always be at least 3 percent below the HWM. With Automatic File System Extension, when the file system usage reaches the HWM, an automatic extension event notification is sent to the Celerra sys_log and the file system is automatically extended. If Virtual Provisioning is enabled, the maximum size (rather than the amount of storage actually allocated) is presented to the NFS, CIFS, or FTP clients.

If there is not enough free storage space to extend the file system to the requested size, Automatic File System Extension extends the file system to use all of the available storage. (For example, if Automatic File System Extension requires 6 GB but only 3 GB is available, the file system automatically extends by 3 GB.) In this case, an error message appears indicating there was not enough storage space available to


perform automatic extension. When there is no available storage, Automatic File System Extension fails. If this happens, the file system must be manually extended.
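The extension policy described above can be summarized in a short illustrative function. This is a sketch of the behavior as stated in this paper, not Celerra code: `auto_extend_gb` and its `target_margin_pct` parameter are hypothetical names, and the actual extension sizes are computed by the Celerra Control Station's own logic.

```python
def auto_extend_gb(fs_size, fs_used, hwm_pct, pool_free, target_margin_pct=3):
    """Return (new_fs_size, remaining_pool_free), all sizes in GB.

    When usage reaches the high water mark, the file system is extended so
    that usage falls back to at least target_margin_pct below the HWM; if
    the pool cannot supply the full amount, the extension is capped at
    whatever free storage remains.
    """
    usage_pct = 100.0 * fs_used / fs_size
    if usage_pct < hwm_pct:
        return fs_size, pool_free                  # HWM not reached: no action
    wanted_size = 100.0 * fs_used / (hwm_pct - target_margin_pct)
    needed = wanted_size - fs_size
    granted = min(needed, pool_free)               # capped by free storage
    return fs_size + granted, pool_free - granted
```

For example, a 100 GB file system with 90 GB used and a 90 percent HWM needs roughly 3.4 GB more in this model; if only 2 GB of pool space is free, the extension is capped at 2 GB, mirroring the partial-extension case above.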

Celerra iSCSI Virtual Provisioning A Celerra iSCSI LUN is created within a standard Celerra file system, and emulates a SCSI disk device by using a dedicated file called a file storage object. The file storage object provides the physical storage space for data stored on the iSCSI LUN. By default, an iSCSI LUN is created using the Persistent Block Reservation (PBR) storage method (also called a regular iSCSI LUN). With PBR, the entire requested disk size is reserved for the LUN although it is not taken from the reservation pool.

However, when a virtually provisioned iSCSI LUN is created (using the Virtual Provisioning storage method), space is not reserved on the disk for the LUN. Additional space is allocated to the LUN only when it is actually required by the user. Therefore, it is important to ensure that file system space is available before data is added to the LUN. For this reason, Automatic File System Extension must be enabled on the file system on which the virtually provisioned LUN is created. EMC also recommends that you enable Virtual Provisioning on this file system to optimize storage utilization. When using a high water mark (HWM) for Automatic File System Extension, the number of blocks that have been used determines when the HWM is reached. File system extension can occur even when usage of the production LUN appears to be low from the host’s perspective. Snapshots, for example, consume additional file system space. In addition, deleting data from a LUN does not reduce the number of blocks allocated to the LUN. By default, a snapshot LUN (SLU) of a virtually provisioned iSCSI LUN is virtually provisioned, and an SLU of a regular iSCSI LUN is fully provisioned. For a regular iSCSI LUN to be virtually provisioned, the sparstws parameter on the Celerra Data Mover must be adjusted.
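The difference between the two storage methods can be captured in a one-line accounting rule. The function below is purely illustrative (its names are invented for this sketch): a PBR LUN makes the file system set aside the full requested size up front, while a virtually provisioned LUN only consumes what has actually been written, and, as noted above, that allocation never shrinks when data is deleted.

```python
def fs_space_needed_gb(lun_size_gb, high_water_written_gb, method):
    """File system blocks that must be set aside for the LUN, in GB.

    high_water_written_gb is the most data the LUN has ever held:
    deleting data from the LUN does not return blocks to the file system.
    """
    if method == "pbr":
        return lun_size_gb             # entire requested size reserved up front
    if method == "virtual":
        return high_water_written_gb   # space allocated only as data is written
    raise ValueError(method)
```

Under this rule, a 100 GB LUN holding 20 GB of data ties up 100 GB of file system space as a regular (PBR) LUN but only 20 GB as a virtually provisioned LUN.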

Requirements Celerra Virtual Provisioning is supported by all Celerra models with release 5.5 (or later). For NFS, Virtual Provisioning must be used with Automatic File System Extension; the file system should be created or extended using an AVM storage pool; and the Control Station must be running and operating properly for Automatic File System Extension to work correctly. Further information on using Virtual Provisioning with Celerra file systems can be found in the Managing EMC Celerra Volumes and File Systems with Automatic Volume Management technical module available on Powerlink. A virtually provisioned iSCSI LUN can be created only on a file system that has auto extension enabled. Further information on using Virtual Provisioning with Celerra iSCSI LUNs can be found in the Configuring iSCSI Targets on EMC Celerra technical module available on Powerlink.


Considerations for VMware Infrastructure with Virtual Provisioning This section discusses things to consider when deploying a VMware Infrastructure using Virtual Provisioning. In this configuration, the behavior of VMware Infrastructure features depends on the feature itself, how it is used, and the guest operating system running in the VMware Infrastructure environment. This section also discusses how the various VMware Infrastructure features interact with virtually provisioned devices.

VMFS, NFS, and RDM considerations with Virtual Provisioning

Device visibility and access CLARiiON thin LUNs appear like any other SCSI-attached device to the VMware Infrastructure. An example of this is shown in Figure 2. The devices highlighted (vmhba0:0:5 and vmhba0:0:6) are thin LUNs that have been presented from a CLARiiON storage system. A thin LUN can be used to create a VMware File System, or be assigned exclusively to a virtual machine as a raw device mapping (RDM). Similarly, virtually provisioned Celerra iSCSI LUNs will also be discovered by the iSCSI software adapter (or by an iSCSI HBA, if one is connected to the ESX host).

Figure 2. CLARiiON thin LUNs viewed in VMware Infrastructure Client

Virtually provisioned network file systems appear like any other network file system to the VMware Infrastructure. An example of this is shown in Figure 3. The 10.0.3.2:/fs_thin is a virtually provisioned Celerra file system (“/fs_thin” in this example) on a Data Mover that can be reached through the 10.0.3.2 IP address. The virtually provisioned file system can be used to create an NFS datastore (such as fs_thin in this example).


Figure 3. Virtually provisioned file system viewed in VMware Infrastructure Client

VMware File System datastore on thin devices This section outlines the impact of creating a VMFS datastore on a CLARiiON thin LUN or a Celerra virtually provisioned iSCSI LUN.

VMware File System datastore creation and formatting The VMware File System (VMFS) has interesting and valuable characteristics when used in a virtually provisioned environment. In Figure 4, a VMFS is created on a 500 GB CLARiiON thin LUN (device vmhba0:0:5). The amount of storage required to store the VMFS metadata is a function of the size of the thin LUN or device. The metadata for the VMFS on the thin LUN vmhba0:0:5 consumes 563 MB of storage, as shown in Figures 4 and 5.

Figure 6 shows how the CLARiiON operating system responds to the write activity generated by formatting the VMFS. In this case, since a capacity of 2 GB is already allocated during thin LUN creation in Navisphere, creating the 563 MB VMFS does not affect the consumed size of the thin LUN. Therefore, the VMFS does not write all of its metadata to disk when it is created; it formats and uses the reserved area for metadata as requirements arise. This also applies to a virtually provisioned Celerra iSCSI LUN.


Figure 4. VMware File System creation on a CLARiiON thin LUN

Figure 5. Metadata area reserved on the VMware File System


Figure 6. Consumed capacity from a thin pool in response to VMware File System format activity in Navisphere

NFS datastore on virtually provisioned file systems

This section outlines the impact of creating an NFS datastore on a virtually provisioned Celerra file system.

NFS datastore creation and formatting

With Celerra Virtual Provisioning, it is possible to set an initial allocation for the file system. Once created, the NFS datastore is presented to ESX with the maximum size of the virtually provisioned file system (which the user specifies in the maximum size parameter) rather than its actual allocated size. The file system, along with the NFS datastore created on it, is extended automatically as usage grows, according to the settings of the Automatic File System Extension feature. Unlike VMFS, an NFS datastore is managed by the NAS storage system rather than by ESX itself, so no datastore metadata is written by ESX. The Celerra file system metadata is stored on the Celerra file system, and its size is a function of the size of the file system. Unlike the VMFS metadata, the file system metadata is written by Celerra and is therefore not virtually provisioned; the capacity it consumes is not presented to ESX. As shown in Figure 7, an NFS datastore fs_thin was created on the virtually provisioned Celerra file system "/fs_thin". ESX is presented only with the maximum size of the file system (close to 500 GB) and is unaware that only 10 GB was allocated to the file system in Celerra. Upon creation, the NFS datastore consumes 592 KB (NFS datastore used space).
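The Automatic File System Extension behavior described above can be sketched as a simple control loop. The high water mark and extension step used here are illustrative placeholders, not Celerra's exact policy or defaults:

```python
# Hedged sketch of Automatic File System Extension: when usage crosses a
# high water mark, the file system grows toward its maximum size. The 90%
# threshold and 10 GB step are assumptions for illustration only.
def auto_extend(allocated_gb, used_gb, max_gb, hwm=0.9, step_gb=10):
    """Return the allocated size after the auto-extension check runs."""
    while allocated_gb < max_gb and used_gb > hwm * allocated_gb:
        allocated_gb = min(allocated_gb + step_gb, max_gb)
    return allocated_gb

# 10 GB allocated out of a 500 GB maximum; usage has grown to 9.5 GB,
# so the file system is extended, while ESX continues to see 500 GB.
print(auto_extend(allocated_gb=10, used_gb=9.5, max_gb=500))  # 20
```

The key point the sketch captures is that extension happens on the Celerra side and never exceeds the configured maximum size.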


Figure 7. NFS datastore creation on a virtually provisioned Celerra file system

As shown in Figure 8, the file system is virtually provisioned with a maximum size of 500 GB and an initial size of 10 GB. As seen in this figure, the Celerra file system metadata, which is 156 MB in size, is not presented to ESX. Because of this file system metadata, ESX is presented with a 499.85 GB datastore (rather than 500 GB).

Figure 8. Consumed capacity from a virtually provisioned Celerra file system following an NFS datastore creation


Raw device mapping volumes on thin devices

This section outlines the impact of creating an RDM volume on a CLARiiON thin LUN or a virtually provisioned Celerra iSCSI LUN. Creating an RDM volume does not have any impact on the thin LUN, because ESX does not format the thin LUN for an RDM volume. Therefore, when an RDM volume (with physical or virtual compatibility) is presented to a virtual machine and I/O is generated by the guest operating system running in that virtual machine, the VMware kernel does not play a direct role in transferring the I/O. In this configuration, the considerations for using thin LUNs are the same as for physical servers running the same operating system and applications.

Virtual machine considerations on thin devices

Creating VMware virtual machines on thin devices

The same New Virtual Machine wizard (provided by the VMware Infrastructure Client) is used to configure and manage VMware datastores on thin devices as on any other datastore; to the wizard, these datastores are all the same. This is true for both VMFS and NFS datastores.

Creating virtual machines on a VMFS datastore

Figure 9 shows the final step of the New Virtual Machine wizard when creating a virtual machine with a 16 GB virtual disk on a VMFS datastore hosted on a thin device (in this example, a virtually provisioned Celerra iSCSI LUN). When Finish is clicked, the VMware Infrastructure Client performs a number of actions, including creating the virtual disk that supports the virtual machine. Figure 10 shows the storage utilization of the VMFS, and of the thin pool supporting the datastore, after the wizard finishes. The figure shows that only a small amount of storage is initialized when a virtual machine is created. However, as shown in Figure 11, the VMware kernel reserves the storage requirement for the virtual machine on the VMFS for future use.

Figure 9. Using the VMware Infrastructure Client to create a new virtual machine


Figure 10. Thin pool utilization when creating a new virtual machine

Figure 11. VMware datastore utilization on creation of a new virtual machine


Virtual Provisioning works particularly well with VMware Infrastructure because of "zeroedthick", the default allocation mechanism the VMware kernel uses when creating new virtual disks. With this mechanism, the storage required for the virtual disk is reserved in the datastore, but the VMware kernel does not initialize all the blocks; the blocks are initialized by the guest operating system as writes are made to uninitialized blocks¹. Therefore, capacity from the virtually provisioned device is allocated to the virtual disk only when it is needed. The VMware kernel provides a number of allocation mechanisms for creating virtual disks in addition to zeroedthick; all of them are listed in Table 5.

Table 5. Allocation policies when creating new virtual disks on a VMware datastore

Allocation mechanism (virtual disk format): VMware kernel behavior

zeroedthick: All space is allocated at creation, but it is not initialized with zeroes. However, the allocated space is wiped clean of any previous contents. This is the default policy when creating new virtual disks.

eagerzeroedthick: All space is allocated at creation and every block is initialized with zeroes. Because this mechanism performs a write to every block of the virtual disk, it results in equivalent storage use in the thin pool.

thick: All space is allocated at creation. If the guest operating system performs a read from a block before writing to it, the VMware kernel may return stale data. EMC recommends not using this format.

thin: No space is reserved on the VMFS when the virtual disk is created; space is allocated and zeroed on demand. This is the default allocation scheme when using the NFS protocol.

rdm: The virtual disk created with this mechanism is a mapping file that contains pointers to the blocks of the SCSI disk it maps. The SCSI INQUIRY information of the physical media is virtualized. This format is commonly known as "virtual compatibility mode raw disk mapping."

rdmp: Similar to rdm, except that the SCSI INQUIRY information of the physical media is not virtualized. This format is commonly known as "pass-through raw disk mapping."

raw: This mechanism can be used to address all SCSI devices supported by the kernel, except for SCSI disks.

2gbsparse: The virtual disk is broken into multiple sparsely allocated extents (if needed), each no more than 2 GB in size.

As Table 5 shows, the "eagerzeroedthick" format is not suitable for use with virtually provisioned devices. And while the "thin" allocation policy appears ideal for virtually provisioned devices, it is not recommended on a thin LUN: the risk of exceeding the thin pool capacity is much higher when virtual disks are allocated with this policy, because oversubscription of physical storage then occurs at two independent layers that currently do not communicate with each other. It is important to note that the current version of VMware Infrastructure does not offer an option to choose a different virtual disk format when creating virtual machines². Creating virtual disks with the "thin" allocation mechanism requires the vmkfstools CLI utility on the service console, or the remote CLI for ESXi. The command line should be avoided unless there is no alternative.
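The reservation and allocation behavior of the three formats that matter most here can be summarized in a small model. This is an illustration of Table 5, not the VMware kernel's implementation; the fractions simply encode how much of the virtual disk is reserved on the datastore versus actually written to the thin device at creation:

```python
# For each format: space reserved on the datastore at creation, and space
# actually written (and therefore allocated from the thin pool) at creation,
# both as fractions of the virtual disk size.
DISK_FORMATS = {
    "zeroedthick":      {"reserved": 1.0, "allocated": 0.0},  # VMFS default
    "eagerzeroedthick": {"reserved": 1.0, "allocated": 1.0},  # zeroes every block
    "thin":             {"reserved": 0.0, "allocated": 0.0},  # NFS default
}

def thin_pool_cost_gb(fmt: str, disk_gb: int) -> float:
    """Thin-pool capacity consumed at virtual disk creation (GB)."""
    return DISK_FORMATS[fmt]["allocated"] * disk_gb

# A 16 GB disk costs nothing in the pool as zeroedthick but everything as
# eagerzeroedthick -- which is why the latter defeats Virtual Provisioning.
print(thin_pool_cost_gb("zeroedthick", 16))       # 0.0
print(thin_pool_cost_gb("eagerzeroedthick", 16))  # 16.0
```

The "reserved" column matters later: it is the datastore-side reservation that can block new virtual machines even when the thin pool is nearly empty.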

¹ The VMFS returns zeroes to the guest operating system if it attempts to read blocks of data that it has not previously written. This is true even in cases where information from a previous allocation is available; the VMFS does not present stale data to the guest operating system when the virtual disk is created using the "zeroedthick" format.
² The New Virtual Machine wizard does offer the capability of creating virtual or physical compatibility raw disk mappings.


Creating virtual machines on an NFS datastore

With an NFS datastore, a different behavior is observed, due to the inherent nature of the NFS protocol and its differences from block-level protocols such as Fibre Channel and iSCSI. To illustrate this behavior, the New Virtual Machine wizard was used to create a virtual machine with a 16 GB virtual disk on an NFS datastore hosted on a virtually provisioned Celerra file system. Following the execution of the wizard, the VMware Infrastructure Client performs a number of actions, including creating the virtual disk required to support the virtual machine. Figure 12 shows the storage utilization of the NFS datastore, and of the virtually provisioned file system supporting the datastore, after the wizard has completed. The figure clearly shows that only 0.2 MB of storage was initialized when the new virtual machine was created. As shown in Figure 13, only a small amount of storage was reserved for the new virtual machine on the NFS datastore, corresponding to the little that was actually written to the datastore when the virtual machine was created (the virtual machine configuration files). This is unlike VMFS, where the entire storage requirement for the new virtual machine is reserved at creation (as shown in Figure 11). Figure 14 provides further insight into the structure of the virtual disk: although the encapsulated virtual disk file VM_thin_nfs-flat.vmdk is listed as 16 GB in size, it actually consumes only 152 KB on the file system. This behavior of VMware Infrastructure with a virtually provisioned NFS file system is a result of the NFS protocol, not of Virtual Provisioning. With NFS, storage for a virtual machine is not reserved in advance; it is reserved when data is actually written to the virtual machine, because the NFS protocol is thinly provisioned by default. Data blocks in the file system are allocated to the NFS client (ESX in this case) only when they are needed. For this reason, unlike with a VMFS datastore, the NFS datastore usage information in the VI Client matches the usage of the corresponding Celerra file system (as shown in Figures 12 and 13).


Figure 12. File system utilization when creating a new virtual machine over NFS


Figure 13. NFS datastore utilization on the creation of a new virtual machine

Figure 14. Structure of the virtual disk created for the new virtual machine on the NFS datastore

It is important to note that the current version of VMware Infrastructure does not offer an option to choose a different virtual disk format when creating virtual machines³. Creating virtual disks with the "thin" allocation mechanism requires the vmkfstools CLI utility on the service console, or the remote CLI for ESXi. For NFS, this is not required, as the protocol is already "thin" by default.

³ The New Virtual Machine wizard does offer the capability of creating virtual or physical compatibility raw disk mappings.


Impact of guest operating system activities on thin pool utilization

As discussed in the previous section, only a small amount of storage is allocated when a new virtual machine is created. However, thin pool utilization grows rapidly as the user performs activities in the virtual machine. For example, installing an operating system in the virtual machine causes write I/Os to previously uninitialized blocks, and these I/Os allocate additional storage in the thin pool associated with the thin device. The amount of storage used by the virtual machine depends on the behavior of the operating system, the logical volume manager, the file systems, and the applications running inside the virtual machine. Poor allocation and reuse of blocks freed from deleted files can quickly drive thin pool allocation up to the size of the virtual disks presented to the virtual machine. Nevertheless, the thin pool allocation supporting a virtual machine can never exceed the size of its virtual disks. Users should therefore consider the behavior of the guest operating system and applications when configuring virtual disks for new virtual machines.
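The growth-and-cap behavior just described can be stated as a one-line model. This is an illustration of the principle, not the array's allocation algorithm; `blocks_written_gb` stands for the cumulative first-writes made by the guest:

```python
# Illustrative model: guest writes grow thin-pool allocation for a virtual
# disk, but that allocation can never exceed the disk's own size, even when
# the guest rewrites far more data than the disk holds (file churn).
def pool_allocation_gb(disk_gb, blocks_written_gb):
    return min(disk_gb, blocks_written_gb)

# OS install touches 4 GB of a 16 GB disk; years of deletes and rewrites
# may eventually touch every block, but allocation is capped at 16 GB.
print(pool_allocation_gb(16, 4))   # 4
print(pool_allocation_gb(16, 40))  # 16
```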

Exhaustion of oversubscribed datastores

In some cases when using thin devices with VMware Infrastructure, it may not be possible to create additional virtual machines even though the thin device that contains the datastore is not full. This is because, with Virtual Provisioning, the actual allocated capacity is not presented to ESX. With VMFS, as previously shown in Figure 11, ESX reserves the entire virtual disk capacity of a newly created virtual machine, even though only a fraction of this capacity may actually be allocated on the thin device itself (Figure 10). Therefore, even if the thin device is not full, the creation of a new virtual machine fails when the reserved datastore capacity would be exceeded. Figure 15 shows the error message displayed in this case. It is still possible, however, to use the unallocated thin device capacity to create other datastores or for non-VMware use. With NFS, on the other hand, this scenario does not occur, because the NFS protocol is thinly provisioned by design. As previously shown in Figures 12 and 13, the virtual disk capacity of a newly created virtual machine is not reserved on the file system; storage is reserved and allocated only when it is needed. Therefore, the NFS datastore utilization always matches the file system utilization in Celerra, and additional virtual machines can be created on the datastore even when the combined capacity of all their virtual disks exceeds the datastore size. For both datastore formats, the CLARiiON and Celerra thin pool monitoring capabilities should be used to get advance notification when the thin pool that contains the datastore is about to be oversubscribed.
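The VMFS-versus-NFS difference can be sketched as a simple admission check. This is a hedged model of the reservation logic described above, not ESX code; the capacities are hypothetical:

```python
# Sketch of why VM creation can fail on a VMFS datastore sitting on a
# half-empty thin device: VMFS reserves the full virtual disk size against
# the datastore, independently of thin-pool allocation. NFS reserves nothing.
def can_create_vm(datastore_gb, reserved_gb, new_disk_gb, is_vmfs=True):
    if not is_vmfs:  # NFS: space is allocated only when actually written
        return True
    return reserved_gb + new_disk_gb <= datastore_gb

# A 500 GB VMFS datastore with 490 GB already reserved by existing virtual
# disks: a new 16 GB disk fails, even if the thin pool is barely allocated.
print(can_create_vm(500, 490, 16))                 # False
print(can_create_vm(500, 490, 16, is_vmfs=False))  # True
```

In the NFS branch, exhaustion is deferred to the pool itself, which is why the monitoring note above applies to both formats.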


Figure 15. Error message when oversubscribing a VMFS datastore

Nondisruptive expansion of virtual disks on thin devices

When a virtual machine is created, a virtual disk is formatted for it; this virtual disk should be sized according to the virtual machine's needs. Virtual Provisioning guarantees that an underutilized virtual disk does not consume unneeded storage capacity. In some cases, however, the virtual disk must be expanded to accommodate the growing needs of the virtual machine. ESX 3.5 provides two methods to nondisruptively expand a virtual disk. One method uses the vmkfstools CLI utility, available on the ESX service console or through the ESXi remote CLI. The other uses VMware vCenter Converter and its ability to reconfigure an existing virtual machine and extend its virtual disk. Both methods work well with virtually provisioned storage, whether on VMFS or NFS datastores, and the virtual disk maintains its virtually provisioned characteristics after the extension. It is important to note that these two methods only extend the virtual disk; afterward, the guest OS in the virtual machine must discover the additional storage and format it as a new disk partition. As shown in Figure 16, the virtual disk of a virtual machine was extended from 16 GB to 32 GB. The Windows guest OS discovered the additional space as an unallocated partition within the same physical disk. The Disk Management utility in Windows can then be used to format this partition so that it can be used.
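The property worth noticing here is that extension changes only the provisioned size, not the consumed space. A minimal sketch, with hypothetical sizes (the 16 GB to 32 GB extension mirrors the example above):

```python
# Illustrative sketch: extending a virtually provisioned virtual disk grows
# its provisioned size but leaves its consumed space unchanged; the guest
# OS must still partition and format the new space before using it.
class VirtualDisk:
    def __init__(self, size_gb, consumed_gb):
        self.size_gb, self.consumed_gb = size_gb, consumed_gb

    def extend(self, new_size_gb):
        assert new_size_gb >= self.size_gb, "shrinking is not supported"
        self.size_gb = new_size_gb  # consumed space is deliberately untouched

disk = VirtualDisk(size_gb=16, consumed_gb=5)
disk.extend(32)
print(disk.size_gb, disk.consumed_gb)  # 32 5
```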


Figure 16. Device structure as discovered by the Windows guest OS following a virtual disk expansion

Expansion of a virtual datastore on thin devices

A properly configured thin device presented to VMware Infrastructure should not need to be expanded, even when some virtual disks on it do. In some cases, however, expanding virtual disks may first require expanding the datastore on which they are provisioned. Celerra, together with the capabilities of Virtual Provisioning technology, addresses this need while keeping storage utilization optimal.

VMFS datastore expansion

Celerra provides ways to dynamically extend a thin device while preserving its virtually provisioned characteristics. These array-based dynamic LUN extension features ensure that the datastore remains optimally distributed even after the thin LUN is extended. For a VMFS datastore, the LUN extension must be followed by an extension of the VMFS datastore itself. This is done with the VMFS extents feature, which adds an additional extent to the datastore on the expanded thin storage. Figure 17 shows a dynamic extension of a virtually provisioned Celerra iSCSI LUN; in this example, the thin device was extended from 10 GB to 20 GB. Figures 18 and 19 illustrate the steps to extend the VMFS datastore on this thin device.


Figure 17. Dynamic extension of a thin LUN (virtually provisioned iSCSI LUN)

Figure 18. Discovery of an extended thin device following an array-based LUN extension


Figure 19. VMFS datastore extension on the extended thin device

It is important to note that with ESX 3.5, the virtual machines on a VMFS datastore must be powered off or suspended before the datastore can be extended following an array-based LUN extension. Figure 20 shows the error message that appears when a VMFS datastore is extended to occupy an extended LUN while the virtual machines provisioned on the datastore are powered on.


Figure 20. VMFS datastore extension error following a thin device expansion with virtual machines powered on

If it is not possible to power off these virtual machines, one workaround is to create another thin device on CLARiiON or Celerra and add it to the datastore as an additional VMFS extent. With this method the datastore data is concatenated across the two thin devices, which allows nondisruptive datastore extension. This workaround is less favorable, as it may affect the overall performance of the datastore.

NFS datastore expansion

As with iSCSI, Celerra provides ways to dynamically extend a virtually provisioned file system. Furthermore, Automatic File System Extension allows the file system to be extended automatically, without manual system administrator intervention. Unlike VMFS, extending an NFS datastore is done only on the NAS system; no further configuration is required in the VMware Infrastructure, because the NFS datastore is managed by the NAS system and not by ESX. For the same reason, an NFS datastore can be extended nondisruptively while the virtual machines on it are powered on. After an extension, a refresh of the vCenter Server GUI may be needed to discover the changes on the NAS system. Figure 21 shows how the extension of the virtually provisioned file system is presented in the vCenter Server GUI.
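The expansion options for the two datastore formats can be condensed into a small decision sketch. This is an illustrative summary of the constraints described above under ESX 3.5, not an official procedure, and the returned labels are this sketch's own shorthand:

```python
# Decision sketch: how to expand a datastore under ESX 3.5, given the
# constraints above. NFS is always nondisruptive; VMFS depends on whether
# the virtual machines on the datastore can be powered off.
def expansion_approach(datastore: str, vms_powered_on: bool) -> str:
    if datastore == "nfs":
        return "nas-side-extension"       # refresh vCenter GUI afterward
    if vms_powered_on:
        return "add-vmfs-extent"          # new thin device, concatenated
    return "extend-lun-then-grow-vmfs"    # array extension + VMFS extent

print(expansion_approach("nfs", vms_powered_on=True))   # nas-side-extension
print(expansion_approach("vmfs", vms_powered_on=True))  # add-vmfs-extent
```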


Figure 21. NFS datastore extension following an extension of a virtually provisioned file system

vCenter Server considerations with Virtual Provisioning

VMware administrators must understand the limitations of using VMware Infrastructure with CLARiiON and Celerra Virtual Provisioning technology. The behavior of VMware Infrastructure features in a virtually provisioned configuration depends on virtual machine usage and on the guest operating systems running on the infrastructure. This section describes how the various VMware Infrastructure features interact with virtually provisioned devices. VMware DRS and VMotion are not affected by Virtual Provisioning because they do not involve any storage relocation. Virtual machine cloning, templates, and hot migration (using the VMware Infrastructure Client) are not "thin friendly," because cloning fully allocates all blocks; a workaround using VMware vCenter Converter is described in a later section. Like cloning, VMware templates also allocate all blocks. The workaround is to shrink the VMDKs before creating a template and to use the Compact option.

Cloning virtual machines using the VMware Infrastructure Client

In a large VMware Infrastructure environment it is impractical to manually install and configure the guest operating system on every new virtual machine. To address this, VMware provides multiple mechanisms that simplify the process of creating preconfigured new virtual machines:


• Cloning: A wizard-driven method that allows users to select a virtual machine⁴ and clone it to a new virtual machine.

• Creating a template: Templates can be created either by cloning an existing virtual machine to a new template or by converting an existing virtual machine in place. Cloning an existing virtual machine to a new template involves copying the virtual disks associated with the source virtual machine.

• Deploying from a template: Once a template has been created, new virtual machines can be deployed from it and customized to meet specific requirements. VMware Infrastructure provides wizards for both activities.

Detailed discussion of these options is beyond the scope of this paper. Readers should consult VMware documentation for details.

Cloning virtual machines and the impact on virtually provisioned devices

The VI Client wizard used to clone virtual machines offers a number of customizations: users can select the ESX cluster that will host the cloned virtual machine, the resource pool to associate it with, and the VMware datastore on which to deploy it. The only option that impacts virtually provisioned devices is the process used to deploy the virtual disk for the cloned virtual machine. In the example presented here, a thin LUN "VP_VM_DS" is presented to a storage group to which the ESX1 host is connected.

Figure 22 shows a screenshot of the virtual LUN on CLARiiON after the Windows 2003 operating system was installed in a virtual machine. The virtual LUN has 25 GB of space, and only 4 GB has been consumed by the installation of the operating system.

Figure 22. Thin pool allocation before the cloning process

⁴ Not all virtual machines can be cloned or converted to a template. Please consult the VMware documentation for further details.


Figure 23 shows a screenshot of the VMware Infrastructure Client while the cloning is in progress. The example shows the virtual machine "W2K3 VP VM 1" being cloned to a new virtual machine named "W2K3 VP VM 2". The virtual disks associated with both virtual machines are located on the VMware datastore (VP_VM_DS) on a virtual LUN.

Figure 23. Cloning virtual machines using VMware Infrastructure Client

Figure 24 shows the thin pool allocation in Navisphere Manager after the cloning process is complete. The Navisphere screenshot shows that approximately 8 GB of storage is now consumed, instead of 4 GB, to hold the source virtual machine and its clone. It should be noted that the cloning process converts the virtual disk of the source virtual machine, which had been created using the "zeroedthick" format, to the "eagerzeroedthick" format on the cloned virtual machine. Since VMware Infrastructure currently does not provide a mechanism to change the allocation policy for the virtual disk of the cloned virtual machine, the cloning process is inherently detrimental to the use of virtually provisioned devices.


Figure 24. Thin pool utilization after cloning a virtual machine

Figure 25 shows the VMware Infrastructure Client after the cloning process has completed. The screenshot shows that VMware Infrastructure reports 16 GB of space used for the two installed virtual machines.


Figure 25. Completed cloning of a virtual machine using VMware Infrastructure

This limitation of the virtual machine cloning wizard was observed on both VMFS and NFS datastores. VMware is aware of it and is working closely with EMC engineering to rectify the behavior; future releases of VMware Infrastructure will include enhancements that make the cloning process Virtual Provisioning-friendly. Until then, EMC does not recommend using the cloning wizard to provision new virtual machines.

Creating templates from existing virtual machines

As discussed earlier, VMware Infrastructure provides three mechanisms to simplify the provisioning of new virtual machines. The first option, cloning, was discussed in the previous section. Templates, the second option, can be created either by cloning an existing virtual machine or by converting an existing virtual machine in place.

The process of cloning an existing virtual machine to a new template involves copying the virtual disks associated with the source virtual machine, so the problem described in the previous section also occurs when cloning a virtual machine to a template. This was observed on both VMFS and NFS datastores. Figure 26 shows the utilization of a virtually provisioned Celerra file system before and after a template was created from a virtual machine. In this example, the source virtual machine includes a virtually provisioned 16 GB virtual disk that consumed about 5 GB on disk; after the template was created, however, the entire virtual disk size was allocated for it, and about 21 GB was allocated on the file system.
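The 21 GB figure in this example can be reproduced with a small back-of-the-envelope model. This is an illustration of the accounting described above (thin source plus fully allocated copy), not the cloning mechanism itself:

```python
# Sketch of the clone/template cost: the copy is written out fully
# allocated ("eagerzeroedthick"-style), so it consumes its entire disk
# size on the file system, while the thin source keeps its allocation.
def clone_pool_cost_gb(source_disk_gb, source_allocated_gb):
    return source_allocated_gb + source_disk_gb

# A 16 GB source disk with only 5 GB allocated: after the template is
# created, roughly 21 GB is consumed, matching the example above.
print(clone_pool_cost_gb(16, 5))  # 21
```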


Figure 26. Virtually provisioned file system allocation before/after the template creation from a virtual machine

Therefore, if the datastore holding the templates is a thin device, the user will not benefit from the storage optimization provided by Virtual Provisioning technology.

However, unlike the cloning wizard, the Clone to Template wizard offers the option of using the Compact format. When this option is selected, the cloned virtual disk is allocated using the thin format.

Deploying new virtual machines from templates Once a template has been created, new virtual machines can be deployed from the template and customized to meet specific requirements. However, on both VMFS and NFS datastores, when a new virtual machine is created from the thin template, the disk is created using the “eagerzeroedthick” format. This, once again, defeats the purpose of a virtually provisioned environment as it results in a fully allocated virtual disk.

Cloning virtual machines using VMware vCenter Converter Using VMware vCenter Converter, a pre-existing virtual machine can be copied and deployed in one step that allows for customization if necessary.

By using a wizard, VMware vCenter Converter provides the functionality to copy and import a physical or standalone virtual machine into any VMware virtualization platform. Unlike the previous options, the VMware vCenter Converter Import Virtual Machine wizard (Figure 27) makes it possible to deploy virtual machines with correct thin allocations, on both VMFS and NFS datastores, in a properly virtually provisioned environment.


Figure 27. VMware vCenter Converter Import Wizard

In order to use VMware vCenter Converter to deploy a new machine, the source virtual machine must be powered off; the copy process can then be initiated. Once a source virtual machine is selected, the option to alter virtual disk settings is offered. For VMware vCenter Converter to correctly allocate space in a manner aligned with Virtual Provisioning, the virtual disk size must be altered. If the disk size is maintained, the process will result in a fully allocated vmdk file on the target thin pool. To correct this, the disk size must be increased by at least 1 MB (Figure 28). If the disk size is increased, the wizard will allocate space using the “thin” format instead of the “eagerzeroedthick” format, making it thin friendly.


Figure 28. VMware vCenter Converter Wizard

This process results in a correctly provisioned virtual machine that makes the most of what Virtual Provisioning has to offer. The new virtual machine takes up only 1.57 GB in the thin pool instead of the 8.00 GB that is allocated inside the virtual machine. VMware vCenter Converter calls this a “growable disk.” In other words, the vmdk file takes up additional space on the specified datastore only when data is actually written to it.

VMware VMotion, DRS, HA, and Virtual Provisioning When VMware Distributed Resource Scheduler (DRS) and VMware High Availability (HA) are used with VMotion technology, they provide load balancing and automatic failover for virtual machines with ESX 3.x or ESXi. To use VMware DRS and HA, a cluster definition must be created using vCenter Server. The ESX hosts in a cluster share resources including CPU, memory, and disks. All virtual machines and their configuration files in such a cluster must reside on shared storage, such as CLARiiON or Celerra storage, so that users can power on the virtual machines from any host in the cluster. Furthermore, the hosts must be configured to have access to the same virtual machine network so VMware HA can monitor heartbeats between hosts on the console network for failure detection.

EMC’s CLARiiON and Celerra thin devices behave like any other SCSI disk or network file system attached to the ESX kernel. If a thin device is presented to all nodes of a VMware DRS cluster group, vCenter Server allows live migration of viable virtual machines on the thin device from one node of the cluster to another using VMware VMotion. As the virtual machines remain in their current datastore these vCenter Server technologies do not trigger any conversion or reformatting of virtual disks and therefore will not affect the use of Virtual Provisioning.


Cold migration and Virtual Provisioning VMware Infrastructure supports cold migration of virtual machines from one ESX to another. The cold migration process can also be used to change the datastore hosting the virtual machine.

There is no impact to the cold migration process when a virtual machine is moved from one ESX to another while maintaining the location of the datastore containing the virtual machine files. However, changing the datastore location as part of the migration process can have a negative impact. The migration of the data to a VMFS or NFS datastore on a thin device is performed using the “eagerzeroedthick” format and results in unnecessary allocation from the thin pool. Therefore, the cold migration functionality should not be used for migrating virtual machines from non-thin devices to thin devices.

Hot migration using Storage VMotion and Virtual Provisioning VMware Storage VMotion is a solution that enables users to perform live migration of virtual machine disk files across heterogeneous storage arrays with complete transaction integrity and no interruption in service for critical applications. Since the process of hot migration involves migrating the virtual disk associated with the source virtual machine, this process is not thin friendly.

The problem described in previous sections also occurs during hot migration on both VMFS and NFS datastores. Therefore, if the datastore holding the VMs is a thin device, the user will not benefit from the storage optimization provided by Virtual Provisioning technology.

Considerations for storage-based features on thin devices

CLARiiON SnapView SnapView™ replicates data only within the same CLARiiON storage array. SnapView supports two forms of data replication: SnapView snapshot and SnapView clone.

SnapView snapshot SnapView snapshots are logical point-in-time images of a LUN that take only seconds to create. When a snapshot session is started for a LUN, SnapView software uses a pointer-based, copy-on-first-change method to keep track of how the source LUN looks at a particular point in time.

SnapView clones SnapView clones provide users the ability to create fully populated binary copies of LUNs within a single storage system. Once populated, clones can be fractured from the source and presented to a secondary server to provide point-in-time replicas of data. Users will be able to perform local "thin to thin" replication with CLARiiON thin devices by using standard SnapView operations. This includes SnapView snapshots and clones.

Following is an example of cloning a "Thin Source LUN" using CLARiiON's SnapView clone technology. "Thin Source LUN" is a thin LUN on Thin Pool 0. It has a user capacity of 5 GB and consumed capacity of 3 GB. Thin Source LUN is assigned to a Windows server and has been cloned to "Thin Source LUN_Clone_1." As indicated in the screenshot, the clone of the source image is using only the user consumed capacity of 3 GB.


Figure 29. Snap clone replication

Also note that thin LUNs cannot be private LUNs. This means that reserved LUN pools for SnapView snapshots and clone private LUNs (CPLs) for SnapView clones cannot be thin LUNs. Because thin LUNs share a pool of storage, it is possible for a LUN to run out of space even though its configured capacity is larger than its current allocation. If this happens, clone functionality can be impaired.

LUN migration CLARiiON LUN migration allows users to change performance and other characteristics of existing LUNs without disrupting host applications. It moves data—with the change characteristics that the user selects—from a source LUN to a destination LUN of the same or larger size. LUN migration can be used on thin LUNs, traditional LUNs, and metaLUNs.

Figure 30 explains the three types of behavior that different CLARiiON local replication and LUN migration operations exhibit in relation to thin LUNs. Table 6 describes the behavior of each specific operation.



Figure 30. SnapView clones and LUN migration behavior on Virtual LUNs

Cloning or migrating a traditional LUN to a thin LUN does not save space initially. But the new thin LUN can be expanded so that configured user capacity exceeds allocated capacity; thereby adding some of the benefits of Virtual Provisioning in the process.

Table 6 indicates the specific replication support semantics for different CLARiiON replication products. If replication is supported, the respective cell indicates the result. X indicates that the combination is not supported in the first release of Virtual Provisioning.

Table 6. Replication support semantics for CLARiiON replication products

Source LUN  | Destination LUN | LUN Migration      | SnapView snapshots | SnapView clones   | RecoverPoint      | MirrorView  | SAN Copy
Traditional | Traditional     | Traditional        | Snapshot⁵          | Traditional       | Traditional       | Traditional | Traditional
Traditional | Thin            | Fully Provisioned⁶ | Snapshot⁵          | Fully Provisioned | Fully Provisioned | X           | X⁷
Thin        | Thin            | Thin               | Snapshot⁵          | Thin              | Thin              | X           | X⁴
Thin        | Traditional     | Traditional        | Snapshot⁵          | Traditional       | Traditional       | X           | X⁴

Remote replication is supported through RecoverPoint. RecoverPoint supports replication for virtual to virtual LUNs and traditional to virtual LUNs. Support for MirrorView and SAN Copy will be in future releases.
⁵ A thin LUN cannot be a reserved LUN. The destination LUN type does not mean anything in this case; the result is a traditional snapshot LUN.
⁶ Fully provisioned for the size of its source LUN, since migration can be to a larger LUN.
⁷ SAN Copy does not allow a thin LUN push or pull copy. In cases where SAN Copy does not know the type of the remote target participating in the SAN Copy session, it will allow the remote target to be configured as thin. If this copy is initiated, the remote thin LUN will become fully provisioned after the copy completes, since the source LUN is not thinly provisioned.


Celerra

Celerra SnapSure
Celerra SnapSure creates a point-in-time snapshot of a Celerra file system (checkpoint file system). For a virtually provisioned file system, only allocated blocks will be included in the snapshot (after they were modified). Furthermore, a file restored from a checkpoint file system requires the same amount of storage that it consumed on the virtually provisioned file system. Figure 31 shows the utilization of a virtually provisioned file system that includes a single virtual machine with about 4 GB allocated to it.

Figure 31. Virtually provisioned file system utilization before a snapshot was created for the file system

The virtual machine was then deleted from disk after a snapshot was created for the file system. The virtual machine was then restored from the checkpoint file system. Figure 32 shows the utilization of the file system after the restore was completed. As seen in this figure, the storage allocated to the restored virtual machine is the same as the storage capacity that was allocated to the virtual machine before it was deleted.

Figure 32. Virtually provisioned file system utilization after the virtual machine was restored from the checkpoint file system

Celerra iSCSI snapshots
An iSCSI snapshot is a point-in-time representation of the data stored on an iSCSI LUN. A snapshot of a virtually provisioned iSCSI LUN requires only as much space as the data that was changed in the production LUN⁸. A file restored from an iSCSI snapshot requires the same amount of storage that it consumed on the virtually provisioned iSCSI LUN.

⁸ This is the default behavior for a virtually provisioned Celerra iSCSI LUN. However, a snapshot of a fully provisioned LUN will be, by default, fully provisioned. This default behavior can be altered by adjusting the sparsetws Data Mover parameter.


Celerra Replicator
Celerra Replicator is an asynchronous remote replication mechanism. It produces a read-only point-in-time copy of a source file system or an iSCSI LUN; it then periodically updates this copy to make it consistent with the source object. With a virtually provisioned file system or iSCSI LUN, only data that is allocated on the source object is copied to the target object. Therefore, as with SnapSure and iSCSI snapshots, the destination file system or iSCSI LUN is also virtually provisioned.

Performance considerations
CLARiiON and Celerra Virtual Provisioning are designed to provide ease of provisioning and improve capacity utilization for certain applications. This section describes the performance implications of Virtual Provisioning.

CLARiiON
CLARiiON Virtual Provisioning is appropriate for applications that can tolerate some performance variability. Some workloads see performance improvements using wide striping with thin provisioning. CLARiiON Virtual Provisioning trades CPU cycles for lower administrative overhead and space efficiency. A mapping service handles data placement per best practices. This automated data placement adds indexing overhead and requires CPU cycles to manage, and thus has lower performance than traditional LUNs.

When creating larger thin pools, a larger number of LUNs can be leveraged by VMware Infrastructure for I/O. However, when multiple thin LUNs contend for shared spindle resources in a given pool, and when utilization reaches higher levels, the performance for a given application can become more variable. If this variability is not desirable for a particular application, that application could be dedicated to its own moderate size thin pool. Alternatively, Navisphere Quality of Service Manager can be used to manage resource contention within the pool as well as contention between LUNs in different thin pools and RAID groups. Fibre Channel and SATA drives should be deployed in separate pools. Where possible, drives in a thin pool should be the same rpm and the same size. In a VMware Infrastructure environment, larger LUN sizes are usually configured and presented to multiple ESX hosts; therefore, larger thin pools with multiple disk drives are needed. The VMware best practice guidelines outlined for traditional LUNs are also applicable for thin LUNs and can be found in the EMC CLARiiON integration with VMware ESX Server white paper available on Powerlink.

Celerra
In general, some applications, I/O workloads, and storage deployment scenarios see performance improvements from using Virtual Provisioning. However, it is important to note that these improvements may change over time as the virtually provisioned file system expands and as the data is used, deleted, or modified. In a virtually provisioned file system, a performance improvement is noticed mostly with random and mixed read I/O. Because the virtually provisioned file system initially occupies less space on disk than a fully provisioned file system occupies, smaller disk seeks are required for random reads. Disk seeks are a large component of I/O latency, so minimizing seeks can improve performance. With sequential read I/O, on the other hand, disk seeks are already infrequent, and therefore a performance improvement would not be noticed. Write I/O will also not see much performance improvement, as a disk seek is usually not necessary or only minimal (except for random overwriting), and writes will in large part be cached anyway. It should be emphasized that this performance improvement may decrease over time as the file system is further used and extended, thus increasing the size of disk seeks and the corresponding latency.

For a virtually provisioned iSCSI LUN this performance improvement can be even greater due to the way iSCSI LUNs are implemented in Celerra and how this affects disk seek time. A Celerra iSCSI LUN is a single file object within a Celerra file system, and the file system will attempt to keep the file spatially contiguous. Therefore, the virtually provisioned LUN will remain in contiguous disk space as it is created


and extended. A fully provisioned iSCSI LUN will be built across a much larger space and the guest OS may spread data across that space according to its own organizational rules. Therefore, the fully provisioned iSCSI LUN may see much more disk seek than would the virtually provisioned LUN. As with virtually provisioned file systems, this performance advantage will decrease over time as the LUN is extended. Furthermore, provisioning a group of similar virtual machines simultaneously, a common VMware Infrastructure scenario, in a virtually provisioned file system or iSCSI LUN can lead to data de-fragmentation among the provisioned virtual machines. In this case, the infrequently accessed OS and application binaries from all the provisioned virtual machines will be clustered in one area of the disk, and the more frequently accessed data from all virtual machines will be clustered in another. This should be good for performance, especially initially, as all of the data hotspots will be contiguous on disk. It is expected that this benefit will decrease in time as data grows, more space is allocated, and more fragmentation occurs as the data hotspots move.

The VMware best practice guidelines outlined for fully provisioned file systems and iSCSI LUNs are also applicable for virtually provisioned ones and can be found in the VMware ESX Server using EMC Celerra Storage Systems Solutions Guide available on Powerlink.

CLARiiON Virtual Provisioning management

Thin pool management
When storage is provisioned from a thin pool to support multiple thin devices, there is usually more “virtual” storage provisioned to hosts than is supported by the underlying data devices. This is one of the main reasons for using Virtual Provisioning. However, there is a possibility that applications using a thin pool may grow rapidly and request more storage capacity from the thin pool than is actually there. This is an undesirable situation. The next section discusses the steps necessary to avoid running into this condition on CLARiiON storage platforms.

Thin pool monitoring
Along with Virtual Provisioning come several methodologies to monitor the capacity consumption of the thin pools. Navisphere Manager can be used to monitor storage pool utilization as well as display the current space allocations. Users can also add alerts to objects that need to be monitored with Event Monitor, which will send alerts as email, page, SNMP traps, and so forth.

Usable pool capacity is the total physical capacity available to all LUNs in the pool. Allocated capacity is the total physical capacity currently assigned to all thin LUNs. Subscribed capacity is the total host reported capacity supported by the pool. When the thin LUN allocations begin to approach the capacity of the pool, the administrator will be alerted. Two non-dismissible pool alerts are provided to track pool %full. One is a user settable %full threshold at the warning severity level, which can range from 1% to 84%. Once the pool %full reaches 85%, the pool issues a built-in alert at the critical severity level. Both alerts trigger an associated event that can be configured for notification. Both the user settable alert and the built-in alert continue to track the actual %full value as the pool continues to fill.

In Figure 33, the Thin Pool Alerts field on the Advanced tab of the Thin Pool Properties dialog box is set at 2%. Figure 34 shows the Navisphere alerts displayed when the thin pool reaches its user settable %full threshold.
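These capacity terms lend themselves to a simple monitoring check. The following sketch is illustrative only; the function name, default threshold, and capacity figures are hypothetical, not values read from Navisphere:

```python
def pool_status(usable_gb, allocated_gb, subscribed_gb, warn_pct=70):
    """Evaluate a thin pool against the capacity terms defined above.

    usable_gb     - total physical capacity available to the pool
    allocated_gb  - physical capacity currently assigned to thin LUNs
    subscribed_gb - total host-reported (virtual) capacity
    warn_pct      - user-settable %full warning threshold (1-84)
    """
    pct_full = 100.0 * allocated_gb / usable_gb
    alerts = []
    if pct_full >= 85:              # built-in critical alert at 85%
        alerts.append("critical")
    elif pct_full >= warn_pct:      # user-settable warning alert
        alerts.append("warning")
    oversubscribed = subscribed_gb > usable_gb
    return pct_full, oversubscribed, alerts

# Example: a 1000 GB pool with 720 GB allocated and 2400 GB subscribed
# is 72% full, oversubscribed, and past the default warning threshold.
```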


Figure 33. User settable % full threshold

Figure 34. Navisphere Manager % full threshold alerts


Figure 35. Pool % full threshold

Adding drives to the pool nondisruptively increases available usable capacity. Allocated capacity is reclaimed by the pool when LUNs are deleted.

Figure 36. Consumed and Available pool capacity for all attached LUNs

System administrators and storage administrators must put processes in effect to monitor the capacity of thin pools to make sure that they do not get filled. The pools can be dynamically expanded to include more data devices without application impact.

Celerra Virtual Provisioning management

Thin file system and storage pool management
Celerra provides various settings to better manage the operation of virtually provisioned file systems and iSCSI LUNs for existing and future needs. These settings include High Water Mark and Maximum Capacity, which are used by the Automatic File System Extension feature of the virtually provisioned file system. The file system is automatically extended whenever its usage exceeds the configured High Water


Mark (HWM). The file system is extended just enough to bring usage to 3% below the configured HWM, up to the configured Maximum Capacity. This enables the file system to expand according to changes in demand. A virtually provisioned iSCSI LUN, which is an object in the file system, can then be provisioned from this file system, and will expand based on its use up to its configured size. Figure 37 shows the available setting options for managing a virtually provisioned file system.

Figure 37. Management setting for a virtually provisioned file system

As with CLARiiON Virtual Provisioning, there is still a possibility that applications using a Celerra thin device may grow rapidly and request more storage capacity than is actually available (oversubscription). This is an undesirable situation. The next section discusses the steps necessary to avoid oversubscription on Celerra storage platforms.
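The Automatic File System Extension rule described above (grow just enough to bring usage to 3% below the HWM, capped at Maximum Capacity) can be sketched as follows. The function and figures are illustrative, not a Celerra API:

```python
def auto_extend_size(used_gb, current_gb, hwm_pct, max_gb):
    """Sketch of Celerra Automatic File System Extension sizing.

    When usage crosses the High Water Mark, the file system is grown
    just enough that usage falls to 3% below the HWM, but never
    beyond the configured Maximum Capacity.
    """
    if 100.0 * used_gb / current_gb <= hwm_pct:
        return current_gb                    # still below HWM: no extension
    target_ratio = (hwm_pct - 3) / 100.0     # e.g. 90% HWM -> 87% usage
    return min(used_gb / target_ratio, max_gb)

# Example: a 10 GB file system with a 90% HWM and 16 GB maximum that
# reaches 9.5 GB used is extended so that usage drops to about 87%.
```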

Thin file system and storage pool monitoring
Celerra provides several methods to proactively monitor the utilization of virtually provisioned file systems and the storage pools on which they were created. Celerra also provides trending and prediction graphs for the utilization of virtually provisioned file systems and storage pools. Figures 38 and 39 show the information that is provided on the utilization of a virtually provisioned file system and a virtually provisioned iSCSI LUN.


Figure 38. Using Celerra Manager to find the utilization of a virtually provisioned file system

Figure 39. Using Celerra Manager to find the utilization of a virtually provisioned iSCSI LUN


Celerra Manager can also be used to configure proactive alerts when a virtually provisioned file system or storage pool is close to being oversubscribed. It is possible to customize these alert notifications according to file system and storage pool utilization, predicted time-to-fill, and overprovisioning. The alert notifications include logging the event in an event log file, sending an email, or generating a Simple Network Management Protocol (SNMP) trap. Two types of Storage Used notifications can be configured:

• Current size – how much of the currently allocated file system/storage pool capacity is used

• Maximum size – how much of the configured maximum file system/storage pool capacity is used (when the file system/storage pool will be fully extended)

Figure 40 illustrates the two types of Storage Used alert notifications that can be configured on a virtually provisioned Celerra file system, or on a storage pool. These alert notifications can also be configured based on capacity levels (for example, MB, GB, or TB) rather than on percentages.
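The two Storage Used notification types amount to the same usage figure measured against two different denominators. A minimal sketch (names and values illustrative):

```python
def storage_used(used_gb, current_gb, max_gb):
    """The two 'Storage Used' views of a virtually provisioned object.

    current - share of the capacity allocated (extended) so far
    maximum - share of the configured maximum, fully extended capacity
    """
    return {
        "current": 100.0 * used_gb / current_gb,
        "maximum": 100.0 * used_gb / max_gb,
    }

# Example: 5 GB used on a file system currently 10 GB in size with a
# 50 GB maximum is 50% of current size but only 10% of maximum size.
```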

Figure 40. Storage Used alert notifications for virtually provisioned Celerra file systems and storage pools

Figure 41 shows how to configure a Storage Used alert notification on file systems or storage pools using Celerra Manager.


Figure 41. Storage Used alert notification configuration for file systems or storage pools

Similarly, as shown in Figure 42, alert notifications can be configured based on the Celerra time-to-fill predictions for file systems and storage pools.

Figure 42. Storage Projection alert notification configuration for file systems or storage pools


When using Celerra Virtual Provisioning for file systems and iSCSI LUNs it is essential to track the utilization of the storage using these monitoring capabilities. This way it is possible to address an upcoming storage shortage ahead of time and avoid impacting users and applications.

The Configuring Celerra Events and Notifications technical module, available on Powerlink, provides further information on setting up event notifications in Celerra.

Exhaustion of oversubscribed pools Different behaviors can be observed when a thin pool has no space available for new extent allocation, depending on the activity that causes the thin pool capacity to be exceeded.

If the thin pool capacity is exceeded when a new virtual machine is deployed using the vCenter Server wizard, an error message is posted by vCenter Server. An I/O error message is displayed when the thin pool capacity is exceeded while using command line utilities on the service console.

The behavior of virtual machines when the thin pool reaches full capacity depends on a number of factors including the guest operating system, the application running in the virtual machine, and the format that was utilized to create the virtual disks. The virtual machines behave no differently than the same configuration running in a non-virtual environment. Consult the white papers listed in the “References” section for further details. In general, the following is true for VMware Infrastructure environments:

• Virtual machines configured with virtual disks using the “eagerzeroedthick” format continue to operate without any disruption.

• Virtual machines that do not require additional storage allocations continue to operate normally.

• If any virtual machine in the environment is impacted due to lack of additional storage, other virtual machines continue to operate normally as long as those machines do not require additional storage.

• Some of the virtual machines may need to be restarted after additional storage is added to the thin pool. In this case, if the virtual machine hosts an ACID-compliant application (such as a relational database), the application performs a recovery process to achieve a transactionally consistent point in time.

• The ESX VMkernel continues to be responsive as long as ESX is installed on a device with sufficient free storage for critical processes.


Conclusion
Both CLARiiON and Celerra Virtual Provisioning provide a simple, noninvasive, and economical way to provide storage for VMware Infrastructure environments. As shown in Table 7, some VMware Infrastructure operations work very well with this type of configuration, while others may not allow VMware administrators to realize the full benefits of the Virtual Provisioning technology. Furthermore, the use of storage-based technologies for the extension, cloning, and replication of datastores works very well with Virtual Provisioning.

Table 7. VMware Infrastructure operations/use cases with virtually provisioned storage

VMware Infrastructure operation/use case                                 Using
Virtual machine creation                                                 vCenter Server
Virtual machine cloning from a virtual machine                           vCenter Server
Virtual machine cloning from a template                                  vCenter Server
Virtual machine cloning to a template                                    vCenter Server
Virtual machine cloning from a virtual machine                           VMware vCenter Converter
Virtual Disk Extension                                                   vmkfstools
Cold migration – virtual to full, virtual to virtual, full to virtual    vCenter Server, VMware vCenter Converter
Hot migration – virtual to full, virtual to virtual, full to virtual     Storage VMotion
Array-based datastore extension
Array-based datastore migration                                          CLARiiON LUN Migration, Celerra Replicator
Array-based datastore cloning                                            CLARiiON SnapView, Celerra SnapSure, Celerra iSCSI snapshots
Array-based datastore replication                                        EMC RecoverPoint, Celerra Replicator

For each operation, the original table marks datastores provisioned over NFS, VMFS, and RDM according to the legend below.

Legend: Virtual Provisioning benefits obtained / Virtual Provisioning benefits lost / Not applicable

Virtualization technologies, such as VMware for server virtualization, are becoming increasingly popular. As this paper has shown, the Virtual Provisioning feature offered in EMC storage systems works with VMware technologies to deliver the optimum benefits available in effective IT virtualization implementations.


References

The following documents and resources can be found on Powerlink, EMC’s password-protected extranet for partners and customers:

EMC CLARiiON
• EMC CLARiiON Virtual Provisioning white paper
• VMware ESX Server using EMC CLARiiON Storage Systems Solutions Guide
• EMC CLARiiON integration with VMware ESX Server white paper
• Host Connectivity Guide for VMware ESX Server Version 2.x

EMC Celerra
• Managing EMC Celerra Volumes and File Systems with Automatic Volume Management technical module
• Configuring iSCSI Targets on EMC Celerra technical module
• Configuring Celerra Events and Notifications technical module
• VMware ESX Server using EMC Celerra Storage Systems Solutions Guide
• Using VMware Infrastructure 3 with EMC Storage customer presentation

The following documents and resources can be found on VMware.com:
• VMware resource documentation: http://www.vmware.com/support/pubs/vi_pages/vi_pubs_35_3i_i.html
• Fibre Channel SAN Configuration Guide: http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_san_cfg.pdf
• iSCSI SAN Configuration Guide: http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_iscsi_san_cfg.pdf
• SAN System Design and Deployment Guide: http://www.vmware.com/pdf/vi3_san_design_deploy.pdf
