
CONSOLIDATING AND PROTECTING VIRTUALIZED ENTERPRISE ENVIRONMENTS WITH DELL EMC XTREMIO X2

WHITE PAPER

VMware Integrated Replication and Disaster Recovery with DELL EMC XtremIO X2, RecoverPoint, RP4VMs, AppSync and VMware SRM

Abstract

This white paper describes the components, design, functionality, and advantages of hosting a VMware-based multisite Virtual Server Infrastructure (VSI) on the DELL EMC XtremIO X2 All-Flash array and protecting this environment with DELL EMC RecoverPoint, RP4VMs, AppSync and VMware SRM.

December 2017

© 2017 Dell Inc. or its subsidiaries.


Contents

Abstract ............................................................................................................................................................. 1

Executive Summary ........................................................................................................................................... 4

Introduction ........................................................................................................................................................ 4

Business Case .................................................................................................................................................. 5

Solution Overview .............................................................................................................................................. 5

Dell EMC XtremIO X2 for VMware Environments .............................................................................................. 8

XtremIO X2 Overview ......................................................................................................................................................... 9

Architecture ....................................................................................................................................................................... 10

Multi-dimensional Scaling ................................................................................................................................................. 11

XIOS and the I/O Flow ...................................................................................................................................................... 13

XtremIO Write I/O Flow ................................................................................................................................................. 14

XtremIO Read I/O Flow ................................................................................................................................................. 16

System Features ............................................................................................................................................................... 17

Inline Data Reduction .................................................................................................................................................... 17

Thin Provisioning ........................................................................................................................................................... 18

Integrated Copy Data Management .............................................................................................................................. 19

XtremIO Data Protection ............................................................................................................................................... 21

Data at Rest Encryption ................................................................................................................................................ 21

Write Boost .................................................................................................................................................................... 22

VMware APIs for Array Integration (VAAI) ........................................................................................................................ 23

Dashboard ..................................................................................................................................................................... 25

Notifications ................................................................................................................................................................... 27

Configuration ................................................................................................................................................................. 28

Reports .......................................................................................................................................................................... 29

Hardware ....................................................................................................................................................................... 31

Inventory ........................................................................................................................................................................ 32

XtremIO X2 Space Management and Reclamation in vSphere Environments ................................................. 32

VMFS Datastores Reclamation ......................................................................................................................................... 33

Asynchronous Reclamation of Free Space on VMFS 6 Datastore ............................................................................... 33

Space Reclamation Granularity .................................................................................................................................... 33

In-Guest Space Reclamation for Virtual Machines ........................................................................................................... 35

Space Reclamation for VMFS 6 Virtual Machines ........................................................................................................ 35

Space Reclamation for VMFS 5 Virtual Machines ......................................................................................................... 35

Space Reclamation Prerequisites .................................................................................................................................. 35

In-Guest Unmap Alignment Requirements ................................................................................................................... 36

EMC VSI for VMware vSphere Web Client Integration with XtremIO X2 .......................................................... 38


Setting Best Practices Host Parameters for XtremIO X2 Storage Array .......................................................................... 40

Provisioning VMFS Datastores ......................................................................................................................................... 40

Provisioning RDM Disks .................................................................................................................................................... 41

Setting Space Reclamation ............................................................................................................................................... 41

Creating Native Clones on XtremIO VMFS Datastores ................................................................................................ 42

Working with XtremIO X2 XVCs ........................................................................................................................................ 42

XtremIO X2 Storage Analytics for VMware vRealize Operations Manager....................................................... 43

XtremIO X2 Content Pack for vRealize Log Insight .......................................................................................... 45

XtremIO X2 Workflows for VMware vRealize Orchestrator .............................................................................. 47

Compute Hosts: Dell PowerEdge Servers ....................................................................................................... 49

Compute Integration – Dell OpenManage ........................................................................................................................ 49

Firmware Update Assurances ........................................................................................................................................... 50

Enabling Integrated Copy Data Management with XtremIO X2 & AppSync 3.5 ............................................... 51

Registering a New AppSync System ................................................................................................................................ 52

Restoring a Datastore from a Copy ................................................................................................................................... 54

Managing Virtual Machine Copies .................................................................................................................................... 55

File or Folder Restore with VMFS Datastores .................................................................................................................. 56

RecoverPoint Snap-Based Replication for XtremIO X2.................................................................................... 58

Snap-Based Replication Use Cases ............................................................................................................................. 59

XtremIO Virtual Copies (XVCs) ..................................................................................................................................... 59

Replication Flow ................................................................................................................................................................ 59

XtremIO Volumes Configured on the Production Copy ................................................................................................ 59

XtremIO Volumes Configured on the Target Copy ....................................................................................................... 61

Configuring RecoverPoint Consistency Groups ............................................................................................................ 64

Registering vCenter Server ........................................................................................................................................... 65

Configuring the Consistency Group for Management by SRM ..................................................................................... 66

Configuring Site Recovery with VMware vCenter Site Recovery Manager 6.6 ................................................. 66

Point-in-Time Recovery Images ........................................................................................................................................ 68

Testing the Recovery Plan ................................................................................................................................................ 69

Failover .............................................................................................................................................................................. 70

RecoverPoint 5.1.1 for VMs ............................................................................................................................ 71

References ...................................................................................................................................................... 78

How to Learn More .......................................................................................................................................... 79


Executive Summary

This white paper describes the components, design and functionality of a VMware-based multisite Virtual Server Infrastructure (VSI), running consolidated, virtualized enterprise applications protected by DELL EMC RecoverPoint or RP4VMs, all hosted on a DELL EMC XtremIO X2 All-Flash array.

This white paper highlights the advantages XtremIO X2 offers to enterprise IT operations that have already virtualized their applications, or that are considering hosting virtualized enterprise application deployments, on a DELL EMC XtremIO X2 All-Flash array. The primary topics examined in this white paper include:

• Performance of consolidated virtualized enterprise applications

• Business continuity and disaster recovery considerations

• Management and monitoring efficiencies

Introduction

The goal of this document is to showcase the benefits of deploying a multisite VMware-based virtualized enterprise environment hosted on a DELL EMC XtremIO X2 All-Flash array. This document provides information and procedures highlighting XtremIO's ability to consolidate multiple business-critical enterprise application workloads within a single cluster, providing data efficiencies, consistent and predictable performance, and multiple integration vectors to assist in disaster recovery and business continuity, as well as in monitoring and managing the environment.

This document demonstrates how the integrated solution of a DELL EMC XtremIO X2 All-Flash array, coupled with VMware-based virtualized infrastructure, is a true enabler for architecting and implementing a multisite virtual data center to support Business Continuity and Disaster Recovery (BCDR) services during data center failover scenarios.

This document outlines a process for implementing a cost-effective BCDR solution to support the most common disaster readiness scenarios for a VMware-based infrastructure hosted on a DELL EMC XtremIO X2 All-Flash array. It provides reference material for data center architects and administrators creating a scalable, fault-tolerant and highly available BCDR solution. This document demonstrates the advantages of RecoverPoint array-based replication and RecoverPoint for VMs for XtremIO X2 and discusses examples of replication options relating to Recovery Point Objectives (RPO). Combining XtremIO X2 with Dell EMC AppSync simplifies, orchestrates and automates the process of generating and consuming copies of production data.

Among the benefits of this solution are ease of setup, linear scalability, consistent performance and data-storage efficiencies, as well as the various integration capabilities available for a VMware-XtremIO-based environment. These integration capabilities, across the various products used within this solution, provide customers increased management, monitoring and business continuity options.

This document demonstrates that the DELL EMC XtremIO X2 All-Flash array, when paired with EMC RecoverPoint replication technology, both physical and virtual, in support of a VMware-based virtualized data center architecture, delivers an industry-leading ability to consolidate business-critical applications and provide an enterprise-level business continuity solution as compared with today's alternative all-flash array offerings.


Business Case

A well-designed and efficiently orchestrated enterprise-class data center ensures that the organization meets its operational policies and objectives through predictable performance and consistent availability of the business-critical applications supporting the organization's goals. Because managing data layout across the entire infrastructure carries a significant cost, scalability and management are additional, important challenges for enterprise environments; the main goal is avoiding contention between independent workloads competing for shared storage resources.

This document offers a solution design that delivers consistent performance for consolidated production applications without risk of contention from organizational development activities, along with the storage efficiencies and dynamism demanded by modern test and development activities. Together with a demonstration of XtremIO's ability to consolidate multiple concurrent enterprise application workloads onto a single platform without penalty, this solution highlights an innovative data protection scheme based on RecoverPoint's native integration with the XtremIO X2 platform. In this solution, the recovery point objective for protected virtual machines drops to less than sixty seconds, and space-efficient point-in-time (PiT) copies of production databases are available, without penalty, for BCDR and DevOps requirements.

XtremIO X2 brings tremendous value by providing consistent performance at scale by means of always-on inline deduplication, compression, thin provisioning and unique data protection capabilities. Seamless interoperability with VMware vSphere by means of VMware APIs for Array Integration (VAAI), Dell EMC Solutions Integration Service (SIS) and Virtual Storage Integrator's (VSI) ease of management make choosing this best-of-breed all-flash array for server virtualization purposes even more attractive.

XtremIO X2 is a scale-out and scale-up storage system capable of growing in storage capacity, compute resources and bandwidth whenever the environment's storage requirements grow. With the advent of multi-core server systems and the ever-growing number of CPU cores per processor (following Moore's law), we are able to consolidate an increasing number of virtual workloads on a single enterprise-class server. When combined with the XtremIO X2 All-Flash Array, we can consolidate vast numbers of virtualized servers on a single storage array, achieving high consolidation with great performance from both a storage and a computational perspective.

Solution Overview

The solutions described in Figure 1 and Figure 2 represent a two-site virtualized, distributed data center environment. The consolidated virtualized enterprise applications run on the production site. These include Oracle and Microsoft SQL database workloads, as well as additional Data Warehousing profiles. These workloads make up our pseudo-organization's primary production workload. For the purposes of this proposed solution, these workloads are essential to the continued fulfillment of crucial business operational objectives. They should behave consistently as expected, remain undisrupted, and, in the event of a disaster impacting the primary data center, be migrated to and resumed on a secondary site with minimal operational interruption.

We begin by describing the hardware layer of our solution. We then take a broad look at the XtremIO X2 array and the features and benefits it provides to VMware environments. The software layer is discussed later in the document, including configuration details for VMware vSphere, VMware SRM and the Dell EMC plugins for VMware environments, such as VSI, ESA and AppSync.

We follow this with details about our replication solutions, based on DELL EMC RecoverPoint and RP4VMs, which, when paired with XtremIO X2, deliver an industry-leading ability to consolidate business-critical applications and provide an enterprise-level business continuity solution.


Figure 1. Physical Replication Architecture Topology – XtremIO X2 Combined with RecoverPoint and VMware SRM

Figure 2. Virtual Replication Architecture Topology – XtremIO X2 Combined with RecoverPoint for VMs


Table 1. Solution Hardware

| HARDWARE | QUANTITY | CONFIGURATION | NOTES |
| DELL EMC XtremIO X2 | 2 | Two Storage Controllers (SCs), each with two dual-socket Haswell CPUs and 346 GB RAM; DAEs configured with 18 x 400 GB SSDs | XtremIO X2-S, 400 GB SSDs, 18 drives |
| DELL EMC RecoverPoint 5.1 | 4 | Gen 6 hardware | 1 RPA cluster per site, with 2 RPAs per cluster |
| Brocade 6510 SAN switch | 4 | 32 or 16 Gbps FC switches | 2 switches per site, dual FC fabric configuration |
| Mellanox MSX1016 10GbE | 2 | 10 or 1 Gbps Ethernet switches | Infrastructure Ethernet switch |
| PowerEdge FC630 | 16 | Intel Xeon CPU E5-2695 v4 @ 2.10 GHz, 524 GB RAM | 2 for the management cluster and 6 for the workload cluster in each site |

Table 2. Solution Software

| SOFTWARE | QUANTITY | CONFIGURATION |
| vCenter Server Appliance 6.5 Update 1 VM | 2 | 16 vCPU, 32 GB memory, 100 GB VMDK |
| VMware Site Recovery Manager Server 6.6 VM | 2 | 4 vCPU, 16 GB memory, 40 GB VMDK |
| MSSQL Server 2017 VM | 2 | 8 vCPU, 16 GB memory, 100 GB VMDK |
| VSI for VMware vSphere 7.2 VM | 1 | 2 vCPU, 8 GB memory, 80 GB VMDK |
| RecoverPoint for VMs 5.1.1 | 4 | 4 vCPU, 16 GB memory, 40 GB VMDK |
| vRealize Operations Manager 6.6 VM | 1 | 4 vCPU, 16 GB memory, 256 GB VMDK |
| VMware Log Insight 4.5 VM | 1 | 4 vCPU, 8 GB memory, 256 GB VMDK |
| AppSync 3.5 VM | 1 | 4 vCPU, 16 GB memory, 40 GB VMDK |
| vRealize Orchestrator 7.3 | 1 | 2 vCPU, 4 GB memory, 32 GB VMDK |
| vSphere ESXi 6.5 Update 1 | 16 | N/A |
| ESA Plugin for vROps 4.4 | 1 | N/A |
| RecoverPoint Storage Replication Adapter 2.2.1 | 2 | N/A |


Dell EMC XtremIO X2 for VMware Environments

Dell EMC's XtremIO X2 is an enterprise-class scalable all-flash storage array that provides rich data services with high performance. It is designed from the ground up to unlock flash technology's full performance potential by uniquely leveraging the characteristics of SSDs and uses advanced inline data reduction methods to reduce the physical data that must be stored on the disks.

XtremIO X2's storage system uses industry-standard components and proprietary intelligent software to deliver unparalleled levels of performance, achieving consistent low latency at up to millions of IOPS. It comes with a simple, easy-to-use interface for storage administrators and fits a wide variety of use cases for customers in need of a fast and efficient storage system for their data centers, one that requires very little planning before provisioning.

The XtremIO X2 storage system serves many use cases in the IT world, due to its high performance and advanced capabilities. One major use case is virtualized environments and cloud computing. Figure 3 shows XtremIO X2's performance in an intensive live VMware production environment: the array sustains roughly 1.6 million IOPS at latencies mostly below 1 msec, while delivering an impressive data reduction factor of 6.6:1 (2.8:1 from deduplication and 2.4:1 from compression), which lowers the physical footprint of the data.

Figure 3. Intensive VMware Production Environment Workload from the XtremIO X2 Array's Perspective

XtremIO leverages flash to deliver value across multiple dimensions:

• Performance (consistent low-latency and up to millions of IOPS)

• Scalability (using a scale-out and scale-up architecture)

• Storage efficiency (using data reduction techniques such as deduplication, compression and thin-provisioning)

• Data Protection (with a proprietary flash-optimized algorithm named XDP)

• Environment Consolidation (using XtremIO Virtual Copies or VMware's XCOPY)


Figure 4. XtremIO Key Values for Virtualized Environments

XtremIO X2 Overview

XtremIO X2 is the new generation of Dell EMC's All-Flash Array storage system. It adds enhancements and flexibility in several respects to the already capable and high-performing former generation of the array. Features such as scale-up for a more flexible system, Write Boost for a faster and more responsive array, NVRAM for improved data availability, and a new web-based UI for managing the storage array and monitoring its alerts and performance statistics add the extra value and advancement required in the evolving world of computing infrastructure.

The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth and storage resources, and can be clustered with additional X-Bricks to grow in both performance and capacity (scale-out). Each X-Brick can also grow individually in capacity, with an option to expand to up to 72 SSDs per brick (scale-up).

The XtremIO architecture is based on a metadata-centric, content-aware system, which streamlines data operations efficiently without requiring any post-write movement of data for maintenance purposes (data protection, data reduction, and so on are all done inline). Using unique fingerprints of the incoming data, the system lays the data out uniformly across all SSDs in all X-Bricks in the system, and controls access using metadata tables. This results in an extremely balanced system across all X-Bricks in terms of compute power, storage bandwidth and capacity.

Using the same unique fingerprints, XtremIO is equipped with exceptional always-on inline data deduplication, which greatly benefits virtualized environments. Together with its data compression and thin provisioning capabilities (both inline and always-on), it achieves outstanding data reduction rates.

System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters.

With its intelligent architecture, XtremIO provides a storage system that is easy to set up, needs zero tuning by the client, and does not require complex capacity or data protection planning, as the system handles these on its own.


Architecture

An XtremIO X2 Storage System is composed of a set of X-Bricks that form a cluster; the X-Brick is the basic building block of an XtremIO array. There are two types of X2 X-Bricks: X2-S and X2-R. X2-S is for environments whose storage needs are more I/O-intensive than capacity-intensive, as it uses smaller SSDs and less RAM; it is especially effective for environments with high data reduction ratios (a high compression ratio or significant duplicated data), which significantly lower the capacity footprint of the data. X2-R X-Brick clusters are built for capacity-intensive environments, with larger disks, more RAM and a larger expansion potential in future releases. The two X-Brick types cannot be mixed in a single system, so decide in advance which type suits your environment.

Each X-Brick is comprised of:

• Two 1U Storage Controllers (SCs) with:

o Two dual socket Haswell CPUs

o 346GB RAM (for X2-S) or 1TB RAM (for X2-R)

o Two 1/10GbE iSCSI ports

o Two user-interchangeable host ports (either 4/8/16Gb FC or 1/10GbE iSCSI)

o Two 56Gb/s InfiniBand ports

o One 100/1000/10000 Mb/s management port

o One 1Gb/s IPMI port

o Two redundant power supply units (PSUs)

• One 2U Disk Array Enclosure (DAE) containing:

o Up to 72 SSDs of sizes 400GB (for X2-S) or 1.92TB (for X2-R)

o Two redundant SAS interconnect modules

o Two redundant power supply units (PSUs)

Figure 5. An XtremIO X2 X-Brick

The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects.

An XtremIO X2 storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO X2 array using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between Storage Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access (RDMA) protocol for this back-end connectivity, ensuring a highly available, ultra-low-latency network for communication between all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster types, but include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO X2 system is essentially a single shared-memory space spanning all of its Storage Controllers.



The 1Gb/s management port is configured with an IPv4 address. The XMS, the cluster's management software, communicates with the Storage Controllers via this interface, sending storage management requests such as creating an XtremIO X2 Volume or mapping a Volume to an Initiator Group.

The 1Gb/s IPMI port interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the bounds of an X-Brick; it never connects to the IPMI port of a Storage Controller in another X-Brick in the cluster.

Multi-dimensional Scaling

With X2, an XtremIO cluster has both scale-out and scale-up capabilities, enabling a flexible growth capability adapted to the customer's unique workload and needs. Scale-out is implemented by adding X-Bricks to an existing cluster. The addition of an X-Brick to an existing cluster increases its compute power, bandwidth and capacity linearly. Each X-Brick that is added to the cluster brings with it two Storage Controllers, each with its CPU power, RAM and FC/iSCSI ports to service the clients of the environment, together with a DAE with SSDs to increase the capacity provided by the cluster. Adding an X-Brick to scale-out an XtremIO cluster is for environments that grow both in capacity and in performance needs, such as in the case of an increase in the number of active users and the data that they hold, or a database that grows in data and complexity.

An XtremIO cluster can start with any number of X-Bricks that fits the environment's initial needs and can currently grow to up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will allow up to 8 supported X-Bricks for X2-R arrays.

Figure 6. Scale-Out Capabilities – Single to Multiple X2 X-Brick Clusters


Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. Adding SSDs to existing DAEs to scale-up an XtremIO cluster is for environments that currently grow in capacity needs and have no need for extra performance. This occurs, for example, when the same number of users has an increasing amount of data to save, or when an environment grows in both capacity and performance needs, but has only reached its capacity limits with room to grow in performance with its current infrastructure.

Each DAE can hold up to 72 SSDs and is divided into up to 2 groups of SSDs called Data Protection Groups (DPGs). Each DPG can hold a minimum of 18 SSDs and can grow by increments of 6 SSDs up to a maximum of 36 SSDs. In other words, 18, 24, 30 or 36 are the possible numbers of SSDs per DPG. Up to 2 DPGs can occupy a DAE.

SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers to populate their X2-R clusters with 3.84TB sized drives, doubling the physical capacity available in their clusters.

Figure 7. Multi-Dimensional Scaling
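To make the sizing rules concrete, the following is a minimal Python sketch (ours, not a Dell EMC utility) that validates a proposed DPG population and computes the resulting raw capacity per DAE. The drive sizes are the 400 GB (X2-S) and 1.92 TB (X2-R) figures quoted above:

```python
# Minimal sketch (not a Dell EMC utility): validate an XtremIO X2 DPG
# population against the sizing rules described above.

VALID_DPG_SIZES = {18, 24, 30, 36}            # 18 minimum, growing in increments of 6
DRIVE_SIZE_TB = {"X2-S": 0.4, "X2-R": 1.92}   # 400 GB vs. 1.92 TB SSDs

def validate_dae(ssd_counts_per_dpg):
    """A DAE holds up to two DPGs; each must hold 18, 24, 30 or 36 SSDs."""
    if not 1 <= len(ssd_counts_per_dpg) <= 2:
        raise ValueError("a DAE holds one or two Data Protection Groups")
    for count in ssd_counts_per_dpg:
        if count not in VALID_DPG_SIZES:
            raise ValueError(f"{count} SSDs is not a valid DPG size")
    return sum(ssd_counts_per_dpg)

def raw_capacity_tb(brick_type, ssd_counts_per_dpg):
    """Raw (pre-XDP, pre-data-reduction) capacity of one DAE."""
    return validate_dae(ssd_counts_per_dpg) * DRIVE_SIZE_TB[brick_type]

print(raw_capacity_tb("X2-R", [36, 36]))  # fully populated X2-R DAE, ~138 TB raw
print(raw_capacity_tb("X2-S", [18]))      # minimal X2-S configuration, ~7.2 TB raw
```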


XIOS and the I/O Flow

Each Storage Controller within the XtremIO cluster runs a specially customized lightweight Linux-based operating system as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller and runs on top of the Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the system's functional modules, RDMA communication, monitoring, etc.

Figure 8. X-Brick Components

XIOS has a proprietary process-scheduling-and-handling algorithm designed to meet the specific requirements of a content-aware, low-latency and high-performing storage system. It provides efficient scheduling and data access, full exploitation of CPU resources, optimized inter-sub-process communication and minimized dependency between sub-processes that run on different sockets.

The XtremIO Operating System gathers a variety of metadata on incoming data, including each block's fingerprint, its location in the system, mappings and reference counts. This metadata is the fundamental insight used for performing system operations, such as laying out incoming data uniformly, implementing inline data reduction services and accessing the data on read requests. The metadata is also involved in communication with external applications (such as VMware XCOPY and Microsoft ODX) to optimize integration with the storage system.

Regardless of which Storage Controller receives an I/O request from the host, multiple Storage Controllers on multiple X-Bricks cooperate to process the request. The data layout in the XtremIO X2 system ensures that all components share the load and participate evenly in processing I/O operations.

An important capability of XIOS is data reduction, achieved through inline data deduplication and compression. The two complement each other: deduplication removes redundancies, whereas compression compresses the already-deduplicated data before writing it to the flash media. XtremIO is also an always-on thin-provisioned storage system, realizing further storage savings by never writing a block of zeros to the disks.

XtremIO integrates with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI connectivity to service hosts' I/O requests.


XtremIO Write I/O Flow

In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage Controllers and is broken into data blocks. For every data block, the array fingerprints the data with a unique identifier and stores it in the cluster's mapping table. The mapping table maps the host's Logical Block Addresses (LBAs) to the blocks' fingerprints, and the blocks' fingerprints to their physical locations in the array (the DAE, SSD and offset at which each block is located). The fingerprint of a block serves two purposes: (1) to determine whether the block is a duplicate of a block that already exists in the array and (2) to distribute blocks uniformly across the cluster. The array divides the list of potential fingerprints among the Storage Controllers, giving each Storage Controller a range of fingerprints to manage. The mathematical process that calculates the fingerprints yields a uniform distribution of fingerprint values, so fingerprints and blocks are spread evenly across all Storage Controllers in the cluster.

A write operation works as follows:

1. A new write request reaches the cluster.

2. The new write is broken into data blocks.

3. For each data block:

   a. A fingerprint is calculated for the block.

   b. An LBA-to-fingerprint mapping is created for this write request.

   c. The fingerprint is checked to see if it already exists in the array.

      • If it exists, the reference count for this fingerprint is incremented by one.

      • If it does not exist:

        i. A location is chosen on the array where the block is written (distributed uniformly across the array according to fingerprint value).

        ii. A fingerprint-to-physical-location mapping is created.

        iii. The data is compressed.

        iv. The data is written.

        v. The reference count for the fingerprint is set to one.
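To illustrate the flow, the following Python sketch models the two mapping tables and the reference counting described above. It is a conceptual model only, not the XIOS implementation: SHA-256 stands in for the proprietary fingerprint function, and a hash-prefix routing function approximates the range-partitioning of fingerprints among Storage Controllers.

```python
import hashlib

# Conceptual sketch of the write path described above -- not the actual
# XIOS implementation. Two metadata tables drive everything:
#   lba_to_fp : host LBA -> block fingerprint
#   fp_table  : fingerprint -> physical location and reference count
lba_to_fp = {}
fp_table = {}
next_offset = 0  # stand-in for "a location chosen on the array"

def controller_for(fp, n_controllers=4):
    """Fingerprints are range-partitioned among Storage Controllers; a
    uniform fingerprint function makes the split even by construction."""
    return int(fp[:4], 16) % n_controllers

def write_block(lba, data):
    global next_offset
    # Assumption: SHA-256 is used only for illustration; the array's real
    # fingerprint function is proprietary.
    fp = hashlib.sha256(data).hexdigest()
    lba_to_fp[lba] = fp                        # step 3b
    if fp in fp_table:                         # step 3c: duplicate found
        fp_table[fp]["refs"] += 1              # metadata-only update
        return "deduplicated"
    fp_table[fp] = {"sc": controller_for(fp),  # uniform placement by fingerprint
                    "offset": next_offset,
                    "refs": 1,
                    "data": data}              # inline compression would happen here
    next_offset += 1
    return "written"

print(write_block(0, b"block A"))   # 'written'
print(write_block(8, b"block A"))   # 'deduplicated' -- no flash write occurs
```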


Deduplicated writes are, of course, much faster than original writes. Once the array identifies a write as a duplicate, it updates the LBA-to-fingerprint mapping and the reference count for that fingerprint; no additional data is written to the array, and the operation completes quickly, an extra benefit of inline deduplication. Figure 9 shows an example of an incoming data stream containing duplicate blocks with identical fingerprints.

Figure 9. Incoming Data Stream Example with Duplicate Blocks

As mentioned, fingerprints also determine where each block is written in the array. Figure 10 shows the incoming stream, after duplicates have been removed, being written to the array. The blocks are routed to their appointed Storage Controllers according to their fingerprint values, ensuring a uniform distribution of the data across the cluster. The blocks are transferred to their destinations using Remote Direct Memory Access (RDMA) over the low-latency InfiniBand network.

Figure 10. Incoming Deduplicated Data Stream Written to the Storage Controllers

The actual write of the data blocks to the SSDs is asynchronous. At the time of the application write, the system places the data blocks in an in-memory write buffer and protects them using journaling to local and remote NVRAMs. Once the data is written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the host. This guarantees a quick response to the host, ensures low latency of I/O traffic and preserves the data in case of system failure (power-related or otherwise). When enough blocks are collected in the buffer to fill a full stripe, the system writes them to the SSDs on the DAE. Figure 11 illustrates the phase of writing the data to the DAEs after a full stripe of data blocks has been collected in each Storage Controller.



Figure 11. Full Stripe of Blocks Written to the DAEs
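The acknowledge-then-destage behavior can be sketched in a few lines of Python. Everything here is illustrative (the stripe width, the journal structure and the function names are ours), but it captures the key property: the host acknowledgment depends only on NVRAM journaling, while SSD writes happen later, one full stripe at a time.

```python
# Conceptual model of the acknowledge-then-destage behavior described
# above; the stripe width and journal layout are illustrative only.

STRIPE_WIDTH = 4  # illustrative; real full-stripe size depends on the DPG

local_nvram, remote_nvram = [], []
write_buffer, ssd = [], []

def host_write(block):
    local_nvram.append(block)    # journal locally ...
    remote_nvram.append(block)   # ... and replicate to a remote NVRAM
    write_buffer.append(block)   # block now protected: safe to acknowledge
    ack = "ACK"                  # host latency is independent of the SSDs
    if len(write_buffer) == STRIPE_WIDTH:
        ssd.append(tuple(write_buffer))  # destage one full stripe (parity omitted)
        write_buffer.clear()
    return ack

for i in range(5):
    host_write(f"block-{i}")
print(ssd)            # [('block-0', 'block-1', 'block-2', 'block-3')]
print(write_buffer)   # ['block-4'] journal-protected, awaiting a full stripe
```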

XtremIO Read I/O Flow

In a read operation, the system first performs a look-up of the logical address in the LBA-to-fingerprint mapping. The found fingerprint is then located in the fingerprint-to-physical mapping and the data is retrieved from the right physical location. In the same fashion as write operations, the read load is also evenly shared across the cluster, blocks are evenly distributed and all Volumes are accessible across all X-Bricks. If the requested block size is larger than the data block size, the system performs parallel data block reads across the cluster and assembles them into bigger blocks before returning them to the application. A compressed data block is decompressed before it is delivered to the host.

XtremIO has a memory-based read cache in each Storage Controller. The read cache is organized by content fingerprint. Blocks whose contents are more likely to be read are placed in the read cache for faster retrieval.

A read operation works as follows:

1. A new read request reaches the cluster.

2. The read request is analyzed to determine the LBAs for all data blocks and a buffer is created to hold the data.

3. For each LBA:

   a. The LBA-to-fingerprint mapping is checked to find the fingerprint of each data block to be read.

   b. The fingerprint-to-physical-location mapping is checked to find the physical location of each data block.

   c. The requested data block is read from its physical location (the read cache or a location on disk) and transmitted, via RDMA over InfiniBand, to the buffer created in step 2 on the Storage Controller that is processing the request.

4. The system assembles the requested read from all data blocks transmitted to the buffer and sends it back to the host.
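The read path can be sketched against the same two tables used in the write-path sketch. Here they are pre-populated so the example runs standalone, and a plain dictionary stands in for the content-aware Unified Data Cache:

```python
# Conceptual read-path sketch. The two mapping tables are pre-populated so
# the example runs standalone; on the array they are built by the write
# path (see the write-flow sketch earlier). All names are illustrative.
lba_to_fp = {0: "fp-a", 8: "fp-a", 16: "fp-b"}   # LBA -> fingerprint
fp_table = {"fp-a": {"data": b"AAAA"},            # fingerprint -> physical data
            "fp-b": {"data": b"BBBB"}}
read_cache = {}   # fingerprint-indexed, a simplified Unified Data Cache

def read_blocks(lbas):
    out = []                             # step 2: buffer to hold the data
    for lba in lbas:
        fp = lba_to_fp[lba]              # step 3a: LBA -> fingerprint
        data = read_cache.get(fp)        # cache is organized by fingerprint
        if data is None:
            data = fp_table[fp]["data"]  # steps 3b/3c: fetch from physical location
            read_cache[fp] = data        # (decompression would happen here)
        out.append(data)
    return b"".join(out)                 # step 4: assemble and return to host

print(read_blocks([0, 8, 16]))   # b'AAAAAAAABBBB' -- two LBAs share one block
```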



System Features

The XtremIO X2 Storage Array offers a wide range of built-in features that require no special license. The architecture and implementation of these features are unique to XtremIO and are designed around the capabilities and limitations of flash media. We list some key features of the system below.

Inline Data Reduction

XtremIO's unique Inline Data Reduction is achieved by two mechanisms: Inline Data Deduplication and Inline Data Compression.

Data Deduplication

Inline Data Deduplication is the removal of duplicate I/O blocks from a stream of data prior to it being written to the flash media. XtremIO inline deduplication is always on, meaning no configuration is needed for this important feature. The deduplication is at a global level, meaning no duplicate blocks are written over the entire array. Being an inline and global process, no resource-consuming background processes or additional reads and writes (which are mainly associated with post-processing deduplication) are necessary for the feature's activity, which increases SSD endurance and eliminates performance degradation.

As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow on page 14). The fingerprints are also used for uniform distribution of data blocks across the array. This provides inherent load balancing for performance and enhances flash wear-level efficiency, since the data never needs to be rewritten or rebalanced.

XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The system's unique content-aware storage architecture enables achieving a substantially larger cache size with a small DRAM allocation. Therefore, XtremIO is the ideal solution for difficult data access patterns, such as "boot storms" that are common in VSI environments.

XtremIO has excellent data deduplication ratios, especially for virtualized environments. SSD usage is smarter, flash longevity is maximized, the logical storage capacity is multiplied and total cost of ownership is reduced.

Figure 12 shows the CPU utilization of our Storage Controllers during a VMware production workload. When new blocks are written to the system, the hash calculation is distributed across all Storage Controllers. The excellent synergy across the X2 cluster is visible here: all of the Active-Active Storage Controllers' CPUs share the load, and CPU utilization is virtually equal among them for the entire workload.

Figure 12. XtremIO X2 CPU Utilization


Data Compression

Inline data compression is the compression of data prior to writing the data to the flash media. XtremIO automatically compresses data after all duplications are removed, ensuring that the compression is performed only for unique data blocks. The compression is performed in real-time and not as a post-processing operation. As a result, compression does not overuse the SSDs or impact performance. Compressibility rates depend on the type of data written.

Data compression complements data deduplication in many cases and saves storage capacity by storing only unique data blocks in the most efficient manner. Compression is always inline and is never performed as a post-processing activity; XtremIO therefore writes the data only once, which increases the overall endurance of the flash array's SSDs. In a VSI environment, deduplication dramatically reduces the capacity required for the virtual servers, and compression then reduces the specific user data. As a result, a single X-Brick can host an increased number of virtual servers, and less physical capacity is required to store the data, increasing the storage array's efficiency and dramatically reducing the $/GB cost of storage, even compared to hybrid storage systems.

The benefits and capacity savings of the deduplication-compression combination are demonstrated in Figure 13.

Figure 13. Data Deduplication and Data Compression Demonstrated

In the above example, the twelve data blocks written by the host are first deduplicated to four data blocks, demonstrating a 3:1 data deduplication ratio. Following the data compression process, the four data blocks are then each compressed, by a ratio of 2:1, resulting in a total data reduction ratio of 6:1.
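The arithmetic composes multiplicatively, which a few lines of Python make explicit:

```python
# The overall data reduction ratio is the product of the deduplication and
# compression ratios, as in the Figure 13 example.

blocks_written_by_host = 12
unique_blocks = 4                     # remaining after deduplication
dedup_ratio = blocks_written_by_host / unique_blocks   # 3.0
compression_ratio = 2.0               # each unique block halves in size
total_reduction = dedup_ratio * compression_ratio       # 6.0

physical_blocks = unique_blocks / compression_ratio     # 2 blocks' worth of flash
print(f"{dedup_ratio:.0f}:1 dedup x {compression_ratio:.0f}:1 compression "
      f"= {total_reduction:.0f}:1 total reduction")
```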

Thin Provisioning

XtremIO storage is natively thin provisioned, using a small internal block size. All Volumes in the system are thin provisioned, meaning the system only consumes capacity as needed. No storage space is ever pre-allocated before writing.

XtremIO's content-aware architecture permits blocks to be stored at any location in the system (metadata is used to refer to their location), and data is written only when unique blocks are received. Therefore, as opposed to disk-oriented architectures, no space creeping or garbage collection is necessary on XtremIO, Volume fragmentation does not occur in the array, and no defragmentation utilities are needed.

This XtremIO feature enables consistent performance and data management across the entire life cycle of a Volume, regardless of the system capacity utilization or the write patterns of clients.



This characteristic allows manual, as well as frequent automatic, reclamation of unused space directly from VMFS datastores and virtual machines, which has the following benefits:

• The allocated disks can be used optimally and the actual space reports are more accurate.

• More efficient snapshots (called XVCs – XtremIO Virtual Copies), since blocks that are no longer needed are not protected by additional snapshots.

Integrated Copy Data Management

XtremIO pioneered the concept of integrated Copy Data Management (iCDM) – the ability to consolidate both primary data and its associated copies on the same scale-out all-flash array for unprecedented agility and efficiency.

XtremIO is one of a kind in its capabilities to consolidate multiple workloads and entire business processes safely and efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO provides consolidation, supporting on-demand copy operations at scale while maintaining delivery of all performance SLAs in a consistent and predictable way.

Consolidation of primary data and its copies in the same array has numerous benefits:

• It can make development and testing activities up to 50% faster, creating copies of production code quickly for development and testing purposes, then refreshing the output back into production for the full cycle of code upgrades in the same array. This dramatically reduces complexity and infrastructure needs, as well as development risks, and increases the quality of the product.

• Production data can be extracted and pushed to all downstream analytics applications on-demand as a simple in-memory operation. Copies of the data are high performance and can get the same SLA as production copies without compromising production SLAs. XtremIO offers this on-demand as both self-service and automated workflows for both application and infrastructure teams.

• Operations such as patches, upgrades and tuning tests can be quickly performed using copies of production data. Diagnosing problems of applications and databases can be done using these copies, and applying the changes back to production can be done by refreshing copies back. The same goes for testing new technologies and combining them in production environments.

• iCDM can also be used for data protection purposes, as it enables creating many copies at low point-in-time intervals for recovery. Application integration and orchestration policies can be set to auto-manage data protection, using different SLAs.

XtremIO Virtual Copies

XtremIO uses its own implementation of snapshots for all iCDM purposes, called XtremIO Virtual Copies (XVCs). XVCs are created by capturing the state of data in Volumes at a particular point in time and allowing users to access that data when needed, no matter the state of the source Volume (even deletion). They allow any access type. XVCs can be taken either from a source Volume or from another Virtual Copy.

XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system, optimized for SSDs, with a unique metadata tree structure that directs I/O to the right timestamp of the data. This allows efficient copy creation that can sustain high performance, while maximizing the media endurance.


Figure 14. A Metadata Tree Structure Example of XVCs

When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the system, making the operation very quick. This operation does not have any impact on the system and does not consume any capacity at the point of creation, unlike traditional snapshots, which may need to reserve space or copy the metadata for each snapshot. Virtual Copies capacity consumption occurs only when changes are made to any copy of the data. Then, the system updates the metadata of the changed Volume to reflect the new write, and stores its blocks in the system using the standard write flow process.

The system supports the creation of Virtual Copies on a single Volume, as well as on a set of Volumes. All Virtual Copies of the Volumes in a set are cross-consistent and capture the exact same point in time for all of them. This can be done manually by selecting a set of Volumes for copying, or by placing Volumes in a Consistency Group and making copies of that Consistency Group.

Virtual Copy deletions are lightweight and proportional only to the amount of changed blocks between the entities. The system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the number of instances of that block in the system. If a block is referenced from some copy of the data, it will not be deleted. Any block whose counter value reaches zero is marked as deleted and will be overwritten when new unique data enters the system.
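The reference-counting rule for deletions can be sketched as follows (an illustrative flat table; the real implementation is the content-aware metadata tree described above):

```python
# Illustrative reference-counting model for XVC block ownership; the real
# implementation is a content-aware metadata tree, not a flat dict.

ref_count = {"blk-1": 3, "blk-2": 1}   # block -> number of copies referencing it

def delete_copy(blocks_in_copy):
    """Deleting a Virtual Copy only decrements counters; a block is freed
    (eligible for overwrite) when no copy references it any longer."""
    freed = []
    for blk in blocks_in_copy:
        ref_count[blk] -= 1
        if ref_count[blk] == 0:
            freed.append(blk)          # marked deleted; space reclaimed lazily
    return freed

print(delete_copy(["blk-1", "blk-2"]))  # ['blk-2'] -- blk-1 still has 2 references
```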

With XVCs, XtremIO's iCDM offers the following tools and workflows to provide the consolidation capabilities:

• Consistency Groups (CG) – Grouping of Volumes to allow Virtual Copies to be taken on a group of Volumes as a single entity.

• Snapshot Sets – A group of Virtual Copies of Volumes taken together using CGs or a group of manually chosen Volumes.

• Protection Copies – Immutable read-only copies created for data protection and recovery purposes.

• Protection Scheduler – Used for local protection of a Volume or a CG. It can be defined using intervals of seconds/minutes/hours or can be set using a specific time of day or week. It has a retention policy based on the number of copies wanted or the permitted age of the oldest XVC.

• Restore from Protection – Restore a production Volume or CG from one of its descendant Snapshot Sets.

• Repurposing Copies – Virtual Copies configured with changing access types (read-write / read-only / no-access) for alternating purposes.

• Refresh a Repurposing Copy – Refresh a Virtual Copy of a Volume or a CG from the parent object or other related copies with relevant updated data. It does not require Volume provisioning changes for the refresh to take effect, but only host-side logical Volume management operations to discover the changes.


XtremIO Data Protection

XtremIO Data Protection (XDP) provides "self-healing" double-parity data protection with very high efficiency. It requires very little capacity overhead and metadata space, and does not require dedicated spare drives for rebuilds. Instead, XDP leverages the "hot space" concept, in which any free space available in the array can be utilized for rebuilding failed drives. The system always reserves sufficient distributed capacity for performing at least a single drive rebuild. In the rare case of a double SSD failure, the second drive is rebuilt only if there is enough free space for it, or once one of the failed SSDs is replaced.

The XDP algorithm provides:

• N+2 drives protection

• Capacity overhead of only 5.5%-11% (depending on the number of disks in the protection group)

• 60% more write-efficient than RAID1

• Superior flash endurance compared to any RAID algorithm, due to the smaller number of writes and the even distribution of data

• Automatic rebuilds that are faster than traditional RAID algorithms

As shown in Figure 15, XDP uses a variation of N+2 row and diagonal parity that provides protection from two simultaneous SSD errors. An X-Brick DAE may contain up to 72 SSDs organized in two Data Protection Groups (DPGs). XDP is managed independently on the DPG level. A DPG of 36 SSDs will result in capacity overhead of only 5.5% for its data protection needs.

Figure 15. N+2 Row and Diagonal Parity

Data at Rest Encryption

Data at Rest Encryption (DARE) provides a solution to securing critical data even when the media is removed from the array, for customers in need of such security. XtremIO arrays utilize a high-performance inline encryption technique to ensure that all data stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in the event of theft or loss during transport, and makes it possible to return/replace failed components containing sensitive data. DARE has been established as a mandatory requirement in several industries, such as health care, banking, and government institutions.

At the heart of XtremIO's DARE solution lies the Self-Encrypting Drive (SED) technology. An SED has dedicated hardware that encrypts and decrypts data as it is written to or read from the drive. Offloading the encryption task to the SSDs enables XtremIO to maintain the same software architecture whether encryption is enabled or disabled on the array. All of XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin Provisioning, XtremIO Virtual Copies, etc.) are available on encrypted and non-encrypted clusters alike, and performance is not impacted when encryption is used.



A unique Data Encryption Key (DEK) is created during the drive manufacturing process and does not leave the drive at any time. The DEK can be erased or changed, rendering its current data unreadable forever. To ensure that only authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data.

Figure 16. Data at Rest Encryption in XtremIO

Write Boost

In the new X2 storage array, the write flow algorithm was improved significantly to increase array performance, keeping pace with the rise in compute power and disk speeds and taking into account common applications' I/O patterns and block sizes. As mentioned in the discussion of the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to disk. The commit is sent after the changes are written to local and remote NVRAMs for protection; the blocks are written to disk only later, at a time that best optimizes the system's activity.

In addition to the shortened write-to-commit procedure, the new algorithm addresses an issue relevant to many applications and clients: a high percentage of small I/Os creating load on the storage system and increasing latency, especially for larger I/O blocks. Analysis of customer applications and I/O patterns shows that many I/Os from common applications arrive in small blocks, under 16KB, creating high load on the storage array. Figure 17 shows the block size histogram from the entire XtremIO install base; the percentage of blocks smaller than 16KB is clearly evident. The new algorithm addresses this issue by aggregating small writes into larger blocks in the array before writing them to disk, making them less demanding on the system, which can process larger I/Os faster. Test results for the improved algorithm are impressive: latency improves by around 400% in several cases, allowing XtremIO X2 to address application requirements of 0.5 ms or lower latency.


Figure 17. XtremIO Install Base Block Size Histogram

VMware APIs for Array Integration (VAAI)

VAAI was first introduced as VMware's improvement to host-based VM cloning. It offloads the work of cloning a VM to the storage array, making cloning much more efficient. Instead of reading all blocks of a VM from the array and writing them back to create the clone, the host lets the array perform the copy internally. This utilizes the array's features and saves the host and network resources that are no longer involved in the actual copying of data. Offloading the operation to the storage array is backed by the X-copy (extended copy) SCSI command, which is used when cloning large amounts of complex data.

XtremIO is fully VAAI compliant, allowing the array to communicate directly with vSphere and provide accelerated storage vMotion, VM provisioning and thin provisioning functionality. In addition, XtremIO's VAAI integration improves X-copy efficiency even further by making the whole operation metadata driven. Due to its inline data reduction features and in-memory metadata, no actual data blocks are copied during an X-copy command and the system only creates new pointers to the existing data. This is all done inside the Storage Controllers' memory. Therefore, the operation saves host and network resources and does not consume storage resources, leaving no impact on the system's performance, as opposed to other implementations of VAAI and the X-copy command.


Figure 18 illustrates the X-copy operation when performed against an XtremIO storage array and shows the efficiency in metadata-based cloning.

Figure 18. VAAI X-Copy with XtremIO

The XtremIO features for VAAI support include the following (a verification sketch follows the list):

• Zero Blocks / Write Same – Used for zeroing-out disk regions and provides accelerated Volume formatting.

• Clone Blocks / Full Copy / X-Copy – Used for copying or migrating data within the same physical array, an almost instantaneous operation on XtremIO due to its metadata-driven operations.

• Record Based Locking / Atomic Test & Set (ATS) – Used during creation and locking of files on VMFS Volumes, such as during powering down and powering up of VMs.

• Block Delete / Unmap / Trim – Used for reclamation of unused space using the SCSI unmap feature.
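
To confirm that a host actually uses these primitives against a given XtremIO device, the standard esxcli status commands can be used; a minimal sketch (the naa identifier is a placeholder):

# Show VAAI primitive support (ATS, Clone, Zero, Delete) for a device
esxcli storage core device vaai status get --device=naa.514f0c5xxxxxxxxx

# Confirm hardware-accelerated copy (X-copy) is enabled on the host
esxcli system settings advanced list --option=/DataMover/HardwareAcceleratedMove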



Figure 19 shows the exceptional performance during multiple VMware cloning operations. X2 handles storage bandwidth as high as ~160GB/s with over 220K IOPS (read+write), resulting in quick and efficient production delivery.

Figure 19. Multiple VMware Cloning Operations (X-Copy) from XtremIO X2 Perspective

Other features of XtremIO X2 (some of which are described in later sections) include:

• Even Data Distribution (uniformity)

• High Availability (no single points of failures)

• Non-disruptive Upgrade and Expansion

• RecoverPoint Integration (for replications to local or remote arrays)

XtremIO Management Server

The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters). It is preinstalled with the CLI, GUI and RESTful API interfaces, and can be installed on a dedicated physical server or a VMware virtual machine.

The XMS manages the cluster through the management ports on both Storage Controllers of the first X-Brick in the cluster, using a standard TCP/IP connection to communicate with them. It is not part of the XtremIO data path, and can therefore be disconnected from an XtremIO cluster without jeopardizing I/O. A failure of the XMS affects only monitoring and configuration activities, such as creating and attaching Volumes. A virtual XMS is naturally less vulnerable to such failures.

The GUI is based on a new Web User Interface (WebUI), which is accessible via any browser, and provides easy-to-use tools for performing most system operations (certain management operations must be performed using the CLI). Some of the useful features of the new WebUI are described in the following sections.

Dashboard

The Dashboard window presents a main overview of the cluster. It has three panels:

• Health – the main overview of the system's health status, alerts, etc.

• Performance (shown in Figure 20) – the main overview of the system's overall performance and top used Volumes and Initiator Groups

• Capacity (shown in Figure 21) – the main overview of the system's physical capacity and data savings


Figure 20. XtremIO WebUI – Dashboard – Performance Panel

Figure 21. XtremIO WebUI – Dashboard – Capacity Panel

The main Navigation menu bar is located on the left side of the UI. Users can select one of the navigation menu options pertaining to XtremIO's management actions. The main menus contain the Dashboard, Notifications, Configuration, Reports, Hardware and Inventory.


Notifications

In the Notifications menu, we can navigate to the Events window (shown in Figure 22) and the Alerts window, showing major and minor issues related to the cluster's health and operations.

Figure 22. XtremIO WebUI – Notifications – Events Window


Configuration

The Configuration window displays the cluster's logical components: Volumes (shown in Figure 23), Consistency Groups, Snapshot Sets, Initiator Groups, Initiators, and Protection Schedulers. Through this window, we can create and modify these entities, using the action panel on the top right side.

Figure 23. XtremIO WebUI – Configuration


Reports

In the Reports menu, we can navigate to different windows that show graphs and data for different aspects of the system's activities, mainly related to performance and resource utilization. The available views include Overview, Performance, Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage and User-defined reports. Reports can be viewed at different resolutions of time and components: specific entities can be selected using the "Select Entity" option that appears at the top of the Reports menus (shown in Figure 24), and predefined or custom days and times can be selected for review (shown in Figure 25).

Figure 24. XtremIO WebUI – Reports – Selecting Specific Entities to View

Figure 25. XtremIO WebUI – Reports – Selecting Specific Times to View


The Overview window shows basic reports on the system, including performance, weekly I/O patterns and storage capacity information. The Performance window shows extensive performance reports that mainly include Bandwidth, IOPS and Latency information. The Blocks window shows block distribution and statistics of I/Os going through the system. The Latency window (shown in Figure 26) shows Latency reports, including latency as a function of block sizes and IOPS metrics. The CPU Utilization window shows CPU utilization of all Storage Controllers in the system.

Figure 26. XtremIO WebUI – Reports – Latency Window

The Capacity window (shown in Figure 27) shows capacity statistics and the change in storage capacity over time. The Savings window shows data reduction statistics and their change over time. The Endurance window shows SSD endurance status and statistics. The SSD Balance window shows how evenly data is balanced across the SSDs and the variance between them. The Usage window shows bandwidth and IOPS usage, both overall and divided into reads and writes. The User-defined window allows users to define their own reports to view.


Figure 27. XtremIO WebUI – Reports – Capacity Window

Hardware

In the Hardware menu, we can view the cluster and its X-Bricks with visual illustrations. When viewing the FRONT panel, we can select and highlight any component of the X-Brick and view information about it in the Information panel on the right. Figure 28 shows extended information on Storage Controller 1 in X-Brick 1, but information can also be viewed for more granular components such as local disks and status LEDs. Clicking the "OPEN DAE" button opens a visual illustration of the X-Brick's DAE and its SSDs, with additional information on each SSD and Row Controller.

Figure 28. XtremIO WebUI – Hardware – Front Panel

In the BACK panel, we can view an illustration of the back of the X-Brick and see every physical connection to the X-Brick and inside of it, including FC connections, Power, iSCSI, SAS, Management, IPMI and InfiniBand, filtered by the "Show Connections" list at the top right. An example of this view is seen in Figure 29.


Figure 29. XtremIO WebUI – Hardware – Back Panel – Show Connections

Inventory

In the Inventory menu, we can see all components of our environment with information about them, including: XMS, Clusters, X-Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups, SSDs, DAEs, DAE Controllers, DAE PSUs, DAE Row Controllers, InfiniBand Switches and NVRAMs.

As mentioned earlier, other interfaces are available to monitor and manage an XtremIO cluster through the XMS server. The system's Command Line Interface (CLI) provides all the functionality of the GUI, as well as additional functionality. A RESTful API is another pre-installed interface that allows clusters to be managed with HTTP-based commands. A PowerShell API module is also available for administering XtremIO clusters from the Windows PowerShell console.
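
As an illustration, a minimal sketch of querying the XMS through the RESTful API (the XMS address and credentials are placeholders, and the v2 endpoint path assumes a recent XMS version):

# List the clusters managed by this XMS
curl -k -u admin:password https://xms.example.com/api/json/v2/types/clusters

# Retrieve the properties of a specific Volume by name
curl -k -u admin:password "https://xms.example.com/api/json/v2/types/volumes?name=Vol1"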

XtremIO X2 Space Management and Reclamation in vSphere Environments

VMFS file systems are managed by the ESXi hosts. Because of this, block storage arrays have no visibility inside a VMFS Volume: when data is deleted by vSphere, the array is unaware of it, and the space remains allocated on the array. On an XtremIO storage array, all LUNs are thin provisioned, and reclaimed space can be immediately allocated to another device/application or simply returned to the pool of available storage. Space consumed by files that have been deleted or moved is referred to as "dead space".

Reclaiming the dead space from an XtremIO X2 storage array frequently has the following benefits:

• The allocated disks can be used optimally and the actual space reports are more accurate.

• More space is available for use of the virtual environment.

• More efficient replication when using RecoverPoint since it will not replicate blocks that are no longer needed.

The feature used to reclaim this space is called space reclamation, and it relies on the SCSI unmap command. Unmap can be issued to underlying thin-provisioned devices to inform the array that certain blocks are no longer needed by the host and can be "reclaimed". The array can then return those blocks to the pool of free storage.

A VMFS 6 datastore can send the space reclamation command automatically. With a VMFS5 datastore, space reclamation can be done manually via an esxcli command or via the VSI plugin, which is detailed later in this document.

Storage space inside the VMFS datastore can be freed by deleting or migrating a VM, consolidating an XVC and so on. Inside the virtual machine, storage space is freed when files are deleted on a thin virtual disk. These operations leave blocks of unused space on the storage array. However, when the array is not aware that the data was deleted from the blocks, the blocks remain allocated by the array until the datastore releases them. VMFS uses the SCSI unmap command to indicate to the array that the storage blocks contain deleted data, so that the array can deallocate these blocks.


Figure 30. Unmap Process

Dead space can be reclaimed using one of the following options:

• Space Reclamation Requests from VMFS Datastores - Deleting or removing files from a VMFS datastore frees space within the file system. This free space is mapped to a storage device until the file system releases or unmaps it. ESXi supports reclamation of free space, which is also called the unmap operation.

• Space Reclamation Requests from Guest Operating Systems - ESXi supports the unmap commands issued directly from a guest operating system to reclaim storage space. The level of support and requirements depend on the type of datastore where your virtual machine resides.

VMFS Datastores Reclamation

Asynchronous Reclamation of Free Space on VMFS 6 Datastore

On VMFS 6 datastores, ESXi supports the automatic asynchronous reclamation of free space. VMFS 6 can run the unmap command to release free storage space in the background on thin-provisioned storage arrays that support unmap operations.

Asynchronous unmap processing has several advantages:

• Unmap requests are sent at a constant rate, which helps to avoid any instant load on the backing array.

• Freed regions are batched and unmapped together.

• Unmap processing and truncate I/O paths are disconnected, so I/O performance is not impacted.

Space Reclamation Granularity

Granularity defines the minimum size of a released space sector that an underlying storage can reclaim. Storage cannot reclaim sectors that are smaller in size than the specified granularity.

For VMFS 6, reclamation granularity equals the block size. When you specify the block size as 1 MB, the granularity is also 1 MB. Storage sectors smaller than 1 MB are not reclaimed.

Automatic unmap is an asynchronous task; reclamation does not occur immediately and typically takes 12 to 24 hours to complete. Each ESXi 6.5 host has an unmap "crawler", and the crawlers work in tandem to reclaim space on all VMFS 6 Volumes they have access to.
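
The reclamation behavior can be inspected or tuned per datastore; a minimal sketch using esxcli on ESXi 6.5 (the datastore label is a placeholder):

# Show the current space reclamation configuration of a VMFS 6 datastore
esxcli storage vmfs reclaim config get --volume-label=Datastore01

# Set the reclamation priority (low is the default; none disables automatic unmap)
esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low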


Figure 31. Space Reclamation Priority

Manual Reclamation of Free Space on VMFS5 Datastore

VMFS5 and earlier file systems do not unmap free space automatically. We recommend using the esxcli storage vmfs unmap command to reclaim space manually with the parameter --reclaim-unit=20000, indicating the number of VMFS blocks to unmap per iteration.

Figure 32. Esxcli Command for Manual Space Reclamation
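
A representative invocation of the manual reclamation command (the datastore label is a placeholder):

# Reclaim dead space on a VMFS5 datastore, 20,000 VMFS blocks per iteration
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=20000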

Using the space reclamation feature in VSI, we can reclaim unused storage on datastores, hosts, clusters, folders and storage folders on XtremIO storage arrays. It allows us to schedule space reclamation on a daily basis, or run it once, for a specific datastore or for all datastores under the same datastore cluster.

Figure 33. Setting Space Reclamation Scheduler via VSI Plugin

Figure 34 shows the logical space in use before and after space reclamation.


Figure 34. Logical Space in Use Before and After Space Reclamation

In-Guest Space Reclamation for Virtual Machines

Space Reclamation for VMFS 6 Virtual Machines

Inside a virtual machine, storage space is freed when, for example, you delete files on a thin virtual disk. The guest operating system notifies VMFS about freed space by sending the unmap command. The unmap command sent from the guest operating system releases space within the VMFS datastore. The command then proceeds to the array, so that the array can reclaim the freed blocks of space.

VMFS 6 generally supports automatic space reclamation requests generated from guest operating systems, and passes these requests to the array. Many guest operating systems can send the unmap command and do not require any additional configuration. Guest operating systems that do not support automatic unmap might require user intervention.

Generally, guest operating systems send the unmap commands based on the unmap granularity they advertise. VMFS 6 processes unmap requests from the guest OS only when the space to reclaim equals 1 MB or is a multiple of 1 MB. If the space is less than 1 MB or is not aligned to 1 MB, the unmap requests are not processed.

Space Reclamation for VMFS5 Virtual Machines

Typically, the unmap command generated from the guest operating system on VMFS5 cannot be passed directly to the array. You must run the esxcli storage vmfs unmap command to trigger unmaps on the array.

However, for a limited number of guest operating systems, VMFS5 supports the automatic space reclamation requests.

Space Reclamation Prerequisites

To send the unmap requests from the guest operating system to the array, the virtual machine must meet the following prerequisites:

• The virtual disk must be thin-provisioned.

• Virtual machine hardware must be of version 11 (ESXi 6.0) or later.

• The advanced setting EnableBlockDelete must be set to 1 (a sketch of setting this via esxcli follows this list).

• The guest operating system must be able to identify the virtual disk as thin.
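
A minimal sketch of enabling the EnableBlockDelete setting from the ESXi command line (run on each host):

# Enable in-guest unmap pass-through on the ESXi host
esxcli system settings advanced set --option=/VMFS3/EnableBlockDelete --int-value=1

# Verify the current value
esxcli system settings advanced list --option=/VMFS3/EnableBlockDelete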

ESXi 6.5 expands in-guest unmap support to additional guest types; with ESXi 6.0, in-guest unmap is supported only for Windows Server 2012 R2 and later. ESXi 6.5 introduces support for Linux operating systems. The underlying reason is that ESXi 6.0 and earlier only exposed SCSI Primary Commands version 2 (SPC-2) to guests. Windows issues unmap under SPC-2 and could therefore take advantage of this feature set; Linux requires a later SPC level and could not. In ESXi 6.5, VMware enhanced its virtual SCSI support up to SPC-4, which allows Linux-based guests to issue unmap commands that they could not issue before.


In-Guest Unmap Alignment Requirements

VMware ESXi requires that any unmap request sent down by a guest be aligned to 1 MB. For a variety of reasons, not all unmap requests are aligned as such, and in ESXi 6.5 and earlier a large percentage of them failed. In ESXi 6.5 P1, ESXi was altered to be more tolerant of misaligned unmap requests. See the VMware patch information here:

https://kb.vmware.com/kb/2148989

Prior to this, any unmap request that was even partially misaligned would fail entirely, leading to no reclamation. In ESXi 6.5 P1, any aligned portion of an unmap request is accepted and passed along to the underlying array. Misaligned portions are accepted but not passed down; instead, the affected blocks to which the misaligned unmaps refer are zeroed out with WRITE SAME. The benefit of this behavior on XtremIO X2 is that zeroing is identical in behavior to unmap, so all of the space is reclaimed regardless of any misalignment.

In-Guest Unmap in Windows OS

Starting with ESXi 6.0, in-guest unmap is supported with Windows 2012 R2 and later Windows-based operating systems. For a full report of unmap support with Windows, refer to the Microsoft documentation. NTFS supports automatic unmap by default: assuming the underlying storage supports it, Windows issues unmap for the blocks a file consumed once the file has been deleted or moved.

Windows also supports manual unmap, which can be run on demand or on a schedule. This is performed using the Disk Optimizer tool, in which thin virtual disks are identified as Volumes with a media type of "thin provisioned drive". These are the Volumes that support unmap.
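
Manual reclamation can also be scripted from within the guest; a minimal sketch using the Windows Optimize-Volume cmdlet (the drive letter is a placeholder):

# Issue unmap (retrim) on a thin-provisioned volume from inside the Windows guest
Optimize-Volume -DriveLetter D -ReTrim -Verbose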

Figure 35. Manual Space Reclamation using Optimize Drives Utility Inside a Windows Virtual Machine

In-Guest Unmap in Linux OS

Starting with ESXi 6.5, in-guest unmap is supported with Linux-based operating systems and most common file systems. To enable this behavior, Virtual Machine Hardware Version 13 or later is required. Linux supports both automatic and manual methods of unmap.

Linux file systems do not support automatic unmap by default—this behavior needs to be enabled during the mount operation of the file system. This is achieved by mounting the file system with the "discard" option.

Figure 36. Mounting Drive Using the Discard Option

When mounted with the discard option, Linux will issue unmap to the blocks a file consumed once the file has been deleted or moved.
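
A minimal sketch of both methods (the device and mount point are placeholders; using fstrim as the manual method is our assumption, it being the standard Linux trim utility):

# Automatic: mount an ext4 file system with the discard option enabled
mount -o discard /dev/sdb1 /mnt/data

# Manual: reclaim free space on demand without the discard mount option
fstrim -v /mnt/data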


With vSphere 6.5, SPC-4 is fully supported, so space reclamation can be run inside the Linux OS either manually from the CLI or via a cron job. To check that the Linux OS does indeed support space reclamation, run the sg_vpd command as seen in Figure 37 and look for the LBPU:1 output. Running the sg_inq command shows whether SPC-4 is reported at the Linux OS level.

Figure 37. Running sg_vpd and sg_inq Commands to Verify Support for Space Reclamation
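
A representative check (the device name is a placeholder):

# Check the logical block provisioning VPD page; "Unmap command supported (LBPU): 1"
# indicates that the device accepts unmap
sg_vpd --page=lbpv /dev/sdb

# Check the reported SCSI standard; SPC-4 is required for in-guest unmap
sg_inq /dev/sdb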

Figure 38 shows the I/O pattern during an in-guest unmap process. The unmap commands appear to be sent from ESXi in 100 MB chunks until the space reclamation process completes.

Figure 38. In-Guest Space Reclamation Pattern from XtremIO Perspective


EMC VSI for VMware vSphere Web Client Integration with XtremIO X2

EMC Solutions Integration Service 7.2 (EMC SIS) provides unique storage integration capabilities between VMware vSphere 6.5 and EMC XtremIO X2 (XMS 6.0.0 and above). The EMC VSI (Virtual Storage Integrator) 7.2 plugin for the VMware vSphere web client can be registered via EMC SIS.

The plugin enables VMware administrators to view, manage and optimize EMC storage for their ESX/ESXi servers. It consists of a graphical user interface and the EMC Solutions Integration Service (SIS), which provides communication and access to XtremIO array(s).

The VSI plugin allows the users to interact with their XtremIO array directly from the vCenter web client. This provides VMware administrators with the capabilities to monitor, manage and optimize their XtremIO hosted storage from a single GUI. For example, a user can provision VMFS datastores and RDM Volumes, create full clones using XtremIO Virtual Copy technology, view on-array used logical capacity of datastores and RDM Volumes, extend datastore capacity, and do bulk provisioning of datastores and RDM Volumes.

Incorporating the VSI plugin into an existing vSphere infrastructure involves deploying a free, pre-packaged OVA and then registering the connection from the VSI Solutions Integration Service (SIS) to both the vCenter Server and the XtremIO cluster. Installation requires a minimum of 2.7GB of storage capacity if thin provisioned, and a maximum of 80GB if thick provisioned.

Figure 39. VSI Plugin OVF Deployment


After the VSI virtual application is powered on and the SIS becomes available, the vCenter server should first be registered with the VSI plugin. Following this action, the SIS instance can be registered within the vCenter server via the web client.

Figure 40. Registering VSI Solutions Integration Service Within the vCenter Server Web Client

From the vCenter Inventory listing within the web client, we can register the XtremIO X2 system with the vCenter Server by specifying the XMS details.

Figure 41. Registering XtremIO Storage System Within the vCenter Server Web Client


Setting Best Practices Host Parameters for XtremIO X2 Storage Array

The VSI plugin can be used for modifying ESXi host/cluster storage-related settings, setting multipath management and policies and for invoking space reclamation operations from an ESX server or from a cluster.

The VSI plugin is the best way to enforce the following XtremIO-recommended best practices for ESX servers (equivalent manual commands are sketched after this list):

• Enable VAAI.

• Set Queue depth on FC HBA to 256.

• Set multi-pathing policy to "round robin" on each of the XtremIO SCSI Disks.

• Set I/O path switching parameter to 1.

• Set outstanding number of I/O request limit to 256.

• Set the "SchedQuantum" parameter to 64.

• Set the maximum limit on disk I/O size to 4096.
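
Where the plugin is not used, some of these settings can be applied manually; a minimal sketch with esxcli (the naa identifier is a placeholder, and mapping each list item to its exact parameter is our assumption based on common XtremIO guidance):

# Set the multipathing policy of an XtremIO device to round robin
esxcli storage nmp device set --device=naa.514f0c5xxxxxxxxx --psp=VMW_PSP_RR

# Switch I/O paths after every I/O (path switching parameter of 1)
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.514f0c5xxxxxxxxx --type=iops --iops=1

# Set the maximum disk I/O size to 4096
esxcli system settings advanced set --option=/Disk/DiskMaxIOSize --int-value=4096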

Figure 42. Configuring XtremIO X2 Recommended Settings using the VSI Plugin

Provisioning VMFS Datastores

New VMFS datastores backed by XtremIO Volumes can be created using the VSI plugin at the click of a button. The VSI plugin interacts with EMC XtremIO to create Volumes of the required size, map them to the appropriate Initiator Groups and create a VMFS datastore in vSphere, ready for use. When VMFS datastores start to run out of free space, more storage space can be added by extending them using the VSI plugin.


Figure 43. Create a Datastore using the EMC VSI Plugin

Provisioning RDM Disks

RDM disks can be provisioned directly from XtremIO at the virtual machine level. The process creates a LUN on the XtremIO storage array, maps it to the ESXi cluster where the virtual machine resides and attaches it as a physical/virtual RDM disk to the virtual machine.

Figure 44. Provisioning RDM Disks

Setting Space Reclamation

Using the space reclamation feature in VSI, we can reclaim unused storage on datastores, hosts, clusters, folders and storage folders on XtremIO storage arrays. We can schedule space reclamation on a daily basis, or run it once, for a specific datastore or on all datastores under the same datastore cluster.


Figure 45. Setting Space Reclamation Scheduler via VSI Plugin

Creating Native Clones on XtremIO VMFS Datastores

The Native Clone feature uses the VMware Native Clone API to create a clone of a virtual machine in a VMFS datastore. This function is especially useful for cloning a large number of machines, while specifying various options such as containing folder, destination datastore, cluster, naming pattern, customization specification and more.

Figure 46. Creating Native Clones

Working with XtremIO X2 XVCs

The following actions for XtremIO XVCs (XtremIO Virtual Copies) can be performed directly from the VSI plugin, providing maximum protection for critical virtual machines and datastores, backed up by XtremIO X2 XVC technology:

• Creating XVCs of XtremIO datastores

• Viewing XtremIO XVCs generated for virtual machine restore

• Mounting a datastore from an XVC

• Creating a writable or read-only XVC

• Creating and managing XVC schedules

• Restoring virtual machines and datastores from XtremIO XVCs


Figure 47. Managing XtremIO XVC (Snapshot) Schedules

XtremIO X2 Storage Analytics for VMware vRealize Operations Manager

VMware vRealize Operations Manager is a software product that collects performance and capacity data from monitored software and hardware resources. It provides users with real-time information about potential problems in their infrastructure. vRealize Operations Manager presents data and analysis in several ways:

• Through alerts that warn of potential or occurring problems.

• In configurable dashboards and predefined pages that show commonly needed information.

• In predefined reports.

EMC Storage Analytics links vRealize Operations Manager with the EMC Adapter.

EMC Storage Analytics (ESA) is a management pack for VMware vRealize Operations Manager that enables the collection of analytical data from EMC resources. ESA complies with VMware management pack certification requirements and has received the VMware Ready certification.

The XtremIO X2 Adapter is bundled with a connector that enables vRealize Operations Manager to collect performance metrics on an X2 array. The adapter is installed with the vRealize Operations Manager user interface. EMC Storage Analytics uses the power of existing vCenter features to aggregate data from multiple sources and process the data with proprietary analytic algorithms.

XtremIO X2 Storage Analytics solution provides a single, end-to-end view of virtualized infrastructures (servers to storage) powered by the VMware vRealize Operations Manager analytics engine. EMC Storage Analytics (ESA) delivers actionable performance analysis and proactively facilitates increased insight into storage resource pools to help detect capacity and performance issues, so they can be corrected before they cause a major impact. ESA provides increased visibility, metrics and a rich collection of storage analytics and metrics for XtremIO X2 for clusters, Data Protection Groups, XVCs, SSD Disks, Storage Controllers, Volumes and X-Bricks.

XtremIO X2 Storage Analytics further extend the integration capabilities across EMC and VMware solutions to provide out-of-the-box analytics and visualization across your physical and virtual infrastructure. Storage Analytics provide preconfigured, customizable dashboards so users can optimally manage their storage environment.


The preconfigured dashboards include:

1. Performance - Provides greater visibility across the VMware and storage domains in terms of end-to-end mapping. Mappings include storage system components, storage system objects and vCenter objects. It enables health scores and alerts from storage system components, such as storage processors and disks, to appear on affected vCenter objects, such as LUNs, datastores and VMs.

Figure 48. XtremIO Performance Dashboard

2. Overview - Populates heat maps that show administrators the health of their system and reflect which workloads are stressed.

Figure 49. XtremIO Overview Dashboard


3. Metrics - Provides metrics based on the "normal" behavior of an application workload (learned over a period of time), against which it analyzes the collected data and points out anomalies in behavior. This dashboard displays resources and metrics for storage systems and graphs of resource metrics.

Figure 50. XtremIO Metrics Dashboard

XtremIO X2 Content Pack for vRealize Log Insight

VMware vRealize Log Insight delivers automated log management through log analytics, aggregation and search. An integrated cloud operations management approach provides the operational intelligence and enterprise-wide visibility needed to proactively enable service levels and operational efficiency in dynamic hybrid cloud environments.

VMware vRealize Log Insight provides real-time log administration for heterogeneous environments that span across physical, virtual and cloud environments. Log Insight provides:

• Universal Log Collection

• Powerful Log Analytics

• Enterprise-class Scalability

• Ease of Use and Deployment

• Built-in vSphere Knowledge

The Dell EMC XtremIO X2 Content Pack, when integrated into VMware vRealize Log Insight, provides predefined dashboards and user-defined fields specifically for XtremIO arrays to enable administrators to conduct problem analysis and analytics on their array(s).

The vRealize Log Insight Content Pack, with dashboards, alerts and chart widgets generated from XtremIO logs, visualizes log information generated by XtremIO X2 devices to ensure clear insight into the performance of the XtremIO X2 flash storage connected to the environment.

The XtremIO X2 Content Pack includes 3 predefined dashboards, over 20 widgets, and alerts for understanding the logs and graphically representing the operations, critical events and faults of the XtremIO X2 storage array.

The XtremIO X2 Content Pack can be installed directly from the Log Insight Marketplace. Once installed, the Content Pack uses the syslog protocol to send remote syslog data from an XtremIO X2 array to the Log Insight server. The Log Insight IP should be added to the list of Targets on the XtremIO console, under Administration > Notification > Syslog Configuration.


Figure 51. XtremIO Content Pack

The XtremIO Management Server dashboard collects all events sent from the XMS over time and allows search and graphical display of all events of the X-Bricks managed by this XMS.

Figure 52. XtremIO Management Server Dashboard


The XtremIO Errors dashboard collects all errors and faults sent from the XMS over time and allows search and graphical display of all errors and faults of the X-Bricks managed by this XMS.

Figure 53. XtremIO Errors Dashboard

XtremIO X2 Workflows for VMware vRealize Orchestrator

VMware vRealize Orchestrator is an IT process automation tool that allows automated management and operational tasks across both VMware and third-party applications. XtremIO workflows for vRealize Orchestrator facilitate the automation and orchestration of tasks that involve the XtremIO X2 Storage Array. It augments the capabilities of VMware’s vRealize Orchestrator solution by providing access to XtremIO X2 Storage Array-specific management workflows.

The XtremIO workflows for VMware vRealize Orchestrator contain both basic and high-level workflows.

A basic workflow is a workflow that allows for the management of a discrete piece of XtremIO functionality, such as Consistency Groups, Clusters, Initiator Groups, Protection Schedulers, Snapshot Sets, Tags, Volumes, RecoverPoint and XMS Management.

A high-level workflow is a collection of basic workflows combined to achieve a higher level of automation, simplicity and efficiency than the basic workflows provide on their own. The high-level workflows in the XtremIO Storage Management and XtremIO VMware Storage Management folders combine XtremIO-specific and VMware-specific functionality into a set of high-level workflows.

The workflows in the XtremIO Storage Management folder allow for rapid provisioning of datastores to ESXi hosts and VMDKs/RDMs to VMs. The VM Clone Storage workflow, for instance, allows rapid cloning of datastores associated with a set of source VMs to a set of target VMs accompanied by automatic VMDK reattachment to the set of target VMs.

Another example is the Host Expose Storage workflow in the XtremIO VMware Storage Management folder, which allows a user to create Volumes, create any necessary Initiator Groups and map those Volumes to a host, all from one workflow. All the input needed for this workflow is supplied prior to the calling of the first workflow in the chain of basic workflows that are utilized.

The XtremIO workflows for VMware vRealize Orchestrator allow the vRealize architect to either rapidly design and deploy high-level workflows from the rich set of supplied basic workflows, or utilize the pre-existing XtremIO high-level workflows to automate the provisioning, backup and recovery of XtremIO storage in a VMware vCenter environment.


Figure 54. vRO and XtremIO X2 Integration Architecture

Figure 55. XtremIO X2 Workflows for VMware vRealize Orchestrator


Compute Hosts: Dell PowerEdge Servers

For our environment, we set up two homogeneous clusters at each site: one cluster with 6 ESXi servers hosting the VSI servers, and a second cluster with 2 ESXi servers for the virtual platforms used to manage the VSI infrastructure. We used Dell PowerEdge FC630 servers as our ESX hosts, as they have the compute power to handle an environment at this scale and are a good fit for virtualized environments. Dell PowerEdge servers work with the Dell OpenManage systems management portfolio, which simplifies and automates server lifecycle management and can be integrated with VMware vSphere via a dedicated plugin.

Compute Integration – Dell OpenManage

Dell OpenManage is a program that simplifies and automates hardware management tasks and monitoring for both Dell and multi-vendor hardware systems. Among its capabilities are:

• Rapid deployment of PowerEdge servers, operating systems and agent-free updates

• Maintenance of policy-based configuration profiles

• Streamlined template-driven network setup and management for Dell Modular Infrastructure

• Providing a "geographic view" of Dell-related hardware

Dell OpenManage can integrate with VMware vCenter using the OpenManage Integration for VMware vCenter (OMIVV), which provides VMware vCenter with the ability to manage a data center's entire server infrastructure, both physical and virtual. It can assist with monitoring the physical environment, send system alerts to the user, roll out firmware updates to an ESXi cluster, and so on. The integration is most beneficial when Dell PowerEdge servers are used as the ESX hosts of the VMware environment.

Figure 56 shows an example of a cluster's hardware information provided by the OpenManage Integration for VMware vCenter.

Figure 56. Dell Cluster Information Menu Provided by the Dell OpenManage Plugin for VMware


The OpenManage Integration enables users to schedule firmware updates for clusters from within the VMware vCenter web client, including at a future time. This helps users perform firmware updates during a scheduled maintenance window without having to be present. The capability reduces complexity by natively integrating the key management capabilities into the VMware vSphere Client console, and minimizes risk with hardware alarms, streamlined firmware updates and deep visibility into inventory, health and warranty details.

Firmware Update Assurances

• Sequential execution: To ensure that not all hosts are brought down at once for firmware updates, the firmware update is performed sequentially, one host at a time.

• Single failure stoppage: If an update job fails on a server being updated, the existing jobs for that server continue; however, the firmware update task stops and does not update any remaining servers.

• One firmware update job for each vCenter: To avoid the possibility of multiple update jobs interacting with a server or cluster, only one firmware update job for each vCenter is allowed. If a firmware update is scheduled or running for a vCenter, a second firmware update job cannot be scheduled or invoked on that vCenter.

• Entering Maintenance Mode: If an update requires a reboot, the host is placed into maintenance mode prior to the update being applied. Before a host can enter maintenance mode, VMware requires that you power off or migrate guest virtual machines to another host. This can be performed automatically when DRS is set to fully automated mode.

• Exiting Maintenance Mode: Once the updates for a host have completed, the host is taken out of maintenance mode if it was placed there for the updates.

Figure 57. Applying Firmware Update Directly from vSphere Web Client


Table 3 lists the ESX host details in our environment.

Table 3. ESX Host Details Used for VSI Infrastructure (4+12 ESX Hosts)

System make: Dell
Model: PowerEdge FC630
CPU cores: 36 CPUs x 2.10GHz
Processor type: Intel Xeon CPU E5-2695 v4 @ 2.10GHz
Processor sockets: 2
Cores per socket: 18
Logical processors: 72
Memory: 524 GB
Ethernet NICs: 4
Ethernet NICs type: QLogic 57840 10Gb
iSCSI NICs: 4
iSCSI NICs type: QLogic 57840 10Gb
FC adapters: 4
FC adapters type: QLE2742 Dual Port 32Gb
On-board SAS controller: 1

Enabling Integrated Copy Data Management with XtremIO X2 & AppSync 3.5

Dell EMC AppSync simplifies, orchestrates and automates the process of generating and consuming copies of production data. Deep application integration, coupled with abstraction of the underlying Dell EMC storage and replication technologies, empowers application owners to satisfy copy demands for data repurposing, operational recovery and disaster recovery, all directly from the single user interface of AppSync. Storage administrators need only perform the initial setup and manage the policies, resulting in agile, transformative application workflows and a collaborative environment for application and storage administrators.

Combined with XtremIO X2, an administrator can manage the protection, replication and cloning of databases and applications, including Oracle, Microsoft SQL Server, Microsoft Exchange, file systems and VMware datastores on block and file storage. After service plans are defined, application owners can protect, recover and clone their own data quickly using Dell EMC integrated Copy Data Management (iCDM) and XVCs.

XVCs share the same metadata and physical data blocks with the production source Volume on initial copy creation. With the unique redirect-on-unique-write technology, changes to the production source Volume or XVC Volume are tracked with separate metadata entries. This is unlike products with copy-on-write technologies, where data needs to be copied to the copies before being changed and performance overhead increases as the number of copies increases. With XVCs, you can have many copies, without impacting performance on a production source Volume. When being read from or written to, the code paths within XtremIO for accessing an XVC Volume or a production source Volume are identical.

Therefore, XVCs are the perfect way to address the needs of backup acceleration, rapid operational recovery and the repurposing of data in agile environments.


The AppSync iCDM Bundle enables XtremIO customers to take advantage of AppSync’s application workflow orchestration and automation capabilities, creating a powerful iCDM solution that increases operational efficiency and data center agility. This bundled offering includes the following functionality:

• Full support and maintenance for AppSync with XtremIO support contract

• Unlimited XVCs/copies

• Unlimited refresh/restore operations

• Unlimited number of hosts

• All supported AppSync applications

• Unlimited TBs - No restriction on capacity

• Unlimited mount times

• Unlimited monitoring & reporting

In the following section, we will highlight the benefits of integrating XtremIO and AppSync with the vSphere environment using DELL EMC VSI plugin.

The AppSync data protection feature enables VMware administrators to manage service plans and datastore copies, restore virtual machines, and view and modify the AppSync server settings directly from the vSphere Web Client.

Registering a New AppSync System

This procedure registers the current vCenter with the AppSync server so that all AppSync activity can be managed from the vSphere Web Client.

Figure 58. AppSync Registration in the vSphere Web Client

The next step is connecting AppSync to the XtremIO X2 storage arrays, which enables us to use XVC technology to back up virtual machines and datastores. This is done directly from the AppSync web interface.


Figure 59. XtremIO Registration in the AppSync Web Client

This integration allows us to subscribe each datastore that resides on an XtremIO LUN to a built-in or user-defined service plan that defines the protection level, number of copies, schedule and VM consistency. Application Consistent and Crash Consistent copies differ in that a VM Consistent copy includes the running programs and processes in memory, whereas a Crash Consistent copy does not. The default setting is recommended for backup acceleration and operational recovery scenarios. When VM consistency is set to YES, a vSphere snapshot is initiated before each array-level snapshot, as the sketch below illustrates.
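The ordering that VM consistency implies can be sketched in Python. This is a hypothetical illustration of the sequence, not AppSync's implementation: the pyVmomi snapshot calls are real, but the XMS snapshot endpoint and its volume-list/snap-suffix payload fields are assumptions to be verified against the XMS REST API documentation:

import requests
from pyVim.task import WaitForTask   # pyVmomi's task helper

def vm_consistent_copy(vm, xms_url, auth, volume_name):
    # 1. Quiesced vSphere snapshot first (no memory), so the array copy
    #    captures a VM-consistent image.
    task = vm.CreateSnapshot_Task(name="appsync-tmp",
                                  description="pre array-snapshot",
                                  memory=False, quiesce=True)
    WaitForTask(task)
    vsphere_snap = task.info.result   # the temporary snapshot object
    try:
        # 2. Array-level XVC of the datastore Volume. Endpoint and payload
        #    fields are assumptions; verify against the XMS REST API docs.
        requests.post(f"{xms_url}/api/json/v2/types/snapshots",
                      json={"volume-list": [volume_name],
                            "snap-suffix": "appsync"},
                      auth=auth, verify=False).raise_for_status()
    finally:
        # 3. Remove the temporary vSphere snapshot once the XVC exists.
        WaitForTask(vsphere_snap.RemoveSnapshot_Task(removeChildren=False))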

Figure 60. AppSync Service Plans

Once taken, datastore copies can be viewed directly via the vSphere web client. Each copy represents an XtremIO XVC and can be mounted or restored to a specific point in time.


Restoring a Datastore from a Copy

It is possible to initiate an on-demand mount of a datastore copy from the datastore's Copies page, the service plan's Copies tab or a data center's Datastore page.

The following virtual machine operations are available during the restore process:

• For VMs present at start of restore:

o Power down VMs on protected datastore before restore: Powers off any virtual machines that are present before starting the restore operation.

o Perform VM operations after restore: Select one of the following:

– Return VMs back to state found at start of restore

– Register all virtual machines

– Register and power up all virtual machines

• For VMs not present at start of restore, select one of the following options:

o Register all virtual machines

o Register and power up all virtual machines

Figure 61. AppSync Management Tab for Datastore Restore Operations

Figure 62. XtremIO Datastore XVCs (Snapshots) Generated by AppSync According to the Selected Service Plans


Managing Virtual Machine Copies

Using VSI, an administrator can view and manage virtual machine copies at the data center level and the virtual machine level. At the data center level, an administrator can restore previously deleted or corrupted virtual machines that were protected by AppSync.

This option allows us to select a point-in-time copy with a specific timestamp. If other VMs were protected along with the selected virtual machine, the Multiple VM Restore page is displayed.

Figure 63. Managing Virtual Machine Copies

The following options are available for Virtual Machine Restore:

• Original location - Restores to the location where the virtual machine was present at the time of protection.

• Alternate location - Restores to a location selected from the following options (all are mandatory):

o vCenter Server: You can select either the same vCenter Server where the datastore with the virtual machine was at the time of protection or a different server.

o Data center

o Host

o Datastore

The following options are available if the VM being restored already exists in the restore location:

• Fail the restore: AppSync checks for the existence of the virtual machines in the restore location. For those virtual machines that exist in the restore location, the restore operation is aborted; for the rest, it continues. This is a precautionary option.

• Create a new virtual machine: AppSync creates a new virtual machine before restoring.

• Unregister the virtual machine: If the virtual machines selected for restore exist in the restore location, AppSync unregisters them from the inventory before restoring.


Figure 64. Virtual Machine Restore Options

File or Folder Restore with VMFS Datastores

Files or folders stored on virtual disks on a virtual machine in VMFS datastores can be restored through AppSync.

The virtual disks stored in a VMFS datastore that are protected by an AppSync service plan can be used for file or folder level restore by specifying the location for mounting the virtual disk copy.

Within AppSync, file or folder level restore is a three-phase process:

1. AppSync mounts the XtremIO datastore XVC to the ESX server on which the virtual machine with the AppSync agent resides.

2. The vCenter server adds the virtual disks from the XtremIO datastore XVCs to the mount VM without powering off the VM.

3. The AppSync agent performs a filesystem mount on the mount VM.

To complete the restore, the final step is performed manually outside of AppSync. You must copy the files or folders from the location where the virtual disk is mounted to a location of your choice.

AppSync allows us to restore a file or folder from a virtual disk without installing an agent on the protected virtual machine. This can be done from AppSync under the Protected Virtual Machines tab or from the virtual machine's Copies page.

This procedure requires a Windows server to act as a proxy. The virtual machine on which the copy is mounted and restored must run a 64-bit edition of Windows Server 2012 or later, must have the AppSync host plugin installed, and must be registered with the AppSync server.


Figure 65. AppSync Windows Proxy Settings

The protected virtual machines can then be selected. At this point, the VMDK from the selected point in time is attached directly to the proxy server as an additional drive, allowing the necessary files to be extracted and copied to the target virtual machine.

Figure 66. AppSync VMDK Attachment

Starting with version 6.1, AppSync functionality will be integrated with the XtremIO XMS, including datastore restore, individual virtual machine restore, service plan creation/modification and more.


Figure 67. AppSync Virtual Machines Management from XtremIO XMS

Figure 68. AppSync Volumes Management from XtremIO XMS

RecoverPoint Snap-Based Replication for XtremIO X2

Dell EMC RecoverPoint provides continuous data protection for comprehensive operational and disaster recovery. XtremIO X2 introduces an innovative array architecture based on the highly efficient use of XVCs. The replication mechanism that RecoverPoint uses for XtremIO X2 Volumes exploits the capabilities of the XtremIO X2 array and differs significantly from all other RecoverPoint replication mechanisms, whether the XtremIO array contains production Volumes or copy Volumes. These differences, which affect replication, image access and failover, are described in the following sections.

It is important to note some key differences between snap-based replication in XtremIO and Async replication:

• Write interception – With XtremIO at the production side, there is no write splitter, and no extra installation is required on the array. This is in contrast to Async replication on Symmetrix VMAX, VNX and VPLEX, which employ a write splitter integrated into the array operating environment.

• Target side storage – When XtremIO is at the target, RecoverPoint distributes directly to XVCs; the replica Volume is a reference to an array-based snap. In contrast, when non-XtremIO arrays are at the target, RecoverPoint writes to journal Volumes and the data is distributed to the replica Volumes by the target RPAs.

• Granularity of points-in-time – Asynchronous replication without snap-based replication provides near-zero RPO with any-point-in-time (AnyPiT) capability. In snap-based replication for XtremIO, the number of points-in-time is dictated by the maximum number of XVCs that can be created for a given Volume. A minimum RPO of 60 seconds can be achieved with snap-based replication; the short calculation after this list shows what this implies for the retention window.
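Because the number of points-in-time is bounded by the per-Volume XVC maximum, the achievable look-back window is simple arithmetic. A quick Python sanity check, using a hypothetical per-Volume XVC limit of 512 (check your array's actual maximum):

# Back-of-the-envelope PiT retention for snap-based replication.
def retention_window_hours(interval_seconds, max_xvcs):
    """Longest look-back when a new PiT is taken every interval."""
    return interval_seconds * max_xvcs / 3600

# At the 60-second minimum RPO, 512 retained XVCs cover about 8.5 hours:
print(retention_window_hours(60, 512))    # ~8.53
# Relaxing the interval to 15 minutes stretches the same budget to 128 hours:
print(retention_window_hours(900, 512))   # 128.0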


Snap-Based Replication Use Cases

• Use Case 1: High-performance environments – Snap-based replication is suitable for write-intensive host environments, since RecoverPoint replicates the deltas between array-based Snapshots without intercepting writes in real time as they are sent to the storage array.

• Use Case 2: Limited WAN bandwidth – Where available bandwidth is limited, snap-based replication in periodic mode can provide WAN savings because of write folding. Write folding complements the other bandwidth-reduction techniques RecoverPoint leverages, deduplication and compression. We discuss the different snap-based replication modes later in this document.

• Use Case 3: Relaxed RPO – Where RPO requirements are less stringent, snap-based replication can be configured. Additionally, environments that require only a small number of Business Continuity copies are suitable for periodic snap-based replication, as the replication interval can be configured to suit the low frequency of points-in-time.

XtremIO Virtual Copies (XVCs)

XVCs are regular Volumes created as writeable snapshots.

Creating XVCs does not affect system performance, and an XVC can be taken either directly from a source Volume or from another XVC. XVCs are inherently writeable but can be created as read-only. Currently, RecoverPoint release 4.1 SP2 creates and manages only writeable XVCs.

When an XVC is created, the following steps occur:

1. Two empty containers are created in-memory.

2. The snapshot's SCSI personality points to the new snapshot sub-node.

3. The SCSI personality that the host is using is linked to the second node in the internal data tree.

Replication Flow

In snap-based replication for XtremIO, there are two cases in which the replication flow differs substantially from splitter-based (normal) replication and from other snap-based replication mechanisms.

XtremIO Volumes Configured on the Production Copy

a. RecoverPoint creates the first snapshot from the root Volume.

b. RecoverPoint requests a DIFF between the first snapshot and the root Volume. Note that this DIFF returns all the written data on the root Volume.

c. RecoverPoint performs initialization based on the DIFF result. Note that this triggers a Full Sweep, which means that the production and target Volumes are read and only differing blocks are transferred across the wire. This is an efficient replication method, since it minimizes WAN consumption even for Volumes being replicated for the first time. In XtremIO's case, the Full Sweep is based on a DIFF between the first snapshot and the root Volume, which returns a bitmap of only the written blocks.

d. RecoverPoint creates the second snapshot and the SCSI personality of the snapshot is moved to the new snapshot.

e. RecoverPoint requests a Diff between the second snapshot and the first snapshot.

f. RecoverPoint deletes the first snapshot.

g. RecoverPoint performs initialization based on the DIFF between the two snapshots.

h. Steps d through g are repeated continuously, as the sketch below summarizes.
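The steady-state loop of steps d through g can be summarized in a short Python sketch. This is our own rendering of the control flow described above, not RecoverPoint code; snapshot_create, diff, snapshot_delete and wire.send stand in for the array and RPA operations:

# Sketch of the snap-based replication loop with XtremIO at production
# (steps a-h above). All calls are stand-ins for array/RPA operations.
def replicate(array, volume, wire):
    prev = array.snapshot_create(volume)    # step a: first snapshot
    wire.send(array.diff(prev, volume))     # steps b-c: initialization sweep
    while True:
        cur = array.snapshot_create(volume) # step d: new snapshot; the
                                            # snapshot SCSI personality moves
        delta = array.diff(cur, prev)       # step e: changed blocks only
        array.snapshot_delete(prev)         # step f: drop the older snapshot
        wire.send(delta)                    # step g: ship the delta
        prev = cur                          # step h: repeat from step d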


Figure 69. Replication Flow – XtremIO at Production


XtremIO Volumes Configured on the Target Copy

a. RecoverPoint creates a snapshot from the root Volume, also referred to as the working snap.

b. RecoverPoint distributes to that working snap.

c. RecoverPoint creates another snap and the snapshot SCSI personality is moved to the new snapshot.

d. RecoverPoint promotes the first snapshot. In this operation, the references to the root Volume are changed to point to the first snapshot, and the SCSI personality is moved as well. This promotion is done every 30 minutes.

Figure 70. Replication Flow – XtremIO at Target

Zoning

It is recommended to zone the RecoverPoint Appliances to all available storage controllers in an even manner: per fabric, all RPA FC ports should be zoned to all storage controller FC ports. RecoverPoint's built-in multipathing software works with a subset of paths, spread evenly across all available storage controllers in a round-robin fashion.

For simplicity, one zone per fabric containing all RPA ports and storage controller ports can be configured.

RPA Initiator Registration in XtremIO

RecoverPoint Appliances should be registered as standard hosts. It is recommended to register the initiators as Linux OS initiators. Each port on the RPA should be registered separately in XtremIO; afterwards, all RPA ports of the same designated cluster should be grouped into a single Initiator Group on XtremIO. If the RPA cluster is deployed on XtremIO, the RecoverPoint Repository Volume must be mapped to the RPA Initiator Group of that RP cluster. Registration can also be scripted, as sketched below.
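For larger deployments, the same registration can be driven through the XMS REST API rather than the GUI. The sketch below is an assumption-laden illustration: the v2 endpoint paths and the payload fields (ig-name, initiator-name, port-address, operating-system) should be verified against the XMS REST API documentation for your XIOS version, and the addresses and credentials are placeholders:

import requests

XMS = "https://xms.example.local"     # placeholder XMS address
AUTH = ("admin", "password")          # placeholder credentials

def register_rpa_cluster(ig_name, rpa_wwpns):
    s = requests.Session()
    s.auth, s.verify = AUTH, False    # lab use only; keep TLS checks in prod
    # One Initiator Group for all RPA ports of the same RP cluster.
    s.post(f"{XMS}/api/json/v2/types/initiator-groups",
           json={"ig-name": ig_name}).raise_for_status()
    # Each RPA port is registered separately, as a Linux-type initiator.
    for i, wwpn in enumerate(rpa_wwpns, start=1):
        s.post(f"{XMS}/api/json/v2/types/initiators",
               json={"initiator-name": f"{ig_name}-port{i}",
                     "port-address": wwpn,
                     "ig-id": ig_name,
                     "operating-system": "linux"}).raise_for_status()

register_rpa_cluster("RP-Cluster-A", ["21:00:00:24:ff:4b:9c:10",
                                      "21:00:00:24:ff:4b:9c:11"])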


Registering the XMS in RecoverPoint

For every RecoverPoint cluster in which XtremIO replication is required (at the production side, the target side or both), XtremIO's management server (XMS) must be registered in order to enable communication between RecoverPoint and XtremIO.

The registration can be done via Unisphere for RecoverPoint, the CLI or the REST API.

In Unisphere for RecoverPoint, navigate to RPA Clusters > [appropriate RP cluster] > Storage > Add.

As of RecoverPoint 5.1, the RPA initiators are automatically registered in the XMS; all RPA initiators of the given RP cluster are registered. This is enabled by another new capability in RecoverPoint 5.1: registration of an XMS even when no Volumes are exposed through arrays managed by that XMS.

Figure 71. XMS Registration in Unisphere for RecoverPoint

Automatic Journal Provisioning

As of RecoverPoint 5.1, RecoverPoint can automatically provision a journal Volume for an XtremIO-based copy. RecoverPoint creates the Volume on the relevant XtremIO array and assigns it to the appropriate RPA Initiator Group. The provisioned Volume is 2 GB, and it is only provisioned if no journal Volumes are already assigned to that copy.

If the auto-provisioned journal Volume is removed through RecoverPoint, it is also removed from the array. The auto-provisioned journal Volumes are also deleted when the Consistency Group or Copy is removed.

Figure 72. Automatic Journal Creation in the Protect Volumes Wizard


Volume Auto-Matching

A new capability introduced in RecoverPoint 5.1 is the ability to automatically match selected production Volumes to exposed replica Volumes. The following rules apply for auto-matching to occur (a minimal sketch of the matching logic follows the list):

• Only applicable for replication from XtremIO to XtremIO.

• Volumes must be of the exact same size.

• Volumes are matched if both production and replica have the same name; alternatively, matching can be based on a common prefix.

• Can only be done through Unisphere for RecoverPoint.
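The matching rules above reduce to a small amount of logic. A minimal Python sketch of our own (not RecoverPoint code), with Volumes represented as (name, size) tuples:

# Sketch of the auto-matching rules (illustration only; not RecoverPoint code).
def auto_match(production, replicas, prefix_len=0):
    """production/replicas: lists of (name, size_bytes) tuples."""
    matches = {}
    for pname, psize in production:
        for rname, rsize in replicas:
            if rsize != psize:
                continue                      # sizes must be identical
            if rname == pname or (prefix_len and
                                  rname[:prefix_len] == pname[:prefix_len]):
                matches[pname] = rname        # same name, or common prefix
                break
    return matches

prod = [("db_vol_01", 1 << 40), ("log_vol_01", 1 << 39)]
repl = [("db_vol_01", 1 << 40), ("log_vol_01_replica", 1 << 39)]
print(auto_match(prod, repl))                 # exact-name match only
print(auto_match(prod, repl, prefix_len=10))  # adds the common-prefix match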

Default Replication Mode

If XtremIO is at the production copy, snap-based replication is automatically configured in Periodic mode with a 1-minute interval. This can be changed on a per-link basis by editing the link policy during Consistency Group creation or after the group has been created.

Figure 73. Link Policy Configuration on CG Creation

Distributed Consistency Group

To obtain higher throughput rates, the CG can be configured as a DCG (Distributed Consistency Group). DCGs offer better performance than non-distributed (regular) Consistency Groups, as DCGs run on a minimum of two RPAs (one primary RPA and one secondary RPA). There is only a small improvement in performance when a group runs on three RPAs, but a steep improvement when a group runs on four RPAs.

Figure 74. Group Policy Configuration on CG Creation


Exposing Volumes to Initiator Groups

Clicking Expose Volumes to Initiator Groups opens a popup screen displaying a list of Initiator Groups. The list (which filters out the Initiator Groups of RPAs) enables you to expose newly auto-provisioned Volumes to a specific host. By default, no Initiator Group is selected.

Figure 75. Exposing Volumes to Initiator Groups

Configuring RecoverPoint Consistency Groups

For this solution, VMware vCenter Site Recovery Manager (SRM) leverages RecoverPoint continuous remote replication (CRR) to provide external replication between protected and recovery sites.

Once RecoverPoint is installed and replication sites established, the next step in setting up and testing a disaster recovery plan using SRM with RecoverPoint is to configure RecoverPoint Consistency Groups for the VMware Volumes that are to be protected and managed by SRM.

The general procedure is as follows:

1. Create Consistency Groups.

2. Configure copies.

3. Add journals.

4. Add replication sets.

5. Enable group.

6. Start replication.

This process is described in detail in the EMC RecoverPoint Administrator’s Guide, located at http://support.emc.com.


Figure 76 shows the Consistency Groups in the RecoverPoint Management Console.

Figure 76. Consistency Groups in the RecoverPoint Management Console

Registering vCenter Server

Registering VMware vCenter with RecoverPoint creates a connection between RecoverPoint and a VMware vCenter Server, which allows RecoverPoint to display the VMware view of virtual machines configured for replication.

For every LUN and raw device mapping accessed by each virtual machine, the following details are displayed:

• Replication status: fully configured for replication or not configured for replication.

• For LUNs or devices configured for replication by RecoverPoint, the following parameters are displayed: Consistency Group, copy (Production, Local, Remote), replication set, and the datastore for which each LUN or raw device mapping is configured for replication.


Figure 77. VMware View of Virtual Machines Configured for Replication

Configuring the Consistency Group for Management by SRM

After the Consistency Group is created and SRM is installed, the Consistency Group must be configured so that SRM can manage it. This is done in the RecoverPoint Management Console: select the Consistency Group to be protected by SRM and go to the Group Policy tab to adjust the settings.

Figure 78 shows the external management setting of the Consistency Group for SRM in the RecoverPoint Management Console.

Figure 78. External Management for cg01 Set for SRM Management

Configuring Site Recovery with VMware vCenter Site Recovery Manager 6.6

VMware vCenter Site Recovery Manager (SRM) is a plugin to VMware vCenter, so disaster recovery tasks can be executed from the same centralized interface used to manage other virtual machine administrative tasks such as creation, migration and deletion.


However, SRM is not a built-in component of vCenter; it is a separate server process with its own separate database. The server processes for SRM and vCenter can co-exist on the same server or reside on different servers. Similarly, both SRM and vCenter data repositories can be created in a single database or in separate databases.

Installing and configuring SRM with RecoverPoint includes the following tasks:

1. Configure SRM databases at both sites.

2. Install the SRM server at both sites.

3. Pair the protected and recovery sites.

4. Set up Inventory Mappings.

5. Install EMC RecoverPoint Storage Replication Adapter (SRA) for VMware vCenter Site Recovery Manager (SRM).

6. Configure protection groups.

7. Create the recovery plan.

8. Install DELL EMC VSI for VMware vSphere.

For this solution, a single protection group was created to fail over the five production servers. Multiple protection groups can be created, as long as the number of protection groups matches the number of RecoverPoint Consistency Groups managed by SRM.

Figure 79. SRM Protection Group

A single production recovery plan was used in this solution. The recovery plan specifies how SRM recovers the virtual machines in the protection group. As with protection groups, it is possible to have multiple recovery plans to recover mission-critical, business-critical or business-important systems independently. When multiple recovery plans are used, SRM executes one recovery plan at a time.


Figure 80. SRM Recovery Plan

Point-in-Time Recovery Images

The combination of RecoverPoint and the EMC VSI plugin offers administrators an extremely powerful tool for BCDR planning activities. As long as VMware SRM is tasked with managing the RecoverPoint Consistency Group and the VSI plugin has been fully integrated, system administrators can specify the Point-in-Time (PiT) copy used during an SRM Test Recovery Plan or in disaster recovery scenarios. This means that system administrators can actively choose which PiT instance of their protected datasets is brought online at the secondary site.

The procedure for choosing the desired PiT copy to be used during the Test Recovery Plan or failover is relatively simple; the only caveat is that the available PiT copies depend on the Snapshot Pruning and Retention settings defined in the CG replication policies. By default, the most recent synchronized copy of the protected Volumes is brought online at the recovery site when a test or actual SRM recovery plan is initiated. To specify a particular PiT image for VSI and RecoverPoint to use during a recovery operation, the administrator can, via the vCenter Web Client, select a VM from the desired Consistency Group and examine the EMC VSI tab, which lists the VM's associated Consistency Groups. After selecting the desired Consistency Group, the administrator is presented with a list of all available PiT copies for that RecoverPoint CG. To select the PiT image that the VMs of the Consistency Group will revert to upon completion of the recovery plan, the administrator simply chooses the desired time and image, then applies and verifies the choice.


Figure 81. Specifying the Desired PiT Image for a RecoverPoint Consistency Group

As mentioned, this functionality offers an extremely powerful capability to data center administrators responsible for BCDR planning and implementation. With this feature, administrators can revert a user-defined set of VMs to any particular Point-in-Time (subject to the retention period, defined RPO and Snapshot maximums) and protect against compromised environments, virus infections and configuration errors that can go uncaught for some time before leading to organizational IT downtime and possible data loss or corruption.

The described functionality can also be extended to the provisioning of desired PiT datasets for use in the test and development activities of the organization. With proper collaboration between the development, database and data center administration teams, appropriate datasets for test/dev activities can be identified, bookmarked and provisioned to the secondary test and development site, all from the single pane of the vCenter server web client.

Testing the Recovery Plan

An SRM recovery plan can be tested at any time without disrupting replication or ongoing operations at the protected site.

The non-disruptive test is carried out in an isolated environment at the recovery site, using a temporary copy of the replicated data. It runs all the steps in the recovery plan except powering down the virtual machines at the protected site and transferring control of the replicated data to devices at the recovery site.

This facility allows the recovery plan to be tested for disaster recovery compliance, confirming timings and reliability. Another use case is a temporary test/development environment for troubleshooting issues or validating host or application patching.

Initiate the test by selecting the Production Recovery Plan and clicking Test.


Figure 82 shows the initiation of the Production Recovery Plan test.

Figure 82. Testing the Recovery Plan

When running a test plan, SRM creates an isolated network at the recovery site and enables image access on the RecoverPoint Consistency Groups. The test VMs are powered up and any required customizations are applied to them.

At this point, the environment is available on the recovery site and administrators can verify application functionality in the secure environment. Once the environment has been validated, click Cleanup to return the recovery plan back to a ready state. The cleanup powers down the VMs, unmounts the storage and disables image access on RecoverPoint.

Figure 83. Cleanup

Failover

For a planned migration or disaster recovery, it is possible to run the recovery plan from the SRM management pane. This plan mimics the behavior of the test recovery plan, but it does not leave the production VMs active and in place at the primary site.

When a migration is initiated, the failover process will stop if errors are detected. If the disaster recovery option is specified, the process will continue regardless of any error conditions.

Once the failover or recovery plan is completed, the protected VMs will be present and operational on the secondary site, and will represent the administrator's chosen Point-in-Time image for the specific Consistency Groups.

To revert the migrated VMs back to the primary site, the system administrator re-protects the specific Consistency Groups and then re-runs the recovery plan to reverse the VM distribution.


Figure 84. Failover

RecoverPoint 5.1.1 for VMs

RecoverPoint for VMs is a software-only data protection solution for protecting VMware VMs, enabling replication from any storage type to any storage type by leveraging virtual RecoverPoint Appliances (vRPAs) integrated with the VMware ESXi hosts as part of a VMware High Availability cluster. This is achieved via a RecoverPoint write-splitter embedded in the ESXi hypervisor.

Starting with the 5.1.1 release, RecoverPoint for VMs uses standard IP communication between the splitters and the vRPAs on the same site, removing the need for iSCSI communication. Splitter-to-vRPA IP communication is supported on vSphere 6.0 or later; earlier vSphere versions still require iSCSI communication mode.

RecoverPoint for VMs simplifies operational recovery and disaster recovery with built-in orchestration and automation capabilities accessible via VMware vCenter. vAdministrators can manage the protection lifecycle of VMs via the RecoverPoint for VMs plugin in the VMware vSphere Web Client.

In conjunction with the continuing virtualization of data assets, customers require a Business Continuity/Disaster Recovery (BC/DR) capability that keeps pace with their underlying virtualization technology. RecoverPoint for virtual machines (referred to as RecoverPoint for VMs) meets this requirement by enabling local and/or remote replication of virtual machines on a per-VM basis (covering VMDK and/or RDM files), and by allowing recovery to any Point-in-Time (PiT) snapshot with orchestration features during Recover Production or Failover.


RP4VMs 5.1.1 adds the following new capabilities:

• Enables zero RPO: RecoverPoint for VMs offers the best RPO in the market, down to RPO = 0 seconds, by enabling synchronous replication.

• Multiple RPOs: Allows setting a different RPO for each virtual machine under the same Consistency Group.

• Dynamic switching: RecoverPoint for VMs can dynamically switch between Sync and Async replication to provide the best protection possible according to real-time bandwidth limitations.

• Supports vMotion and Storage vMotion: allows continuous protection while VMware operations change the environment.

• Multiple disk types: RecoverPoint for VMs supports both virtual disk types, VMDKs and RDMs.

Figure 85. RP4VMs Connectivity

The Protect VM(s) Wizard allows a VM to be protected in isolation in its own Consistency Group or as a member of an existing Consistency Group containing other federated VMs. It also adds the ability to protect multiple VMs (VM batch protection) as part of the same protection flow via "Protect additional VM(s) using this group".

Note: When an additional VM is added to an existing Consistency Group, its VMDK(s) are added as new replication set(s). Thanks to the "VMDK addition without journal loss" feature introduced in RecoverPoint for VMs 5.0, the journal history is no longer deleted; only a Volume sweep occurs on the new replication set. The Protect VM(s) Wizard appears after selecting "Protect".

Figure 86. Selecting the Protection Method


The Production copy VM policy settings are then configured, including the journal configuration. In the screenshot example below, a datastore has already been registered as the journal resource pool. The Add option allows datastores to be added from the Protect VM(s) Wizard to form multiple storage resource pools for the journal, as opposed to registering them prior to protection.

Note: If more than one journal datastore is registered and automatic provisioning is selected, all datastores must satisfy the journal size requirement; otherwise, journal provisioning will fail. A simple pre-flight check is sketched below.
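This note translates into a simple pre-flight check before selecting automatic provisioning. A minimal Python sketch of our own, with free space per registered journal datastore supplied as plain numbers:

# Pre-flight check for automatic journal provisioning (illustration only).
def journal_provisioning_ok(free_gb_by_datastore, required_gb):
    failing = [name for name, free in free_gb_by_datastore.items()
               if free < required_gb]
    if failing:
        print("Provisioning would fail; too small:", ", ".join(failing))
        return False
    return True

# Two registered journal datastores; one cannot satisfy a 10 GB journal:
journal_provisioning_ok({"ds-journal-01": 40, "ds-journal-02": 8},
                        required_gb=10)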

Figure 87. Configuring Production Settings

The replica copy name is then assigned and the remote vRPA cluster is selected to facilitate the creation of a remote replica copy VM.

Figure 88. Adding a Copy at the DR Site

The replica copy VM settings are then selected including the replication mode and journal configuration.

Figure 89. Configuring Copy Settings


A replica ESXi host resource for the replica VM is selected, along with a datastore for the replica VM. It is possible to create the replica VM on a designated datastore or, alternatively, to use an existing VM. If the option to use an existing replica VM is selected, a sanity check is undertaken to determine whether its virtual disk(s) are identical in size to those of the protected VM.

Figure 90. Selecting Copy Resources

Figure 91. Selecting Copy Storage


The RecoverPoint for VMs 5.1 release allows the user to define the failover network.

Note that this can be updated via the RecoverPoint for VMs Management plugin after the VM has been protected.

Figure 92. Defining Failover Networks

Once the VM is protected, its status can be viewed either from the RecoverPoint for VMs tab under the VM's Manage option in the vCenter Web Client, as shown in Figure 93, or via the RecoverPoint for VMs Management option in the production vCenter Web Client, shown in Figure 33.


Figure 93 below shows the protected DB Server W2K12 VM in the inventory, along with the local replica copy shadow VM created as part of the protection process. The remote replica copy shadow VM (DR) is shown in the inventory of the DR vCenter.

Note: A replica shadow VM is identified by the rp. prefix at the front of the virtual machine name and the .copy.shadow extension at the end of the virtual machine name. User action on replica shadow VMs is not supported.

Figure 93. Single Virtual Machine Protection Using RecoverPoint for VMs

Once replication has been initiated, RecoverPoint for VMs starts replicating the virtual machine's data between the production datastore and the DR datastore. Figure 94 shows the IOPS, bandwidth and latency statistics on our XtremIO X2 X-Brick.

Figure 94. XtremIO X2 Performance Overview During RecoverPoint for VMs Initial Replication


Recovery Operations

RecoverPoint for VMs enables local and remote replication, allowing Test, Failover or Recover Production operations on a Consistency Group and its constituent VM(s) to any point in time. Administrators can manage the lifecycle of VMs directly using the RecoverPoint for VMs plugin via the vSphere Web Client, and can perform test, failover, failback and recover-production operations to any point in time by leveraging the replica copy journal.

The Test Copy, Recover Production and Failover tasks can be initiated from the VM or the Consistency Group, but each activity affects the Consistency Group and all VMs within it. Both the Recover Production and Failover tasks initiate the prerequisite Test Copy (image access) prior to the recover or failover activity; this is a mandatory step that allows customers to check the integrity of the VM(s) before completing the Recover Production or Failover task.

Figure 95. Test Copy, Recover Production, and Failover Operations for a Protected VM or Consistency Group

Test Copy: RecoverPoint for VMs allows you to select any point-in-time image (snapshot) for testing, either as a stand-alone activity or in conjunction with the Failover or Recover Production activities. Note that the Test Copy also provides a recovery capability, as it potentially facilitates individual file recovery.

Failover: Failing over to a replica VM in the event of a disaster at the production site allows system operations to continue from the replica copy VM. During failover, transfer is paused and access to the original production source VM(s) is blocked. Failover promotes the shadow VM at the replica to the role of production; the original production VM becomes the replica copy VM, adopting the role of a replica copy with an rp. prefix and .shadow extension.

Production Recovery: Recovering production restores the production/source VM from a replica copy VM at the selected point in time, using an image/snapshot from the replica copy journal. The Recover Production wizard guides you through the process of correcting file or logical corruption by rolling the production/source VM back to a previous point in time.


References

1. Dell EMC XtremIO Main Page – http://www.dellemc.com/en-us/storage/xtremio-all-flash.htm

2. Introduction to EMC XtremIO 6.0

3. Dell EMC XtremIO X2 Specifications – http://www.emc.com/collateral/specification-sheet/h16094-xtremio-x2-specification-sheet-ss.pdf

4. Dell EMC XtremIO X2 Datasheet – http://www.emc.com/collateral/data-sheet/h16095-xtremio-x2-next-generation-all-flash-array-ds.pdf

5. XtremIO X2 vSphere Demo – http://www.emc.com/video-collateral/demos/microsites/mediaplayer-video/xtremio-x2-vsphere-demo.htm

6. EMC Host Connectivity Guide for VMware ESX Server – https://www.emc.com/collateral/TechnicalDocument/docu5265.pdf

7. XtremIO CTO Blog (with product announcements and technology deep dives) – https://xtremio.me/

8. XtremIO XCOPY Chunk Sizes – https://xtremio.me/2017/07/31/xcopy-chunk-sizes-revisited-and-data-reduction-as-well/

9. 2016 XtremIO with VDI Reference Architecture – https://xtremio.me/2016/07/25/a-new-vdi-reference-architecture/

10. Dell EMC Virtual Storage Integrator (VSI) Product Page – https://www.emc.com/cloud-virtualization/virtual-storage-integrator.htm

11. Dell EMC PowerEdge FC630 Specification Sheet – https://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/Dell-PowerEdge-FC630-Spec-Sheet.pdf

12. Dell OpenManage Systems Management Tools – http://en.community.dell.com/techcenter/systems-management/w/wiki/1757.dell-openmanage-systems-management-tools

13. Dell OpenManage Integration for VMware vCenter – http://en.community.dell.com/techcenter/systems-management/w/wiki/1961.openmanage-integration-for-vmware-vcenter

14. VMware vSphere 6.5 Configuration Maximum Guide – https://www.vmware.com/pdf/vsphere6/r65/vsphere-65-configuration-maximums.pdf

15. Performance Best Practices for VMware vSphere 6.5 – https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/Perf_Best_Practices_vSphere65.pdf

16. VMware Site Recovery Manager Product Page – https://www.vmware.com/products/site-recovery-manager.html

17. Dell EMC RecoverPoint for Virtual Machines Administrator's Guide – https://www.emc.com/collateral/TechnicalDocument/docu85469.pdf

18. AppSync Protecting and Recovering VMware Datastores – https://www.emc.com/video-collateral/demos/microsites/mediaplayer-video/appsync-for-vmware.htm

19. Dell EMC RecoverPoint Product Page – https://www.emc.com/storage/recoverpoint/recoverpoint.htm

20. EMC Storage Analytics Product Guide – https://www.emc.com/collateral/TechnicalDocument/docu85487.pdf


© 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners. Reference Number H16821


How to Learn More

For a detailed presentation explaining XtremIO X2 Storage Array's capabilities and how XtremIO X2 substantially improves performance, operational efficiency, ease-of-use and total cost of ownership, please contact XtremIO X2 at [email protected]. We will schedule a private briefing in person or via a web meeting. XtremIO X2 provides benefits in many environments and mixed workload consolidations, including virtual server, cloud, virtual desktop, database, analytics and business applications.