

White Paper

Abstract

This white paper covers the replication solutions available in VNXe3200, as well as their features and implementation. Specifically, it discusses VNXe3200 Native Asynchronous Replication, RecoverPoint, and others. It also includes information on how you can manage replication in VNXe3200, and the major benefits replication provides. May 2016

EMC VNXe3200 Replication Technologies A Detailed Review

2 EMC VNXe3200 Replication Technologies

Copyright © 2016 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware is a registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. Part Number h13685.3


Table of Contents

Executive Summary
    Audience

Replication Overview

VNXe3200 Native Replication

Theory of Operations
    Replication Interfaces
    Replication Connections
    Replication Sessions
    Internal Snapshots
    Storage Resources
    Replication Modes
    Replication Roles

Replication Operations
    Pause and Resume
    Sync Now
    Failover and Failback
    Verify and Update
    Delete

Topologies

Unisphere Management
    Configuring Replication
    Replication Settings
    Replication Connections
    Create Replication Session Wizard
    Replication Details

Limits

RecoverPoint with VNXe3200

Overview

Configuration
    RecoverPoint LUN Access

Comparison of Replication Solutions

Interoperability
    Native Replication and Unified Snapshots
    Native Replication and RecoverPoint
    Native Replication and SMI-S API

Conclusion

References


Executive Summary

Data is one of the most valuable assets of any organization. It is being stored, mined, transformed, and utilized continuously. It is a critical component in the operation and function of organizations. Outages, whatever the cause, are extremely costly, and customers are concerned about data availability at all times. Safeguarding and keeping data highly available are some of the top priorities of any organization.

To avoid disruptions in business operations, it is necessary for data centers to implement data protection technologies. A data replication solution can enable you to achieve business continuity, high availability, and data protection. EMC® VNXe3200™ provides a set of integrated features that will enable you to meet your goals of business continuity and data protection.

The replication features in VNXe3200 enable you to maintain remote replicas of the production data. This ensures that your organization can recover from a disaster at a production data center quickly, easily, and with minimal to no data loss. Replication creates crash-consistent point-in-time copies of the production data that you can replicate locally or to a remote VNXe3200 system.

This white paper primarily describes the replication technologies available for VNXe3200:

• Native Asynchronous Block Replication

• EMC® RecoverPoint®

The VNXe3200’s native asynchronous block replication is tightly integrated into the existing Unisphere GUI and Unisphere CLI interfaces. Unisphere’s simple and intuitive interface allows IT generalists and advanced users alike to configure and manage replication for their VNXe3200 systems easily.

As an alternative to native replication, EMC RecoverPoint allows for appliance-based synchronous or asynchronous block replication using the Unisphere for RecoverPoint interface. Using this technology, VNXe3200 is able to recover data to any point-in-time, and replicate with other EMC storage systems including VNX and VMAX.

Audience

This white paper is intended for EMC customers, partners, and employees interested in using replication with VNXe3200. Some familiarity with VNXe3200 Unified Snapshots and RecoverPoint is assumed.


Terminology

Asynchronous Replication – A replication mode that enables you to replicate data over long distances, while maintaining a copy of the data at the remote site. Automatic synchronization from source to destination can be performed periodically by specifying a Recovery Point Objective (RPO).

Bandwidth – The amount of data that can be transferred in a given period of time. Bandwidth is usually represented in megabytes per second or MB/s.

CMI – An internal inter-controller communication and data transfer channel. In VNXe3200 native local replication, data is transferred across the CMI if the destination storage resource is owned by a different storage processor than the source storage resource. Under non-optimal circumstances, externally generated IO may also traverse the CMI in order to reach the desired storage processor.

Common Base – A pair of snapshots on different storage objects, internally identified by the system as having the same point-in-time view.

Consistency Group (General) – A group of storage elements whose snapshots or replicas must be created together, in either a crash-consistent or application-consistent fashion. All storage elements that belong to the same application can be placed in a consistency group. In VNXe3200, consistency groups are called LUN groups.

Consistency Group (RecoverPoint) – In RecoverPoint, a consistency group is a user-defined group of LUNs to be replicated. A consistency group needs to be created in RecoverPoint as part of the workflow to configure replication. This consistency group containing VNXe3200 LUNs can then be replicated locally or remotely. Note that in this context, consistency group is not synonymous with LUN group. A VNXe3200 LUN group is a storage system-side association between LUNs that does not automatically carry over into RecoverPoint.

Crash Consistency – A quality of a snapshot set such that the set represents the state of the application's storage at a specific point-in-time. The system will guarantee that no changes will be written to any of the storage elements during the timeframe where the snapshot set is being created. A crash consistent snapshot set represents the set of storage elements as they were at a specific point-in-time.

Destination Storage Resource – A storage resource that is used for disaster recovery in a replication session. A destination storage resource can exist on the same or different system as the production storage resource. Also known as a copy or target.

Internal Snapshot (Replication Snapshot) – Snapshots created by the system as part of a replication session. While visible to the user, user operations are not permitted on these snapshots. A single replication session requires two internal snapshots on each of the source and destination storage resources (four in total).

Journal Volume – In RecoverPoint, a journal volume is a LUN designated to hold data associated with previous points-in-time. The journal is used to allow RecoverPoint to roll back data to any point-in-time. Journal LUNs must be configured for each copy of a consistency group, including the production copy.


LUN – A block-based storage resource that a user provisions. It represents a SCSI logical unit.

LUN Group – A block-based storage resource that a user provisions, which may contain zero or more LUNs. This is also known as a consistency group.

Protection Space – Space consumed or reserved by snapshots for the protection of overwritten data. In VNXe3200, this comes from the same pool as the storage resource.

RecoverPoint – An appliance-based disaster recovery solution that replicates asynchronously or synchronously and enables recovery to any point-in-time.

RecoverPoint Appliance (RPA) – An industry-standard server platform that runs RecoverPoint software and manages all aspects of data protection for a consistency group. RPAs are clustered at each site in a RecoverPoint system for high availability and load balancing. Virtual machines running RecoverPoint software, or vRPAs, are also supported as an alternative to physical appliances.

Recovery Point Objective (RPO) – The acceptable amount of data, measured in units of time, that may be lost in a failure. For example, if a storage resource has a one-hour RPO, any data written to the storage resource in the most recent hour may be lost when replication fails over to the destination storage resource.

Recovery Time Objective (RTO) – RTO is the duration of time within which a business process must be restored after a disaster. For example, an RTO of 1 hour means that in case of a disaster, access to the data needs to be restored in 1 hour.

Replication Session – A relationship configured between two storage resources of the same type, on the same or different systems, to automatically or manually synchronize the data from one resource to another.

Round Trip Time (RTT) – RTT is the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgement of that signal to be received.

Snapshot – A snapshot is the state of a storage element or application at a particular point-in-time. When a snapshot is taken, it initially shares all blocks with its associated production storage resource. When hosts then write to the production storage resource, those writes are redirected to another location in the pool, and pointers are updated. In VNXe3200, this same snapshot technology is used for block and file storage resources.

Storage Resource – The top-level object a user can provision, associated with a specific quantity of storage. All host access and data protection activities are performed at this level. In this document, storage resource refers specifically to those which support replication: LUNs, LUN groups, and VMware VMFS datastores.

Synchronous Replication – A replication mode in which the host initiates a write to the system at a local site and the data must be successfully stored in both local and remote sites before an acknowledgement is sent back to the host.


Unisphere – A web-based EMC management interface for creating storage resources and configuring and scheduling protection of stored data on VNXe3200. Unisphere can be used for all management of VNXe3200 native replication.

Unisphere for RecoverPoint – A web-based interface for managing RecoverPoint replication. It serves as a single pane of glass for replicating storage resources of multiple storage systems configured to use RecoverPoint. Consistency groups are created, replicated, and recovered through this interface.

User Snapshot – A snapshot created manually by the user or through a user-defined schedule. This is different from an internal snapshot, which is created by the system when a replication session is created.

Replication Overview

To protect against events that may disrupt production data availability, it is essential to maintain a redundant copy of data. You can use data replication to create this copy. Replication is a process in which data is synchronized to a remote location, providing an enhanced level of redundancy in case the storage systems at the main production site fail. Having a proper disaster recovery (DR) site minimizes the downtime-associated costs and simplifies the recovery process from a disaster. In addition to remote replication, data can also be replicated locally, within the same system.

In general, replication can be either synchronous or asynchronous:

Synchronous Replication

In synchronous replication, write acknowledgement is not sent to the host until the write operation has been committed to both the local and remote storage. This ensures zero data loss in the event of a disaster, because data is always in sync between both sites at any given point-in-time. However, mirroring every write operation across the network before acknowledgement introduces latency for the application, resulting in a distance limitation for synchronous replication. This limitation is generally 5-10 milliseconds Round Trip Time (RTT), or around 150-300 miles between sites.
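This latency trade-off can be illustrated with a small Python sketch (a toy model, not VNXe3200 code; the commit times used below are illustrative assumptions):

```python
def sync_write_latency_ms(local_commit_ms: float, rtt_ms: float,
                          remote_commit_ms: float) -> float:
    """Approximate latency the host sees for one synchronous write:
    the write must commit locally, traverse the link to the remote
    site and back (one RTT), and commit remotely before the host
    receives an acknowledgement."""
    return local_commit_ms + rtt_ms + remote_commit_ms

# With an assumed 0.5 ms commit time at each site, a 5 ms RTT raises
# per-write latency from 0.5 ms (no replication) to 6 ms.
print(sync_write_latency_ms(0.5, 5.0, 0.5))
```

Because every write pays the full RTT, synchronous replication is practical only at metro distances; asynchronous replication, described next, removes the RTT from the write path entirely.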

Asynchronous Replication

Asynchronous replication avoids distance limitations by acknowledging write operations immediately as they arrive at the local site, then tracking writes and replicating them across the network at a later time. With this method, applications do not experience latency as a result of replication, and replication can occur over much greater distances. Because write operations are not replicated to DR storage immediately, asynchronous replication introduces the concept of an acceptable data loss threshold, or Recovery Point Objective (RPO). Discussed in detail later, a user-defined RPO specifies the maximum allowable time for a replication session to be out-of-sync, limiting the amount of data that may be lost in the event of a disaster.


VNXe3200’s native replication operates asynchronously, and can be easily configured and managed through Unisphere. Discussed later in this paper, RecoverPoint support allows VNXe3200 to replicate synchronously or asynchronously using RecoverPoint appliances and the Unisphere for RecoverPoint user interface.

VNXe3200 Native Replication

The replication functionality available natively in VNXe3200 allows users to configure asynchronous replication of block storage resources between VNXe3200 storage systems. All replication configuration and management can be easily performed through Unisphere. The following sections will discuss:

• How the native replication functionality works

• Replication operations that can be performed

• Unisphere configuration and management

RecoverPoint support, interoperability and comparisons between replication solutions will be discussed in later sections.

Theory of Operations

There are several components required for replication, which will be discussed in this section. These components are configured by the user and build on top of one another to enable replication. In VNXe3200, configuring replication involves the following steps:

• Creating replication interfaces

• Creating a replication connection

• Creating a replication session

This section details the functions, requirements, and interactions of these components. Configuration of these components in Unisphere will be discussed in a later section.

Replication Interfaces

VNXe3200 introduces the concept of designated replication interfaces, which are used to establish a connection between systems. Replication interfaces are dedicated network interfaces used only for replication-related data and management traffic between systems. Before a remote system connection can be successfully established, a minimum of one replication interface must be created on each storage processor (SP) of each system. Replication interfaces may be configured on the embedded 10GbE ports, or the ports of any Ethernet IO module. Replication interfaces can also be created on link aggregated ports for high availability, increased maximum throughput, and load balancing of replication traffic across physical ports in the aggregation.


Native replication is over Ethernet only; Fibre Channel cannot be used to replicate data. However, there is no restriction on the host access protocol used for replicated storage resources.

Replication Connections

Replicating between two systems requires that there is a trusted connection established between the systems to be used for replication. A replication connection establishes both a management interface and data path between a pair of systems, which will later be used when replication is configured.

Once replication interfaces are configured on each system, a replication connection can be established between the systems. When a replication connection is established, replication interfaces on each SP are paired with replication interfaces on each SP of the remote system. This ensures that there is a data and management connection between each pair of SPs, as shown in Figure 1. These interconnects provide redundancy in the event of a LUN trespass, which may occur as a result of an SP reboot or failure. Note that this does not protect against a link or port failure, as the replication interface will not fail over; link and port failures can instead be protected against by configuring the replication interface on a link aggregation group. Requiring an interface on each storage processor also improves performance from a LUN ownership perspective. Since it is preferable for LUN IO to arrive through the SP that owns the specific LUN, this configuration ensures that replication IO can be serviced by the native SP, and prevents a LUN from trespassing as a result of replication IO arriving at an SP that does not own it.


Figure 1 - Replication Connection

All replication connections will involve all configured interfaces on each participating system. In other words, specific replication interfaces cannot be reserved for specific remote system connections. However, specific replication interfaces can be chosen on a per replication session basis. A replication connection between two systems requires an administrator to provide valid credentials for the remote system being registered. Note that unlike VNX Replicator, remote systems do not need to be registered from both sides manually. This is because the VNXe3200 leverages an internal API for inter-system management communications, which allows all remote system registration for both systems to be completed from a single side of the connection, simplifying management.
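The SP-to-SP pairing described above can be sketched in Python (a simplified model with a hypothetical data layout, assuming a full mesh of interconnects as in Figure 1):

```python
def pair_interfaces(local, remote):
    """Build the SP-to-SP interconnects for a replication connection.

    `local` and `remote` map an SP name ("spa" / "spb") to the list of
    replication-interface addresses configured on that SP. A minimum of
    one interface per SP on each system is required; interfaces are then
    paired so that every pair of SPs has a data and management path."""
    for system in (local, remote):
        for sp in ("spa", "spb"):
            if not system.get(sp):
                raise ValueError(f"missing replication interface on {sp}")
    return [(l, r)
            for lsp in ("spa", "spb") for rsp in ("spa", "spb")
            for l in local[lsp] for r in remote[rsp]]

# One interface per SP on each system yields four interconnects.
pairs = pair_interfaces({"spa": ["10.0.0.1"], "spb": ["10.0.0.2"]},
                        {"spa": ["10.1.0.1"], "spb": ["10.1.0.2"]})
```

The validation step mirrors the requirement that a connection cannot be established until every SP on both systems has at least one replication interface.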

Local Replication

VNXe3200 also supports local replication, where a storage resource is replicated to another storage resource on the same system. This type of replication does not require replication interfaces to be configured, as no data must be transferred over the external network. Local replication where the destination storage resource is owned by a different SP than the source storage resource will be completed over the internal CMI bus connecting the SPs. For local replication, a default Local System replication connection is selected, which is pre-existing and does not need to be manually configured.

Replication Sessions

A replication session uses a configured replication connection and its associated interfaces to replicate data between source and destination storage resources. In VNXe3200, all native replication sessions are asynchronous, where synchronization is triggered by a user-defined Recovery Point Objective (RPO) or manually. Asynchronous replication is defined by the following related characteristics:

• Host writes are acknowledged prior to being replicated to destination storage

o When a host writes to a production storage resource, those write operations are not immediately replicated to the destination storage resource prior to acknowledgement. Instead, source write operations are tracked and replicated at a later time.

• A maximum acceptable level of potential data loss, as defined by the user’s RPO

o An RPO is the maximum amount of data the user is willing to lose in the event of a disaster, usually measured in time. This value determines how frequently synchronizations must occur at minimum.

Upon creation of the replication session, a full initial synchronization occurs between the source and destination storage resources. This creates a common base between the source and destination storage resources, meaning the same point-in-time data is present on both sides. Any incremental changes to the source storage resource are tracked so they can later be copied to the destination. Host write operations to source storage resources are acknowledged normally, without waiting for the changes to be propagated to the destination side; this transfer is done asynchronously. At some later time, determined by the RPO setting, all source changes since the last sync are replicated to the destination storage resource, and the common base established earlier is updated to reflect these new changes. In the event of a failure on the source side, all changes since the last completed sync will be lost when failing over to the destination, as these changes have not yet been synced.
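This write-tracking behavior can be sketched as a toy model in Python (an illustration only, not VNXe3200 code):

```python
class AsyncReplica:
    """Toy model of asynchronous replication: host writes are
    acknowledged immediately and only tracked; a later sync copies
    the accumulated changes to the destination."""

    def __init__(self):
        self.source = {}    # block address -> data on the production side
        self.dest = {}      # block address -> data on the destination
        self.dirty = set()  # blocks changed since the last completed sync

    def write(self, block, data):
        self.source[block] = data
        self.dirty.add(block)  # tracked for a later sync, not replicated now
        return "ack"           # host sees no replication latency

    def sync(self):
        for block in self.dirty:          # copy only the incremental changes
            self.dest[block] = self.source[block]
        self.dirty.clear()                # source and destination now share
                                          # a common base
```

After a sync, `dest` reflects the source as of that sync; any blocks still in `dirty` represent data that would be lost if a failover occurred at that moment.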

Internal Snapshots

In VNXe3200, the existing native Unified Snapshots technology is leveraged to maintain the common base explained above and sync incremental changes. This is demonstrated in Figure 2 below.


Figure 2 - Replication Session

In general, asynchronous replication operates in the following way:

1. When a replication session is established, two internal snapshots are created on each of the source and destination storage resources. After the snapshots are created, Source Snap A and B each contain a current point-in-time view of the Source LUN.

2. Data from Snap A is then copied to the empty Destination LUN. This is known as the initial synchronization, and is a full copy.

3. Once this initial synchronization is completed, Destination Snap A is refreshed to reflect the current state of the Destination LUN. At this point, Source Snap A and Destination Snap A contain the same data, which is reflective of the point-in-time view of the Source LUN at the time the initial synchronization began. Snap A is now the common base between the source and destination LUNs.

4. As hosts continuously write to the Source LUN, the data in the LUN is changed.

5. At the next automatic or manual sync, Source Snap B is refreshed to reflect the current point-in-time view of the Source LUN. All incremental changes since the time of the previous sync are then copied from Source Snap B to the Destination LUN.

6. After this copy is complete, Destination Snap B is refreshed to reflect the current state of the Destination LUN. Now Snap B is the common base between the source and destination.


The common base will continue to alternate between Snaps A and B at each subsequent automatic or manual sync.
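The alternating Snap A / Snap B cycle described above can be modeled with a short Python sketch (a toy simulation; the real system copies only incremental changes, which is modeled here as a full copy for simplicity):

```python
class SnapCycle:
    """Toy model of the alternating common-base cycle used by
    VNXe3200 native replication."""

    def __init__(self, source):
        self.source = source                  # live production contents
        self.snaps = {"A": None, "B": None}   # internal source snapshots
        self.dest = None                      # destination LUN contents
        self.base = None                      # current common base snap
        self._next = "A"                      # first sync uses Snap A

    def sync(self):
        cur = self._next
        self.snaps[cur] = dict(self.source)   # refresh snap to the current
                                              # point-in-time view of the source
        self.dest = dict(self.snaps[cur])     # copy to the destination (full
                                              # initially, incremental after)
        self.base = cur                       # destination snap is refreshed to
                                              # match; this pair is the new base
        self._next = "B" if cur == "A" else "A"

src = {"blk0": "v1"}
s = SnapCycle(src)
s.sync()             # initial full synchronization; Snap A is the common base
src["blk0"] = "v2"   # host writes change the source
s.sync()             # next sync refreshes Snap B; it becomes the common base
```

Each subsequent `sync()` flips the common base between Snap A and Snap B, just as the steps above describe.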

Internal snapshots behave the same way as user snapshots: they leverage Redirect on Write technology and consume protection space from the same pool as the storage resource, as per Unified Snapshots semantics. Although they function the same way, the internal snapshots used for replication differ from normal user snapshots in a few ways. Internal snapshots are visible to users, but user operations are not permitted on them; replication snapshots cannot be manually deleted, copied, restored, attached, or detached. Additionally, snapshots created by replication do not count against the maximum allowed number of user snapshots.

For more information on how snapshots work, see the EMC VNXe3200 Unified Snapshots white paper on EMC Online Support.

Storage Resources

When configuring replication, the source and destination storage resources must be of the same type. In VNXe3200, the following types of storage resources can be replicated:

• LUNs

• LUN groups

• VMware VMFS datastores

LUNs and VMware VMFS Datastores

On the storage system, replication of VMware VMFS datastores functions identically to LUN replication, although LUNs and datastores cannot be replicated with one another. When configuring LUN replication, the size of the source and destination LUNs must match exactly. All other configuration settings may vary between source and destination LUNs. As discussed previously, a LUN must be designated as a Destination before it can be used as a destination in a replication session. In Unisphere, the user also has the ability to create the destination as part of the replication session configuration, which will ensure these requirements are met.

LUN Groups

Crash-consistent replication of block application data can be achieved using LUN groups. In VNXe3200, a LUN group is a consistency group, meaning snapshots of LUNs in a LUN group always represent a consistent point-in-time across all LUNs in the group. LUN group replication works in the same way as described earlier, with crash-consistent internal snapshots being taken atomically across every LUN in the group and refreshed periodically to maintain a common base.

When configuring LUN group replication, a destination LUN group must be created which matches the source LUN group exactly.

1. The destination LUN group must have the same number of LUNs with corresponding size to LUNs residing in the source LUN group.


2. The LUNs in the destination LUN group must also be thin/thick provisioned to match their corresponding LUNs in the source LUN group.

In Unisphere, source LUN group LUNs are automatically mapped to destination LUN group LUNs when configuring replication. However, Unisphere CLI provides the ability to manually map source LUNs to destination LUNs when replicating LUN groups, provided there are multiple possible pairings that adhere to the rules above.
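The matching rules above can be sketched as a validation and auto-mapping helper in Python (a hypothetical illustration; each LUN is represented only by its size in GB and its thin/thick flag):

```python
def map_lun_group(source, dest):
    """Validate a destination LUN group against its source and auto-map
    LUNs, following the rules above: the groups must have the same
    number of LUNs, and each destination LUN must match a source LUN's
    size and thin/thick provisioning. LUNs are (size_gb, is_thin)."""
    if len(source) != len(dest):
        raise ValueError("destination must have the same number of LUNs")
    remaining = list(dest)
    mapping = []
    for s in source:
        match = next((d for d in remaining if d == s), None)
        if match is None:
            raise ValueError(f"no destination LUN matches {s}")
        remaining.remove(match)               # each destination LUN used once
        mapping.append((s, match))
    return mapping

# Two LUNs per group: a 100 GB thin LUN and a 50 GB thick LUN.
pairs = map_lun_group([(100, True), (50, False)],
                      [(50, False), (100, True)])
```

When several destination LUNs satisfy the rules for a given source LUN, more than one valid mapping exists, which is why Unisphere CLI additionally allows the pairing to be specified manually.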

Auto-Expansion

While source and destination storage resource sizes must match exactly in order to configure replication, a source storage resource may have its size increased after replication has been configured. When increasing the size of a source storage resource in a replication session, the destination storage resource will automatically be extended at the next sync, as shown in Figure 3. This is true for standalone LUNs, LUN groups, and VMware VMFS datastores. Note that this only applies to changing the size of LUNs; it does not mean LUNs can be added or removed from a LUN group after replication has been configured.

Figure 3 - Destination Size Increase

If a thick source storage resource is extended but there is not sufficient pool space to automatically extend the destination, synchronization will not occur. Synchronization will resume when there is sufficient space in the destination pool to complete the automatic extension.

The user should always check to ensure that there is also sufficient space in the destination pool before expanding a thick source storage resource.
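The space check behind this behavior can be sketched in Python (a simplified model of the decision, not VNXe3200 code):

```python
def can_auto_expand(dest_size_gb, source_size_gb, dest_pool_free_gb,
                    dest_is_thick=True):
    """Decide whether the next sync can proceed after a source expansion.
    For a thick destination, the pool must have room for the full size
    increase; otherwise synchronization is held until space is freed.
    (A thin destination is modeled here as reserving no space up front.)"""
    growth = source_size_gb - dest_size_gb
    if growth <= 0:
        return True                      # nothing to extend
    if dest_is_thick and growth > dest_pool_free_gb:
        return False                     # sync pauses until space is available
    return True
```

For example, extending a thick 100 GB source to 150 GB requires at least 50 GB free in the destination pool before synchronization can resume.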

Replication Modes

As mentioned earlier, VNXe3200 asynchronous replication can operate in one of two modes:

• Manual Synchronization

• Automatic Synchronization (RPO)


Manual Synchronization

With manual synchronization, the user manually syncs source changes to the destination storage resource as desired. Optionally, the initial synchronization can be performed automatically when a replication session is created. However, subsequent syncs must be user-initiated. When using the manual synchronization option, it is recommended that users sync the session periodically in order to avoid excess consumption of pool space by the internal snapshots.

Automatic Synchronization

With automatic synchronization, the user specifies an RPO which is used to automatically sync the source and destination storage resources. When configuring replication with this option, the initial sync is always performed immediately upon creation of the replication session.

Recovery Point Objective (RPO)

The RPO value represents the maximum amount of time which the source and destination storage resources are allowed to be out of sync, which in turn defines the maximum acceptable data loss.

For example, if the RPO is set to 10 minutes, then the latest point-in-time copy on the destination side should not be older than 10 minutes. So if the source storage resource were to become unavailable, no more than the last 10 minutes worth of data would be lost upon failing over to the destination. Since the incremental changes take time to be copied to the destination, an RPO of 10 minutes does not necessarily mean that synchronization will occur in 10 minute intervals. The destination could also be updated more frequently than the RPO, depending on source write rate and network link speed. Similarly, large write bursts, network limitations, or network problems between the source and destination could cause the value specified in the RPO to be exceeded.

There are several considerations when configuring replication to use automatic synchronization. While a shorter RPO will provide greater protection due to more frequent synchronizations, there are also drawbacks. The shorter the RPO, the greater the performance impact since internal snapshots will be refreshed and invalidated more frequently. Additionally, replication may generate higher network traffic when using a shorter RPO.

Conversely, a replication session with a shorter RPO has the benefit of consuming less pool space with its internal snapshots, since protected blocks will be released more frequently when snapshots are refreshed. This means that there will be less buildup of uncopied writes than with a longer RPO, and less pool space used as a result.
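As a rough illustration of these trade-offs, the following sketch (hypothetical numbers, not a product formula) checks whether a given RPO is sustainable: a sync must copy roughly one RPO window's worth of changes, so the replication link must be able to drain changes faster than hosts generate them.

```python
# Illustrative back-of-the-envelope check (assumed numbers, not product
# behavior): whether an RPO is sustainable given average write rate and
# replication link bandwidth.

def rpo_sustainable(rpo_minutes, write_rate_mb_s, link_mb_s):
    """Sustainable only if one RPO window of changes can be copied
    in less than one RPO interval."""
    delta_mb = write_rate_mb_s * rpo_minutes * 60   # data accrued per RPO window
    copy_minutes = delta_mb / link_mb_s / 60        # time to replicate that delta
    return copy_minutes < rpo_minutes

print(rpo_sustainable(10, 20, 100))   # → True  (20 MB/s writes, 100 MB/s link)
print(rpo_sustainable(10, 120, 100))  # → False (write bursts exceed the link; RPO slips)
```

In practice this reduces to comparing sustained write rate against link bandwidth, which is why large write bursts or network problems can cause the configured RPO to be exceeded.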

Replication Roles

Two storage resources are required for replication:

• A source, which is copied

• A destination, which the source is copied to

All storage resources in VNXe3200 have an attribute identifying them as having the role of either a source or destination. This attribute, “Replication Destination”, is set to “no” by default, and ensures that production storage resources are not accidentally chosen as destinations and overwritten when configuring replication. If a storage resource has not been designated as a replication destination, it will not be available to be used as a destination when configuring a replication session. When a storage resource is designated as a replication destination, it becomes unavailable to hosts, and can only be written to as part of replication synchronization.

Storage resources cannot have their replication destination attribute set to “yes” during initial creation; this must be modified afterward. The exception to this is creating a destination through the Create Replication Session wizard, which allows creation of a new storage resource to be used as a destination, directly from the replication session wizard. In this case, the newly created storage resource will automatically be designated as a destination so it can be used in the replication session being configured.
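The role attribute described above can be modeled with a small illustrative sketch (not product code): a resource flagged as a replication destination refuses host IO, and only such resources are eligible as replication targets.

```python
# Minimal model (illustrative only) of the "Replication Destination"
# attribute: destination-designated resources refuse host IO and are the
# only resources eligible as replication targets.

class StorageResource:
    def __init__(self, name, replication_destination=False):
        self.name = name
        # Defaults to False ("no"), matching the behavior described above.
        self.replication_destination = replication_destination

    def host_write(self, data):
        if self.replication_destination:
            raise PermissionError(f"{self.name} is a replication destination")
        return len(data)

    def eligible_as_destination(self):
        return self.replication_destination

prod = StorageResource("LUN_prod")
dr = StorageResource("LUN_dr", replication_destination=True)
print(prod.eligible_as_destination(), dr.eligible_as_destination())  # → False True
```

This models why a production resource cannot accidentally be overwritten: it must first be explicitly flagged as a destination, which also removes it from host access.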

Replication Operations

Once replication has been configured, various operations are available to be performed on the replication session. Not all operations are available at all times, and some will produce different results depending on whether they are performed on the source or destination side. Note that only one replication operation can be in progress at any given time. For example, a session cannot be paused or deleted during synchronization.

Pause and Resume

Pausing and resuming allow the user to temporarily stop a replication session, make changes, and then continue. In Unisphere, replication sessions can only be paused from the source side; Unisphere CLI also supports pausing from the destination, though this is not recommended. Replication sessions cannot be paused while synchronization is in progress.

A user may want to pause a replication session for various reasons. Management or replication interface IP addresses may need to be changed, or a system may need to be physically moved from one location to another. One particular use case is avoiding lengthy initial synchronization times where there is a large amount of data to be copied and low network bandwidth or high latency between source and destination sites. Using the pause and resume operations, a user could:

1. Create the replication session with both systems locally.

2. Complete the initial synchronization over a more optimal connection.

3. Pause the replication session.

4. Ship the destination system to the remote site.


5. Once the system arrives at the remote location and is reconfigured, update the replication connection and resume the replication session.

Because the initial full copy was already completed previously, only incremental changes will need to be applied across the slower network going forward.

Sync Now

The Sync Now operation replicates incremental changes since the last sync from the source to the destination. It is used primarily to sync replication sessions configured for manual synchronization; however, sessions configured for automatic synchronization (RPO) can also be synced manually using this operation. If the initial synchronization is deferred when using manual synchronization, the Sync Now operation must also be used to perform it. Only one synchronization can run at a time; any attempt to begin a new synchronization while another is in progress will fail. When the synchronization completes, the destination storage resource will contain the data that was present on the source at the time the sync was started. Sync Now can only be initiated from the source side.
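The one-sync-at-a-time rule can be sketched as a tiny state guard (illustrative only; the class and method names are hypothetical, not the product's API):

```python
# Minimal sketch (illustrative) of the Sync Now rule: a new manual sync
# is rejected while another synchronization is already in progress.

class ReplicationSession:
    def __init__(self):
        self.sync_in_progress = False

    def sync_now(self):
        if self.sync_in_progress:
            raise RuntimeError("synchronization already in progress")
        self.sync_in_progress = True
        return "sync started"

    def sync_complete(self):
        self.sync_in_progress = False

s = ReplicationSession()
print(s.sync_now())   # → sync started
s.sync_complete()
print(s.sync_now())   # a new sync starts only after the previous one completes
```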

Failover and Failback

When a failover is initiated, the destination storage resource becomes the production copy, and hosts can begin accessing their production data on this storage resource. Failover behavior depends on whether the failover is initiated from the source or destination side.

Failover with Sync

When failover is requested from the source side, a final synchronization will first be performed to ensure that no production data is lost when failing over. For this reason, it is recommended to failover from the source side whenever possible. The source remains available to hosts during this sync until less than 100MB of changes remain to be copied to the destination. At this point, the system disables host access to the source storage resource, and the remaining changes are replicated to the destination. The system then enables host access on the destination storage resource.
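The cutover sequence above can be sketched as follows. This is an assumed simplification (the fixed copy step size is hypothetical): the source stays host-accessible until fewer than 100MB of changes remain, at which point access is disabled on the source and, after the final copy, enabled on the destination.

```python
# Sketch (assumed simplification) of failover-with-sync cutover: host
# access flips from source to destination once fewer than 100 MB of
# changes remain to be copied.

CUTOVER_THRESHOLD_MB = 100

def failover_with_sync(pending_changes_mb, copy_step_mb=50):
    source_access, events = True, []
    while pending_changes_mb > 0:
        if source_access and pending_changes_mb < CUTOVER_THRESHOLD_MB:
            source_access = False
            events.append("host access disabled on source")
        pending_changes_mb = max(0, pending_changes_mb - copy_step_mb)
    events.append("host access enabled on destination")
    return events

# 220 MB of changes: the source stays accessible until < 100 MB remain.
print(failover_with_sync(220))
# → ['host access disabled on source', 'host access enabled on destination']
```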

Failover without Sync

In the case of a disaster, the source side will not be accessible, and the replication session will need to be failed over from the destination. In this case, initiating a failover will cause the replication session to be failed over immediately, without first being synchronized. Because of this, all changes made to the source since the last sync will be lost. The system will then enable host access on the destination storage resource, and hosts can begin accessing their production data on the destination. Note that when failing over a local replication session, the user is given the option to fail over with or without sync.


Resume

Once a replication session is failed over from either side, replication does not automatically begin in the opposite direction. If the original source storage resource is available, the replication session can be manually resumed, which begins replication in the opposite direction. This resume operation is initiated from the new production side (the original destination). Note that this is not the same as a failback, since the original destination storage resource continues to be used for production, with replication running in the opposite direction from what was originally configured.

Failback

After failover, if the source storage resource is available, the user also has the option to fail back. Failback synchronizes the source with the current production destination storage resource, allowing the source to again be made available to hosts. After failback, the source becomes the production storage resource again, as it was prior to failover. Failback follows a process similar to failover. When failback is initiated, the system begins synchronizing from the destination storage resource to the source in order to apply changes made on the destination since failover. The destination remains available to hosts during this sync until less than 100MB of changes remain to be copied to the source. At this point, the system disables host access to the destination storage resource, and the remaining changes are replicated to the source. Host access is then enabled on the source storage resource, and the direction of replication is automatically reversed, so that data is again replicated from source to destination.

Verify and Update

This operation allows a user to verify and update a replication connection to a remote system. It is performed on the replication connection itself, as opposed to an individual replication session. Verify and Update can be used to verify network connectivity required for replication, or to update the list of usable replication interfaces after adding or removing interfaces. If a replication connection is disrupted due to network or other issues, Verify and Update can also be used to re-establish the connection.

Delete

A replication session can be deleted entirely from the source side, or from each side separately. When a session is deleted from the source side, it is also automatically deleted on the destination side, provided the replication connection between the systems is healthy. This method is recommended whenever both the source and destination systems are available. Deleting a session from each side individually is useful when connectivity between the systems has been lost. Note that deleting a replication session also deletes the internal snapshots used by that session. Because the common base is lost, a full synchronization will be required if a replication session between the same storage resources is established again. Replication sessions cannot be deleted while synchronization is in progress.


The connection and interfaces configured for replication can also be deleted. However, this is only allowed if there are no dependent objects: a replication connection can only be deleted if no replication sessions are using that connection, and replication interfaces can only be deleted if they are not in use by a replication connection.

Topologies

With the introduction of replication, VNXe3200 is now able to support multiple topologies to suit various disaster recovery configurations. The following remote replication topologies are possible using native replication in VNXe3200:

• One-Directional

o A single source system replicating to a single destination system

• Bi-Directional

o A two system topology in which each system acts as a replication destination for the other’s production data

• One-to-Many (system level)

o A single source system replicates different storage resources to multiple destination systems

• Many-to-One

o Multiple source systems replicate to a single destination system

A graphical view of these topologies is shown in Figure 4. Note that all supported topologies have a 1-to-1 configuration for each individual replication session in the topology.


Figure 4 – Replication Topologies

In VNXe3200, a storage resource may be a member of only one replication session. This means that cascading replication, where a destination storage resource also serves as the source of another replication session, is not possible. It is also not possible to configure a One-to-Many topology in which a single storage resource replicates to several destination storage resources; One-to-Many is supported only at the system level.

With the One-to-Many topology, separate storage resources at a single central production site can be replicated to multiple destination sites. These storage resources could then be accessed locally on the destination sites, using user snapshots of the otherwise inaccessible destination storage resources. For example, several test/dev teams at remote sites could safely experiment with copies of different production storage resources, without accessing the actual production storage. One-to-Many topology also segregates fault domains for DR copies of production data, avoiding lengthy full synchronizations if a disaster occurs at a DR site. For example, if all production storage resources are replicated to the same DR site, and that DR site fails (meaning data cannot be recovered), a full copy will be required for each storage resource upon configuring new replication sessions. However, if one DR site fails in a One-to-Many topology, only storage resources replicating to that site will require a full synchronization upon re-establishing DR.

With the Many-to-One topology, VNXe3200 allows replication from multiple remote source sites to one central destination site.

The maximum supported number of remote replication connections in VNXe3200 is 16, meaning 16 systems is the limit on the size of any Many-to-One or One-to-Many replication configuration.
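The two structural rules above can be expressed as a short validation sketch (illustrative only; the helper and its argument shape are hypothetical): at most 16 remote system connections, and each storage resource participates in at most one replication session, which also rules out cascading.

```python
# Illustrative validation (assumed helper, not product code) of the
# topology rules: at most 16 remote system connections, and each storage
# resource appears in at most one replication session. Resource names are
# assumed globally unique for simplicity.

MAX_REMOTE_CONNECTIONS = 16

def validate_topology(sessions):
    """sessions: list of (source_resource, dest_system, dest_resource)."""
    errors = []
    if len({dest_system for _, dest_system, _ in sessions}) > MAX_REMOTE_CONNECTIONS:
        errors.append("too many remote system connections")
    seen = set()
    for src, _, dst in sessions:
        for res in (src, dst):
            if res in seen:
                errors.append(f"{res} already in a replication session")
            seen.add(res)
    return errors

# A valid One-to-Many layout at the system level:
print(validate_topology([("lun1", "sysB", "lun1_dr"), ("lun2", "sysC", "lun2_dr")]))  # → []
# Cascading (reusing a destination as a source) is rejected:
print(validate_topology([("lun1", "sysB", "d1"), ("d1", "sysC", "d2")]))
```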


Unisphere Management

Unisphere makes managing replication simple and intuitive. All replication operations, including configuration of replication interfaces, connections, and sessions, can be performed with the Unisphere GUI. Unisphere's easy-to-use, wizard-based configuration and management allows IT generalists and advanced users alike to leverage VNXe3200's replication functionality. All replication functionality can also be achieved through Unisphere CLI. For more information on configuring replication using Unisphere CLI, refer to the VNXe3200 Unisphere CLI User Guide on EMC Online Support.

Configuring Replication

The workflow for configuring replication through Unisphere involves the components described earlier in the paper:

1. Creating replication interfaces

2. Creating a replication connection

3. Creating a replication session

Note that the first two steps are only required for remote replication. Local replication uses the internal CMI bus to replicate across SPs if required, so replication interfaces are not needed. Local replication also uses a pre-created default local replication connection, so a replication connection does not need to be manually configured as with remote replication.

Unisphere has been updated with multiple new pages and tabs for configuring and managing replication. Each of the steps above is performed from an appropriate page in Unisphere. For more information on using Unisphere to configure and manage replication, refer to the Unisphere Online Help.

Replication Settings

The Replication Settings page is used to configure the replication interfaces used for replication traffic between systems. It is located under the Settings tab in Unisphere. Similar to the iSCSI Settings page, the Replication Settings page allows interfaces to be created on embedded or IO module Ethernet ports. Multiple interfaces can be created per physical port, and at least one replication interface per SP per system must be created before a replication connection can be successfully configured. Replication interfaces can also be configured as part of the Unisphere Initial Configuration wizard when first logging into a system.

While replication interfaces may share physical ports with iSCSI or NAS Server interfaces, replication interfaces are designated only for replication traffic, and cannot be used for host IO.


Figure 5 - Replication Settings Page

Replication Connections

Once replication interfaces have been configured on two systems, a replication connection can be created between these systems. In Unisphere this is done from the Replication Connections page found under the Hosts tab, as shown in Figure 6. The Verify and Update operation described earlier can also be run from this page.

Figure 6 - Replication Connections Page


To configure a replication connection, click the Add Replication Connection button and provide the remote system's management IP address and credentials for both systems, as shown in Figure 7. If there are any issues with connectivity between the replication interfaces or management IP addresses, the operation will fail, and the connection will appear in the list with an unhealthy state. Once the networking issues have been resolved, run Verify and Update to bring the replication connection online.

Figure 7 - Add Replication Connection Wizard

Create Replication Session Wizard

The Replication tab on the Details page of a storage resource is used to configure and manage replication, as shown in Figure 8. There are initially three options available on this tab: Configure Local Replication, Configure Replication to a Remote System, and Change to Read-Only. Configuring local or remote replication will designate this storage resource as the source in the replication session, and allow you to specify or create a destination storage resource. The Change to Read-Only button will disable host access to this storage resource, allowing it to be used as a destination in a replication session. Note that you must then configure the replication session from the intended source side.


Figure 8 - LUN Details Replication Tab, Replication Not Configured

If either Configure Local Replication or Configure Replication to a Remote System is clicked, the Create Replication Session wizard will launch to configure replication. The first step of this wizard allows the Destination System and Storage Resource to be selected, as shown in Figure 9. If a remote system connection or destination storage resource does not exist, it may be created from within this wizard using the appropriate button on the right. When creating a storage resource from within this wizard, the size will be locked to match the source storage resource. Thin/thick configuration will also be locked in this wizard; however, thin-to-thick and thick-to-thin replication is supported when the destination LUN has been pre-configured outside of the wizard. Note that for remote replication, the Create Storage Resource button will launch a new session of Unisphere on the remote system.


Figure 9 - Select Destination

The wizard also allows the user to configure Manual or Automatic Synchronization options for the replication session, as shown in Figure 10. If Manual Synchronization is selected, the user has the option to perform the initial synchronization automatically (immediately), as discussed previously. If Automatic Synchronization is selected, the initial synchronization is always performed immediately, and the user can specify an RPO value to be used for subsequent differential syncs. Considerations for choosing an RPO (Protection, Performance Impact, Network Traffic, and Protection Space Consumed) are shown on this step for user reference.


Figure 10 - Configure Synchronization

The Replication Path step shown in Figure 11 allows the user to optionally choose the replication interfaces that will be used for this replication session. By default, the system will automatically select which interface will be used by each storage processor for the replication session. Interfaces must either all be System Selected or all be selected by the user; one interface cannot be specified without also specifying the others.


Figure 11 - Replication Path

Replication Details

Once a replication session is configured, the Replication tab on the Details page of the storage resource will display replication information, as shown in Figure 12. The information shown on this page includes:

Name – The name of the replication session.

State – The current state of the session: Auto-sync, Manual-sync, Idle, Failed Over, Failed Over with Sync, or Lost Communication.

Replication Role – The role of the storage resource being viewed, Source or Destination.

Source System – This field shows the system that hosts the source storage resource. Only shown for destination storage resources.

Destination System – This field shows the system that hosts the destination storage resource. Only shown for source storage resources.

Time of Last Sync – The time when the last completed sync was started. This is the point-in-time to which data will be restored if a disaster occurs and the storage resource must be failed over without synchronizing.

Sync Status – Displays percentage complete and estimated time remaining for the current sync. Only shown when a sync is in progress.

Sync Transfer Rate – Displays rate of current sync in MB/s. Only shown when a sync is in progress.


The replication session can also be managed from this page. Replication interfaces used by the session can be changed by expanding Show Advanced. The replication mode, Manual or Automatic, can be changed from this page as well, including modifying the RPO for Automatic Synchronization. Replication operations including Sync Now, Pause/Resume, and Failover/Failback can also be performed from the bottom of this page. The available actions will vary based on the state of the replication session and whether the source or destination storage resource is being viewed.

Figure 12 - LUN Details Replication Tab, Replication Configured

The information and actions discussed above can also be viewed from the System Replications page in Unisphere. The System Replications page is located under the System tab, and allows users to view details for all replication sessions configured for the VNXe3200, from a central location. The System Replications page is shown in Figure 13. Replication session attributes are shown for each session, including Name, State, Sync State, Storage Type, Replication Source, and Replication Destination. The replication operations discussed previously are available at the bottom of the page for the selected session. For more information on a specific replication session, select a session and click the Details button.


Figure 13 - System Replications Page

The Details page of a replication session shown in Figure 14 displays the same information as the System Replications page, as well as a graphical representation of the replication session showing the storage resources involved. The user can also perform replication operations and change which replication interfaces are used by this replication session from this page. The RPO and replication mode (Manual/Automatic) can be changed from the Synchronization tab.


Figure 14 - Session Details Page

Limits

This section discusses the limits that apply to native asynchronous replication. These limits govern the number of replication sessions and concurrent synchronizations, and are shown in Table 1 below.

Limit                                           Maximum
Max Replication Sessions                        1000 (500 per SP)
Max Concurrent Syncs                            256 (128 per SP)
Max Concurrent Initial Syncs                    32 (16 per SP)
Max Remote System Connections                   16
Max Replication Sessions per Storage Resource   1
Minimum/Maximum RPO                             5 minutes / 24 hours

Table 1 – Native Replication Limits


The per-SP limits above are enforced when both SPs are functional, so that a single surviving SP can sustain all replication sessions and syncs in the event of an SP failure. Initial syncs count against the maximum number of concurrent syncs, so 32 initial syncs plus 256 incremental syncs cannot run concurrently. Note that all limits above apply at the top storage resource level; for example, a LUN group containing 3 LUNs counts as a single replication session/sync. The introduction of native replication does not affect existing VNXe3200 limits on storage resources or their snapshots. Both source and destination storage resources continue to count toward their respective limits, even though destination storage resources are not host-accessible. However, internal snapshots created by the system for replication do not count against the system snapshot limit.
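The interaction between the initial-sync and concurrent-sync limits can be sketched as a simple check (illustrative only; the per-SP figures are taken from Table 1):

```python
# Sketch (limits from Table 1) checking whether a mix of syncs fits the
# per-SP concurrency limits; initial syncs count against the overall
# concurrent-sync limit, as noted above.

MAX_CONCURRENT_SYNCS_PER_SP = 128
MAX_CONCURRENT_INITIAL_SYNCS_PER_SP = 16

def syncs_allowed(initial, incremental):
    """Return True if this mix of syncs can run concurrently on one SP."""
    if initial > MAX_CONCURRENT_INITIAL_SYNCS_PER_SP:
        return False
    # Initial syncs are not extra headroom: they share the overall limit.
    return initial + incremental <= MAX_CONCURRENT_SYNCS_PER_SP

print(syncs_allowed(16, 112))  # → True  (128 total fits)
print(syncs_allowed(16, 128))  # → False (initial syncs are not additive)
```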

RecoverPoint with VNXe3200

RecoverPoint support allows VNXe3200 to leverage RecoverPoint appliances for synchronous or asynchronous block replication. RecoverPoint's unique DVR-like rollback functionality allows data to be recovered to any point-in-time. Using RecoverPoint, VNXe3200 is able to interoperate with other EMC storage solutions for out-of-family replication, including other vendors' storage systems using VPLEX. The following sections will discuss:

• RecoverPoint functionality available with VNXe3200

• Configuring RecoverPoint with VNXe3200

• Storage system-specific limits

A comparison of native replication and RecoverPoint, as well as interoperability considerations, will be discussed in later sections.

This document focuses on using RecoverPoint with VNXe3200. For more information on RecoverPoint-specific concepts and management, refer to the RecoverPoint Administrator’s Guide on EMC Online Support.

Overview

RecoverPoint is an appliance-based replication solution that supports various EMC storage solutions. RecoverPoint is managed from its own user interface, Unisphere for RecoverPoint, a web portal for all RecoverPoint-related replication management. RecoverPoint support allows VNXe3200 users to achieve the full advanced replication and disaster recovery functionality of RecoverPoint/EX, including:

• Synchronous and asynchronous block replication

• Local and remote replication

• Replication with other EMC storage, including CLARiiON, VNX, VMAX, XtremIO, and VPLEX

• Any point-in-time recovery

• Deduplication and compression

• Dynamic switching between sync/async mode based on throughput or latency

• Application regulation to ensure RPO is met

RecoverPoint appliances (RPAs) are used to replicate data locally and remotely. VNXe3200 supports the use of either physical or virtual RPAs. A cluster of RPAs is configured at each site and connected to the local storage system(s) at that site. The RPA clusters at each site are then connected together to form a RecoverPoint system, enabling replication between sites. In VNXe3200, RecoverPoint/EX is supported, which allows up to 5 clusters and replication to and from various EMC storage systems. Figure 15 shows a high level example of a basic RecoverPoint system.

Figure 15 - RecoverPoint System

In the system above, RecoverPoint is being used to replicate data from a local VNXe3200 to a remote VNXe3200, although other supported storage systems could also be used. RecoverPoint replicates consistency groups, or groups of LUNs which must be restored to a common point-in-time in the case of a disaster. In this example, the Production consistency group on the left is being replicated to the Remote Copy consistency group on the right.

When replication is first configured, the production consistency group will be fully copied to the remote consistency group. Subsequent host writes to the production storage will then be propagated to the remote storage synchronously or asynchronously in order to maintain consistent data between the two sites.

The VNXe3200 interoperates with RecoverPoint through the use of an on-array RecoverPoint splitter. The RecoverPoint splitter is a VNXe3200 software component that "splits", or copies, writes, allowing the RPAs to have access to the data being written to production storage. When host writes are sent to production storage on the VNXe3200, they are intercepted by the splitter, which sends a copy to the local RPAs and a copy to the production storage. The local RPA cluster then sends the writes across the link to the remote RPA cluster, which writes the data to the remote storage.

Another important component of RecoverPoint is its journal volumes. Journal volumes (not shown above) store point-in-time information that allows RecoverPoint to roll back data to any point-in-time. When replication is configured, a journal LUN must be created for each copy of the consistency group. As host writes are copied to a copy of the consistency group, those changes are also stored in that copy's corresponding journal volume and associated with a particular point-in-time. If data later needs to be recovered to a specific point-in-time, the changes since that time can be reversed, or rolled back, using the point-in-time information stored in the journal. The capacity of the journal volume and the magnitude of changes determine the number of points-in-time that can be maintained.
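As a rough illustration of that last point, the history a journal can hold scales with its capacity divided by the rate of change. This is illustrative arithmetic with assumed numbers, not a sizing formula from the product:

```python
# Rough estimate (illustrative arithmetic, not a product sizing formula)
# of how far back a journal lets you roll: the journal retains changes
# until its capacity is consumed by the write stream.

def rollback_window_hours(journal_gb, change_rate_gb_per_hour):
    """Hours of point-in-time history a journal of this size can hold."""
    return journal_gb / change_rate_gb_per_hour

# A 200 GB journal absorbing 25 GB/hour of changes retains about 8 hours.
print(rollback_window_hours(200, 25))  # → 8.0
```

Real journal sizing also accounts for metadata and reserved space, so treat this only as a way to see how capacity and change rate interact.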

For a more in-depth discussion of RecoverPoint concepts, refer to the RecoverPoint Administrator’s Guide on EMC Online Support.

Configuration

As RecoverPoint is an off-array, appliance-based solution, some additional configuration is required before VNXe3200 data can be replicated using RecoverPoint. VNXe3200 uses RecoverPoint/EX licensing, which requires that RecoverPoint deployment be completed by EMC Professional Services. Once deployment is complete, RecoverPoint can be used to replicate VNXe3200 storage resources. RecoverPoint version 4.1.1 or later is required for VNXe3200 support.

Note that configuration requires that a repository volume (LUN) be created for each cluster in the RecoverPoint system. The repository volume is used to maintain the configuration of, and communication between, RPAs in a cluster. The LUN designated as the repository volume should not be made accessible to other hosts or deleted. Additionally, vRPA iSCSI network interfaces should reside on a different subnet than hosts that will be accessing the replicated storage.

RecoverPoint LUN Access

This section will cover configuring LUNs for use with RecoverPoint. Note that this section uses the term “LUN” to describe any individual block logical unit, whether it is a standalone LUN, VMware VMFS datastore, or individual LUN group LUN. This is because VMware VMFS datastores and LUN groups are VNXe3200 constructs not distinguishable by RecoverPoint. That is to say, only the underlying LUNs are seen by RecoverPoint. VNXe3200 LUN groups are not automatically grouped in RecoverPoint, and must be re-associated as a consistency group within RecoverPoint when configuring replication. From the RecoverPoint side, there is no difference between a LUN, VMware VMFS datastore, and individual LUN group LUN.

Before RecoverPoint can be used to replicate LUNs, the RPAs must be able to access those LUNs. In VNXe3200, this is done using a special RecoverPoint host.

When a VNXe3200 is connected to a RecoverPoint cluster, a new host of type "RecoverPoint" is automatically created by the VNXe3200. The initiators of the cluster's RPAs are automatically associated with this host. All RPA initiators visible to the VNXe3200 will be combined under a single host entry for RecoverPoint, regardless of the number of RPAs in the cluster. For a LUN to be available to RecoverPoint, the automatically created RecoverPoint host must be given LUN access on the VNXe3200. This is done in the same way as giving access to any other host: through the LUN creation wizard or the LUN Details page.

Once RecoverPoint host access has been given, the LUN will become available in Unisphere for RecoverPoint to be used for replication. Before replication can be configured, RecoverPoint must be able to access:

• All production LUNs to be replicated as part of the consistency group
• A corresponding destination LUN for each production LUN, of equal or greater capacity
• At least 1 journal LUN per copy of the consistency group (including the production copy)
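The access prerequisites above can be sketched as a simple validation routine. The following is an illustrative Python model only; the function and field names are hypothetical and are not part of any VNXe3200 or RecoverPoint API.

```python
def validate_cg_access(production_luns, destination_luns, journal_luns_per_copy):
    """Check RecoverPoint consistency-group prerequisites (illustrative model).

    production_luns / destination_luns: lists of dicts with 'name' and 'size_gb'
    keys, paired positionally.
    journal_luns_per_copy: dict mapping copy name -> number of journal LUNs.
    Returns a list of error strings; an empty list means the prerequisites hold.
    """
    errors = []
    # Every production LUN needs a destination LUN of equal or greater capacity.
    if len(destination_luns) != len(production_luns):
        errors.append("each production LUN needs a corresponding destination LUN")
    else:
        for prod, dest in zip(production_luns, destination_luns):
            if dest["size_gb"] < prod["size_gb"]:
                errors.append(
                    f"destination for {prod['name']} is smaller than the source")
    # At least one journal LUN per copy, including the production copy.
    for copy_name, count in journal_luns_per_copy.items():
        if count < 1:
            errors.append(f"copy '{copy_name}' has no journal LUN")
    return errors
```

For example, a group with one 100 GB production LUN, a matching 100 GB destination LUN, and one journal LUN per copy passes validation, while an undersized destination or a copy with no journal LUN produces an error.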

The following basic example illustrates this configuration. In this example, a user wants to replicate 3 production LUNs from a VNXe3200 at Site A to a VNXe3200 at Site B using RecoverPoint.

1. 3 production LUNs are created or pre-existing at Site A
2. 1 journal LUN is created at Site A
3. The RecoverPoint host at Site A is given access to the 3 production LUNs and the journal LUN
4. 3 copy LUNs are created at Site B, to serve as a destination for replication
5. 1 journal LUN is created at Site B
6. The RecoverPoint host at Site B is given access to the 3 copy LUNs and the journal LUN
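The six steps above can be expressed as a short scripted sequence. This is a sketch against a toy `Site` class defined here for illustration; `create_lun()` and `grant_access()` are hypothetical helpers, not VNXe3200 management calls.

```python
class Site:
    """Minimal stand-in for a VNXe3200 system (for illustration only)."""
    def __init__(self, name):
        self.name = name
        self.luns = {}      # lun name -> size in GB
        self.access = {}    # host name -> list of accessible LUNs

    def create_lun(self, lun_name, size_gb):
        self.luns[lun_name] = size_gb
        return lun_name

    def grant_access(self, host, lun_names):
        self.access.setdefault(host, []).extend(lun_names)


def configure_sites(site_a, site_b, lun_names, lun_size_gb):
    """Walk through the six configuration steps for one consistency group."""
    # Steps 1-2: production LUNs and a journal LUN at Site A.
    prod = [site_a.create_lun(n, lun_size_gb) for n in lun_names]
    journal_a = site_a.create_lun("journal_a", 10)
    # Step 3: grant the Site A RecoverPoint host access to all of them.
    site_a.grant_access("RecoverPoint", prod + [journal_a])
    # Steps 4-5: copy LUNs and a journal LUN at Site B.
    copies = [site_b.create_lun(n + "_copy", lun_size_gb) for n in lun_names]
    journal_b = site_b.create_lun("journal_b", 10)
    # Step 6: grant the Site B RecoverPoint host access.
    site_b.grant_access("RecoverPoint", copies + [journal_b])
    return prod, copies
```

After this sequence, each site's RecoverPoint host has access to four LUNs (three data LUNs plus a journal), matching the state described in the text.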

At this point, all VNXe3200-specific configuration for this consistency group has been completed. All further replication configuration and management will be completed using the Unisphere for RecoverPoint interface. For more information on configuring consistency group replication using RecoverPoint, refer to the RecoverPoint Administrator’s Guide on EMC Online Support.

Comparison of Replication Solutions VNXe3200 native replication and RecoverPoint differ in functionality as well as in how they are configured and managed. For general use cases where asynchronous block replication is required between VNXe3200 storage systems, native replication may be used. RecoverPoint offers more advanced functionality; however, as an off-array, appliance-based solution, it also requires additional configuration and management. Table 2 below highlights some of the key differences to consider when choosing a replication solution for VNXe3200.

Table 2 - Comparison of Replication Solutions

These differences help to determine the appropriate solution for a specific use case. For example, a user who wishes to replicate block data from several remote VNXe3200s to a central disaster recovery VNXe3200 over long distances could leverage native asynchronous replication. On the other hand, a user with more advanced requirements, such as synchronous replication from other vendors’ storage systems to VNXe3200, could leverage RecoverPoint replication between VPLEX and VNXe3200.

Interoperability This section discusses interoperability considerations when using replication in VNXe3200. Specifically, it discusses interoperability between internal replication snapshots and user snapshots, as well as considerations when using RecoverPoint and SMI-S with native replication.

Native Replication and Unified Snapshots

As discussed earlier, native replication leverages VNXe3200’s Unified Snapshots technology to replicate point-in-time images of data. The snapshots created and used by the replication process are called internal snapshots. While they are user-visible, the user cannot perform any actions on these snapshots. These snapshots also do not count against the system snapshot limit, or participate in auto-delete operations. Unlike internal snapshots, user snapshots are the snapshots scheduled or manually taken by a user for local point-in-time data protection.

User snapshots are supported for storage resources involved in a replication session. Source storage resources can be snapped at any time, and the user snapshot will always reflect the point-in-time at which it was taken. All normal snapshot operations are available for user snapshots of source storage resources, including restore. While source user snapshots can be restored at any time, restoring may result in the next sync taking longer than usual to complete, since the changes made to the source as a result of restoring will need to be replicated to the destination.

When a snapshot of a destination storage resource is taken, the new snapshot will reflect the last point-in-time with which the storage resource was synced, known as the Time of Last Sync. For example:

1. Suppose a replication synchronization operation begins at 1:00pm, and completes at 1:30pm.

2. If a user snapshot is then taken on the destination at 1:35pm, this user snapshot will reflect the data as it was on the source at 1:00pm.

3. Now suppose a second synchronization begins at 3:00pm, and the user takes a destination snapshot while this synchronization is still in progress, at 3:05pm. Because a synchronization operation is in progress, the user snapshot will be redirected to the destination’s most recent internal replication snapshot. Instead of taking a snapshot of the destination resource mid-sync, the latest internal replication snapshot will be copied. This is the data associated with the last completed synchronization, which is consistent with the 1:00pm source image.

4. Therefore, this second snapshot will also reflect the data as it was on the source at 1:00pm.
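The Time of Last Sync behavior can be modeled in a few lines. This is an illustrative sketch, not VNXe3200 logic; times are same-day "HH:MM" strings so they compare correctly as text.

```python
def snapshot_reflects(user_snap_time, completed_syncs):
    """Return the source point-in-time a destination user snapshot reflects.

    completed_syncs: list of (start, end) "HH:MM" strings for syncs that have
    finished. A sync still in progress is simply not in this list, which
    mirrors the redirect-to-last-internal-snapshot behavior described above.
    """
    # Only syncs completed before the snapshot count; the snapshot reflects
    # the start time of the most recent one (the source image it captured).
    starts = [start for start, end in completed_syncs if end <= user_snap_time]
    return max(starts) if starts else None
```

Running the example from the text: with one sync from 1:00pm to 1:30pm, a destination snapshot at 1:35pm reflects the 1:00pm source image, and a snapshot taken at 3:05pm while a second sync is still running reflects the same 1:00pm image.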

Note that user snapshots of destination storage resources cannot be restored without first deleting the replication session.

Native Replication and RecoverPoint

In addition to native replication, VNXe3200 also provides support for RecoverPoint replication. RecoverPoint is an appliance-based disaster recovery solution that replicates asynchronously or synchronously and enables recovery to any point-in-time. RecoverPoint and native replication can be used on the same VNXe3200 system, however the same storage resource cannot be replicated by both technologies simultaneously. More specifically, any storage resources visible to RecoverPoint (including repository and journal volumes) cannot be configured in a native replication session, and storage resources participating in a native replication session cannot be made visible to RecoverPoint.

When RecoverPoint is configured, the RecoverPoint appliance initiators are automatically combined into a single host in Unisphere, which can be given access to storage resources for replication. If a storage resource is already involved in a native replication session, this host will not be available for that storage resource. Similarly, native replication cannot be configured for any storage resource if access to that storage resource has been given to the RecoverPoint host. This behavior enforces the restriction that a storage resource can only be replicated by one technology, native replication or RecoverPoint.
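The mutual exclusion between the two technologies amounts to a simple pair of guard conditions. The class below is a toy model of that restriction; its names and methods are hypothetical and do not correspond to any VNXe3200 interface.

```python
class StorageResource:
    """Illustrative model of the one-technology-per-resource restriction."""

    def __init__(self, name):
        self.name = name
        self.native_session = False
        self.recoverpoint_access = False

    def start_native_replication(self):
        # Blocked if the resource is already visible to RecoverPoint.
        if self.recoverpoint_access:
            raise ValueError(f"{self.name} is visible to RecoverPoint")
        self.native_session = True

    def grant_recoverpoint_access(self):
        # Blocked if the resource is already in a native replication session.
        if self.native_session:
            raise ValueError(f"{self.name} is in a native replication session")
        self.recoverpoint_access = True
```

Whichever technology claims the resource first wins; the other path raises an error until the existing session or access is removed.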

Native Replication and SMI-S API

Storage Management Initiative Specification (SMI-S) is an industry standard that allows storage management applications to manage storage devices from various vendors that support the SMI-S API on their products. One such storage management application is Microsoft System Center Virtual Machine Manager (SCVMM). VNXe3200’s SMI-S API supports many of the profiles defined in SMI-S, including the mirror part of the Replication Profile. Implementing this profile allows external storage applications to manage VNXe3200 native replication functionality through the SMI-S API. On VNXe3200, the SMI-S API can be used to:

• Discover replication capabilities of the system
• Configure local replication
• Configure remote replication
• Delete a replication session
• Pause a replication session
• Resume a replication session
• Fail over a replication session
• Fail back a replication session

The operations above are supported for LUNs, LUN groups, and VMware VMFS datastores. While replication sessions can be configured and managed through SMI-S API, configuration of replication interfaces and replication connections is not supported. Replication interfaces and connections will need to be configured using Unisphere or Unisphere CLI before a storage management application can use SMI-S API to configure and manage VNXe3200 replication sessions.
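The session operations listed above follow a natural lifecycle, which the toy state machine below sketches. This is an illustrative model only; it does not reflect the actual SMI-S Replication Profile classes or method names.

```python
class ReplicationSession:
    """Toy state machine for the manageable session operations (illustrative)."""

    def __init__(self):
        self.state = "active"

    def pause(self):
        assert self.state == "active", "can only pause an active session"
        self.state = "paused"

    def resume(self):
        assert self.state == "paused", "can only resume a paused session"
        self.state = "active"

    def failover(self):
        # The destination copy takes over the production role.
        self.state = "failed_over"

    def failback(self):
        assert self.state == "failed_over", "can only fail back after failover"
        # The original source resumes the production role.
        self.state = "active"
```

A management application would drive these transitions through the SMI-S API rather than directly, but the allowed ordering of operations is the same.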

Conclusion This paper provided information on the replication solutions available for VNXe3200. Implementing a replication technology enables you to have a redundant copy of data locally, or at a remote location. Having a disaster recovery site minimizes the cost associated with downtime and simplifies the recovery process in the event of a disaster involving the production storage system or data center.

Native asynchronous block replication leverages Unified Snapshots technology to maintain consistent point-in-time images across two VNXe3200 systems, or locally within the same system. Automatic and manual synchronization options and support for up to sixteen remote systems allow users to configure replication to fit their unique use cases. Native replication allows storage administrators to configure and manage replication directly from Unisphere.

RecoverPoint support brings advanced replication functionality to the VNXe3200. With RecoverPoint, asynchronous and synchronous replication can be configured between various EMC storage systems, including VNX and VMAX. These functionalities, along with any point-in-time recovery, open the door to additional use cases and contribute to making the VNXe3200 a more robust storage solution.

References The following references can be found on EMC Online Support:

• EMC RecoverPoint Administrator’s Guide

• EMC Unisphere for the VNXe3200: Next-Generation Storage Management

• EMC VNXe3200 Unified Snapshots

• Introduction to the EMC VNXe3200