
Cohesity Solution Guide: VMware
January 2017


Table of Contents

About This Guide
  Audience
Cohesity and VMware Overview & Benefits
Cohesity & VMware Architecture and Concepts
  Cohesity Cluster Management Interfaces: Dashboard, Alerts, SNMP, REST API
  Cohesity Protection Sources
  Cohesity Protection Policies
  Cohesity Protection Jobs
  Cohesity View Boxes
  Cohesity Views
  Cohesity Test & Dev Clones
VMware and Cohesity Design Considerations
  Data Access and Cluster Communication Network Ports
  CBT Backups
  Non-CBT Backups
  Anatomy of a Backup Job Run
  Anatomy of a VM Restore Job
  Anatomy of a File Restore Job
Supported Configurations
  Role Privileges for vCenter Server 6.0, 5.5 or 5.1
Cohesity and VMware Automation
Cohesity & VMware Recommended Practices
  Recommended Practice: Configure the maximum number of vCenter NFS mounts
  Recommended Practice: Use application consistent snapshots for Windows VMs
  Recommended Practice: Use pre-freeze and post-thaw scripts with VMware tools for VM guests that require special quiescing mechanisms
  Recommended Practice: Implement and test crash-consistent snapshots for all other VMs
  Recommended Practice: Optimize MTU for end-to-end performance
  Recommended Practice: Utilize extended policy rules to ensure archive and compliance requirements
  Recommended Practice: Set blackout periods to optimize job start times
  Recommended Practice: Utilize a naming convention for Protection Jobs and Policies
    Policy Naming
    Job Naming
  Recommended Practice: Assign hosts to jobs across multiple ESX/ESXi hosts
  Recommended Practice: Confirm datastore space is available when restoring to a new primary storage location
Acknowledgements

About This Guide

This paper outlines the core architecture of the Cohesity DataPlatform and DataProtect hyperconverged solution in a VMware environment and details key design recommendations and practices for successful deployment.

Audience

This paper is written for virtualization, storage and backup architects and administrators who are responsible for enterprise data protection of their environments. This guide assumes basic knowledge of the VMware solution stack v5.5 and higher, including VMware ESX/ESXi, vSphere, vCenter, vSAN and Storage Policy-Based Management (SPBM) operations, as well as fundamental networking knowledge and guest OS file system and snapshot technologies.

Cohesity and VMware Overview & Benefits

VMware pioneered the transformation of enterprise data centers from inefficient, physical silos of IT resources to highly efficient and easy-to-manage virtualized infrastructures. New capabilities and offerings from VMware, such as Storage Policy-Based Management (SPBM) and Virtual SAN, have expanded the choices IT staff have for architecting and operating their compute and storage resources in support of their application needs. These new data center architectures create challenges for many IT managers who are realizing the complexity, limitations and cost of legacy data protection solutions. In many instances, legacy storage systems are put to use as backup targets that are loosely integrated, are less efficient due to non-shared storage efficiency domains, and have difficulty keeping up with the RPO and RTO needs of today’s applications. Additionally, the data remains “locked up” in these storage targets and cannot be leveraged for test/dev use or analyzed for business advantage or compliance.

The Cohesity DataPlatform provides the only web-scale platform designed to consolidate all your secondary data and workflows. Cohesity provides a scale-out, globally deduped, highly available storage fabric to consolidate your secondary data, including backups, files, and test/dev copies. Cohesity provides a single, unified solution to simplify data protection, integrate with the public cloud, support test/dev environments, and provide deep visibility into secondary data with built-in analytics. The platform brings the simplicity and efficiency of scale-out hyperconverged secondary storage to all VMware environments and architectures. Cohesity utilizes the native VMware data protection APIs (VADP) to ensure seamless integration and native support for VMware backups.

The joint Cohesity and VMware solution delivers the following benefits:

• Web-scale, pay-as-you-grow secondary storage architecture

• Dynamic, application-centric operations through integration with storage policy-based management

• Eliminate complexity with a unified platform for end-to-end data protection

• Ensure fast Recovery Points and near-instantaneous Recovery Times

• Lower total cost of ownership

• Consolidate backups, files, and test/dev copies on a single web-scale platform

• Accelerate application time-to-market with instantaneous provisioning of clones for test/dev

• Get deep visibility into your secondary data with built-in analytics capabilities

The Cohesity DataPlatform provides the capability for IT administrators to bridge the traditional “islands of secondary storage” by leveraging the truly global file system and storage efficiency technologies built into the platform. These capabilities help customers transform their data centers from silos of dark data to highly efficient, next-generation web-scale enterprise IT.


Figure 1 - Cohesity & VMware Architecture High Level Overview


Cohesity & VMware Architecture and Concepts

The core components of the Cohesity and VMware solution are:

• VMware vSphere/ESX/ESXi hosts

• vSAN or other primary storage

• vCenter management servers

• Cohesity DataPlatform cluster with DataProtect software

• Optional remote disaster recovery Cohesity cluster

• Optional archive or tier target at your cloud provider of choice

As can be seen in Figure 1 above, the Cohesity DataPlatform can be integrated into a new or existing VMware infrastructure in minutes by simply adding the nodes, connecting the network, configuring a cluster partition with VIPs, and creating the appropriate protection policies, jobs and replication schedules. Due to Cohesity’s advanced VMware integration, there is no need to load agents on the hosts or guests in order to begin utilizing DataPlatform and DataProtect in the environment. Cohesity’s integrated data management and protection software becomes immediately available upon installation, without any external hardware or software installations or additional licensing.


Cohesity Cluster Management Interfaces: Dashboard, Alerts, SNMP, REST API

Cohesity’s policy-based storage management approach is immediately viewable. Upon supplying credentials to the unified management web console hosted on the DataPlatform, users are presented with an overall health dashboard of the cluster:

This dashboard reflects the overall health and state of the cluster, including the number of jobs that have run, any SLA violations, errors or alerts, as well as a brief data reduction and performance summary. Each of these items can be clicked to review further detail. Storage, backup and virtualization administrators may utilize the dashboard for cluster status reviews; however, the platform also provides built-in alerting mechanisms as well as SNMP support for notification of certain conditions. Finally, all cluster management tasks can be driven via the native REST API. Full monitoring and administration capabilities are available via the REST API, whose documentation is hosted on the cluster.

Figure 2: Cohesity Dashboard
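For example, the same cluster state shown on the dashboard can be pulled programmatically. The following is a minimal Python sketch; the endpoint paths and field names (/public/accessTokens, /public/cluster, accessToken) are assumptions based on the v1 public API and should be verified against the REST documentation hosted on your own cluster.

    # Minimal sketch: query cluster status via the Cohesity REST API.
    # Endpoints and field names are assumptions from the v1 public API;
    # confirm against the API documentation hosted on the cluster itself.
    import requests

    CLUSTER = "https://cohesity-cluster.example.com"  # hypothetical cluster VIP

    # Authenticate and obtain a bearer token.
    auth = requests.post(
        f"{CLUSTER}/irisservices/api/v1/public/accessTokens",
        json={"username": "admin", "password": "secret", "domain": "LOCAL"},
        verify=False,  # lab-only; use CA-signed certificates in production
    ).json()
    headers = {"Authorization": f"Bearer {auth['accessToken']}"}

    # Retrieve basic cluster information for a status review.
    cluster = requests.get(
        f"{CLUSTER}/irisservices/api/v1/public/cluster", headers=headers
    ).json()
    print(cluster.get("name"), cluster.get("clusterSoftwareVersion"))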


Figure 3: Cohesity Protection Source Registration

Cohesity Protection Sources

Protection sources are registered vCenters or ESX/ESXi hosts that provide the inventory lists of VMs to back up and restore. Registration of a protection source is as simple as entering the hostname or IP address of the vCenter host and the appropriate credentials.
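Registration can likewise be scripted. The sketch below posts a vCenter registration through the REST API; the endpoint and body fields (environment, vmwareType, endpoint) are assumptions based on the v1 public API and should be confirmed against the cluster’s API documentation before use.

    # Hypothetical sketch: register a vCenter as a protection source via REST.
    # Endpoint and body fields are assumptions from the v1 public API.
    import requests

    CLUSTER = "https://cohesity-cluster.example.com"  # hypothetical cluster VIP
    TOKEN = "..."  # bearer token, obtained as in the earlier dashboard sketch

    resp = requests.post(
        f"{CLUSTER}/irisservices/api/v1/public/protectionSources/register",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "environment": "kVMware",         # source type
            "vmwareType": "kVCenter",         # a vCenter, not a standalone ESXi host
            "endpoint": "vcenter01.example.com",
            "username": "administrator@vsphere.local",
            "password": "secret",
        },
        verify=False,  # lab-only
    )
    resp.raise_for_status()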

Cohesity protection sources can be throttled to avoid overwhelming the primary datastores during protection jobs. This is configured by enabling throttling and setting latency thresholds for new and currently running tasks. When throttling is enabled, the Cohesity cluster enables Storage I/O Control (SIOC) statistics on the datastores and monitors latencies, throttling Cohesity jobs and tasks to ensure optimal performance.

Cohesity Protection Policies

Protection policies describe backup frequency, backup retention, replication policies and archive schedules. Once policies are defined they can be applied to protection jobs, thus ensuring appropriate protection strategies across the environment.


Figure 4: Cohesity Protection Policy and Extended Retention Rules

Protection policies are very flexible and provide a number of extended rules and monitoring options for:

• Extended Retention: these rules extend the standard snapshot frequency and retention rules by retaining selected snapshots for a longer period of time. In the example above, the standard rules call for snapshots every 2 hours with 7-day retention. The first extended retention rule ensures that a daily snapshot from the standard rule is retained for an additional 30 days. Note that this rule does not create a new snapshot; extended retention rules use the first snapshot from the standard rule set.

• Schedule End Date: a policy can be retired on a given date.

• Custom Retry Settings: set job retry intervals and number of attempts.

• Blackout Periods: set periodic pauses in runs of jobs that use the policy.

• Alerts and SLAs: ensure IT administrators are aware of job successes, failures or SLA violations against SLA time limits.

• Priority: set a relative priority for jobs that use the policy.

Policies are usually created to ensure RPO compliance and are made available to protection jobs across the entire cluster. Define policies around data protection requirements that can be shared across protection jobs, and implement a naming scheme that allows easy identification of policy attributes.


Cohesity Protection Jobs

Protection jobs define one or more groupings of VMs for protection that comply with a specific protection policy (see above). Jobs also set the start time and timezone of the job as well as the view box that will contain the protected data. Jobs are primarily used to assign policies to groups of VMs that have similar attributes. Most commonly, jobs group VMs that share a common SLA and assign them a common protection policy. Jobs can also be used to capture the VMs that serve the same application as a group. In other instances, jobs are used to capture VMs from logical groupings such as organizational user groups, departments or geographical locations. A job protects VMs in one vCenter, so VMs that span multiple vCenters will require multiple jobs; when this is the case, it is best to use a naming convention that allows quick identification of related jobs.

Cohesity View Boxes

View boxes represent storage efficiency domains within the Cohesity cluster; a view box can optionally be associated with a specific cloud tier. When a view box is created, the administrator assigns deduplication, compression and encryption attributes to it.

View boxes can be set for inline, post-process or no deduplication, as well as inline, post-process or no compression. The deduplication and compression settings can be set at creation and modified over the life of the view box. Encryption, however, can only be enabled at creation time. Similarly, the Cloud Tier option is only available at creation time.

Figure 5: View Box Configuration


Cohesity Views

Cohesity views represent mount points into a specific view box. Views provide NFS or SMB/CIFS protocol access to files, snapshots, and clones of other views. QoS can be set for each view to tune performance for the target workload.

Figure 6 - Cohesity View QoS Policy Settings

Cohesity Test & Dev Clones

The Cohesity cluster can instantaneously create and provision VM clones hosted on the cluster. This powerful capability allows DevOps and test/dev workflows to quickly spin up multiple VMs for various purposes while keeping them running on the Cohesity cluster, without impacting the performance or storage capacity of the primary storage systems. Cloned VMs can run indefinitely on the Cohesity cluster or can eventually be vMotioned to a primary storage system.


Figure 7 - Cohesity Test & Dev Clones


VMware and Cohesity Design Considerations

At a deeper level, there are key infrastructure requirements that should be implemented to ensure successful VMware and Cohesity integration.

Data Access and Cluster Communication Network Ports

Each Cohesity DataPlatform node has 2 x 10GbE ports for data access as well as cluster communications. The data access and cluster interconnect ports are assigned both node-level IP addresses for individual node management and one or more virtual IP (VIP) addresses for data access. The VIPs are provisioned as a bonded interface by the Cohesity cluster, so it is important that they reside on the same switch VLAN and have multicast enabled on their associated switch ports.

For applications that are heavily data protection focused, jumbo frames can be used to increase performance. If jumbo frames are used, ensure that the correct MTU sizes are set across the entire link path: the Cohesity nodes, all associated switches and/or routers, and the VMware hosts. It is critical that the entire path be covered to all nodes in the cluster, as well as any network failover paths that could enter the link path. Finally, the cluster performs best when the shortest network routes are taken between the host/guest OS and the Cohesity cluster.

Ensure that VMware networking best practices are followed for the vSphere version that has been deployed, and that vmkernel port groups are properly associated with the appropriate physical NICs (or teams of NICs) within the VMware standard or distributed switch configurations.

Figure 8 - Register vCenter Source


CBT Backups Cohesity leverage’s VMware’s vSphere API for Data Protection and Change Block Tracking (CBT) mechanism to ensure consistent and storage efficient protection of VM data while keeping the data fully hydrated, indexed and instantly available. A deep dive on the Cohesity architecture and SnapTree(™) that supports these capabilities can be found here:

http://www.cohesity.com/wp-content/uploads/2015/10/Cohesity-Architecture-WhitePaper.pdf

https://www.cohesity.com/resource-assets/solution-brief/Cohesity-SnapTree-Solution-Brief.pdf

CBT is implemented at the VMware virtualization layer and can track virtual disk blocks that have been used and/or changed since a previous snapshot. This allows very efficient storage of incremental changes to the virtual disk, reducing both the storage space consumed and the overall time for backups to complete and in turn replicate. For further information and supported configurations for CBT, please visit: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1020128
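For illustration, the following minimal pyVmomi sketch shows what enabling CBT looks like at the vSphere API level; the hostnames, credentials and inventory path are hypothetical. As noted below, Cohesity performs the equivalent reconfiguration automatically for protected VMs.

    # Minimal pyVmomi sketch: enable Changed Block Tracking (CBT) on one VM.
    # Hostnames, credentials and the inventory path are hypothetical.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab-only; validate certificates in production
    si = SmartConnect(host="vcenter01.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)

    # Locate the VM by its inventory path (hypothetical path).
    vm = si.content.searchIndex.FindByInventoryPath("/Datacenter/vm/app-vm-01")

    if not vm.config.changeTrackingEnabled:
        spec = vim.vm.ConfigSpec(changeTrackingEnabled=True)
        # CBT takes effect at the next snapshot or power cycle (stun/unstun).
        WaitForTask(vm.ReconfigVM_Task(spec=spec))

    Disconnect(si)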

By default, new VMs have CBT disabled. Cohesity’s DataProtect software automatically detects this condition and enables CBT for the VMs that are configured for protection, removing the need for administrators to track CBT status.

Non-CBT Backups

While CBT backups are the most common form of Cohesity backups due to their space-efficient, fully hydrated nature, Cohesity also provides the ability to take non-CBT backups. Non-CBT backups read the full virtual disk rather than relying on the hypervisor’s change tracking, providing an extra level of data protection for VMs that require the most protection that the Cohesity DataPlatform has to offer.

Anatomy of a Backup Job Run

A Cohesity backup job coordinates actions on the vCenter, the ESXi host, the guest OS and all nodes in the Cohesity cluster. The key elements and order of operations are:

Step 1: Cohesity DataProtect software triggers a scheduled backup job run or the job is manually triggered by a user.

Step 2: The cluster automatically distributes the task of backing up individual VMs across the entire cluster.

Step 3: Cohesity cluster contacts the vCenter to gather current inventory and obtain virtual disk and ESXi host information for the target VMs to be protected.

Step 4: vCenter passes back to Cohesity the requested information.

Step 5: Cohesity contacts the ESXi host, gathers host resource status as well as currently running backup tasks. If resources and backup tasks are within Cohesity thresholds, the backup job run will begin. If the ESXi host is already loaded with backup tasks, then the Cohesity cluster will poll for host availability and defer the backup job to another time.

Step 6: When application-consistent backups are set, the Cohesity cluster checks for the presence of VMware Tools on the guest OS, which is required to invoke Windows VSS snapshots for application consistency.

Step 7: Cohesity then contacts vCenter and requests a VM snapshot (see the sketch following this list). If application consistency is requested, VMware invokes VSS snapshots for Windows. Optionally, VMware will also run pre/post scripts for Linux as an additional step before invoking the VMware snapshot.

Step 8: VMware takes a snapshot honoring any requested VSS snapshot or pre/post scripts configured.

Step 9: Cohesity validates the snapshot.

Step 10: Cohesity backs up all of the snapshots from all ESXi hosts in parallel across all Cohesity cluster nodes optimized for both VMware and Cohesity parameters such as VMs per datastore and maximum backups per Cohesity cluster.

Step 11: Cohesity requests that the snapshot be deleted.

Step 12: VMware host releases the snapshot.
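As referenced in Step 7, the snapshot request that Cohesity issues corresponds to the following vSphere API calls. This is a minimal pyVmomi sketch of Steps 7–12 at the API level; the vm object is assumed to be obtained as in the earlier CBT sketch, and the snapshot name is hypothetical.

    # Minimal pyVmomi sketch: request a quiesced (application-consistent) snapshot,
    # then release it after the backup completes.
    # 'vm' is assumed to be a vim.VirtualMachine, obtained as in the CBT sketch.
    from pyVim.task import WaitForTask

    task = vm.CreateSnapshot_Task(
        name="cohesity-backup",            # hypothetical snapshot name
        description="Backup job run snapshot",
        memory=False,                      # no memory state is needed for backup
        quiesce=True,                      # ask VMware Tools / VSS to quiesce the guest
    )
    WaitForTask(task)

    # ... the backup reads virtual disk data from the snapshot here ...

    # Release the snapshot once the backup has completed.
    snap = vm.snapshot.currentSnapshot
    WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))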


Anatomy of a VM Restore Job

Step 1: The user manually triggers a Cohesity VM recovery task and selects the snapshot, target, networking settings, VM name, target datastore…

Step 2: Cohesity contacts VMware endpoint to validate current inventory and chosen recovery task settings

Step 3: Cohesity creates an internal view and clones the VM snapshot and mounts the view to the target ESXi host(s)

Step 4: Cohesity creates a new VM object using the original VM configuration file and the chosen recovery settings. Network configuration changes take place at this step

Step 5: The VM is (optionally) powered on. Note that the VM is now available for access

Step 6: Storage vMotion is initiated to move the VM’s storage from the Cohesity cluster to the primary datastore.

Step 7: Storage vMotion completes; VMware has non-disruptively migrated datastore access from the Cohesity cluster snapshot to the primary datastore.

Step 8: Cohesity requests the datastore to unmount

Step 9: ESXi host unmounts datastore

Step 10: Cohesity releases the view

Anatomy of a File Restore Job

Step 1: The user manually triggers a file/folder recovery task, either by searching for the files through the Elasticsearch database or by browsing VMs and their volumes.

Step 2: Cohesity creates an internal view and clones the VM snapshot and mounts the view to the target ESXi host(s)

Step 3: Cohesity attaches the cloned VMDK files to the target VM to which the files are being recovered.

Step 4: Cohesity deploys a helper utility onto the VM and triggers the restore process

Step 5: The restore helper utility copies files from the attached disks (originally from the backup) to the recovery location. The utility additionally preserves file attributes and other properties based on the user’s preferences.

Step 6: Once the file/folder copy completes, the disks are detached from the VM

Step 7: Cohesity requests the datastore to unmount

Step 8: ESXi host unmounts the datastore

Step 9: Cohesity releases the view

Figure 9 - Multi-node & multi-stream backups across entire Cohesity cluster



Supported Configurations

VMware vSphere Support

The Cohesity cluster supports the following versions of VMware platform software:

• VMware vSphere: 5.1+

VMware vRealize Support

• VMware vRealize Automation 7.01

• VMware vRealize Orchestrator 7.01

VMware Virtual Hardware Support

• Virtual hardware versions 9+

For the Cohesity cluster to have adequate vCenter Server privileges, the user specified to connect to the Source (vCenter Server) must have the role privileges listed in “Role Privileges for vCenter Server 6.0, 5.5 or 5.1” below. To create a role with the necessary vCenter Server privileges:

• Log in to a vSphere Client (not a vSphere Web Client).

• Create a new role.

• Select the appropriate privileges for the new role as listed below.

• Assign this role to the vCenter users that will access the Cohesity Cluster.

• Assign this role to users at the top vCenter object level and enable the Propagate to Children checkbox.

Figure 10 - File Restore Selection


• When registering a Source in the Cohesity Dashboard, specify a VirtualCenter user assigned to a role with the required Cohesity privileges (defined below).

Role Privileges for vCenter Server 6.0, 5.5 or 5.1

The role privileges required by the Cohesity cluster for vCenter Server 6.0, 5.5 or 5.1 are listed below. If you plan on enabling Throttling for Protection Jobs, you must also set the indicated datastore privileges. A scripted role-creation sketch follows the list.

Datastore

• Allocate space

• Browse datastore

• Configure datastore—Required if Throttling is enabled

• Low level file operations

• Move datastore—Required if Throttling is enabled

• Remove datastore—Required if Throttling is enabled

• Remove file

• Rename datastore—Required if Throttling is enabled

• Update virtual machine files—Required if Throttling is enabled

• Update virtual machine metadata—Required if Throttling is enabled

Distributed switch

• Create

• Delete

dvPort Group

• Create

• Modify

Folder

• Create folder

• Delete folder

Global

• Disable methods

• Enable methods

• Licenses

• Log event

Host

• Configuration

• Storage partition configuration

• Local operations

• Delete virtual machine

Network

• Assign network

Resource

• Assign virtual machine to resource pool

• Migrate powered off virtual machine

• Migrate powered on virtual machine

vApp

• Add virtual machine

• Assign Resource Pool

• Unregister

Virtual machine

• Configuration

> Add existing disk

> Add new disk

> Advanced

> Disk change tracking

> Host USB device

> Raw device

> Remove disk

> Swapfile placement

• Guest Operations

> Guest Operation Modifications

> Guest Operation Program Execution

> Guest Operation Queries

• Interaction

> Guest operating system management by VIX API

> Power On

> Power Off

• Inventory

> Create new

> Register

> Remove

> Unregister

• Provisioning

> Allow read-only disk access

> Allow virtual machine download

• Snapshot management

> Create snapshot

> Remove snapshot

> Revert to snapshot
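As referenced above, the role can also be created programmatically. Below is a minimal pyVmomi sketch using a small, illustrative subset of the privileges; the privilege-ID strings (e.g. mapping “Allocate space” to Datastore.AllocateSpace) are assumptions to verify against your vCenter’s privilege list.

    # Minimal pyVmomi sketch: create a vCenter role for the Cohesity cluster user.
    # The privilege IDs are an illustrative subset of the list above; verify the
    # exact ID strings and the full set against your vCenter before use.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()  # lab-only
    si = SmartConnect(host="vcenter01.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)

    priv_ids = [
        "Datastore.AllocateSpace",                # Datastore > Allocate space
        "Datastore.Browse",                       # Datastore > Browse datastore
        "Datastore.FileManagement",               # Datastore > Low level file operations
        "VirtualMachine.Config.AddExistingDisk",  # Virtual machine > Add existing disk
        "VirtualMachine.Config.ChangeTracking",   # Virtual machine > Disk change tracking
        "VirtualMachine.State.CreateSnapshot",    # Virtual machine > Create snapshot
        "VirtualMachine.State.RemoveSnapshot",    # Virtual machine > Remove snapshot
    ]

    auth_mgr = si.content.authorizationManager
    role_id = auth_mgr.AddAuthorizationRole(name="CohesityBackupRole", privIds=priv_ids)
    print("Created role id:", role_id)

    Disconnect(si)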


Cohesity relies upon the VMware infrastructure for snapshots, CBT and the VADP APIs, and adheres to VMware’s support matrix for these technologies. Certain disk types, such as RDMs, are not supported. Visit the link below to ensure adherence to current VMware supported configurations: http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc_50%2FGUID-53F65726-A23B-4CF0-A7D5-48E584B88613.html

Cohesity and VMware Automation

Cohesity has developed components that integrate with VMware’s vRealize Automation and Orchestration stack. For more details visit: http://www.cohesity.com/blog/cohesity-vrealize-automation-automation-automation/

Cohesity & VMware Recommended Practices

Recommended Practice: Configure the maximum number of vCenter NFS mounts

Cohesity’s instantaneous availability of fully hydrated snapshots allows direct mounting of the Cohesity cluster to the vCenter during cloning and recovery operations. The default vCenter limits are usually too low for production-level, enterprise use of the Cohesity DataPlatform.

Cohesity recommends that the following settings be configured for vSphere 5.x and higher:

NFS.MaxVolumes = 256

TcpipHeapSize = 32

Additionally set TcpipHeapMax depending upon the ESXi version:

TcpipHeapMax = 128 for vSphere 5.0/5.1

TcpipHeapMax = 512 for vSphere 5.5

TcpipHeapMax = 1536 for vSphere 6.0

* Please note that the NFS.MaxVolumes setting changes dynamically; however, changing TcpipHeapMax and TcpipHeapSize requires an ESXi host reboot.
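These settings can be changed through the host’s advanced settings or scripted per host. The following minimal pyVmomi sketch applies them to a single ESXi host; connection details are hypothetical, the Net.-prefixed keys are assumed to be the on-host equivalents of the TcpipHeapSize/TcpipHeapMax names above, and the TcpipHeapMax value should match your ESXi version per the table above.

    # Minimal pyVmomi sketch: apply the recommended NFS/TCP-IP advanced settings
    # to one ESXi host. Hostnames and credentials are hypothetical; choose the
    # Net.TcpipHeapMax value for your ESXi version from the table above.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab-only
    si = SmartConnect(host="vcenter01.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)

    host = si.content.searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                                vmSearch=False)

    settings = [
        vim.option.OptionValue(key="NFS.MaxVolumes", value=256),
        vim.option.OptionValue(key="Net.TcpipHeapSize", value=32),
        vim.option.OptionValue(key="Net.TcpipHeapMax", value=512),  # vSphere 5.5 value
    ]
    host.configManager.advancedOption.UpdateOptions(changedValue=settings)
    # Reminder: the heap settings take effect only after a host reboot.

    Disconnect(si)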

More details about these settings can be found here:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2239

Recommended Practice: Use application consistent snapshots for Windows VMs

Cohesity provides access to VMware’s ability to take application-consistent snapshots. With application-consistent snapshots, Cohesity makes a request for an application-consistent snapshot that is in turn passed to the guest OS VMware Tools, which quiesce the selected active file systems. For supported Windows guests, VMware Tools makes a request to the installed VSS providers to quiesce the file system, ensuring a “freeze” of the file system before snapshots are taken so that recoveries complete gracefully using native Microsoft APIs. The general requirement for proper application-consistent snapshots is that the operating system must be supported by a version of VMware Tools that has the proper quiescing capabilities. Interoperability information can be found on VMware’s web site at http://www.vmware.com/resources/compatibility/sim/interop_matrix.php


Recommended Practice: Use pre-freeze and post-thaw scripts with VMware tools for VM guests that require special quiescing mechanisms

Many non-Windows guest OSes have a quiescing mechanism for their supported file systems. When setting up backup jobs for such hosts, utilize the pre-freeze and post-thaw script mechanisms provided by VMware. More information can be found here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006671

Recommended Practice: Implement and test crash-consistent snapshots for all other VMs

Guest VMs that do not have a supported OS and VMware Tools combination can make use of crash-consistent snapshots. These types of snapshots require the guest OS and/or application to complete full recovery. When using these types of snapshots, ensure you complete a test cycle of backups and recoveries to verify proper behavior.

Figure 11 - App-Consistent Backup Setting


Recommended Practice: Optimize MTU for end-to-end performance

Ensure that the correct MTU sizes are set across the entire network path from the Cohesity nodes, through all associated switches and/or routers, to the VMware hosts. It is critical that the entire path has the correct MTUs across all nodes in the cluster as well as any network failover paths that could enter the link path. The MTUs across the network path from the Cohesity cluster to the guest OS can be validated by logging into each Cohesity node and executing the following command:

ping -M do -s <packet size> <ip address of guest>

If the ping succeeds without fragmentation errors, the MTU is configured correctly. Note that the -s payload size should be the desired MTU minus 28 bytes of IP and ICMP header overhead (for example, 8972 for a 9000-byte MTU). An example ping error would be “ping: local error: Message too long, mtu=1500”.

Figure 12 - Crash-Consistent Backup Setting


Figure 13 - Extended Retention Rules

Recommended Practice: Utilize extended policy rules to ensure archive and compliance requirements

Extended policy rules should be used to extend the retention of backups and snapshots taken by the standard rules to meet archive and compliance requirements.

Recommended Practice: Set blackout periods to optimize job start times

Blackout periods allow the administrator to prevent new jobs from starting based upon a schedule. Common reasons to do this are to keep jobs from overlapping or to account for differing timezones.


Figure 14 - Blackout Period Schedule Settings

Recommended Practice: Utilize a naming convention for Protection Jobs and Policies

Naming conventions for policies and jobs can be used to quickly identify similar attributes for scheduling, retention and associated host selection.

Policy Naming:

Policy names can describe general themes. Example themes could be:

• Operational: “Production VM Policy” or “Staging VM Policy”, signifying that a common policy has been defined for all production-level VMs vs. the staging-level VMs used for development and staging operations.

• Backup Frequency and Retention Details: “Production VM 4H-7D” would identify that the policy has a 4-hour backup frequency and a 7-day retention.

• Organizational: “Engineering Development” or “Engineering QA” to signify that development machines have a different backup schedule than the QA VMs used for testing.


Job Naming:

Protection jobs apply policies to VMs and assign a job start time. Job naming examples:

• Application Grouping: “ERP Production Instance” or “HR Expense Application” could group all the VMs that make up those particular applications and assign an appropriate policy and start time.

• Geographical: “Production VMs California” or “Application VMs Europe” would signify geographical location and ensure that job start times are consistent with VM and user activity in those time zones.

Recommended Practice: Assign hosts to jobs across multiple ESX/ESXi hosts

To ensure optimal backup performance, the best practice is to configure jobs to back up VMs across many ESXi hosts in parallel rather than attempting to back up all VMs on a single host. Doing so ensures that no single ESXi host gets overloaded with backup traffic, and at the same time allows the Cohesity cluster to ingest many parallel streams, yielding the best performance.

Figure 15 - Distribute jobs across VM hosts


Recommended Practice: Confirm datastore space is available when restoring to a new primary storage location

When setting up a restore task with a new location, ensure that the destination datastore has enough storage space to provision the VM.

Figure 16 - Recovery Location Datastore Selection

Figure 17 - Confirm Datastore Space Availability

Cohesity, Inc.
451 El Camino Real, Santa Clara, CA 95050
Email: [email protected]
www.cohesity.com
©2016 Cohesity. All Rights Reserved.

@cohesity

Acknowledgements

Authors

Andrew McCumiskey - Technical Marketing Engineering

Contributors

Chinmaya Manjunath - Engineering

Manoj Singhal - Engineering

Reviewers

Joe Putti - Product Management

Sai Mulchundan - Product Management

Rawlinson Rivera - Field CTO Office

Brad Baughn - Systems Engineer

Brian Doyle - Systems Engineer

Maikel Kelder - Systems Engineer

Patrick Rogers - Product Management & Product Marketing

Apurv Gupta - Engineering

Revision History

Version 1.0 - First publication: January 5th, 2017

SG-0117-ADM