
Proven Infrastructure Guide

EMC VSPEX PRIVATE CLOUD VMware vSphere for up to 700 Virtual Machines Enabled by Microsoft Windows Server 2012 R2, EMC All-Flash Array XtremIO, and EMC Data Protection

EMC VSPEX

Abstract

This document describes the EMC® VSPEX® Proven Infrastructure solution for private cloud deployments with VMware vSphere™ and EMC XtremIO™ technology.

July 2015


Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.

Published July 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX Private Cloud: VMware vSphere for up to 700 Virtual Machines Enabled by Microsoft Windows Server 2012 R2, EMC XtremIO, and EMC Data Protection Proven Infrastructure Guide

Part Number H14085.1


Contents

Chapter 1 Executive Summary
    Introduction
    Target audience
    Document purpose
    Business needs

Chapter 2 Solution Overview
    Introduction
    Virtualization
    Compute
    Network
    Storage
        Performance
        Workload portability
        Scalability
        Virtual machine provisioning
        Deduplication
        Thin provisioning
        Data protection
        VAAI integration
        Summary

Chapter 3 Solution Technology Overview
    Overview
    VSPEX Proven Infrastructures
    Key components
    Virtualization layer
        Overview
        VMware vSphere 6.0
        New VMware vSphere 6.0 features
        VMware vCenter
        VMware vSphere High Availability
        XtremIO support for VMware VAAI
    Compute layer
    Network layer


    Storage layer
        EMC XtremIO
        Virtualization management
        ROBO
    EMC Data Protection
        Overview
        EMC Avamar deduplication
        EMC Data Domain deduplication storage systems
        VMware vSphere Data Protection
        vSphere Replication
        EMC RecoverPoint
    Other technologies
        Overview
        VMware vCloud Automation Center
        VMware vCenter Operations Management Suite
        VMware vCenter Single Sign-On
        Public-key infrastructure
        PowerPath/VE

Chapter 4 Solution Architecture Overview
    Overview
    Solution architecture
        Overview
        Logical architecture
        Key components
        Hardware resources
        Software resources
    Server configuration guidelines
        Overview
        Ivy Bridge updates
        VMware vSphere memory virtualization for VSPEX
        Memory configuration guidelines
    Network configuration guidelines
        Overview
        VLANs
        Enable jumbo frames (for iSCSI)
    Storage configuration guidelines
        Overview
        XtremIO X-Brick scalability


        VMware vSphere storage virtualization for VSPEX
        VSPEX storage building blocks
    High availability and failover
        Overview
        Virtualization layer
        Compute layer
        Network layer
        Storage layer
    Backup and recovery configuration guidelines

Chapter 5 Sizing the Environment
    Overview
    Reference workload
        Overview
        Define the reference workload
    Scaling out
    Applying the reference workload
        Overview
        Example 1: Custom-built application
        Example 2: Point of sale system
        Example 3: Web server
        Example 4: Decision-support database
        Summary of examples
    Quick assessment
        Overview
        CPU requirements
        Memory requirements
        Storage performance requirements
        I/O operations per second
        I/O size
        I/O latency
        Unique data
        Storage capacity requirements
        Determining equivalent reference virtual machines
        Fine-tuning hardware resources
        EMC VSPEX Sizing Tool

Chapter 6 VSPEX Solution Implementation
    Overview
    Pre-deployment tasks


        Deployment prerequisites
        Customer configuration data
    Network implementation
        Prepare network switches
        Configure infrastructure network
        Configure VLANs
        Configure jumbo frames (iSCSI only)
        Complete network cabling
    Prepare and configure the storage array
        XtremIO configuration
    Install and configure the VMware vSphere hosts
        Overview
        Install ESXi
        Configure ESXi networking
        Install and configure multipath software
        Connect VMware datastores
        Plan virtual machine memory allocations
    Install and configure Microsoft SQL Server databases
        Overview
        Create a virtual machine for SQL Server
        Install Microsoft Windows on the virtual machine
        Install SQL Server
        Configure database for VMware vCenter
        Configure database for VMware Update Manager
    Install and configure VMware vCenter Server
        Overview
        Create the vCenter host virtual machine
        Install vCenter guest OS
        Create vCenter ODBC connections
        Install vCenter Server
        Apply vSphere license keys
    Provisioning a virtual machine
        Create a virtual machine in vCenter
        Perform partition alignment, and assign file allocation unit size
        Create a template virtual machine
        Deploy virtual machines from the template virtual machine
    Summary


Chapter 7 Verifying the Solution
    Overview
    Post-install checklist
    Deploy and test a single virtual server
    Verify the redundancy of the solution components

Chapter 8 System Monitoring
    Overview
    Key areas to monitor
        Performance baseline
        Servers
        Networking
        Storage
    XtremIO resource monitoring guidelines
        Monitoring the storage
        Monitoring the performance
        Monitoring the hardware elements
        Advanced monitoring

Appendix A Reference Documentation
    EMC documentation
    Other documentation

Appendix B Customer Configuration Worksheet
    Customer configuration worksheet

Appendix C Server Resource Component Worksheet
    Server resources component worksheet


Figures

Figure 1. I/O randomization brought by server virtualization
Figure 2. Management of vMotion operations
Figure 3. VSPEX Private Cloud components
Figure 4. VSPEX Proven Infrastructures
Figure 5. Compute layer flexibility examples
Figure 6. Example of a highly available network design
Figure 7. Logical architecture for the solution
Figure 8. Intel Ivy Bridge processors
Figure 9. Hypervisor memory consumption
Figure 10. Required networks for XtremIO storage
Figure 11. Single X-Brick XtremIO storage
Figure 12. Cluster configuration as single and multiple X-Brick cluster
Figure 13. VMware virtual disk types
Figure 14. XtremIO Starter X-Brick building block for 350 virtual machines
Figure 15. XtremIO single X-Brick building block for 700 virtual machines
Figure 16. Maximum scale levels and entry points of different arrays
Figure 17. High availability at the virtualization layer
Figure 18. Redundant power supplies
Figure 19. Network layer high availability
Figure 20. XtremIO high availability
Figure 21. Required resources from the reference virtual machine pool
Figure 22. Sample Ethernet network architecture
Figure 23. Adding volumes
Figure 24. Volume summary
Figure 25. Set the multipath policy as Round Robin
Figure 26. Virtual machine memory settings
Figure 27. Monitoring efficiency
Figure 28. Volume capacity
Figure 29. Physical capacity
Figure 30. Monitoring IOPS performance
Figure 31. Data and management cable connectivity
Figure 32. Viewing X-Brick properties
Figure 33. Monitoring SSDs


Tables

Table 1. Solution hardware
Table 2. Solution software
Table 3. Hardware resources for the compute layer
Table 4. Hardware resources for the network layer
Table 5. Numbers of virtual machines in different scaling scenarios
Table 6. VSPEX Private Cloud workload
Table 7. Customer Sizing Worksheet example (blank)
Table 8. Reference virtual machine resources
Table 9. Customer Sizing Worksheet example with user numbers added
Table 10. Example applications – stage 1
Table 11. Example applications – stage 2
Table 12. Server resource component totals
Table 13. Deployment process overview
Table 14. Tasks for pre-deployment
Table 15. Deployment prerequisites checklist
Table 16. Tasks for switch and network configuration
Table 17. Tasks for XtremIO configuration
Table 18. Storage allocation table for block data
Table 19. Tasks for server installation
Table 20. Tasks for SQL Server database setup
Table 21. Tasks for vCenter configuration
Table 22. Tasks for testing the installation
Table 23. Advanced monitoring parameters
Table 24. Common server information
Table 25. ESXi server information
Table 26. X-Brick information
Table 27. Network infrastructure information
Table 28. VLAN information
Table 29. Service accounts
Table 30. Blank worksheet for server resource totals


Chapter 1 Executive Summary

This chapter presents the following topics:

Introduction
Target audience
Document purpose
Business needs


Introduction

EMC® VSPEX® Proven Infrastructures are optimized for virtualizing business-critical applications. VSPEX provides modular solutions built with technologies that enable faster deployment, greater simplicity, greater choice, higher efficiency, and lower risk.

Server virtualization has been a driving force in data center efficiency gains for the past decade. However, mixing multiple virtual machine workloads on a single physical server randomizes the input/output (I/O) presented to the storage array, which has stalled the virtualization of I/O-intensive workloads. The EMC XtremIO™ all-flash array effectively addresses the effects of virtualization on I/O-intensive database workloads with impressive random I/O performance and consistently ultra-low latency. XtremIO also brings new levels of speed and provisioning agility to virtualized environments with space-efficient snapshots, inline data deduplication, thin provisioning, and accelerated provisioning using the VMware vSphere Storage APIs for Array Integration (VAAI).

The 700 virtual machine VMware Private Cloud solution described in this document is based on the XtremIO storage array and on a defined reference workload. This document is a comprehensive guide to the technical aspects of this solution. It describes required server capacity minimums for CPU, memory, and network interfaces. You can select server and networking hardware that meets or exceeds these minimum requirements.

Target audience

The readers of this document must have the necessary training and background to install and configure VMware vSphere, EMC XtremIO series storage systems, and associated infrastructure as required by this implementation. External references are provided where applicable, and readers should be familiar with these documents.

Readers should also be familiar with the infrastructure and database security policies of the customer installation.

Partners selling and sizing a VMware Private Cloud infrastructure must pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 6, the solution verification in Chapter 7, and the appropriate references and appendices.

Document purpose

This document includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy and monitor the system.

The VSPEX Private Cloud architecture provides customers with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the VMware vSphere virtualization layer backed by highly available XtremIO storage. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

The 700 virtual machine VMware Private Cloud solution described in this document is based on the XtremIO storage array and on a defined reference workload. Because not every virtual machine has the same requirements, this document also provides methods and guidance for adjusting the system so that it is cost-effective as deployed.

A private cloud architecture is a complex system offering. This document facilitates setup by providing prerequisite software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, verification tests and monitoring instructions ensure that your system is running properly. Following the instructions in this document ensures an efficient and painless journey to the cloud.

Business needs

VSPEX solutions are built with proven technologies to create complete virtualization solutions that allow you to make informed decisions in the hypervisor, server, and networking layers.

Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX Private Cloud using VMware reduces the complexity of configuring every component of a traditional deployment model. The solution simplifies integration management while preserving application design and implementation options. It also provides unified administration while retaining adequate control and monitoring of process separation.

The business benefits of the VSPEX Private Cloud for VMware architectures include:

• An end-to-end virtualization solution that effectively uses the capabilities of the all-flash array infrastructure components

• Efficient virtualization of 700 virtual machines for varied customer use cases

• A reliable, flexible, and scalable reference design


Chapter 2 Solution Overview

This chapter presents the following topics:

Introduction
Virtualization
Compute
Network
Storage


Introduction

The VSPEX Private Cloud for VMware vSphere 6.0 solution provides a complete system architecture capable of supporting up to 700 virtual machines with a redundant server and network topology and highly available storage. The core components that make up this solution are virtualization, compute, network, and storage.

Virtualization

VMware vSphere is the leading virtualization platform in the industry. It provides flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vSphere components are the VMware vSphere hypervisor and the VMware vCenter Server for system management.

The VMware hypervisor runs on a dedicated server and allows multiple operating systems to run simultaneously on the system as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration. The clustered configurations are then managed as a larger resource pool through VMware vCenter, and allow for dynamic allocation of CPU, memory, and storage across the cluster.

Features such as VMware vMotion, which allows a virtual machine to move between different servers with no disruption to the operating system (OS), and Distributed Resource Scheduler (DRS), which performs vMotion migrations automatically to balance the load, make vSphere a solid business choice.
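
Although this guide does not prescribe any automation tooling, the short Python sketch below illustrates how the same live-migration operation that DRS performs automatically can be triggered programmatically through pyVmomi, the open-source VMware vSphere Python SDK. The vCenter address, credentials, and object names are hypothetical placeholders; treat this as an illustration rather than part of the validated solution.

    # Minimal pyVmomi sketch: trigger a vMotion of a running virtual machine.
    # The vCenter address, credentials, and object names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Return the first managed object of the given type with a matching name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    vm = find_by_name(vim.VirtualMachine, "app-vm-042")
    target_host = find_by_name(vim.HostSystem, "esxi-02.example.local")

    # MigrateVM_Task performs the live (vMotion) migration; DRS issues the
    # same operation automatically when it rebalances the cluster.
    task = vm.MigrateVM_Task(host=target_host,
                             priority=vim.VirtualMachine.MovePriority.defaultPriority)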

Compute

VSPEX provides the flexibility to design and implement the customer’s choice of server components. The infrastructure must have sufficient:

• Cores and memory to support the required number and types of virtual machines

• Network connections to enable redundant connectivity to the system switches

• Capacity to withstand a server failure and fail over workloads within the environment

Network

VSPEX provides the flexibility to design and implement the customer’s choice of network components. The infrastructure must provide:

• Redundant network links for the hosts, switches, and storage

• Traffic isolation based on industry-accepted best practices

• Support for link aggregation
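
Traffic isolation is typically implemented with VLAN-tagged port groups at the vSphere layer. As an illustrative sketch only (it reuses the connection helper from the earlier vMotion example; the host, switch, port group, and VLAN values are hypothetical), the following fragment creates a tagged port group on a standard vSwitch:

    # Sketch: create a VLAN-tagged port group for traffic isolation on a
    # standard vSwitch. Reuses find_by_name; names and the VLAN ID are examples.
    from pyVmomi import vim

    host = find_by_name(vim.HostSystem, "esxi-01.example.local")

    pg_spec = vim.host.PortGroup.Specification()
    pg_spec.name = "iSCSI-A"            # one of the isolated storage networks
    pg_spec.vlanId = 101                # 802.1Q tag carried by the physical switches
    pg_spec.vswitchName = "vSwitch1"
    pg_spec.policy = vim.host.NetworkPolicy()

    host.configManager.networkSystem.AddPortGroup(portgrp=pg_spec)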


IP network switches used to implement this solution architecture must have a minimum non-blocking backplane capacity that is sufficient for the target number of virtual machines and their associated workloads. Enterprise-class IP network switches with advanced features such as quality of service are highly recommended.

Storage

This section describes the challenges faced in a virtualized data center and why XtremIO is the ideal solution to meet these challenges. Performance, application provisioning, and data management requirements were easy to meet when discrete applications used physical servers and dedicated storage systems. However, when moved into large-scale, agile VMware virtual environments, new demands are placed on the infrastructure. These environments require high performance and support for a high density of virtualized applications with unpredictable workloads, and rapid virtual-machine provisioning and cloning.

While the promise of flash storage arrays meeting large-scale virtualization requirements looms large, the reality is that all-flash arrays must have an optimized architecture for both storage I/O performance and storage efficiency to effectively address these challenges.

Storage efficiency has an important role to play, because both acquisition and operations costs of storage infrastructure are among the top challenges of cloud-based virtual server environments. Storage efficiency requires maximizing both available storage capacity and processing resources, which often results in competing efforts. Storage efficiency is key to enabling the promise of elastic scalability, pay-as-you-grow efficiency, and a predictable cost structure, all while increasing productivity and innovation.

Performance

CPUs have historically gained power through increases in transistor count and clock speed. More recently, a shift has been made to multicore CPUs and multithreading. This shift, combined with server virtualization technology, allows massive consolidation of applications onto a single physical server. The result is intensive randomization of the workload for the storage array.

Imagine a dual-socket server with six cores per socket and two threads per core. With virtualization technology, this server can easily present shared storage with a workload of 24 unique, intermixed data streams. Now imagine numerous servers on a SAN sharing the same storage array. The array workload very quickly becomes an I/O blender of completely random I/O from hundreds or thousands of intermixed sources, as shown in Figure 1. Flash arrays are ideal for handling high volumes of random I/O that have traditionally been too expensive for large-scale virtualization deployments.


Figure 1. I/O randomization brought by server virtualization
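
The arithmetic behind this I/O blender effect is easy to check. The following back-of-envelope Python sketch, with illustrative host counts, shows how quickly the number of intermixed streams grows as more hosts share the array:

    # Back-of-envelope count of intermixed I/O streams arriving at a shared
    # array; all sizes below are illustrative.
    sockets_per_host = 2
    cores_per_socket = 6
    threads_per_core = 2

    streams_per_host = sockets_per_host * cores_per_socket * threads_per_core
    print(streams_per_host)                  # 24 unique data streams from one server

    hosts_on_san = 16                        # a modest vSphere cluster
    print(streams_per_host * hosts_on_san)   # 384 effectively random, intermixed streams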

Workload portability

Being able to move active virtual machines as quickly and seamlessly as possible from one physical server to another, with no service interruption, is a key element of a large-scale virtualized infrastructure. VMware vSphere vMotion enables the live migration of virtual machines from one VMware vSphere host to another, with no perceivable impact for users. This is an important enabler for a number of key VMware technologies, including vSphere DRS and vSphere Distributed Power Management (DPM).

vMotion requires the virtual machine's physical memory (which can be as large as 1 TB) to be transferred during migration, using the vSphere suspend-and-resume functionality. This functionality momentarily freezes the virtual machine on the source vSphere host, copies the last set of memory changes to the target vSphere host, and then resumes the virtual machine on the target. The suspend-and-resume step is the most likely to affect guest performance, during which an abrupt, temporary increase in latency can occur. The impact depends on a variety of factors, including the performance of the storage I/O.

Large-scale virtual environments commonly use VMware Storage vMotion for live, non-disruptive migration of virtual machine files within and across storage arrays, to perform proactive storage migrations, improve virtual machine performance, and optimize storage utilization. Figure 2 shows how array-enabled vMotion and Storage vMotion operations are managed. Storage vMotion is highly dependent on array I/O and cloning performance.


Figure 2. Management of vMotion operations

You can use the VMware VAAI Extended Copy (X-COPY) command to accelerate Storage vMotion with compliant storage arrays, which enables the host to offload specific virtual machine and storage management operations to the storage array. The host issues the command to the array from the source logical unit number (LUN) to the destination LUN or to the same source LUN, if required. The choice depends on how the virtual machine file system (VMFS) datastores are configured on the relevant LUNs. The array uses internal mechanisms to complete the cloning operation, and depending on the efficiency of the array used to implement the Extended Copy support, can accelerate the performance of Storage vMotion.
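
As a sketch of how this looks through the vSphere API, the following pyVmomi fragment (reusing the connection helper from the earlier example, with hypothetical object names) starts a Storage vMotion by relocating a virtual machine's files to another datastore. The call itself is unchanged whether or not the array supports VAAI; when the source and destination reside on a VAAI-capable array, ESXi offloads the copy with X-COPY automatically.

    # Sketch: Storage vMotion through the vSphere API. Reuses find_by_name
    # from the earlier example; object names are placeholders.
    from pyVmomi import vim

    vm = find_by_name(vim.VirtualMachine, "app-vm-042")
    dest_ds = find_by_name(vim.Datastore, "XtremIO-DS-02")

    spec = vim.vm.RelocateSpec()
    spec.datastore = dest_ds      # move the VM's files to this datastore

    # ESXi decides internally whether to offload the copy to the array with
    # VAAI X-COPY; the caller simply requests the relocation.
    task = vm.RelocateVM_Task(spec=spec)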

Scalability

An agile, virtualized infrastructure must also take into consideration the multiple dimensions of performance, capacity, and operations. It must be able to scale efficiently, without sacrificing performance and resiliency, and without scaling the number of people who manage the environment. However, deploying traditional discrete dual-controller flash appliances to address scalability challenges can lead to system sprawl, performance bottlenecks, and suboptimal availability, which increases storage administration time.

Virtual machine provisioning

Agility is a major reason why organizations choose to virtualize their infrastructures. However, IT responsiveness often slows exponentially as virtual environments grow. Bottlenecks occur because organizations do not have the right tools to quickly determine the capacity and health of the physical and virtual resources.

While enterprise users want responsive deployment of business applications to meet changing business requirements, the enterprise is often unable to rapidly deploy or update virtual machines and storage on a large scale. Standard virtual machine provisioning or cloning methods, which are commonly implemented in flash arrays, can be expensive, because full copies of virtual machines can require 50 GB or more of storage for each copy.


In a large-scale cloud data center, when shared storage is cloning up to hundreds of virtual machines each hour while concurrently delivering I/O to active virtual machines, cloning can become a major bottleneck for optimal data center performance and operational efficiency.

Deduplication

Storage arrays can accumulate duplicate data over time, which increases costs and management overhead. In particular, large-scale virtual server environments create large amounts of duplicate data when virtual machines are deployed by cloning existing virtual machines or when virtual machines have the same OS and applications installed.

Deduplication eliminates duplicate data by replacing it with a pointer to a unique data block. When implemented as a post-processing operation, incoming data is first written to disk and then deduplicated by the array, both of which impact array performance.
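
The pointer substitution described above can be modeled in a few lines of Python. This toy sketch fingerprints fixed-size blocks and stores each unique block only once; it is a conceptual illustration, not a description of XtremIO's internal implementation.

    # Toy deduplication model: each logical write records a fingerprint
    # (pointer), while unique block contents are stored exactly once.
    import hashlib

    BLOCK_SIZE = 4096        # illustrative fixed block size
    unique_blocks = {}       # fingerprint -> block contents (the "SSDs")
    volume = []              # logical volume: an ordered list of pointers

    def write(data: bytes) -> None:
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            fingerprint = hashlib.sha256(block).hexdigest()
            unique_blocks.setdefault(fingerprint, block)  # stored at most once
            volume.append(fingerprint)                    # the write is a pointer

    write(b"A" * BLOCK_SIZE * 3)             # three identical incoming blocks...
    print(len(volume), len(unique_blocks))   # ...3 pointers, but 1 stored block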

Thin provisioning

Thin provisioning is a popular technique that improves array utilization. Storage capacity is consumed only when data is written, rather than when storage volumes are provisioned. For administrators of large-scale virtualized environments, thin provisioning removes the need to overprovision storage to meet anticipated future capacity demands. Thin provisioning allows virtual machine storage to be allocated on demand from an available storage pool.

Most storage arrays are designed to be statically installed and run, yet virtualized application environments are naturally dynamic and variable. Change and growth in virtualized workloads cause organizations to actively redistribute workloads across storage array resources (or use other features such as VMware DRS) for load balancing, to avoid running out of space or reducing performance. Unfortunately, this ongoing load balancing is a manual, iterative task that is often costly and time-consuming.

As a result, storage arrays that support large-scale virtualization environments require optimal and inherent data placement to ensure maximum utilization of both capacity and performance without any planning demands.
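
At the vSphere layer, thin provisioning is simply a property of the virtual disk. The following pyVmomi sketch (again reusing the connection helper from the first example; the controller key, unit number, and size are illustrative) adds a thin-provisioned VMDK to an existing virtual machine:

    # Sketch: add a thin-provisioned virtual disk to an existing VM.
    # Reuses find_by_name from the first example; all values are illustrative.
    from pyVmomi import vim

    vm = find_by_name(vim.VirtualMachine, "app-vm-042")

    disk = vim.vm.device.VirtualDisk()
    disk.capacityInKB = 100 * 1024 * 1024    # 100 GB logical size
    disk.controllerKey = 1000                # key of an existing SCSI controller
    disk.unitNumber = 1                      # a free unit on that controller

    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = True           # capacity is consumed only when written
    disk.backing = backing

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    change.device = disk

    task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))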

Data protection

While storage arrays have traditionally supported several RAID data protection levels, these arrays required storage administrators to choose between data protection and performance for specific workloads. The challenge for large-scale virtual environments is a shared storage system that stores data for hundreds or thousands of virtual machines with different workloads. Some storage systems allow live migrations between RAID levels, which require repeated proactive administration as workloads evolve.

Optimal data protection for virtualized environments requires arrays that support data protection schemes combining the best attributes of existing RAID levels while avoiding their drawbacks. Because flash endurance is a special consideration in an all-flash array, such a scheme must also maximize the service life of the array's solid-state drives (SSDs) while complementing the high I/O performance of flash media.

VAAI integration

In contrast to a custom integration between virtualized environments and storage arrays, VAAI is a set of APIs that enables VMware hosts to offload common storage operations to the array. This reduces resource overhead on VMware hosts and can significantly improve performance for storage-intensive operations, such as storage cloning for virtual machine provisioning.


While VAAI removes the involvement of vSphere hosts in storage-intensive operations, the actual performance benefits of VAAI-enabled flash arrays are highly dependent on the array architecture. For example, the performance of VAAI-enabled X-COPY for copying virtual disk files (up to hundreds of gigabytes) for cloning or Storage vMotion depends heavily on the efficiency of the deduplication and metadata models supported by the array. If the X-COPY operation requires reading and writing data blocks to and from the SSDs, as compared to only creating metadata pointers to deduplicated data blocks on the SSDs, performance can vary widely for both the copy operation and I/O to live virtual machines.

Summary

In summary, to meet the multiple demands of a large-scale virtualization data center, you need a storage array that provides superb performance and capacity scale-out for infrastructure growth, built-in data deduplication, thin provisioning for capacity efficiency and cost mitigation, flash-optimized data protection techniques, near-instantaneous virtual machine provisioning and cloning, inherent load balancing, and automated virtual machine disk (VMDK) provisioning.

The XtremIO all-flash array is built to unlock the full performance potential of flash storage and to deliver array-based data management capabilities that make it an optimal storage solution for large-scale virtualization. The next chapter provides more details about how to apply XtremIO features for optimal performance.


Chapter 3 Solution Technology Overview

This chapter presents the following topics:

Overview
VSPEX Proven Infrastructures
Key components
Virtualization layer
Compute layer
Network layer
Storage layer
EMC Data Protection
Other technologies


Overview

This solution uses EMC XtremIO and VMware vSphere 6.0 to provide storage and server hardware consolidation in a private cloud. The solution has been designed and proven by EMC to deliver the virtualization, server, network, and storage resources that customers need to deploy an architecture with a scalable number of virtual machines and associated shared storage.

Figure 3 shows the solution components.

Figure 3. VSPEX Private Cloud components

The following sections describe the components in more detail.

VSPEX Proven Infrastructures

EMC has joined forces with the industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates the deployment of the private cloud. VSPEX enables customers to accelerate their IT transformation with faster deployment, greater simplicity and choice, higher efficiency, and lower risk, compared to the challenges and complexity of building an IT infrastructure themselves.


VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers who want the simplicity that is characteristic of truly converged infrastructures, with more choice in individual stack components.

VSPEX Proven Infrastructures, as shown in Figure 4, are modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. These infrastructures include virtualization, server, network, and storage layers. Partners can choose the virtualization, server, and network technologies that best fit a customer’s environment, while XtremIO storage systems and technologies provide the storage layer.

Figure 4. VSPEX Proven Infrastructures


Key components

This section describes the following key components of this solution:

Virtualization layer

The Virtualization layer decouples the physical implementation of resources from the applications that use the resources, so that the application view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept. This solution uses VMware vSphere for the virtualization layer.

Compute layer

The Compute layer provides memory and processing resources for the virtualization layer software and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and implements the solution by using any server hardware that meets these requirements.

Network layer

The Network layer connects the users of the private cloud to the resources in the cloud and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables you to implement the solution by using any network hardware that meets these requirements.

Storage layer

The Storage layer is critical for the implementation of server virtualization. With multiple hosts accessing shared data, many of the use cases can be implemented. The XtremIO all-flash array used in this solution provides extremely high performance and supports a range of capacity-efficiency features and data services.

EMC Data Protection

The components of the solution provide protection when the data in the primary system is deleted, damaged, or unusable. See EMC Data Protection for more information.

Security layer

The security layer is an optional solution component that provides consumers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system. This solution uses RSA SecurID to provide secure user authentication.

The Solution architecture section provides details about the components that make up the reference architecture.


Virtualization layer

The virtualization layer is a key component of any server virtualization or private cloud solution. It decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the system to physically change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

VMware vSphere 6.0

VMware vSphere 6.0 transforms the physical resources of a computer by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers.

The high-availability features of VMware vSphere 6.0 such as vMotion and Storage vMotion enable seamless migration of virtual machines and stored files from one vSphere server to another, or from one data storage area to another, with minimal or no performance impact. Coupled with vSphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

New VMware vSphere 6.0 features

VMware vSphere 6.0 includes an expansive list of new and improved features that enhance performance, reliability, availability, and recovery of virtualized environments. Of those features, several have significant impacts on VSPEX Private Cloud deployments, including:

Expanded maximum memory and CPU limits for VMware ESXi™ hosts. Logical and virtual CPU (vCPU) counts have doubled in this version, as have non-uniform memory access (NUMA) node counts and maximum memory. This means host servers can support larger workloads.

62 TB VMDK file support, including Raw Device Mapping (RDM). Datastores can hold more data from more virtual machines, which simplifies storage management and enables the use of larger-capacity NL-SAS drives.

Enhanced VAAI UNMAP support, including a new esxcli storage vmfs unmap command with multiple reclamation methods (a usage sketch follows this feature list).

Enhanced Single-Root I/O Virtualization (SR-IOV) to allow a single PCIe physical device under a single root port to appear to be multiple separate physical devices to the hypervisor or the guest OS.

16 Gb end-to-end support for FC environments.

Enhanced Link Aggregation Control Protocol (LACP) functions offering additional hash algorithms and up to 64 Link Aggregation Groups (LAGs).

vSphere Data Protection (VDP), which can now replicate backup data directly to EMC Avamar®.


40 Gb Mellanox network interface card (NIC) support.

VMFS heap improvements, which reduce memory requirements while allowing access to the full 64 TB VMFS address space.
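As a brief illustration of the UNMAP enhancement noted above, the following sketch reclaims dead space on a VMFS datastore from the ESXi shell. The datastore label and reclaim unit are hypothetical values used for illustration; they are not part of the validated configuration.

   # Reclaim unused VMFS space in 200-block iterations (vSphere 5.5 and later)
   esxcli storage vmfs unmap --volume-label=XtremIO-DS01 --reclaim-unit=200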

VMware vCenter

VMware vCenter is a centralized management platform for the VMware virtual infrastructure. This platform provides administrators with a single interface, accessible from multiple devices, for all aspects of monitoring, managing, and maintaining the virtual infrastructure.

VMware vCenter also manages some advanced features of the VMware virtual infrastructure, such as VMware vSphere High Availability, DRS, vMotion, and Update Manager.

VMware vSphere High Availability

The VMware vSphere High Availability feature enables the virtualization layer to automatically restart virtual machines in various failure conditions, including:

If the virtual machine OS has an error, the virtual machine can automatically restart on the same hardware.

If the physical hardware has an error, the impacted virtual machines can automatically restart on other servers in the cluster.

Note: To restart virtual machines on different hardware, the servers must have available resources. The Compute layer section provides detailed information about enabling this function.

With vSphere High Availability, you can configure policies to determine which machines automatically restart, and under what conditions to attempt these operations.

XtremIO support for VMware VAAI

XtremIO is fully VAAI-compliant, which allows the vSphere server to offload I/O-intensive work to the XtremIO array and provides accelerated Storage vMotion, virtual machine provisioning, and thin provisioning functionality.

In addition, VAAI improves X-copy efficiency even further by making the whole operation metadata driven. With XtremIO, thanks to Inline Data Reduction and in-memory metadata, no actual data blocks are copied during X-copy command execution. The cluster only creates new pointers to the existing data, and the entire process is carried out in Storage Controller memory. Therefore, it does not consume the resources of the storage array and has no impact on cluster performance. For example, a virtual machine image can be cloned instantaneously (even multiple times) with XtremIO.
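To make the cloning behavior concrete, the sketch below clones a virtual disk at the datastore level; on a VAAI-capable array such as XtremIO, ESXi offloads the copy through the XCOPY primitive. The datastore and file paths are hypothetical placeholders.

   # Clone a virtual disk on a VMFS datastore; the copy is offloaded to the
   # array via VAAI Full Copy (XCOPY) when the array supports it
   vmkfstools -i /vmfs/volumes/DS01/vm1/vm1.vmdk /vmfs/volumes/DS01/vm1-clone/vm1-clone.vmdk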


The XtremIO features for VAAI support include:

Zero Blocks/Write Same

Used for zeroing out disk regions (VMware term: HardwareAcceleratedInit). This feature provides accelerated volume formatting.

Clone Blocks/Full Copy/XCOPY

Used for copying or migrating data within the same physical array (VMware term: HardwareAcceleratedMove). On XtremIO, this allows virtual machine cloning to take place almost instantaneously, without affecting user I/O on active virtual machines.

Record-based locking/Atomic Test and Set (ATS)

Used during creation and locking of files on a VMFS volume, for example, during powering-down/powering-up of VMs (VMware term: HardwareAcceleratedLocking). This feature is designed to address access contention on ESX volumes shared by multiple VMs.

Block Delete/UNMAP/TRIM

Allows unused space to be reclaimed using the SCSI UNMAP feature (VMware term: BlockDelete; vSphere 5.x only). Reclamation can also be performed manually in vSphere 5.1 by using the vmkfstools command (refer to the VMware documentation for details).
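Before relying on these offloads, it can be useful to confirm what ESXi reports for a given device. The following sketch queries the VAAI primitive status from the ESXi shell; the NAA identifier is a placeholder for an XtremIO volume.

   # Show ATS, Clone, Zero, and Delete status for one device
   esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx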

Compute layer

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX solutions provide minimum requirements for the number of processor cores and the amount of RAM. A solution can be implemented with two servers or with twenty and still be considered the same VSPEX solution.

In the example shown in Figure 5, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might want to implement this with white-box servers containing 16 processor cores and 64 GB of RAM, while another customer might select a higher-end server with 20 processor cores and 144 GB of RAM.


Figure 5. Compute layer flexibility examples

The first customer needs four of the selected servers, while the other customer needs two.

Note: To enable high availability for the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.
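The server count in this example follows from whichever resource, cores or RAM, is exhausted first. The following sketch shows the arithmetic for the first customer's configuration; all values come from the example above, and the script itself is illustrative rather than part of the validated solution.

   # Servers needed = max(ceil(cores required / cores per server),
   #                      ceil(RAM required / RAM per server)), plus 1 for HA
   CORES_NEEDED=25;  RAM_NEEDED=200      # workload requirements (cores, GB RAM)
   CORES_PER_SRV=16; RAM_PER_SRV=64      # first customer's server model
   BY_CORES=$(( (CORES_NEEDED + CORES_PER_SRV - 1) / CORES_PER_SRV ))
   BY_RAM=$(( (RAM_NEEDED + RAM_PER_SRV - 1) / RAM_PER_SRV ))
   SERVERS=$(( BY_CORES > BY_RAM ? BY_CORES : BY_RAM ))
   echo "Servers required: $SERVERS, plus 1 for HA"   # prints 4, plus 1 for HA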

Use the following best practices in the compute layer:

Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. Implementing VSPEX on identical server units minimizes compatibility problems in this area.


If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades, and tolerance for single unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be flexible to meet your specific needs. Ensure that there are sufficient processor cores and RAM per core to meet the needs of the target environment.

Network layer

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This is a required configuration regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution. Figure 6 shows an example of this highly available network topology.

[Figure 6 depicts redundant network links from each server to two interconnected switches, with each switch linked to both active/active XtremIO storage controllers, so that no single link or switch failure isolates a host from storage.]

Figure 6. Example of a highly available network design


This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types, improving throughput, manageability, application separation, high availability, and security. A brief configuration sketch follows.
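As a minimal sketch of VLAN segregation on a standard vSwitch, the command below tags a port group with a VLAN ID from the ESXi shell. The port group name and VLAN ID are hypothetical; use the values defined for your environment.

   # Tag the "VM-Network" port group with VLAN 100
   esxcli network vswitch standard portgroup set --portgroup-name=VM-Network --vlan-id=100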

XtremIO is a block-only storage platform that provides network high availability and redundancy by using two ports per storage controller. If a link is lost on a storage controller I/O port, traffic fails over to another port. All network traffic is distributed across the active links.

Storage layer

The storage layer is a key component of any cloud infrastructure solution that serves data generated by applications and operating systems in a data center storage processing system. This VSPEX solution uses XtremIO storage arrays to provide virtualization at the storage layer. The XtremIO platform provides the required storage performance, increases storage efficiency and management flexibility, and reduces total cost of ownership.

EMC XtremIO

The EMC XtremIO all-flash array is a clean-sheet design with a revolutionary architecture. It brings together all the necessary and sufficient requirements to enable the agile data center: linear scale-out, inline all-the-time data services, and rich data center services for the workloads.

The basic hardware building block for these scale-out arrays is the X-Brick. Each X-Brick comprises two active-active controller nodes and a disk array enclosure, packaged together with no single point of failure. A Starter X-Brick with 13 SSDs can be expanded non-disruptively to a full X-Brick with 25 SSDs. The scale-out cluster can support up to six X-Bricks.

The XtremIO platform is designed to maximize the use of flash storage media. Key attributes of this platform are:

High levels of I/O performance, particularly for random I/O workloads that are typical in virtualized environments

Consistently low (sub-millisecond) latency

True inline data reduction—the ability to remove redundant information in the data path and write only unique data on the storage array, thus lowering the amount of capacity required

A full suite of enterprise array capabilities, such as integration with VMware through VAAI, N-way active controllers, high availability, strong data protection, and thin provisioning

Because the XtremIO array has a scale-out design, additional performance and capacity can be added in a building block approach, with all building blocks forming a single clustered system. XtremIO storage includes the following components:

Host adapter ports—Provide host connectivity through fabric into the array.

Storage controllers (SCs)—The compute component of the storage array. SCs handle all aspects of data moving into, out of, and between arrays.


Disk drives—SSDs that contain the host/application data, and their enclosures.

InfiniBand switches—A switched, scalable network interconnect used in multi-X-Brick configurations, providing high throughput, low latency, quality of service, and failover capability.

XtremIO Operating System (XIOS)

The XtremIO storage cluster is managed by the powerful XtremIO Operating System (XIOS). XIOS ensures that the system remains balanced and always delivers the highest levels of performance without any administrator intervention, as follows:

Ensures that all SSDs in the system are evenly loaded, providing both the highest possible performance and endurance that stands up to demanding workloads for the entire life of the array.

Eliminates the need to perform the complex configuration steps found on traditional arrays. There is no need to set RAID levels, determine drive group sizes, set stripe widths, set caching policies, build aggregates, or do any other such configuration.

Automatically and optimally configures every volume at all times. I/O performance on existing volumes and data sets automatically increases with large cluster sizes. Every volume is capable of receiving the full performance potential of the entire XtremIO system.

Standards-based enterprise storage system

The XtremIO system interfaces with vSphere hosts using standard FC and iSCSI block interfaces. The system supports complete high-availability features, including support for native VMware multipath I/O, protection against failed SSDs, non-disruptive software and firmware upgrades, no single point of failure (SPOF), and hot-swappable components.

Real-time, inline data reduction

The XtremIO storage system deduplicates and compresses incoming data in real time, allowing a massive number of virtual machines and their application data to reside in a small, economical amount of flash capacity. Furthermore, data reduction on the XtremIO array does not adversely affect input/output operations per second (IOPS) or latency; rather, it enhances the performance of the virtualized environment.

Scale-out design

The X-Brick is the fundamental building block of a scale-out XtremIO clustered system. Using a Starter X-Brick, virtual server deployments can start small and grow to nearly any required scale by upgrading the Starter X-Brick to a full X-Brick and then configuring a larger XtremIO cluster. The system expands capacity and performance linearly as building blocks are added, making virtualized environments simple to size and manage as demands grow over time.

vSphere Storage APIs - Array Integration

The XtremIO array is fully integrated with vSphere through VAAI. All API commands are supported, including ATS, Clone Blocks/Full Copy/XCOPY, Zero Blocks/Write Same, Thin Provisioning, and Block Delete. This, in combination with the array’s inline data reduction and in-memory metadata management, enables nearly instantaneous virtual machine provisioning and cloning and makes it possible to use large volume sizes for management simplicity.

Extraordinary performance

The XtremIO array is designed to handle very high, sustained levels of small, random, mixed read and write I/O, as is typical in virtual environments, and to do so with consistently low latency.

Fast provisioning

XtremIO arrays deliver the industry’s first writeable snapshot technology that is space-efficient for both data and metadata. XtremIO snapshots are free from limitations of performance, features, topology, or capacity reservations. With their unique in-memory metadata architecture, XtremIO arrays can instantly clone virtual machine environments of any size.

Ease of use

The XtremIO storage system requires only a few basic setup steps, which can be completed in minutes, and needs no tuning or ongoing administration to achieve and maintain high performance. In fact, the XtremIO system can be deployment-ready in less than an hour after delivery.

Security with Data at Rest Encryption (D@RE)

XtremIO arrays securely encrypt all data stored on the all-flash array, delivering protection for regulated use cases in sensitive industries such as healthcare, finance, and government.

Data center economics

The exceptional performance, capacity savings from unique data reduction capabilities, linear predictive scaling from scale-out architecture, and ease of use of XtremIO lead to breakthrough total cost of ownership in virtualized workload environments.

EMC Virtual Storage Integrator

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in for VMware vCenter that provides a single management interface for managing EMC storage within the vSphere environment. VSPEX customers can use VSI to simplify management of virtualized storage. VMware administrators can manage their XtremIO arrays using the familiar vCenter interface.

VSI offers unmatched access control that enables you to efficiently manage and delegate storage tasks with confidence: you can perform daily management tasks with up to 90 percent fewer clicks and up to 10 times higher productivity. Furthermore, you can add and remove individual VSI features from VSI, which provides flexibility for customizing VSI user environments.


We used the following VSI features during validation testing of this solution (in this guide, "we" refers to the EMC Solutions engineering team that validated the solution):

Storage Viewer—Extends the functionality of the vSphere Client to facilitate the discovery and identification of XtremIO and EMC VNX devices that are allocated to VMware vSphere hosts and virtual machines. Storage Viewer presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vSphere Client views.

Unified Storage Management—Simplifies storage administration of XtremIO. It enables VMware administrators to seamlessly provision new XtremIO VMFS datastores and RDM volumes within the vSphere Client.

Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for more information.

ROBO

Organizations with remote offices and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments need to balance the benefits of local support with the need to maintain central control. Local systems and storage should be easy for local personnel to administer, but should also support remote management and flexible aggregation tools that minimize the demands on those local resources.

With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices.

EMC Data Protection

Overview

EMC Data Protection, another important component in this VSPEX solution, provides data protection by backing up data files or volumes on a defined schedule, and then restores data from backup for recovery after a disaster.

EMC Data Protection is a smart method of backup. It consists of optimal integrated protection storage and software designed to meet backup and recovery objectives now and in the future. With EMC market-leading protection storage, deep data source integration, and feature-rich data management services, you can deploy an open, modular protection storage architecture that allows you to scale resources while lowering cost and minimizing complexity.

EMC Avamar deduplication

EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, NAS servers, and desktops/laptops. Learn more at: http://www.emc.com/avamar


EMC Data Domain deduplication storage systems

EMC Data Domain® deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads. Learn more at: http://www.emc.com/datadomain

VMware vSphere Data Protection

vSphere Data Protection (VDP) is a proven solution for backing up and restoring VMware virtual machines. VDP is based on the award-winning EMC Avamar product and has many integration points with vSphere 6.0, providing simple discovery of your virtual machines and efficient policy creation. One challenge that traditional backup systems face with virtual machines is the large amount of data that virtual machine files contain. VDP uses a variable-length deduplication algorithm to minimize the disk space used and to reduce ongoing backup storage growth. Data is deduplicated across all virtual machines associated with the VDP virtual appliance.

VDP uses vSphere Storage APIs for Data Protection (VADP), which sends only the changed blocks of data each day, resulting in less data being sent over the network. VDP enables up to eight virtual machines to be backed up concurrently. Because VDP resides in a dedicated virtual appliance, all backup processing is offloaded from the production virtual machines.

VDP can alleviate the burden of restore requests from administrators by enabling end users to restore their own files using a web-based tool called vSphere Data Protection Restore Client. Users can browse their system backups in an easy-to-use interface that provides search and version control features. Users can restore individual files or directories without any intervention from IT. This frees up valuable time and resources and provides a better end user experience.

For backup and recovery options, refer to the following documents:

EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide

EMC Backup and Recovery Options for VSPEX Private Clouds

vSphere Replication

vSphere Replication is a feature of the vSphere platform, versions 5.5 and later, that provides business continuity. vSphere Replication copies a virtual machine defined in your VSPEX infrastructure to a second instance of VSPEX, or between the clustered servers in a single VSPEX system, and then continues to replicate changes to the copied virtual machine. This replication ensures that the virtual machine remains protected and is available for recovery without requiring restoration from backup. When replication is set up, replicated virtual machines are defined in VSPEX to ensure application-consistent data with a single click.

Administrators who manage virtualized Microsoft applications running on VSPEX can use the automatic integration of vSphere Replication with Microsoft Volume Shadow Copy Service (VSS) to ensure that applications such as Microsoft Exchange and Microsoft SQL Server databases are quiescent and consistent when generating replica data. A quick call to the virtual machine’s VSS layer flushes the database writers for an instant to ensure that the replicated data is static and fully recoverable.

This automated approach simplifies management and increases the efficiency of your VSPEX-based virtual environment.


EMC RecoverPoint

EMC RecoverPoint® is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. RecoverPoint runs on a dedicated appliance (RPA) and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology. This technology enables RPAs to protect data locally (continuous data protection, or CDP), remotely (continuous remote replication, or CRR), or both (concurrent local and remote replication, or CLR), offering the following advantages:

RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, and transfers the data via FC.

RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site using techniques that preserve write-order.

In a CLR configuration, RecoverPoint replicates to both a local and a remote site simultaneously.

RecoverPoint uses lightweight splitting technology to mirror application writes to the RecoverPoint cluster, and supports the following write splitter types:

Array-based

Intelligent fabric-based

Host-based

Other technologies

Overview

In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to, the following technologies.

VMware vCloud Automation Center

VMware vCloud Automation Center, which is part of vCloud Suite Enterprise, orchestrates the provisioning of software-defined data center services as complete virtual data centers that are ready for consumption in a matter of minutes. vCloud Automation Center is a software solution that enables customers to build secure, private clouds by pooling infrastructure resources from VSPEX into virtual data centers and exposing them to users through Web-based portals and programmatic interfaces as fully automated, catalog-based services.

VMware vCloud Automation Center uses pools of resources abstracted from the underlying physical, virtual, and cloud-based resources to automate the deployment of virtual resources when and where they are required. VSPEX with vCloud Automation Center enables customers to build complete virtual data centers delivering computing, networking, storage, security, and a complete set of services necessary to make workloads operational in minutes.


Software-defined data center service and the virtual data centers fundamentally simplify infrastructure provisioning and enable IT to move at the speed of business. VMware vCloud Automation Center integrates with existing or new VSPEX Private Cloud with VMware vSphere deployments and supports existing and future applications by providing elastic standard storage and networking interfaces, such as Layer 2 connectivity and broadcasting between virtual machines. VMware vCloud Automation Center uses open standards to preserve deployment flexibility and pave the way to the hybrid cloud. The key features of VMware vCloud Automation Center include:

Self-service provisioning

Life-cycle management

Unified cloud management

Multi-virtual machine blueprints

Context-aware, policy-based governance

Intelligent resource management

All VSPEX Proven Infrastructures can use vCloud Automation Center to orchestrate deployment of virtual data centers based on single VSPEX or multi-VSPEX deployments. These infrastructures enable simple and efficient deployment of virtual machines, applications, and virtual networks.

VMware vCenter Operations Management Suite

The VMware vCenter Operations Manager Suite provides unparalleled visibility into VSPEX virtual environments. The suite collects and analyzes data, correlates abnormalities, identifies the root cause of performance problems, and provides administrators with the information needed to optimize and tune their VSPEX virtual infrastructures. vCenter Operations Manager provides an automated approach to optimizing your VSPEX-powered virtual environment by delivering integrated, self-learning analytic tools for better performance, capacity usage, and configuration management. The suite delivers a comprehensive set of management capabilities, including:

Performance

Capacity

Adaptability

Configuration and compliance management

Application discovery and monitoring

Cost metering


The VMware vCenter Operations Manager Suite includes five components:

VMware vCenter Operations Manager is the foundation of the suite and provides the operational dashboard interface that makes visualizing issues in your VSPEX virtual environment simple.

VMware vCenter Configuration Manager helps to automate configuration and compliance of physical, virtual, and cloud environments, which ensures security and configuration consistency across the ecosystem.

VMware vCenter Hyperic monitors physical hardware resources, operating systems, middleware, and applications that you have deployed on VSPEX.

VMware vCenter Infrastructure Navigator provides visibility into the application services running over the virtual machine infrastructure and their interrelationships for day-to-day operational management.

VMware vCenter Chargeback Manager enables accurate cost measurement, analysis, and reporting of virtual machines. It provides visibility into the cost of the virtual infrastructure that you have defined on VSPEX as being required to support business services.

VMware vCenter Single Sign-On

With the introduction of VMware vCenter Single Sign-On (SSO) in VMware vSphere 6.0, administrators now have a deeper level of available authentication services for managing their VSPEX Proven Infrastructures. Authentication by vCenter SSO makes the VMware cloud infrastructure platform more secure. This function allows the vSphere software components to communicate with each other through a secure token exchange mechanism, instead of requiring each component to authenticate a user separately with a directory service such as Active Directory.

When users log in to the vSphere Web Client with their user names and passwords, the vCenter SSO server receives the credentials. The credentials are then authenticated against the back-end identity sources and exchanged for a security token, which is returned to the client to access the solutions within the environment. Across an entire organization, SSO translates into time and cost savings and streamlined workflows.

With vSphere, users have a unified view of their entire vCenter Server environment because multiple vCenter Server instances and their inventories are displayed. This does not require Linked Mode unless users share roles, permissions, and licenses among vSphere vCenter Server instances.

Administrators can deploy multiple solutions within an environment with true single sign-on that creates trust between solutions without requiring authentication every time a user accesses the solution.

VSPEX Private Cloud with VMware vSphere is simple, efficient, and flexible. VMware SSO makes authentication simpler, workers can be more efficient, and administrators have the flexibility to make SSO servers local or global.


Public-key infrastructure

The ability to secure data and ensure the identity of devices and users is critical in today’s enterprise IT environment. This is particularly true in regulated sectors such as healthcare, financial, and government. VSPEX solutions can offer hardened computing platforms in many ways, most commonly by implementing a public-key infrastructure (PKI).

VSPEX solutions can be engineered with a PKI solution designed to meet the security criteria of your organization, and the solution can be implemented using a modular process, where layers of security are added as needed. The general process involves first implementing a PKI infrastructure by replacing generic self-certified certificates with trusted certificates from a third-party certificate authority. Services that support PKI can then be enabled using the trusted certificates to ensure a high degree of authentication and encryption.

Depending on the scope of PKI services needed, it may be necessary to implement a dedicated PKI infrastructure. Many third-party tools offer these services, including end-to-end solutions from RSA that can be deployed within a VSPEX environment. For additional information, visit the RSA website.

PowerPath/VE

EMC PowerPath®/VE for VMware vSphere 6.0 is a module that provides multipathing extensions for vSphere and works in combination with SAN storage to intelligently manage FC, iSCSI, and FC over Ethernet (FCoE) I/O paths.

PowerPath/VE is installed on the vSphere host and scales to the maximum number of virtual machines on the host, improving I/O performance. The virtual machines do not have PowerPath/VE installed nor are they aware that PowerPath/VE is managing I/O to storage. PowerPath/VE dynamically balances I/O load requests and automatically detects and recovers from path failures.

Note: This validated solution uses the vSphere native multipathing (NMP) feature to manage I/O paths. A brief configuration sketch follows.
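As a minimal sketch of NMP configuration, the commands below set the round-robin path selection policy, a common practice for active-active arrays such as XtremIO; confirm the recommended policy against EMC host connectivity guidance. The device identifier is a hypothetical placeholder.

   # Set round robin on one device, then make it the default policy for
   # devices claimed by the generic active-active SATP
   esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
   esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR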


Chapter 4 Solution Architecture Overview

This chapter presents the following topics:

Overview .................................................................................................................. 42

Solution architecture ............................................................................................... 42

Server configuration guidelines ............................................................................... 47

Network configuration guidelines ............................................................................ 51

Storage configuration guidelines ............................................................................. 52

High-availability and failover ................................................................................... 58

Backup and recovery configuration guidelines ......................................................... 60


Overview

This chapter is a comprehensive guide to the major aspects of this solution. Server capacity is presented in generic terms for the required minimum CPU, memory, and network resources. You can select the server and networking hardware to meet or exceed the stated minimums. The specified storage architecture has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment.

Each Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a predefined idea of a virtual machine. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Solution architecture

The VSPEX Private Cloud solution for VMware vSphere with EMC XtremIO validates the configuration for up to 700 virtual machines.

Note: VSPEX uses a reference workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This process is described in Applying the reference workload.

Logical architecture

Figure 7 shows a validated XtremIO infrastructure, where an 8 Gb FC or 10 Gb iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic.


[Figure 7 depicts a VMware ESXi cluster hosting the virtual servers, connected to EMC XtremIO over an 8 Gb FC or 10 Gb iSCSI storage network, with a 10 GbE IP network carrying traffic to the shared infrastructure: vCenter Server, SQL Server, DNS server, and Active Directory server.]

Figure 7. Logical architecture for the solution

Key components

This architecture includes the following key components:

VMware vSphere—Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 2. vSphere provides highly available infrastructure through features such as:

vMotion—Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption

Storage vMotion—Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption

vSphere High Availability (HA)—Detects and provides rapid recovery for a failed virtual machine in a cluster

Distributed Resource Scheduler (DRS)—Provides load balancing of computing capacity in a cluster

Storage Distributed Resource Scheduler (SDRS)—Provides load balancing across multiple datastores based on space usage and I/O latency

VMware vCenter Server—Provides a scalable and extensible platform that forms the foundation for virtualization management for the VMware vSphere cluster. vCenter manages all vSphere hosts and their virtual machines.

Microsoft SQL Server—Provides a database service to store configuration and monitoring details, as required by VMware vCenter Server. This solution uses a Microsoft SQL Server 2012 database.


DNS server—Performs name resolution using DNS services for the various solution components. This solution uses the Microsoft DNS Service running on Windows Server 2012 R2.

Active Directory server—Required by various solution components to function properly. The Microsoft AD Service runs on a Windows Server 2012 server.

Shared infrastructure—Uses DNS and authentication/authorization services from the existing infrastructure, or sets them up as part of the new virtual infrastructure.

IP network—Carries all network traffic with redundant cabling and switching. A shared IP network carries user and management traffic.

Storage network

The storage network is isolated and provides hosts with access to the array through one of the following two options:

Fibre Channel (FC)—Performs high-speed serial data transfer with a set of standard protocols. FC provides a standard data transport frame among servers and shared storage devices.

10 Gb Ethernet (iSCSI)—Enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network (a discovery sketch follows this list).
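For the iSCSI option, the following sketch adds a dynamic-discovery (SendTargets) address to a software iSCSI adapter and rescans it from the ESXi shell. The adapter name and portal address are hypothetical placeholders for an XtremIO iSCSI portal.

   # Point the software iSCSI adapter at the array's discovery portal
   esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.20:3260
   # Rescan the adapter to discover the new paths
   esxcli storage core adapter rescan --adapter=vmhba33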

XtremIO all-flash array

The XtremIO all-flash array includes the following components:

X-Brick—The fundamental scaling unit of the array: a physical chassis that contains two active storage controllers and a shelf of eMLC SSDs. When the XtremIO cluster scales, the array clusters multiple X-Bricks together with an InfiniBand back-end switch.

Storage controller (SC)—A physical computer (1U in size) that acts as a storage controller in the cluster, serving block data over the FC and iSCSI protocols. Storage controllers can access all SSDs in the same X-Brick.

Processor D—Represents one of two CPU sockets for each storage controller. Processor D is responsible for disk access.

Processor RC—Represents the other CPU socket, which is responsible for the router module (hashing of writes and lookups) and the controller module (metadata).

Battery backup unit (BBU)—Provides enough power to each storage controller to ensure that any in-flight data is de-staged to disk in the event of a power failure. Each BBU is 1U in size. The first X-Brick has two BBUs for redundancy; each additional X-Brick requires only one.

Disk array enclosure (DAE)—Houses the flash drives that the array uses; 2U in size.

InfiniBand switch—Connects multiple X-Bricks together; 1U in size. Two separate switches are usually deployed so that even the fabric that ties the controllers together is highly available.


Hardware resources

Table 1 lists the hardware used in this solution.

Table 1. Solution hardware

VMware vSphere servers:
  CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 700 virtual machines: 700 vCPUs, with a minimum of 175 physical cores.
  Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vSphere host. For 700 virtual machines: a minimum of 1,400 GB RAM, plus 2 GB for each physical server.
  Network: 2 x 10 GbE NICs per server; 2 HBAs per server, or 2 x 10 GbE NICs per server for data traffic.
  Note: You must add at least one server to the infrastructure beyond the minimum requirements to implement VMware vSphere HA functionality and to meet the listed minimums.

Network infrastructure:
  Minimum switching capacity: 2 physical switches; 2 x 10 GbE ports per VMware vSphere server for management; 2 ports per VMware vSphere server for the storage network (FC or iSCSI); 2 ports per storage controller for storage data (FC or iSCSI).

EMC XtremIO all-flash array:
  One X-Brick with 25 x 400 GB SSDs.

Shared infrastructure:
  In most cases, the customer environment already has infrastructure services such as Active Directory and DNS configured; the setup of these services is beyond the scope of this document. If implemented without existing infrastructure, the minimum requirements are: 2 physical servers; 16 GB RAM per server; 4 processor cores per server; 2 x 1 GbE ports per server.
  Note: You can migrate these services into the solution after deployment. However, the services must exist before the solution is deployed.

Note: For Intel Ivy Bridge or later processors, use 8 vCPUs per physical core.


Note: The solution supports a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying bandwidth and redundancy requirements are fulfilled.

Software resources

Table 2 lists the software used in this solution.

Table 2. Solution software

VMware vSphere:
  vSphere Server: Enterprise Edition, version 6.0
  vCenter Server: Enterprise Edition, version 6.0
  OS for vCenter Server: Microsoft Windows Server 2012 R2 Standard Edition (any OS supported for vCenter can be used)
  Microsoft SQL Server: version 2012 Standard Edition (any database supported for vCenter can be used)

EMC PowerPath/VE: use the latest version

XtremIO (for vSphere datastores):
  XtremIO XIOS Operating System: Release 3.0

EMC backup:
  Avamar: refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide
  Data Domain OS: refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide

Virtual machines (used for validation, but not required for deployment):
  Base OS: Microsoft Windows Server 2012 R2 Datacenter Edition
  VDBench (workload generator): version 5.0.4


Server configuration guidelines

Overview

When designing and ordering the compute layer of this VSPEX solution, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement.

If the virtual machine pool does not have a high level of peak or concurrent usage, reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and memory purchased.

Ivy Bridge updates

Testing on Intel Ivy Bridge series processors has shown significant increases in virtual machine density from the server resource perspective. If your server deployment comprises Ivy Bridge processors, we recommend increasing the vCPU-to-physical CPU (pCPU) ratio from 4:1 to 8:1. This essentially halves the number of server cores required to host the reference virtual machines.

Figure 8 shows results from the tested configurations.

Figure 8. Intel Ivy Bridge processors

Current VSPEX sizing guidelines require a maximum vCPU core to pCPU core ratio of 4:1, with a maximum 8:1 ratio for Ivy Bridge or later processors. This ratio was based on an average sampling of CPU technologies available at the time of testing. As CPU technologies advance, original equipment manufacturer (OEM) server vendors that are VSPEX partners might suggest higher ratios. Follow the updated guidance supplied by the OEM server vendor.


Table 3 lists the hardware resources used for the compute layer.

Table 3. Hardware resources for the compute layer

VMware vSphere servers:
  CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 700 virtual machines: 700 vCPUs, with a minimum of 175 physical cores.
  Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vSphere host. For 700 virtual machines: a minimum of 1,400 GB RAM, plus 2 GB for each physical server.
  Network (block): 2 x 10 GbE NICs per server; 2 HBAs per server, or 2 x 10 GbE NICs per server for iSCSI connections.

Note: Add at least one server to the infrastructure beyond the minimum requirements to implement VMware vSphere HA functionality and to meet the listed minimums.

Note: The solution supports a 10 GbE network or an equivalent 1 GbE network infrastructure, as long as the underlying bandwidth and redundancy requirements are fulfilled.

VMware vSphere memory virtualization for VSPEX

VMware vSphere 6.0 has a number of advanced features that help maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features and the items to consider when using them in the environment.

In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 9.


[Figure 9 depicts a hypervisor host with 64 GB of total server memory: the hypervisor itself consumes 2 GB, and four virtual machines consume 8 GB (4 RVMs), 16 GB (8 RVMs), 8 GB (4 RVMs), and 20 GB (10 RVMs) respectively, for a total of 54 GB used and 10 GB free.]

Figure 9. Hypervisor memory consumption

The technologies described in this section build on this basic pooling concept.

Memory compression

Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vSphere host. Using sophisticated techniques, such as ballooning and transparent page sharing, VMware vSphere can handle memory over-commitment without any performance degradation. However, if memory usage exceeds server capacity, vSphere might resort to swapping out portions of the memory of a virtual machine.


Non-Uniform Memory Access (NUMA)

vSphere 6.0 uses a NUMA load-balancer to assign a home node to a virtual machine. Because the home node allocates virtual machine memory, memory access is local and provides the best performance possible. Applications that do not directly support NUMA also benefit from this feature.

Transparent page sharing

Virtual machines running similar operating systems and applications typically have similar sets of memory content. Page sharing enables the hypervisor to reclaim redundant copies of memory pages and keep a single copy, which reduces total host memory consumption. If most of your application virtual machines run the same OS and application binaries, total memory usage can drop significantly, increasing consolidation ratios.

Memory ballooning

By using a balloon driver loaded in the guest OS, the hypervisor can reclaim host physical memory if memory resources are under contention, with little or no impact to the performance of the application.

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. These guidelines take into account vSphere memory overhead and the virtual machine memory settings.

vSphere memory overhead

Virtualizing memory resources requires some associated overhead. The memory space overhead has two components:

The fixed system overhead for the VMkernel

Additional overhead for each virtual machine

Memory overhead depends on the number of vCPUs and configured memory for the guest OS.

Allocating memory to virtual machines

Many factors determine the proper sizing for virtual machine memory in VSPEX architectures. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments for optimal results.



Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines cover jumbo frames, VLANs, and FC/iSCSI connections on XtremIO storage. For detailed network resource requirements, refer to Table 4.

Table 4. Hardware resources for the network layer

Component: Network infrastructure

Minimum switching capacity:

Block, iSCSI:
- 2 physical LAN switches
- Two 10 GbE ports per VMware vSphere server
- One 1 GbE port per storage processor for management

Block, FC:
- 2 physical LAN switches and 2 physical SAN switches
- Two FC ports per VMware vSphere server
- One 1 GbE port per storage processor for management

Note: The solution can use a 1 GbE network infrastructure as long as the underlying requirements for bandwidth and redundancy are fulfilled. The validated solution uses iSCSI for the host-to-array connection; customers can use their existing FC or iSCSI network infrastructure.

VLANs

Isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and the management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation with VLANs is sufficient.

As a best practice, EMC recommends that you use three VLANs for:

Client access

Storage (for iSCSI and vMotion)

Management

Figure 10 depicts the VLANs and the network connectivity requirements for XtremIO arrays.




Figure 10. Required networks for XtremIO storage

The client access network is for users of the system, or clients, to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices recommend additional network isolation for cluster traffic, virtualization layer communication, and other features. Implement these additional networks if necessary.

Enable jumbo frames (for iSCSI)

This solution recommends setting the maximum transmission unit (MTU) to 9,000 (jumbo frames) for efficient storage and migration traffic. Refer to your switch vendor's guidelines to enable jumbo frames on the switch ports used for storage and host connections.

Storage configuration guidelines

Overview

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.

VMware vSphere 6.0 supports more than one method of presenting storage to virtual machines. The tested solutions use block protocols (FC/iSCSI), and the storage layout described in this section adheres to all current best practices. If required, you can modify this solution based on your system usage and load requirements.



XtremIO X-Brick scalability

XtremIO storage clusters have a fully distributed, scale-out design that allows linear increases in both capacity and performance, providing infrastructure agility. XtremIO uses a building-block approach in which the array is scaled by adding X-Bricks. In clusters of two or more X-Bricks, XtremIO uses a redundant 40 Gb/s quad data rate (QDR) InfiniBand network for back-end connectivity among the storage controllers, ensuring a highly available, ultra-low-latency network. Host access is provided by two N-way active controllers per X-Brick, for linear scaling of performance and capacity and simplified support of growing virtual environments. As a result, as capacity in the array grows, performance is enhanced by adding more storage controllers.

Figure 11. Single X-Brick XtremIO storage

As shown in Figure 11, the single X-Brick is the basic building block of an XtremIO array. Each X-Brick includes:

- One 2U Disk Array Enclosure (DAE), containing:
  - 25 eMLC SSDs (standard X-Brick) or 13 eMLC SSDs (10 TB Starter X-Brick [5 TB])
  - Two redundant power supply units (PSUs)
  - Two redundant SAS interconnect modules
- One Battery Backup Unit
- Two 1U Storage Controllers (redundant storage processors), each of which includes:
  - Two redundant PSUs
  - Two 8 Gb/s FC ports
  - Two 10 GbE iSCSI ports
  - Two 40 Gb/s InfiniBand ports
  - One 1 Gb/s management/IPMI port

Note: For details on X-Brick racking and cabinet requirements, refer to the EMC XtremIO Storage Array Site Preparation Guide.



Figure 12 shows the different cluster configurations as you scale out. You can start with a single X-Brick and, as you grow, add a second, third, and fourth X-Brick. Performance scales linearly as X-Bricks are added.

Figure 12. Cluster configuration as single and multiple X-Brick cluster

Note: A 10 TB Starter X-Brick (5 TB) is physically similar to a single X-Brick cluster, except for the number of SSDs in the DAE (13 SSDs in a 10 TB Starter X-Brick [5 TB] instead of 25 SSDs in a standard single X-Brick).

VMware vSphere storage virtualization for VSPEX

VMware ESXi provides host-level storage virtualization: it virtualizes the physical storage and presents the virtualized storage to the virtual machines.

A virtual machine stores its OS and all other files related to its activities in a virtual disk. The virtual disk itself consists of one or more files. VMware uses a virtual SCSI controller to present virtual disks to the guest OS running inside the virtual machine.

Virtual disks reside on a datastore. Depending on the protocol used, a datastore can be either a VMware VMFS datastore or an NFS datastore. An additional option, raw device mapping (RDM), allows the virtual infrastructure to connect a physical device directly to a virtual machine. These virtual disk types are shown in Figure 13.




Figure 13. VMware virtual disk types

VMFS

VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. VMFS can be deployed over any SCSI-based local or network storage.

Raw Device Mapping (RDM)

VMware also provides RDM, which allows a virtual machine to directly access a volume on the physical storage. Only use RDM with FC or iSCSI.

VSPEX storage building blocks

Sizing the storage system to meet virtual server IOPS is a complicated process. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.

VSPEX uses a building-block approach to reduce complexity. A building block is a set of disks that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disks to create an XtremIO data protection group that supports the needs of the private cloud environment.

For VSPEX solutions enabled with the XtremIO array, there are two validated configuration scales: a Starter X-Brick (5 TB) with 13 SSDs and a fully populated single X-Brick (10 TB) with 25 SSDs. Each scale supports a different number of virtual servers, so VSPEX solutions can be deployed at either of the scale points below to obtain the ideal configuration while guaranteeing a given performance level.

Building block for Starter X-Brick

The Starter X-Brick building block can support up to 350 virtual servers with 13 SSDs in the XtremIO data protection group, as shown in Figure 14.




Figure 14. XtremIO Starter X-Brick building block for 350 virtual machines

This is the validated solution for the VSPEX architecture. In the Starter X-Brick configuration, the raw capacity is 5 TB and the unique data percentage is 15 percent. Detailed information about the test profile can be found in Chapter 5. This building block can be expanded by adding 12 more SSDs, which enables the data protection group to support up to 700 virtual servers.

Building block for a single X-Brick

The second building block can contain up to 700 virtual servers. It contains 25 SSD drives, as shown in Figure 15.


Figure 15. XtremIO single X-Brick building block for 700 virtual machines

This is the validated solution for the VSPEX architecture. In the single-X-Brick configuration, the raw capacity is 10 TB, and the unique data percentage is 15 percent. Detailed information about the test profile can be found in Chapter 5.

Table 5 lists the different scales of a single XtremIO array and the number of virtual servers supported at each scale.

Table 5. Different numbers of virtual machines at different scalable scenarios

Virtual servers | Array scale
350  | Starter X-Brick (5 TB)
700  | Single X-Brick (10 TB)
1400 | Two X-Bricks (20 TB)
2800 | Four X-Bricks (40 TB)
4200 | Six X-Bricks (60 TB)

Note: The number of supported virtual machines is based on the unique data percentage of 15 percent.
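The scale points in Table 5 lend themselves to a simple lookup. The following minimal Python sketch (illustrative only, not part of the validated tooling) returns the smallest validated configuration that fits a given number of reference virtual machines:

    # Scale points from Table 5: (maximum reference virtual machines, configuration)
    SCALE_POINTS = [
        (350, "Starter X-Brick (5 TB)"),
        (700, "Single X-Brick (10 TB)"),
        (1400, "Two X-Bricks (20 TB)"),
        (2800, "Four X-Bricks (40 TB)"),
        (4200, "Six X-Bricks (60 TB)"),
    ]

    def smallest_array(rvms):
        """Return the smallest validated XtremIO scale that fits the workload."""
        for max_vms, config in SCALE_POINTS:
            if rvms <= max_vms:
                return config
        raise ValueError("workload exceeds the largest validated configuration")

    print(smallest_array(538))   # Single X-Brick (10 TB)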


Conclusion

The scale levels shown in Figure 16 highlight the entry points and supported maximums for the arrays in the VSPEX private cloud environment. The entry points represent optimal model demarcations in terms of the number of virtual machines within the environment, which helps determine which XtremIO array to choose based on your requirements. Using the building-block approach described earlier, you can configure any of the listed arrays with fewer virtual machines than the supported maximum.

Figure 16. Maximum scale levels and entry points of different arrays


High-availability and failover

Overview

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When the solution is implemented following the instructions in this document, business operations survive single-unit failures with little or no impact.

Virtualization layer

Configure high availability in the virtualization layer, and enable the hypervisor to automatically restart failed virtual machines. Figure 17 illustrates the hypervisor layer responding to a failure in the compute layer.


Figure 17. High availability at the virtualization layer

By implementing high availability at the virtualization layer, the infrastructure attempts to keep as many services running as possible, even during a hardware failure.

Compute layer

While you have flexibility in the choice of servers for the compute layer, we recommend enterprise-class servers designed for the data center. Servers of this class have redundant power supplies, as shown in Figure 18. Connect these servers to separate power distribution units (PDUs), following your server vendor's best practices.

Figure 18. Redundant power supplies

To configure high availability in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment, even with a server failure, as demonstrated in Figure 17.



Network layer

The advanced networking features of the XtremIO series provide protection against network connection failures at the array. Each vSphere host has multiple connections to the user and storage Ethernet networks to guard against link failures, as shown in Figure 19. Spread these connections across multiple Ethernet switches to guard against component failure in the network.


Figure 19. Network layer high availability

Storage layer

XtremIO storage is designed for five-nines (99.999 percent) availability through redundant components throughout the array, as shown in Figure 20. All of the array components can continue operating in case of hardware failure. XtremIO Data Protection (XDP) delivers the protection of RAID 6 while exceeding the performance of RAID 1 and the capacity utilization of RAID 5, protecting against data loss due to drive failures.

Figure 20. XtremIO high availability

EMC storage arrays are designed to be highly available by default. Use the installation guides to ensure that there are no single unit failures that result in data loss or unavailability.



Backup and recovery configuration guidelines

For details regarding backup and recovery configuration for this VSPEX Private Cloud solution, refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.


Chapter 5 Sizing the Environment

This chapter presents the following topics:

Overview .................................................................................................................. 62

Reference workload.................................................................................................. 62

Scaling out ............................................................................................................... 63

Applying the reference workload ............................................................................. 63

Quick assessment .................................................................................................... 65


Overview

The following sections define the reference workload used to size and implement the VSPEX architectures, explain how to correlate that reference workload to customer workloads, and describe how doing so can change the final server and network configuration.

Modify the storage definition by adding drives for greater capacity and performance and by adding X-Bricks to improve the cluster performance. The cluster layouts provide support for the appropriate number of virtual machines at the defined performance level.

Reference workload

Overview

When you move an existing server to a virtual infrastructure, you can gain efficiency by right-sizing the virtual hardware resources assigned to that system.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own requirements that rarely fit a pre-defined idea of a virtual machine. In any discussion about virtual infrastructures, you need to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.

To simplify this discussion, this section presents a representative customer reference workload. By comparing the actual customer usage to this reference workload, you can determine how to size the solution.

Define the reference workload

VSPEX Private Cloud solutions define a reference virtual machine (RVM) workload that represents a common point of comparison. Because XtremIO has an inline deduplication feature, it is critical to determine the unique data percentage, as this parameter affects XtremIO physical capacity usage. In this validated solution, the unique data is set to 15 percent. Table 6 describes the parameters.

Table 6. VSPEX Private Cloud workload

Parameter | Value
Virtual machine OS | Windows Server 2012 R2
Virtual CPUs | 1
Virtual CPUs per physical core (maximum) | 4
Memory per virtual machine | 2 GB
IOPS per virtual machine | 25
I/O size | 8 KB
I/O pattern | Fully random, skew = 0.5
I/O read percentage | 67%
Virtual machine storage capacity | 100 GB
Unique data | 15%




This specification for a virtual machine does not represent any specific application. Rather, it represents a single common point of reference by which to measure other virtual machines.

Scaling out

XtremIO is designed to scale from a single X-Brick to a cluster of multiple X-Bricks (up to six X-Bricks in the current code release). Unlike most traditional storage systems, as the number of X-Bricks grows, so do capacity, throughput, and IOPS; performance scales linearly with the growth of the deployment. Whenever additional storage and compute resources (such as servers and drives) are needed, you can add them modularly. Storage and compute resources grow together, so the balance between them is maintained.

Applying the reference workload

Overview

When you consider moving an existing server into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.

The solution creates storage resources that are sufficient to host a target number of reference virtual machines with the characteristics shown in Table 6. Customer virtual machines might not exactly match these specifications. In that case, define each customer virtual machine as the equivalent of some number of reference virtual machines, assume those reference virtual machines are in use in the data protection group, and continue to provision virtual machines from the pool until no resources remain.

Example 1: Custom-built application

A small custom-built application server must move into this virtual infrastructure. The physical hardware that supports the application is not fully used. A careful analysis of the existing application reveals that it can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from four IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.

Based on these numbers, the application needs the following resources:

CPU of one reference virtual machine

Memory of two reference virtual machines

Storage of one reference virtual machine

I/Os of one reference virtual machine



In this example, a corresponding virtual machine uses the resources for two of the reference virtual machines. If implemented on a single brick XtremIO storage system, which can support up to 700 virtual machines, resources for 698 reference virtual machines remain.

Example 2: Point of sale system

The database server for a customer's point-of-sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle.

The requirements to virtualize this application are:

CPUs of four reference virtual machines

Memory of eight reference virtual machines

Storage of two reference virtual machines

I/Os of eight reference virtual machines

In this case, the corresponding virtual machine uses the resources of eight reference virtual machines. If implemented on a single brick XtremIO storage system, which can support up to 700 virtual machines, resources for 692 reference virtual machines remain.

Example 3: Web server

The customer's web server must move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle.

The requirements to virtualize this application are:

CPUs of two reference virtual machines

Memory of four reference virtual machines

Storage of one reference virtual machine

I/Os of two reference virtual machines

In this case, the corresponding virtual machine uses the resources of four reference virtual machines. If implemented on a single brick XtremIO storage system, which can support up to 700 virtual machines, resources for 696 reference virtual machines remain.

Example 4: Decision-support database

The database server for a customer's decision-support system must move into this virtual infrastructure. It is currently running on a physical system with ten CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle.



The requirements to virtualize this application are:

CPUs of 10 reference virtual machines

Memory of 32 reference virtual machines

Storage of 52 reference virtual machines

I/Os of 28 reference virtual machines

In this case, the corresponding virtual machine uses the resources of 52 reference virtual machines. If implemented on a single brick XtremIO storage system, which can support up to 700 virtual machines, resources for 648 reference virtual machines remain.

Summary of examples

These four examples illustrate the flexibility of the resource pool model. In all four examples, the workloads reduce the amount of available resources in the pool. With business growth, suppose the customer must implement a much larger virtual environment to support one custom-built application, one point of sale system, two web servers, and ten decision support databases. Using the same strategy, calculate the number of equivalent reference virtual machines: the total is 538 reference virtual machines. All of these can be implemented on the same virtual infrastructure with an initial capacity of 700 reference virtual machines, which a single X-Brick supports. Resources for 162 reference virtual machines remain in the pool.
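The arithmetic behind that total can be checked in a couple of lines of Python; the per-application equivalents come from the four examples above:

    # Equivalent reference virtual machines from Examples 1-4
    custom_app, pos_system, web_server, dss_db = 2, 8, 4, 52

    total = custom_app + pos_system + 2 * web_server + 10 * dss_db
    print(total)          # 538
    print(700 - total)    # 162 RVMs remain on a single X-Brick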

In more advanced cases, tradeoffs might be necessary between memory and I/O or other resources, where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are beyond the scope of this document. Examine the change in resource balance, determine the new level of requirements, and add the virtual machines to the infrastructure with the method described in the examples.

Quick assessment

Overview

An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of vCPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual machines required from the resource pool. Applying the reference workload provides examples of this process.

Complete the worksheet for each application listed in Table 7. Each row requires inputs on four different resources: CPU, memory, IOPS, and capacity.



Table 7. Customer Sizing Worksheet example (blank)

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines

Example application
  Resource requirements: | | | | NA
  Equivalent reference virtual machines: | | | |

CPU requirements

Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between pCPU cores and vCPU cores regardless of pCPU utilization. In reality, consider whether the target application can effectively use all of the CPUs presented. Use a performance-monitoring tool, such as esxtop on the vSphere hosts, to examine the CPU utilization counter for each CPU. If the CPUs are utilized evenly, implement that number of vCPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of vCPUs required.

In any performance-monitoring operation, collect data samples over a period that includes all operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.
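As an illustration of that planning rule, the following sketch derives both values from a set of hypothetical samples using Python's standard library:

    import statistics

    # Hypothetical IOPS samples collected across an observation window
    samples = [120, 135, 150, 160, 180, 210, 240, 260, 300, 950]

    p95 = statistics.quantiles(samples, n=100)[94]   # 95th percentile
    peak = max(samples)
    print(p95, peak)   # plan against p95 unless peaks must always be absorbed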

Memory requirements

Server memory plays a key role in ensuring application functionality and performance, and each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the memory currently available to the system, and monitor free memory with a performance-monitoring tool, such as VMware esxtop, to determine memory efficiency.

Storage performance requirements

The storage performance requirements for an application are usually the least understood aspect of performance. Three components become important when discussing the I/O performance of a system:

The number of requests coming in, or IOPS

The size of the request, or I/O size. For example, a request for 4 KB of data is easier and faster to process than a request for 4 MB of data.

The average I/O response time, or I/O latency



I/O operations per second

The reference virtual machine calls for 25 IOPS. To monitor IOPS on an existing system, use a performance-monitoring tool such as VMware esxtop, which provides several useful counters. The most common are:

Physical Disk\Commands/sec

Physical Disk\Reads/sec

Physical Disk\Writes/sec

Physical Disk\Average Guest Millisecond/Command

The reference virtual machine assumes a 2:1 read-to-write ratio. Use these counters to determine the total number of IOPS and the approximate read-to-write ratio for the customer application.

I/O size

The I/O size is important because smaller I/O requests are faster and easier to process than large ones. The reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are even powers of two: 4 KB, 8 KB, 16 KB, 32 KB, and so on. However, because the performance counter reports a simple average, it is common to see values such as 11 KB or 15 KB instead.

The reference virtual machine assumes an 8 KB I/O size. If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the large I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application uses mostly 32 KB I/O requests, use a factor of four (32 KB/8 KB = 4). If that application generates 100 IOPS at 32 KB, the factor indicates you should plan for 400 IOPS, since the reference virtual machine assumes 8 KB I/O sizes.
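The scaling rule in the preceding paragraph reduces to a one-line calculation; this sketch applies it to the 32 KB example from the text:

    import math

    observed_iops, observed_io_kb = 100, 32
    REFERENCE_IO_KB = 8

    # Scale only when the observed I/O size exceeds the 8 KB reference size
    factor = max(1, math.ceil(observed_io_kb / REFERENCE_IO_KB))
    print(observed_iops * factor)   # 400 IOPS to plan for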

I/O latency

The average I/O response time, or I/O latency, is a measurement of how quickly the storage system processes I/O requests. The VSPEX solutions meet a target average I/O latency of 20 ms. The recommendations in this document allow the system to continue to meet that target; however, monitor the system and re-evaluate the resource pool utilization if needed.

To monitor I/O latency, use the “Physical Disk \ Average Guest Millisecond/Command” counter (block storage) in esxtop. If the I/O latency is continuously over the target, re-evaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended.

Unique data

XtremIO automatically and globally deduplicates data as it enters the system. Deduplication is performed in real time rather than as a post-processing operation, which makes XtremIO an ideal capacity-saving storage array. The consumed capacity depends on the deduplication ratio of the data. This solution uses the VDbench tool to generate data at a set deduplication ratio; the reference virtual machine uses 15 percent unique data. From the XtremIO XMS GUI, monitor the deduplication ratio to verify the rate set in VDbench.

Storage capacity requirements

The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine the disk space used and add an appropriate factor to accommodate growth.



For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.
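Worked through in code, the example above looks like this (figures taken from the paragraph; the script is illustrative only):

    used_gb = 40          # space currently consumed
    growth = 0.20         # anticipated growth over the next year
    required_gb = used_gb * (1 + growth)
    print(required_gb)    # 48.0 GB, before any reserve for patches and swap files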

Determining equivalent reference virtual machines

With all of the resources defined, determine an appropriate value for the Equivalent reference virtual machines line by using the relationships in Table 8. Round all values up to the nearest whole number.

Table 8. Reference virtual machine resources

Resource | Value for reference virtual machine | Relationship between requirements and equivalent reference virtual machines
CPU | 1 | Equivalent RVMs = resource requirements
Memory | 2 | Equivalent RVMs = (resource requirements) / 2
IOPS | 25 | Equivalent RVMs = (resource requirements) / 25
Capacity | 100 | Equivalent RVMs = (resource requirements) x 0.15 / 100

For example, the point of sale system database used in Example 2: Point of sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 30 GB of physical capacity (200 GB of logical capacity at 15 percent unique data is 200 x 0.15 = 30 GB). This translates to four reference virtual machines of CPU, eight reference virtual machines of memory, eight reference virtual machines of IOPS, and one reference virtual machine of capacity. Table 9 shows how that machine fits into the worksheet row.



Table 9. Customer Sizing Worksheet example with user numbers added

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent reference virtual machines

Example application
  Resource requirements: 4 | 16 | 200 | 30 | N/A
  Equivalent reference virtual machines: 4 | 8 | 8 | 1 | 8

Use the highest value in the row to fill in the Equivalent reference virtual machines column. As shown in Figure 21, the example requires eight reference virtual machines.

Figure 21. Required resource from the reference virtual machine pool
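The Table 8 relationships and the round-up rule can be captured in a short helper. This minimal Python sketch (illustrative, not part of the validated tooling) reproduces the Table 9 row from the raw requirements:

    import math

    def equivalent_rvms(vcpus, memory_gb, iops, capacity_gb, unique_data=0.15):
        """Apply the Table 8 relationships, rounding each value up."""
        per_resource = {
            "cpu": math.ceil(vcpus / 1),
            "memory": math.ceil(memory_gb / 2),
            "iops": math.ceil(iops / 25),
            "capacity": math.ceil(capacity_gb * unique_data / 100),
        }
        return per_resource, max(per_resource.values())

    print(equivalent_rvms(4, 16, 200, 200))
    # ({'cpu': 4, 'memory': 8, 'iops': 8, 'capacity': 1}, 8)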

Implementation example - Stage 1

A customer wants to build a virtual infrastructure to support one custom-built application, one point of sale system, and one web server. The customer computes the sum of the Equivalent reference virtual machines column on the right side of the worksheet, as shown in Table 10, to calculate the total number of reference virtual machines required. The table shows the result of the calculation, rounded up to the nearest whole number.


Table 10. Example applications - stage 1

Columns: CPU (virtual CPUs), Memory (GB) [server resources] | IOPS, Capacity (GB) [storage resources] | Reference virtual machines

Example application #1: Custom-built application
  Resource requirements: 1 | 3 | 15 | 5 | NA
  Equivalent reference virtual machines: 1 | 2 | 1 | 1 | 2

Example application #2: Point of sale system
  Resource requirements: 4 | 16 | 200 | 30 | NA
  Equivalent reference virtual machines: 4 | 8 | 8 | 1 | 8

Example application #3: Web server
  Resource requirements: 2 | 8 | 50 | 4 | NA
  Equivalent reference virtual machines: 2 | 4 | 2 | 1 | 4

Total equivalent reference virtual machines: 14

This example requires 14 reference virtual machines. According to the sizing guidelines, a single X-Brick with 25 SSDs provides sufficient resources for the current needs and room for growth, but a Starter X-Brick, which supports up to 350 reference virtual machines, is also sufficient.

Implementation example - Stage 2

Next, the customer must add a decision support database to this virtual infrastructure. Using the same strategy, calculate the number of reference virtual machines required, as shown in Table 11.


Table 11. Example applications - stage 2

Columns: CPU (virtual CPUs), Memory (GB) [server resources] | IOPS, Capacity (GB) [storage resources] | Equivalent reference virtual machines

Example application #1: Custom-built application
  Resource requirements: 1 | 3 | 15 | 5 | N/A
  Equivalent reference virtual machines: 1 | 2 | 1 | 1 | 2

Example application #2: Point of sale system
  Resource requirements: 4 | 16 | 200 | 30 | N/A
  Equivalent reference virtual machines: 4 | 8 | 8 | 1 | 8

Example application #3: Web server
  Resource requirements: 2 | 8 | 50 | 4 | N/A
  Equivalent reference virtual machines: 2 | 4 | 2 | 1 | 4

Example application #4: Decision support database
  Resource requirements: 10 | 64 | 700 | 768 | N/A
  Equivalent reference virtual machines: 10 | 32 | 28 | 8 | 32

Total equivalent reference virtual machines: 46

This example requires 46 reference virtual machines. According to the sizing guidelines, a single X-Brick with 25 SSDs, which supports up to 700 reference virtual machines, provides sufficient resources for the current needs and room for growth. Resources for 654 reference virtual machines remain after implementing one single X-Brick.

Fine-tuning hardware resources

This process usually determines the recommended hardware size for servers and storage. In some cases, however, you may want to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this document, but you can perform additional customization at this point.

Server resources

For some workloads, the relationship between server needs and storage needs does not match what is outlined in the reference virtual machine. You should size the server and storage layers separately in this scenario.

To do this, first total the resource requirements for the server components, as shown in Table 12. In the Server resource component totals row at the bottom of the worksheet, add up the server resource requirements from the applications in the table.



Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Storage component totals line at the bottom of Table 12 describes the required amount of storage.

Table 12. Server resource component totals

Columns: CPU (virtual CPUs), Memory (GB) [server resources] | IOPS, Capacity (GB) [storage resources] | Reference virtual machines

Example application #1: Custom-built application
  Resource requirements: 1 | 3 | 15 | 5 |
  Equivalent reference virtual machines: 1 | 2 | 1 | 1 | 2

Example application #2: Point of sale system
  Resource requirements: 4 | 16 | 200 | 30 |
  Equivalent reference virtual machines: 4 | 8 | 8 | 1 | 8

Example application #3: Web server
  Resource requirements: 2 | 8 | 50 | 4 |
  Equivalent reference virtual machines: 2 | 4 | 2 | 1 | 4

Example application #4: Decision support database
  Resource requirements: 10 | 64 | 700 | 768 |
  Equivalent reference virtual machines: 10 | 32 | 28 | 8 | 32

Total equivalent reference virtual machines: 46

Server and storage resource component totals: 17 vCPUs, 91 GB of memory

Note: Calculate the sum of the resource requirements row for each application, not the equivalent reference virtual machines, to get the server and storage component totals.

In this example, the target architecture requires 17 vCPUs and 91 GB of memory. With four virtual machines per physical processor core, and with no memory over-provisioning, the architecture requires five physical processor cores and 91 GB of memory. With these numbers, the solution can be implemented effectively with fewer server resources.
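The component totals reduce to simple sums; a minimal sketch, using the resource-requirements rows from Table 12:

    import math

    # (vCPUs, memory GB) resource requirements per application, from Table 12
    apps = [(1, 3), (4, 16), (2, 8), (10, 64)]

    total_vcpus = sum(cpu for cpu, _ in apps)       # 17
    total_memory_gb = sum(mem for _, mem in apps)   # 91
    cores = math.ceil(total_vcpus / 4)              # 5 cores at 4 vCPUs per core
    print(total_vcpus, total_memory_gb, cores)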

Note: Keep high-availability requirements in mind when customizing the hardware resource.


EMC VSPEX Sizing Tool

To simplify the sizing of this solution, EMC has produced the VSPEX Sizing Tool. This tool uses the same sizing process described in this chapter and also incorporates sizing for other VSPEX solutions.

The VSPEX Sizing Tool enables you to input the resource requirements from the customer's answers in the qualification worksheet. After you complete the inputs, the tool generates a series of recommendations that allow you to validate your sizing assumptions and provide platform configuration information that meets those requirements. You can access the tool at the following location: EMC VSPEX Sizing Tool.



Chapter 6 VSPEX Solution Implementation

This chapter presents the following topics:

Overview .................................................................................................................. 76

Pre-deployment tasks .............................................................................................. 76

Network implementation .......................................................................................... 79

Prepare and configure the storage array .................................................................. 81

Install and configure the VMware vSphere hosts ..................................................... 85

Install and configure Microsoft SQL Server databases ............................................. 91

Install and configure VMware vCenter Server ........................................................... 93

Provisioning a virtual machine ................................................................................. 95

Summary .................................................................................................................. 95


Overview

The deployment process consists of the stages listed in Table 13; the table also includes references to the sections that contain the relevant procedures. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure.

Table 13. Deployment process overview

Stage | Description | Reference
1 | Verify prerequisites. | Pre-deployment tasks
2 | Obtain the deployment tools. | Deployment prerequisites
3 | Gather customer configuration data. | Customer configuration data
4 | Rack and cable the components. | Vendor documentation
5 | Configure the switches and networks; connect to the customer network. | Network implementation
6 | Install and configure the XtremIO. | Prepare and configure the storage array
7 | Configure virtual machine datastores. | Prepare and configure the storage array
8 | Install and configure the servers. | Install and configure the VMware vSphere hosts
9 | Set up Microsoft SQL Server (used by VMware vCenter). | Install and configure Microsoft SQL Server databases
10 | Install and configure vCenter Server and virtual machine networking. | Install and configure VMware vCenter Server

Pre-deployment tasks

The pre-deployment tasks listed in Table 14 include procedures that are not directly related to environment installation and configuration but whose results are needed at the time of installation, such as collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.


Table 14. Tasks for pre-deployment

Task | Description | Reference
Gather documents | Gather the related documents listed in Appendix A. These documents provide setup procedures and deployment best practices for the various components of the solution. | Appendix A
Gather tools | Gather the required and optional tools for the deployment. Use Table 15 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process. | Table 15
Gather data | Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration worksheet for reference during the deployment process. |

Deployment prerequisites

Table 15 lists the hardware, software, and licenses required to configure the solution. For more information, refer to Table 1 and Table 2.

Table 15. Deployment prerequisites checklist

Requirement | Description | Reference

Hardware:
- Physical servers to host the virtual servers: sufficient physical server capacity to host 700 virtual servers (Table 1)
- VMware vSphere servers to host the virtual infrastructure servers (Note: The existing infrastructure may already meet this requirement.)
- Switch port capacity and capabilities as required by the virtual server infrastructure
- EMC XtremIO single brick (700 virtual machines): multiprotocol storage array with the required disk layout

Software:
- VMware ESXi installation media
- VMware vCenter Server installation media
- EMC VSI for VMware vSphere: Unified Storage Management (EMC Online Support)
- EMC VSI for VMware vSphere: Storage Viewer (EMC Online Support)
- Microsoft Windows Server 2012 installation media (suggested OS for VMware vCenter)
- Microsoft SQL Server 2012 or newer installation media (Note: This requirement may be covered by the existing infrastructure.)
- VMware VAAI plug-in (EMC Online Support)
- Microsoft Windows Server 2012 R2 Datacenter Edition installation media (suggested OS for virtual machine guests)

Licenses:
- VMware vCenter license key
- VMware ESXi license keys
- Microsoft Windows Server 2012 R2 Standard Edition (or higher) license keys
- Microsoft Windows Server 2012 R2 Datacenter Edition license keys (Note: An existing Microsoft Key Management Server (KMS) may cover this requirement.)
- Microsoft SQL Server license key (Note: The existing infrastructure may already meet this requirement.)




Customer configuration data

Gather information such as IP addresses and hostnames as part of the planning process to reduce time onsite.

The Customer configuration worksheet provides a set of tables to maintain a record of relevant customer information. Add, record, and modify information as needed during the deployment process.



Network implementation

This section describes the network infrastructure requirements needed to support this architecture. Table 16 provides a summary of the tasks for network configuration, and references for further information.

Table 16. Tasks for switch and network configuration

Task | Description | Reference
Configure the infrastructure network | Configure the storage array and ESXi host infrastructure networking. | Prepare and configure the storage array; Install and configure the VMware vSphere hosts
Configure VLANs | Configure private and public VLANs as required. | Your vendor's switch configuration guide
Complete the network cabling | Connect the switch interconnect ports, the XtremIO front-end ports, and the ESXi server ports. |

Prepare network switches

For validated levels of performance and high availability, this solution requires the switching capacity listed in Table 1. You do not need new hardware if the existing infrastructure meets the requirements.

Configure infrastructure network

The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports, to provide both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 22 shows a sample redundant infrastructure for this solution. The diagram illustrates the use of redundant switches and links to ensure that there are no single points of failure.

In Figure 22, converged switches give customers different protocol options (FC or iSCSI) for the block storage network. Existing FC switches are acceptable for the FC option; for iSCSI, use 10 Gb Ethernet network switches.



Figure 22. Sample Ethernet network architecture

Configure VLANs

Ensure that there are adequate network switch ports for the ESXi hosts. EMC recommends configuring the ESXi hosts with three VLANs:

Customer Data Network—Virtual machine networking (these are customer-facing networks, which can be separated if needed).

Storage Network—XtremIO data networking (private network).

Management Network—vMotion and management networking (private network).

Configure jumbo frames (iSCSI only)

Use jumbo frames for the iSCSI protocol: set the maximum transmission unit (MTU) to 9,000 on the switch ports that carry iSCSI storage traffic. Consult your switch configuration guide for instructions.



Ensure that all solution servers, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is a complete connection to the existing customer network.

Note: The new equipment is connected to the existing customer network. Ensure that unexpected interactions do not cause service issues on the customer network.

Prepare and configure the storage array

Implementation instructions and best practices may vary depending on the storage network protocol selected for the solution. Follow these steps in each case:

1. Configure the XtremIO array, including registering the host initiator groups.

2. Provision storage and configure LUN masking for the ESXi hosts.

The following sections explain the options for each step separately, depending on whether the FC or iSCSI protocol is selected.

XtremIO configuration

This section describes how to configure the XtremIO storage array for host access using a block-only protocol such as FC or iSCSI. In this solution, XtremIO provides data storage for VMware hosts. Table 17 describes the XtremIO configuration tasks.

Table 17. Tasks for XtremIO configuration

Task Description Reference

Prepare the XtremIO

Physically install the XtremIO hardware with the procedures in the product documentation.

XtremIO Storage Array Operation Guide

XtremIO Storage Array Site Preparation Guide version 3.0

XtremIO Storage Array User Guide version 3.0

Your vendor’s switch configuration guide

Set up the initial XtremIO configuration

Configure the IP addresses and other key parameters on the XtremIO.

Provision storage for VMware hosts

Create the storage areas required for the solution.

Prepare the XtremIO

The XtremIO Storage Array Operation Guide provides instructions to assemble, rack, cable, and power up the XtremIO. There are no specific setup steps for this solution.

Set up the initial XtremIO configuration

After completing the initial XtremIO array setup, configure key information about the existing environment so that the storage array can communicate with other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:

DNS

NTP

Storage network interfaces



For data connections using the FC protocol: Ensure that one or more servers are connected to the XtremIO storage system, either directly or through qualified FC switches. Refer to the EMC Host Connectivity Guide for VMware ESX Server for detailed instructions.

For data connections using the iSCSI protocol: Connect one or more servers to the XtremIO storage system, either directly or through qualified IP switches. Refer to the EMC Host Connectivity Guide for VMware ESX Server for detailed instructions.

Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information:

1. Set up a storage network IP address.

Logically isolate the storage network from the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between hosts and storage.

2. Enable jumbo frames on the XtremIO front-end iSCSI ports.

Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment. To enable the jumbo frames option:

a. From the menu bar, click Administration to display the Administration workspace.

b. Select Cluster > iSCSI Ports Configuration from the left pane. The iSCSI Ports Configuration screen appears.

c. Under Port Properties Configuration, select Enable Jumbo Frames.

d. Set the MTU value by using the up and down arrows.

e. Click Apply.

The reference documents listed in Appendix A provide more information on how to configure the XtremIO platform. Storage configuration guidelines provide more information on the disk layout.

Provision storage for VMware hosts

This section describes provisioning storage for VMware hosts. You can define various quantities of disk space as volumes in an active cluster. Volumes have the following definitions:

Volume size—The quantity of disk space reserved for the volume

LB size—The logical block size in bytes

Alignment-offset—A value for preventing unaligned access performance problems

Note: In the GUI, selecting a predefined volume type defines the alignment-offset and LB size values. In the CLI, you can define the alignment-offset and LB size values separately.


This section explains how to manage volumes using the XtremIO Storage Array GUI. Complete the following steps in the XtremIO GUI to configure LUNs to store virtual servers:

1. When XtremIO initializes during the installation process, the data protection domain is created automatically. Provision the LUNs based on the sizing information in Chapter 4. This example uses the array recommended maximums described in Chapter 4.

a. Log in to the XtremIO GUI.

b. From the menu, click Configuration.

c. From the Volumes pane, click Add, as shown in Figure 23.

Figure 23. Adding volumes

d. In the Add New Volumes screen, shown in Figure 24, define the following:

i. Name—The name of the volume

ii. Size—The amount of disk space allocated for this volume

iii. Volume Type—Select one of the following types that define the LB size and alignment-offset:

(1) Normal (512 LBs)

(2) 4 KB LBs

(3) Legacy Windows (offset:63)

iv. Small IO Alerts—Set to enabled if you want an alert to be sent when small IOs (<4 KB) are detected.

v. Unaligned IO Alerts—Set to enabled if you want an alert to be sent when unaligned I/Os are detected.

vi. VAAI TP Alerts—Set to enabled if you want an alert to be sent when the storage capacity reaches the set limit.


Figure 24. Volume summary

e. Proceed as follows:

i. If you do not want to add the new volumes to a folder, click Finish; the new volumes are created and appear in the root within the Volumes pane of the Configuration window.

ii. If you want to add the new volumes to a folder:

(1) Click Next.

(2) Select the desired folder (or click New Folder to create a new one).

(3) Click Finish; the new volumes are created and appear in the selected folder within the Volumes pane of the Configuration window.

Table 18 depicts a single brick storage allocation layout for 700 virtual machines in this solution.


Table 18. Storage allocation table for block data

Configuration | Available physical capacity (TB) | Number of SSD drives (400 GB) for a single brick | Number of LUNs for a single brick | Volume capacity (TB)
700 virtual servers | 7.2 | 25 | 1 | 50

Note: In this solution, each virtual machine occupies 102 GB, with 100 GB for the OS and user space and a 2 GB swap file.

2. Use the LUN created in Step 1 to create a datastore in the vSphere console:

a. Select Storage > VMware Datastores.

b. Click Create.

c. Specify the appropriate Datastore Type.

d. Type a Datastore Name.

e. Configure the appropriate Snapshot Schedule.

f. Configure the appropriate Host Access for each host.

g. Review the Summary of Datastore Configuration and click Finish to create the datastores.

Install and configure the VMware vSphere hosts

Overview

This section provides the requirements for the installation and configuration of the ESXi hosts and infrastructure servers required to support the architecture. Table 19 describes the tasks that must be completed.

Table 19. Tasks for server installation

Task Description Reference

Install ESXi Install the ESXi hypervisor on the physical servers that are deployed for the solution.

vSphere Installation and Setup Guide

Configure ESXi networking

Configure ESXi networking, including NIC trunking, VMkernel ports, virtual machine port groups, and jumbo frames.

vSphere Networking

Install and configure multipath software

Install and configure multipath software, using vSphere NMP or EMC PowerPath/VE to manage multipathing for XtremIO LUNs.

PowerPath VE for VMware vSphere Installation and Administration Guide.

Connect VMware datastores

Connect the VMware datastores to the ESXi hosts deployed for the solution.

vSphere Storage Guide



Plan virtual machine memory allocations

Ensure that VMware memory management technologies are configured properly for the environment.

vSphere Installation and Setup Guide

Install ESXi

When starting the servers used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted memory management unit (MMU) virtualization settings in the BIOS of each server. If the servers have a RAID controller, configure mirroring on the local disks.

Boot the ESXi install media and install the hypervisor on each of the servers. ESXi requires hostnames, IP addresses, and a root password for installation.

In addition, install the host bus adapter (HBA) drivers or configure iSCSI initiators on each ESXi host. For details, refer to EMC Host Connectivity Guide for VMware ESX Server.

Configure ESXi networking

A standard virtual switch (vSwitch) is created during the installation of VMware ESXi. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To meet redundancy and bandwidth requirements, add an additional NIC, either by using the ESXi console or by connecting to the ESXi host from the vSphere Client.

Each VMware ESXi server must have multiple interface cards for each virtual network to ensure redundancy and provide network load balancing and network adapter failover.

VMware ESXi networking configuration, including load balancing and failover options, is described in vSphere Networking. Choose the appropriate load balancing option based on what is supported by the network infrastructure.

Create VMkernel ports as required, based on the infrastructure configuration:

VMkernel port for storage network (iSCSI protocols)

VMkernel port for VMware vMotion

Virtual server port groups (used by the virtual servers to communicate on the network)

vSphere Networking describes the procedure for configuring these settings. Refer to Appendix A for more information.
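
As a reference sketch, the same VMkernel configuration can also be performed from the ESXi shell with esxcli; the vSwitch name, port group name, and IP address below are placeholder examples, not validated values:

    # Add a port group for iSCSI traffic to an existing standard vSwitch
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch1
    # Create a VMkernel interface on that port group
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
    # Assign a static IP address on the storage network
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static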

Jumbo frames (iSCSI only)

Enable jumbo frames for the NICs that carry iSCSI data by setting the MTU to 9,000. Consult your NIC vendor's configuration guide for instructions.
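
For example, from the ESXi shell, a minimal sketch (assuming vSwitch1 carries the iSCSI port group and vmk1 is the iSCSI VMkernel interface):

    # Raise the MTU on the vSwitch that carries iSCSI traffic
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    # Raise the MTU on the iSCSI VMkernel interface to match
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000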


Install and configure multipath software

To improve the performance and availability of the XtremIO storage array, you can use the VMware vSphere Native Multipathing (NMP) feature or install EMC PowerPath/VE on the VMware vSphere host.

Configuring vSphere Native Multipathing

XtremIO supports the VMware vSphere NMP technology. This section describes the procedure required for configuring native vSphere multipathing for XtremIO volumes.

For best performance, EMC recommends that you do the following:

1. Set the native round robin path selection policy on XtremIO volumes presented to the ESX host.

Note: With NMP in vSphere versions earlier than 5.5, clustering is not supported when the path policy is set to Round Robin. For details, see vSphere MSCS Setup Limitations in the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 or ESXi/ESX 4.x. In vSphere 5.5, Round Robin PSP (PSP_RR) support is introduced. For details, see MSCS support enhancements in vSphere 5.5 (VMware KB 2052238).

2. Set the vSphere NMP Round Robin path switching frequency for XtremIO volumes from the default value (1,000 I/Os) to 1.

These settings ensure optimal distribution and availability of load between I/O paths to the XtremIO storage.

Note: Use ESXi command line to adjust the path switching frequency of vSphere NMP Round Robin.
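
As a sketch of the ESXi shell commands involved (the naa identifier below is a placeholder; the claim rule shown follows the pattern EMC publishes for XtremIO, so verify it against the host connectivity documentation for your code levels):

    # Claim rule: XtremIO volumes default to Round Robin with a path switch every 1 I/O
    esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" \
      -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO
    # For a volume that is already presented, set the policy on the device itself
    esxcli storage nmp device set --device=naa.514f0c5000000001 --psp=VMW_PSP_RR
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.514f0c5000000001 --iops=1 --type=iops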

The following procedure uses vSphere client to configure NMP Round Robin on an XtremIO volume:

1. Launch the vSphere Client and select Inventory > Hosts and Clusters.

2. Select the ESX host and click Configuration.

3. Under Hardware, click Storage Adapters.

4. From the Storage Adapters list, select the storage adapter through which the XtremIO volume is presented.

5. Select Devices.

6. Under Details, right-click the XtremIO volume and select Manage Paths.

The Manage Paths window lists all discovered paths to the XtremIO volume.

7. From the Path Selection list, select Round Robin (VMware), as shown in Figure 25, and click Change to apply your selection.

8. Confirm that the Status of all listed paths to the XtremIO volume is Active (I/O).



Figure 25. Set the multi-path policy as Round Robin

Installing and configuring PowerPath/VE

For detailed information and the configuration steps to install EMC PowerPath/VE, refer to the PowerPath/VE Installation and Administration Guide.

Note: This solution uses vSphere NMP as the multipathing solution to manage XtremIO LUNs.

Connect VMware datastores

Connect the datastores configured in Prepare and configure the storage array to the appropriate ESXi servers. These include the datastores configured for:

Virtual server storage

Infrastructure virtual machine storage (if required)

SQL Server storage (if required)

The vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi hosts. Refer to Appendix A for more information.


Plan virtual machine memory allocations

Server capacity in the solution is required for two purposes:

To support the new virtualized server infrastructure

To support the required infrastructure services such as authentication/authorization, DNS, and databases

For information on minimum infrastructure requirements, refer to Table 1. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.

Memory configuration

Proper memory sizing and configuration are essential to the solution. This section provides an overview of memory allocation for the virtual servers and accounts for vSphere overhead and the virtual machine configuration.

ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources such as memory to provide resource isolation across multiple virtual machines, and avoid resource exhaustion. In cases where advanced processors are deployed, such as Intel processors with EPT support, abstraction takes place within the CPU. Otherwise, this process occurs within the hypervisor itself.

vSphere employs the following memory management techniques:

Allocating more memory to virtual machines than is physically available on the host is known as memory over-commitment.

Identical memory pages that are shared across virtual machines are merged with a feature known as transparent page sharing. Duplicate pages are returned to the host's free memory pool for reuse.

Memory compression stores pages that would otherwise be swapped out to disk through host swapping in a compressed cache located in main memory.

Memory ballooning relieves host memory pressure by requesting free pages from a virtual machine and making them available to the host for reuse.

Hypervisor swapping causes the host to force arbitrary virtual machine pages out to disk.

Additional information can be obtained from the Understanding Memory Resource Management in VMware vSphere 5.0 White Paper.

Virtual machine memory concepts

Figure 26 shows the memory settings in the virtual machine.



Figure 26. Virtual machine memory settings

The memory settings are:

Configured memory—Physical memory allocated to the virtual machine at the time of creation

Reserved memory—Memory that is guaranteed to the virtual machine

Touched memory—Memory that is active or in use by the virtual machine

Swappable—Memory de-allocated from the virtual machine if the host is under memory pressure from other virtual machines with ballooning, compression, or swapping

The recommended best practices are:

Do not disable the default memory reclamation techniques. These lightweight processes enable flexibility with minimal impact to workloads.

Intelligently size memory allocation for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources.

Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases when hypervisor swapping is encountered, virtual machine performance might be adversely affected. Creating performance baselines for your virtual machine workloads assists in this process.

Refer to Interpreting esxtop Statistics for more information on the esxtop tool.


Install and configure Microsoft SQL Server databases

Overview

Table 20 describes how to set up and configure a Microsoft SQL Server database for the solution. When these tasks are complete, SQL Server is installed on a virtual machine, with the databases required by VMware vCenter configured for use.

Table 20. Tasks for SQL Server database setup

Task Description Reference

Create a virtual machine for SQL Server.

Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements.

http://msdn.microsoft.com

Install Microsoft Windows on the virtual machine.

Install Microsoft Windows Server 2012 R2 on the virtual machine created to host SQL Server.

http://technet.microsoft.com

Install SQL Server. Install SQL Server on the virtual machine designated for that purpose.

http://technet.microsoft.com

Configure database for VMware vCenter.

Create the database required for the vCenter server on the appropriate datastore.

Preparing vCenter Server Databases

Configure database for VMware Update Manager.

Create the database required for Update Manager on the appropriate datastore.

Preparing the Update Manager Database

Create a virtual machine for SQL Server

Create the virtual machine with enough computing resources on one of the ESXi servers designated for infrastructure virtual machines. Use the datastore designated for the shared infrastructure.

Note: The customer environment might already contain a SQL Server instance for this role. In this case, refer to Configure database for VMware vCenter.

Install Microsoft Windows on the virtual machine

The SQL Server service runs on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.

Install SQL Server

Install SQL Server on the virtual machine with the SQL Server installation media.

One of the installable components in the SQL Server installer is SQL Server Management Studio (SSMS). Install this component on the SQL Server virtual machine and on the administrator console.



In many implementations, you may want to store data files in locations other than the default path. Complete the following steps to change the default path for storing data files:

1. Right-click the server object in SSMS and select Database Properties.

2. Change the default data and log directories for new databases created on the server.
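
The same change can be scripted. The following sketch uses sqlcmd with the documented xp_instance_regwrite procedure; the server name and the E:\ and F:\ paths are placeholders, and the SQL Server service must be restarted before the new defaults take effect:

    rem Point new databases at non-default data and log locations (example paths)
    sqlcmd -S localhost -Q "EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'E:\SQLData'"
    sqlcmd -S localhost -Q "EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'F:\SQLLogs'"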

Note: For high availability, install SQL Server on a Microsoft Failover Cluster, or on a virtual machine protected by VMware VMHA clustering. Do not combine these technologies.

Configure database for VMware vCenter

To use VMware vCenter in this solution, create a database for the service. The requirements and steps to configure the vCenter Server database correctly are covered in Install and configure VMware vCenter Server.

Note: Do not use the Microsoft SQL Server Express-based database option for this solution.

Create individual login accounts for each service that accesses the SQL Server database.

Configure database for VMware Update Manager

To use VMware Update Manager in this solution, create a database for the service. Create individual login accounts for each service that accesses a database on SQL Server. Consult your database administrator for your organization's policy.



Install and configure VMware vCenter Server

Overview

This section provides information on how to configure VMware vCenter Server. Complete the tasks in Table 21.

Table 21. Tasks for vCenter configuration

Task Description Reference

Create the vCenter host virtual machine.

Create a virtual machine to be used for VMware vCenter Server.

vSphere Virtual Machine Administration

Install vCenter guest OS.

Install Windows Server 2012 Standard Edition on the vCenter host virtual machine.

Installing Windows Server 2012

Update the virtual machine.

Install VMware Tools, enable hardware acceleration, and allow remote console access.

vSphere Virtual Machine Administration

Create vCenter ODBC connections.

Create the 64-bit vCenter and 32-bit vCenter Update Manager ODBC connections.

vSphere Installation and Setup

Installing and Administering VMware vSphere Update Manager

Install vCenter Server. Install vCenter Server software.

vSphere Installation and Setup

Install vCenter Update Manager.

Install vCenter Update Manager software.

Installing and Administering VMware vSphere Update Manager

Create a virtual data center.

Create a virtual datacenter. vCenter Server and Host Management

Apply vSphere license keys.

Type the vSphere license keys in the vCenter licensing menu.

vSphere Installation and Setup

Add ESXi hosts. Connect vCenter to ESXi hosts.

vCenter Server and Host Management

Configure vSphere clustering.

Create a vSphere cluster and move the ESXi hosts into it.

vSphere Resource Management

Perform array ESXi host discovery.

Perform ESXi host discovery from the XtremIO GUI console.

XtremIO Storage Array User Guide

Install the vCenter Update Manager plug-in.

Install the vCenter Update Manager plug-in on the administration console.

Installing and Administering VMware vSphere Update Manager

Overview


Create a virtual machine in vCenter.

Create a virtual machine using vCenter.

vSphere Virtual Machine Administration

Perform partition alignment, and assign file allocation unit size.

Use diskpart.exe to perform partition alignment, assign drive letters, and set the file allocation unit size of the virtual machine's disk drives.

Creating and Deploying Virtual Machines in VMM

Create a template virtual machine.

Create a template virtual machine from the existing virtual machine.

Create a customization specification at this time.

vSphere Virtual Machine Administration

Deploy virtual machines from the template virtual machine.

Deploy the virtual machines from the template virtual machine.

vSphere Virtual Machine Administration

Create the vCenter host virtual machine

To deploy the VMware vCenter Server as a virtual machine on an ESXi server installed as part of this solution, connect directly to an infrastructure ESXi server using the vSphere Client.

Create a virtual machine on the ESXi server with the customer guest OS configuration, using the infrastructure server datastore presented from the storage array.

The memory and processor requirements for the vCenter Server depend on the number of ESXi hosts and virtual machines managed. The requirements are described in the vSphere Installation and Setup Guide.

Install vCenter guest OS

Install the guest OS on the vCenter host virtual machine. VMware recommends using Windows Server 2012 Standard Edition.

Create vCenter ODBC connections

Before installing vCenter Server and vCenter Update Manager, create the Open Database Connectivity (ODBC) connections required for database communication. These ODBC connections use SQL Server authentication for database authentication. Appendix B provides a place to record SQL Server login information.

Install vCenter Server

Install vCenter Server by using the VMware VIMSetup installation media. Use the customer-provided username, organization, and vCenter license key when installing vCenter.

Apply vSphere license keys

To perform license maintenance, log in to vCenter Server and select Administration > Licensing from the vSphere Client menu. Use the vCenter License console to enter the license keys for the ESXi hosts. The keys can then be applied to the ESXi hosts as they are imported into vCenter.


Create a virtual machine in vCenter

Provisioning a virtual machine

Create a virtual machine in vCenter to use as a virtual machine template by following these steps:

1. Install the virtual machine.

2. Install the software.

3. Change the Windows and application settings.

Refer to vSphere Virtual Machine Administration for more information on creating a virtual machine.

Perform partition alignment, and assign file allocation unit size

Perform disk partition alignment on virtual machines with operating systems prior to Windows Server 2008. Align the disk drive with an offset of 1,024 KB, and format the disk drive with a file allocation unit (cluster) size of 8 KB.

Refer to Disk Partition Alignment Best Practices for SQL Server to perform partition alignment, assign drive letters, and assign the file allocation unit size using diskpart.exe.
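
As a sketch of those diskpart.exe steps (the disk number, drive letter, and script name are placeholder examples):

    rem align.txt -- align the partition at 1,024 KB and format with an 8 KB allocation unit
    select disk 1
    create partition primary align=1024
    assign letter=E
    format fs=ntfs unit=8192 quick

    rem Run the script from a command prompt with: diskpart /s align.txt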

Create a template virtual machine

Convert a virtual machine to a template. Create a customization specification when creating the template.

Refer to vSphere Virtual Machine Administration to create the template and specification.

Deploy virtual machines from the template virtual machine

Refer to vSphere Virtual Machine Administration to deploy the virtual machines with the virtual machine template and the customization specification.

Summary

This chapter presents the required steps to deploy and configure the various aspects of the VSPEX solution using the XtremIO all-flash array, which includes both the physical and logical components. After performing these steps, the VSPEX solution is fully functional.



Chapter 7 Verifying the Solution

This chapter presents the following topics:

Overview

Post-install checklist

Deploy and test a single virtual server

Verify the redundancy of the solution components


Overview

This chapter provides a list of items to review and tasks to perform after configuring the solution. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution, and ensure that the configuration meets core availability requirements.

Complete the tasks listed in Table 22.

Table 22. Tasks for testing the installation

Task Description Reference

Post-install checklist

Verify that sufficient virtual ports exist on each vSphere host virtual switch.

vSphere Networking

Verify that each vSphere host has access to the required datastores and VLANs.

vSphere Storage Guide

vSphere Networking

Verify that the vMotion interfaces are configured correctly on all vSphere hosts.

vSphere Networking

Deploy and test a single virtual server.

Deploy a single virtual machine using the vSphere interface.

vCenter Server and Host Management

vSphere Virtual Machine Management

Verify redundancy of the solution components.

Restart each storage controller in turn, and ensure that LUN connectivity is maintained.

Steps shown below

Disable each of the redundant switches in turn and verify that the vSphere host, virtual machine, and storage array connectivity remains intact.

Vendor documentation

On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.

vCenter Server and Host Management


Post-install checklist

The following configuration items are critical to the functionality of the solution.

On each vSphere server, verify the following items prior to deployment into production:

The vSwitch that hosts the client VLANs is configured with sufficient ports to accommodate the maximum number of virtual machines that it may host.

All required virtual machine port groups are configured, and each server has access to the required VMware datastores.

An interface is configured correctly for vMotion using the information in the vSphere Networking guide.
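
Many of these checks can be made quickly from the ESXi shell; a brief sketch:

    # Review each vSwitch's port counts, uplinks, and attached port groups
    esxcli network vswitch standard list
    # Review the VMkernel interfaces (management, vMotion, iSCSI) and their addresses
    esxcli network ip interface ipv4 get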

Deploy and test a single virtual server

Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, has access to the expected networks, and that it is possible to log in to it.

Verify the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures.

These steps apply to XtremIO environments. Complete the following steps to restart each XtremIO storage controller in turn and verify that connectivity to VMware datastores is maintained throughout each restart:

1. Log in to XtremIO XMS CLI console with administrator credentials.

2. Power off storage controller 1 using the following commands:

deactivate-storage-controller sc-id=1

power-off sc-id=1

3. Activate storage controller 1 using the following commands:

power-on sc-id=1

activate-storage-controller sc-id=1

4. When the cycle completes, repeat the same commands with sc-id=2 to verify the other storage controller.

5. On the host side, enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
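
While each storage controller is down, connectivity can also be confirmed from any ESXi host; a brief sketch (the naa identifier is a placeholder):

    # List all paths and their states; XtremIO paths should stay active on the peer controller
    esxcli storage core path list
    # Or inspect a single XtremIO device
    esxcli storage nmp device list --device=naa.514f0c5000000001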


Chapter 8 System Monitoring

This chapter presents the following topics:

Overview

Key areas to monitor

XtremIO resource monitoring guidelines


Overview

System monitoring of a VSPEX environment is no different from monitoring any core IT system; it is a relevant and essential component of administration. The monitoring levels involved in a highly virtualized infrastructure, such as a VSPEX environment, are somewhat more complex than in a purely physical infrastructure, as the interaction and interrelationships between various components can be subtle and nuanced. However, those experienced in administering virtualized environments should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and workflows.

Several business needs require proactive, consistent monitoring of the environment:

Stable, predictable performance

Sizing and capacity needs

Availability and accessibility

Elasticity—the dynamic addition, subtraction, and modification of workloads

Data protection

If self-service provisioning is enabled in the environment, the ability to monitor the system is more critical because clients can generate virtual machines and workloads dynamically. This can adversely affect the entire system.

This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are included at the end of this chapter.

Key areas to monitor

VSPEX Proven Infrastructures provide end-to-end solutions and require system monitoring of three discrete, but highly interrelated areas:

Servers, including virtual machines and clusters

Networking

Storage

This chapter focuses primarily on monitoring key components of the storage infrastructure, the XtremIO array, but also briefly describes other components.

Performance baseline

When a workload is added to a VSPEX deployment, server and networking resources are consumed. As more workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which affect all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components before deploying them on a VSPEX platform. This process is a requirement to correctly size resource utilization against the defined reference virtual machine.

Deploy the first workload, and then measure the end-to-end resource consumption and platform performance. This removes the guesswork from sizing activities and ensures initial assumptions were valid. As more workloads are deployed, reevaluate resource consumption and performance levels to determine cumulative load and the impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that any oversubscription is not negatively impacting overall system performance. Run these assessments consistently to ensure the platform as a whole, and the virtual machines themselves, operate as expected.

The following components comprise the critical areas that affect overall system performance.

Servers

The key server resources to monitor include:

Processors

Memory

Disk (local and SAN)

Networking

Monitor these areas from both a physical host level (the hypervisor host level) and from a virtual level (from within the guest virtual machine). Depending on your OS, tools are available to monitor and capture this data. For example, if your VSPEX deployment uses ESXi servers as the hypervisor, you can use the esxtop utility to monitor and log these metrics. Windows Server 2012 guests can use the Perfmon utility. Follow your vendor’s guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application.
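
For example, esxtop can run in batch mode to log counters for later analysis; the delay, sample count, and output path below are arbitrary examples:

    # Capture esxtop counters every 5 seconds for 120 samples (about 10 minutes)
    esxtop -b -d 5 -n 120 > /vmfs/volumes/datastore1/esxtop-baseline.csv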

Detailed information about these tools is available from the following resources:

http://technet.microsoft.com/en-us/library/cc749115.aspx

http://download3.vmware.com/vmworld/2006/adc0199.pdf

Each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of reference virtual machines deployed and their defined workload.


Networking

Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, the fabric (switch) level, and the storage level. From the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latencies, IOPS, and I/O sizes. Capture additional data from network card or HBA utilities.

Tools that monitor switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Storage networking protocols are discussed in the following section.

Storage

Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. The tools provided with XtremIO storage offer easy yet powerful insight into how the underlying storage components are operating. For both block and file protocols, monitor the following key areas:

Capacity

Hardware elements

X-Brick

Storage controllers

SSDs

Cluster elements

Clusters

Volumes

Initiator groups

Additional considerations (primarily from a tuning perspective) include:

I/O size

Workload characteristics

These factors are outside the scope of this document; however storage tuning is an essential component of performance optimization. EMC offers additional guidance in the EMC XtremIO Storage Array User Guide.



XtremIO resource monitoring guidelines

Monitoring the storage

Monitor XtremIO arrays with the XMS GUI console, which is accessible by opening an HTTPS session to the XMS IP address. XtremIO is an all-flash array storage platform that provides block storage access through a single entity.

This section explains how to use the XtremIO GUI to monitor block storage resource usage for the elements listed above. Click Dashboard to view performance counters in the Dashboard workspace.

Efficiency

You can monitor the cluster efficiency status from the Efficiency section of the Storage pane in the Dashboard workspace, as shown in Figure 27.

Figure 27. Monitoring efficiency

The Efficiency section displays the following data:

Overall Efficiency—The disk space saved by the XtremIO storage array, calculated as:

    Overall Efficiency = Total provisioned capacity / Unique data on SSD

Data Reduction Ratio—The inline data deduplication and compression ratio, calculated as:

    Data Reduction Ratio = Data written to the array / Physical capacity used

Deduplication Ratio—The real-time inline data deduplication ratio, calculated as:

    Deduplication Ratio = Data written to the array / Unique data on SSD

Compression Ratio—The real-time inline compression ratio, calculated as:

    Compression Ratio = Unique data on SSD / Physical capacity used

Thin Provisioning Savings—Used disk space compared to allocated disk space.
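
As a hypothetical worked example (the numbers are illustrative only): if hosts have written 20 TB to the array, 8 TB of that data is unique after deduplication, the unique data occupies 4 TB of physical capacity after compression, and 60 TB of thin volumes are provisioned, then the Deduplication Ratio is 20/8 = 2.5:1, the Compression Ratio is 8/4 = 2:1, the Data Reduction Ratio is 20/4 = 5:1 (deduplication times compression), and the Overall Efficiency is 60/8 = 7.5:1.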

Volume capacity

You can monitor the volume capacity status in the Volume Capacity section of the Storage pane in the Dashboard workspace, as shown in Figure 28.

Figure 28. Volume capacity

The Volume Capacity section displays the following data:

Total disk space defined by the volumes

Physical space used

Logical space used

Hover the mouse pointer over the Volume Capacity bar to display a ToolTip with detailed information.

Physical capacity

You can monitor the physical capacity status from the Physical Capacity section of the Storage pane in the Dashboard workspace, as shown in Figure 29.

Figure 29. Physical capacity

The Physical Capacity section displays the following data:

Total physical capacity

Used physical capacity

Hover the mouse pointer over the Physical Capacity bar to display a ToolTip with detailed information.

Monitoring the performance

Complete the following steps to monitor the cluster performance from the GUI:

1. From the menu, click Dashboard to display the Dashboard workspace.

2. In the Performance pane, select the parameters you want to view:

a. Select the measurement unit of the display by clicking one of the following tabs:

i. Bandwidth—MB per second (MB/s)

ii. IOPS—Input/output operations per second

iii. Latency—Microseconds (μs). Applies only to the activity history graph

b. Select the item to be monitored from the Item Selector:

i. Block Size

ii. Initiator Groups

iii. Volumes

c. Set the Activity History time frame by selecting one of the following periods from the Time Period Selector:

i. Last Hour

ii. Last 6 Hours

iii. Last 24 Hours

iv. Last 3 Days

v. Last Week

Figure 30 shows the Performance GUI.

Figure 30. Monitoring IOPS performance



Note: You can also monitor the performance through the CLI. Refer to the XtremIO Storage Array User Guide for more information.

Monitoring the hardware elements

Monitor the X-Bricks

To quickly view the X-Brick name and any associated alerts, hover the mouse pointer over the X-Brick in the Hardware pane of the Dashboard workspace.

To view details of the displayed X-Brick in the Hardware workspace, hover the mouse pointer over different parts of the component to view the parameters and associated alerts:

1. Click Show Front to view the front of the X-Brick.

2. Click Show Back to view the back of the X-Brick.

3. Click Show Cable Connectivity to view the X-Brick cable connections.

Figure 31 shows the data and management cable connectivity.

Figure 31. Data and management cable connectivity

4. Click X-Brick Properties to display the X-Brick Properties dialog box, as shown in Figure 32.



Figure 32. Viewing X-Brick properties

Monitor the storage controllers

Complete the following steps to view the storage controller information from the GUI:

1. From the menu, click Hardware.

2. In the left (rack) pane, select the X-Brick for the storage controller to be monitored.

3. In the right (X-Brick) pane, click X-Brick Properties.

4. View the details of the selected X-Brick’s two storage controllers in the lower panes of the dialog box.

Monitor the SSDs

Complete the following steps to view the SSDs information from the GUI:

1. From the menu bar, click Hardware.

2. In the left (Rack) pane, select the X-Brick for the storage controller to be monitored.

3. Click X-Brick Properties.

4. View the details of SSDs for the selected X-Brick, as shown in Figure 33.


Figure 33. Monitoring SSDs

Advanced monitoring

In addition to the available monitoring services provided by the XtremIO storage array, you can monitor various elements by defining cluster monitors tailored to your needs. Table 23 displays the parameters that can be monitored (depending on the selected monitor type).

Table 23. Advanced monitor parameters

Parameters | Description

Each of the IOPS, bandwidth, and latency parameters can be monitored per block size: 512 B, 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, and GT1MB (greater than 1 MB).

Read-IOPS | Read IOPS, by block size
Write-IOPS | Write IOPS, by block size
IOPS | Total of read and write IOPS, by block size
Read-BW (MB/s) | Read bandwidth, by block size
Write-BW (MB/s) | Write bandwidth, by block size
BW (MB/s) | Total bandwidth of read and write combined, by block size
Write-Latency (μsec) | Write latency, by block size
Read-Latency (μsec) | Read latency, by block size
Average-Latency (μsec) | Average of read and write latency, by block size
SSD-Space-In-Use | SSD space in use
Endurance-Remaining-% | Percentage of SSD endurance remaining
Memory-Usage-% | Percentage of memory usage
Memory-In-Use (MB) | Memory in use, in MB
CPU (%) | Percentage of CPU used

For detailed information on using the advanced monitor feature, refer to the EMC XtremIO Storage Array User Guide.



Appendix A Reference Documentation

This appendix presents the following topics:

EMC documentation

Other documentation


EMC documentation

The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.

EMC XtremIO Storage Array User Guide

EMC XtremIO Storage Array Operations Guide

EMC XtremIO Storage Array Site Preparation Guide

EMC XtremIO Storage Array Security Configuration Guide

EMC XtremIO Storage Array RESTful API Guide

EMC XtremIO Storage Array Release Notes

EMC XtremIO Simple Support Matrix

EMC Host Connectivity with Q-Logic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Fibre Channel over Ethernet Converged Network Adapters (CNAs) for the Linux Environment

EMC Host Connectivity with Emulex Fibre Channel and iSCSI HBAs and Converged Network Adapters (CNAs) for the Linux Environment

EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

EMC Host Connectivity with Emulex Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

EMC Host Connectivity with Q-Logic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Fibre Channel over Ethernet Converged Network Adapters (CNAs) for the Solaris Environment

EMC Host Connectivity with Emulex Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) for the Solaris Environment

Other documentation

The following documents, located on the VMware website, provide additional and relevant information:

vSphere Networking

vSphere Storage Guide

vSphere Virtual Machine Administration

vSphere Installation and Setup

vCenter Server and Host Management

vSphere Resource Management

Installing and Administering VMware vSphere Update Manager


vSphere Storage APIs for Array Integration (VAAI) Plug-in

Interpreting esxtop Statistics

Understanding Memory Resource Management in VMware vSphere 5.0

For documentation on Microsoft products, refer to the Microsoft websites:

Microsoft Developer Network

Microsoft TechNet


Appendix B Customer Configuration Worksheet

This appendix presents the following topics:

Customer configuration worksheet


Customer configuration worksheet

Before you start the configuration, gather customer-specific network and host configuration information. The following tables help you assemble the required network and host address, numbering, and naming information. This worksheet can also be used as a “leave behind” document for future reference.

Table 24. Common server information

Server name Purpose Primary IP address

Domain Controller

DNS Primary

DNS Secondary

DHCP

NTP

SMTP

SNMP

vCenter Console

SQL Server

Table 25. ESXi server information

Server name Purpose Primary IP address

Private net (storage) addresses

VMkernel IP address

ESXi

host 1

ESXi

host 2

Table 26. X-Brick information

Array name

Admin account

XtremIO Management Server IP

Storage Controller 1 management IP


Storage Controller 2 management IP

SC1 IPMI IP

SC2 IPMI IP

Datastore name

Block FC WWPN

iSCSI IQN

iSCSI Server IP

Table 27. Network infrastructure information

Name Purpose IP address Subnet mask Default gateway

Ethernet Switch 1

Ethernet Switch 2

Table 28. VLAN information

Name Network purpose VLAN ID Allowed subnets

Virtual machine networking

ESXi Management

iSCSI storage network

vMotion

Table 29. Service accounts

Account Purpose Password (optional, secure appropriately)

Windows Server administrator

root ESXi root

Array administrator

vCenter administrator

SQL Server administrator


Appendix C Server Resource Component Worksheet

This appendix presents the following topics:

Server resources component worksheet


Server resources component worksheet

Table 30 provides a blank worksheet to record the server resource totals.

Table 30. Blank worksheet for server resource totals

Server resources Storage resources

Application CPU

(Virtual CPUs)

Memory (GB)

IOPS Capacity (GB)

Reference virtual machines

Resource requirements

N/A

Equivalent reference virtual machines

Resource requirements

N/A

Equivalent reference virtual machines

Resource requirements

N/A

Equivalent reference virtual machines

Resource requirements

N/A

Equivalent reference virtual machines

Total equivalent reference virtual machines

Server customization

Server component totals N/A

Storage customization

Storage component totals N/A

Storage component equivalent reference virtual machines

N/A

Total equivalent reference virtual machines - storage