
EMC® INFRASTRUCTURE FOR VMWARE® VIEW™ 5.0

EMC VNX™ Series (NFS), VMware vSphere™ 5.0, VMware View 5.0, and VMware View Composer 2.7

• Simplify management and decrease TCO

• Guarantee a quality desktop experience

• Minimize the risk of virtual desktop deployment

Proven Solutions Guide

EMC Solutions Group

Abstract

This Proven Solutions Guide provides a detailed summary of the tests performed to validate an EMC infrastructure for VMware View 5.0 by using EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, and VMware View Composer 2.7. This document focuses on sizing and scalability, and highlights new features introduced in EMC VNX, VMware vSphere, and VMware View.

January 2012


Copyright © 2012 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

VMware, ESX, VMware vCenter, VMware View, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.

All other trademarks used herein are the property of their respective owners.

Part Number: h8306.1

Table of contents

1 Executive Summary
    Introduction to the EMC VNX series
        Introduction
        Software suites available
        Software packages available
    Business case
    Solution overview
    Key results and recommendations

2 Introduction
    Document overview
        Use case definition
        Purpose
        Scope
        Not in scope
        Audience
        Prerequisites
        Terminology
    Reference architecture
        Corresponding reference architecture
        Reference architecture diagram
    Configuration
        Hardware resources
        Software resources

3 VMware View Infrastructure
    VMware View 5.0
        Introduction
        Deploying VMware View components
        View Manager Connection Server
        View Composer 2.7
        View Composer linked clones
    vSphere 5.0 infrastructure
        vSphere 5.0 overview
        vSphere cluster
    Windows infrastructure
        Introduction
        Microsoft Active Directory
        Microsoft SQL Server
        DNS server
        DHCP server

4 Storage Design
    EMC VNX series storage architecture
        Introduction
        Storage layout
        Storage layout overview
        File system layout
        EMC VNX FAST Cache
        VSI for VMware vSphere
        vCenter Server storage layout
    VNX shared file systems
        Roaming profiles and folder redirection
        EMC VNX for File Home Directory feature
        Profile export
        Capacity

5 Network Design
    Considerations
        Network layout overview
        Logical design considerations
        Link aggregation
    VNX for File network configuration
        Data Mover ports
        LACP configuration on the Data Mover
        Data Mover interfaces
        Enable jumbo frames on Data Mover interface
    ESXi network configuration
        NIC teaming
        Increase the number of vSwitch virtual ports
        Enable jumbo frames for the VMkernel port used for NFS
    Cisco Nexus 5020 configuration
        Overview
        Cabling
        Enable jumbo frames on Nexus switch
        vPC for Data Mover ports
    Cisco 6509 configuration
        Overview
        Cabling
        Server uplinks

6 Installation and Configuration
    Installation overview
    VMware View components
        VMware View installation overview
        VMware View setup
        VMware View desktop pool configuration
    Storage components
        Storage pools
        NFS active threads per Data Mover
        NFS performance fix
        Enable FAST Cache
        VNX Home Directory feature

7 Virtual Desktop Antivirus
    McAfee MOVE Architecture and Sizing
        MOVE Components
        How MOVE Works
        MOVE Sizing
    McAfee MOVE Test Environment
        Configuration Overview
        MOVE agent
        VMware DRS Rules
        MOVE Antivirus Offload Servers
        ePO Configuration

8 Testing and Validation
    Validated environment profile
        Profile characteristics
        Use cases
        Login VSI
        Login VSI launcher
        FAST Cache configuration
    Boot storm results
        Test methodology
        Pool individual disk load
        Pool LUN load-replica
        Pool LUN load-linked clone
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        ESXi CPU load
        ESXi disk response time
    Antivirus results
        Test methodology
        Pool individual disk load
        Pool LUN load-replica
        Pool LUN load-linked clone
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        ESXi CPU load
        ESXi disk response time
    Patch install results
        Test methodology
        Pool individual disk load
        Pool LUN load-replica
        Pool LUN load-linked clone
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        ESXi CPU load
        ESXi disk response time
    Login VSI results
        Test methodology
        Pool individual disk load
        Pool LUN load-replica
        Pool LUN load-linked clone
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        ESXi CPU load
        ESXi disk response time
    Recompose results
        Test methodology
        Pool individual disk load
        Pool LUN load-replica
        Pool LUN load-linked clone
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        ESXi CPU load
        ESXi disk response time
    Refresh results
        Test methodology
        Pool individual disk load
        Pool LUN load-replica
        Pool LUN load-linked clone
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        ESXi CPU load
        ESXi disk response time
    FAST Cache benefits
        Case study
    McAfee MOVE results
        Login VSI testing
        Storage processor IOPS
        Storage processor utilization
        Data Mover CPU utilization
        ESXi CPU load
        ESXi disk response time
        MOVE Findings

9 Conclusion
    Summary
    References
        Supporting documents
        VMware documents
        Microsoft documents
        McAfee documents

List of Tables

Table 1. Terminology
Table 2. VMware View—Solution hardware
Table 3. VMware View—Solution software
Table 4. VNX5300—File systems
Table 5. ESXi—Port groups in vSwitch0 and vSwitch1
Table 6. McAfee MOVE—Antivirus Offload Server sizing
Table 7. VMware View—Environment profile

List of Figures

Figure 1. VMware View—Reference architecture
Figure 2. VMware View—Linked clones
Figure 3. VMware View—Logical representation of linked clone and replica disk
Figure 4. VNX5300—Storage layout
Figure 5. VNX5300—NFS file system layout
Figure 6. VNX5300—CIFS file system layout
Figure 7. Active Directory—UNC path for roaming profiles
Figure 8. VMware View—Network layout overview
Figure 9. VNX5300—Ports of the two Data Movers
Figure 10. ESXi—vSwitch configuration
Figure 11. ESXi—Load balancing policy
Figure 12. ESXi—vSwitch virtual ports
Figure 13. ESXi—vSwitch MTU setting
Figure 14. ESXi—VMkernel port MTU setting
Figure 15. VMware View—Select Automated Pool
Figure 16. VMware View—Select View Composer linked clones
Figure 17. VMware View—Select Provision Settings
Figure 18. VMware View—vCenter Settings
Figure 19. VMware View—Select Datastores
Figure 20. VMware View—vCenter Settings
Figure 21. VMware View—Guest Customization
Figure 22. VNX5300—Fifteen 200 GB Thick LUNs
Figure 23. VNX5300—FAST Cache tab
Figure 24. VNX5300—Enable FAST Cache
Figure 25. VNX5300—Home Directory MMC snap-in
Figure 26. VNX5300—Sample Home Directory user folder properties
Figure 27. McAfee MOVE—Architecture
Figure 28. McAfee MOVE—VMware DRS Virtual Machine and Host DRS Groups
Figure 29. McAfee MOVE—VMware DRS rule
Figure 30. McAfee MOVE—Server NLB cluster and desktop cluster 1
Figure 31. McAfee MOVE—ePO System Tree view
Figure 32. McAfee MOVE—Synchronization Settings-ePO group pool A
Figure 33. McAfee MOVE—ePO Assigned Policies-Pool A
Figure 34. McAfee MOVE agent policy—General settings
Figure 35. McAfee MOVE agent policy—Scan Items
Figure 36. McAfee MOVE—Pool A systems
Figure 37. Boot storm—Disk IOPS for a single SAS drive
Figure 38. Boot storm—Replica LUN IOPS and response time
Figure 39. Boot storm—Linked clone LUN IOPS and response time
Figure 40. Boot storm—Storage processor total IOPS
Figure 41. Boot storm—Storage processor utilization
Figure 42. Boot storm—FAST Cache IOPS
Figure 43. Boot storm—Data Mover CPU utilization
Figure 44. Boot storm—Data Mover NFS load
Figure 45. Boot storm—ESXi CPU load
Figure 46. Boot storm—Average Guest Millisecond/Command counter
Figure 47. Antivirus—Disk I/O for a single SAS drive
Figure 48. Antivirus—Replica LUN IOPS and response time
Figure 49. Antivirus—Linked clone LUN IOPS and response time
Figure 50. Antivirus—Storage processor IOPS
Figure 51. Antivirus—Storage processor utilization
Figure 52. Antivirus—FAST Cache IOPS
Figure 53. Antivirus—Data Mover CPU utilization
Figure 54. Antivirus—Data Mover NFS load
Figure 55. Antivirus—ESXi CPU load
Figure 56. Antivirus—Average Guest Millisecond/Command counter
Figure 57. Patch install—Disk IOPS for a single SAS drive
Figure 58. Patch install—Replica LUN IOPS and response time
Figure 59. Patch install—Linked clone LUN IOPS and response time
Figure 60. Patch install—Storage processor IOPS
Figure 61. Patch install—Storage processor utilization
Figure 62. Patch install—FAST Cache IOPS
Figure 63. Patch install—Data Mover CPU utilization
Figure 64. Patch install—Data Mover NFS load
Figure 65. Patch install—ESXi CPU load
Figure 66. Patch install—Average Guest Millisecond/Command counter
Figure 67. Login VSI—Disk IOPS for a single SAS drive
Figure 68. Login VSI—Replica LUN IOPS and response time
Figure 69. Login VSI—Linked clone LUN IOPS and response time
Figure 70. Login VSI—Storage processor IOPS
Figure 71. Login VSI—Storage processor utilization
Figure 72. Login VSI—FAST Cache IOPS
Figure 73. Login VSI—Data Mover CPU utilization
Figure 74. Login VSI—Data Mover NFS load
Figure 75. Login VSI—ESXi CPU load
Figure 76. Login VSI—Average Guest Millisecond/Command counter
Figure 77. Recompose—Disk IOPS for a single SAS drive
Figure 78. Recompose—Replica LUN IOPS and response time
Figure 79. Recompose—Linked clone LUN IOPS and response time
Figure 80. Recompose—Storage processor IOPS
Figure 81. Recompose—Storage processor utilization
Figure 82. Recompose—FAST Cache IOPS
Figure 83. Recompose—Data Mover CPU utilization
Figure 84. Recompose—Data Mover NFS load
Figure 85. Recompose—ESXi CPU load
Figure 86. Recompose—Average Guest Millisecond/Command counter
Figure 87. Refresh—Disk IOPS for a single SAS drive
Figure 88. Refresh—Replica LUN IOPS and response time
Figure 89. Refresh—Linked clone LUN IOPS and response time
Figure 90. Refresh—Storage processor IOPS
Figure 91. Refresh—Storage processor utilization
Figure 92. Refresh—FAST Cache IOPS
Figure 93. Refresh—Data Mover CPU utilization
Figure 94. Refresh—Data Mover NFS load
Figure 95. Refresh—ESXi CPU load
Figure 96. Refresh—Average Guest Millisecond/Command counter
Figure 97. FAST Cache boot storm—Average latency comparison
Figure 98. FAST Cache antivirus scan—Scan time comparison
Figure 99. FAST Cache patch storm—Average latency comparison
Figure 100. McAfee MOVE—Storage processor IOPS comparison
Figure 101. McAfee MOVE—Storage processor utilization comparison
Figure 102. McAfee MOVE—Data Mover utilization comparison
Figure 103. McAfee MOVE—ESXi CPU load comparison
Figure 104. McAfee MOVE—ESXi Disk Response Time (GAVG) comparison


1 Executive Summary

This chapter summarizes the proven solution described in this document and includes the following sections:

• Introduction to the EMC VNX series

• Business case

• Solution overview

• Key results and recommendations

Introduction to the EMC VNX series

Introduction

The EMC® VNX™ series delivers uncompromising scalability and flexibility for the mid-tier while providing market-leading simplicity and efficiency to minimize total cost of ownership. Customers can benefit from new VNX features such as:

• Next-generation unified storage, optimized for virtualized applications.

• Extended cache by using Flash drives with FAST Cache and Fully Automated Storage Tiering for Virtual Pools (FAST VP) that can be optimized for the highest system performance and lowest storage cost simultaneously on both block and file.

• Multiprotocol support for file, block, and object with object access through EMC Atmos™ Virtual Edition (Atmos VE).

• Simplified management with EMC Unisphere™ for a single management framework for all NAS, SAN, and replication needs.

• Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for Flash.

• 6 Gb/s SAS back end with support for the latest drive technologies:

  – 3.5-in. 100 GB and 200 GB Flash drives; 3.5-in. 300 GB and 600 GB 15k or 10k rpm SAS drives; and 3.5-in. 1 TB, 2 TB, and 3 TB 7.2k rpm NL-SAS drives

  – 2.5-in. 100 GB and 200 GB Flash drives; and 300 GB, 600 GB, and 900 GB 10k rpm SAS drives

• Expanded EMC UltraFlex™ I/O connectivity—Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), network file system (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet.

The VNX series includes five new software suites and three new software packs that make it easier to attain the maximum overall benefits.



Software suites available

• VNX FAST Suite—Automatically optimizes for the highest system performance and the lowest storage cost simultaneously (FAST VP is not part of the FAST Suite for the VNX5100).

• VNX Local Protection Suite—Practices safe data protection and repurposing.

• VNX Remote Protection Suite—Protects data against localized failures, outages, and disasters.

• VNX Application Protection Suite—Automates application copies and proves compliance.

• VNX Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.

Software packages available

• VNX Total Efficiency Pack—Includes all five software suites (not available for the VNX5100).

• VNX Total Protection Pack—Includes the local, remote, and application protection suites.

• VNX Total Value Pack—Includes all three protection software suites and the Security and Compliance Suite (this is the only package available for the VNX5100).

Business case

Customers require a scalable, tiered, and highly available infrastructure to deploy their virtual desktop environment. Several new technologies are available to assist them in architecting a virtual desktop solution, but customers need to know how best to use these technologies to maximize their investment, support service-level agreements, and reduce their desktop total cost of ownership.

The purpose of this solution is to build a replica of a common customer virtual desktop infrastructure (VDI) environment, and validate the environment for performance, scalability, and functionality. Customers will achieve:

• Increased control and security of their global, mobile desktop environment, typically their most at-risk environment.

• Better end-user productivity with a more consistent environment.

• Simplified management with the environment contained in the data center.

• Better support of service-level agreements and compliance initiatives.

• Lower operational and maintenance costs.

Solution overview

This solution demonstrates how to use an EMC VNX platform to provide storage resources for a robust VMware® View™ 5.0 environment and Windows 7 virtual desktops.

Planning and designing the storage infrastructure for VMware View is a critical step as the shared storage must be able to absorb large bursts of input/output (I/O) that occur throughout the course of a day. These large I/O bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users can often adapt to slow performance, but unpredictable performance will quickly frustrate them.



To provide predictable performance for a VDI environment, the storage must be able to handle the peak I/O load from clients without driving response times up. Traditionally, designing for this peak workload means deploying a large number of disks that are needed only during brief periods of extreme I/O pressure, which is expensive to implement. This solution uses EMC VNX FAST Cache to reduce the number of disks required.
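To make that trade-off concrete, the following back-of-the-envelope sketch compares the spindle counts needed to absorb a peak load with and without a Flash-based cache layer. Every input (per-desktop peak IOPS, per-drive IOPS ratings, and the cache hit rate) is an illustrative assumption for the sake of the arithmetic, not a measured result from this solution.

```python
# Back-of-the-envelope VDI storage sizing sketch.
# All inputs are illustrative assumptions, not results from this guide.

DESKTOPS = 1000              # desktop count matching this solution's scale
PEAK_IOPS_PER_DESKTOP = 26   # assumed per-desktop peak (e.g., boot/login storm)
SAS_15K_IOPS = 180           # assumed sustainable IOPS for one 15k rpm SAS drive
FLASH_IOPS = 3500            # assumed sustainable IOPS for one Flash drive
CACHE_HIT_RATE = 0.9         # assumed fraction of peak I/O absorbed by the cache

peak_iops = DESKTOPS * PEAK_IOPS_PER_DESKTOP

# Without an extended cache, every peak I/O lands on the SAS spindles.
sas_only = -(-peak_iops // SAS_15K_IOPS)  # ceiling division

# With a cache layer, only the miss traffic reaches the spindles.
misses = int(peak_iops * (1 - CACHE_HIT_RATE))
hits = peak_iops - misses
sas_behind_cache = -(-misses // SAS_15K_IOPS)
flash_drives = -(-hits // FLASH_IOPS)

print(f"Peak load: {peak_iops} IOPS")
print(f"SAS drives, no cache:   {sas_only}")
print(f"SAS drives, with cache: {sas_behind_cache} (plus {flash_drives} Flash drives)")
```

Even under rough assumptions like these, the arithmetic shows why a handful of Flash drives acting as cache can displace a large number of rotating drives that would otherwise sit mostly idle between bursts.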

This solution also examines the design, implementation, and performance characteristics of the latest antivirus solutions designed specifically for virtual desktop environments. The McAfee Management for Optimized Virtual Environments antivirus platform, also known as McAfee MOVE, was installed in the VMware View 5.0 lab environment to test its performance and determine its advantages over traditional host-based antivirus solutions.

Key results and recommendations

EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment. It not only reduces response times for both read and write workloads, but also supports more virtual desktops, and therefore a greater IOPS density, on fewer drives. Chapter 8: Testing and Validation provides more details.

The McAfee MOVE antivirus solution greatly reduces the memory and processor load associated with antivirus scanning. By offloading the virus scanning tasks to dedicated MOVE servers running the McAfee VirusScan platform, the virtual desktops are able to maintain a more consistent level of performance with fewer overall resources.



2 Introduction

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges currently facing its customers.

This Proven Solutions Guide summarizes a series of best practices that were discovered or validated during testing of the EMC infrastructure for VMware View 5.0 solution by using the following products:

• EMC VNX series

• VMware View Manager 5.0

• VMware View Composer 2.7

• VMware vSphere™ 5.0

This chapter includes the following sections:

• Document overview

• Reference architecture

• Prerequisites and supporting documentation

• Terminology

Document overview

Use case definition

The following eight use cases are examined in this solution:

• Boot storm

• Antivirus scan

• Microsoft security patch install

• Login storm

• User workload simulated with Login Consultants Login VSI 3 tool

• View recompose

• View refresh

• McAfee MOVE



Chapter 8: Testing and Validation contains the test definitions and results for each use case. Chapter 7: Virtual Desktop Antivirus contains information related to the McAfee MOVE antivirus architecture, deployment, and test results.

Purpose

The purpose of this solution is to provide a virtualized infrastructure for virtual desktops powered by VMware View 5.0, VMware vSphere 5.0, EMC VNX series (NFS), VNX FAST Cache, and storage pools.

This solution includes all the components required to run this environment such as the infrastructure hardware, software platforms including Microsoft Active Directory, and the required VMware View configuration.

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training.

Scope

This Proven Solutions Guide contains the results observed from testing the EMC Infrastructure for VMware View 5.0 solution. The objectives of this testing are to establish:

• A reference architecture of validated hardware and software that permits easy and repeatable deployment of the solution.

• The best practices for storage configuration that provide optimal performance, scalability, and protection in the context of the mid-tier enterprise market.

• The performance of the latest antivirus technologies that were designed for virtual desktop environments.

Not in scope

Implementation instructions are beyond the scope of this document, as is information on how to install and configure VMware View 5.0 components, vSphere 5.0, and the required EMC products. References to supporting documentation for these products are provided where applicable.

Audience

The intended audience for this Proven Solutions Guide is:

• Internal EMC personnel

• EMC partners

• Customers

Prerequisites

It is assumed that the reader has a general knowledge of the following products:

• VMware vSphere 5.0

• VMware View 5.0

• EMC VNX series

• Cisco Nexus and Catalyst switches


Terminology

Table 1 lists the terms that are frequently used in this paper.

Table 1. Terminology

EMC VNX FAST Cache: A feature that enables the use of Flash drives as an expanded cache layer for the array.

Linked clone: A virtual desktop created by VMware View Composer from a writeable snapshot paired with a read-only replica of a master image.

Login VSI: A third-party benchmarking tool developed by Login Consultants that simulates real-world VDI workloads. Login VSI uses an AutoIT script and determines the maximum system capacity based on the response time of the users.

McAfee MOVE: An antivirus software package optimized for virtual desktops that offloads virus scanning operations from the clients to dedicated servers.

Replica: A read-only copy of a master image that is used to deploy linked clones.

VMware View Composer: Integrates with VMware View Manager to provide advanced image management and storage optimization.

Reference architecture

Corresponding reference architecture

This Proven Solutions Guide has a corresponding Reference Architecture document that is available on the EMC Online Support website and EMC.com. The EMC Infrastructure for VMware View 5.0, EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, and VMware View Composer 2.7—Reference Architecture provides more details.

If you do not have access to these documents, contact your EMC representative.

The reference architecture and the results in this Proven Solutions Guide are valid for 1,000 Windows 7 virtual desktops conforming to the workload described in the Validated environment profile section.

Reference architecture diagram

Figure 1 shows the reference architecture of the midsize solution.


Figure 1. VMware View—Reference architecture

Configuration

Hardware resources

Table 2 lists the hardware used to validate the solution.

Table 2. VMware View—Solution hardware

EMC VNX5300 (quantity: 1)
Notes: VNX shared storage
Configuration: Two Data Movers (active/passive); three disk-array enclosures (DAEs) configured with:
• Twenty-five 300 GB, 15k rpm 3.5-in. SAS disks
• Seventeen 2 TB, 7,200 rpm 3.5-in. NL-SAS disks
• Three 100 GB, 3.5-in. Flash drives

Intel-based servers (quantity: 10)
Notes: 8 servers for virtual desktop ESXi cluster 1; 2 servers for the ESXi cluster that hosts infrastructure virtual machines
Configuration:
• Memory: 72 GB of RAM
• CPU: Two Intel Xeon X5550 2.77 GHz quad-core processors
• Internal storage: One 73 GB internal SAS disk
• External storage: VNX5300 (NFS)
• NIC: Quad-port Broadcom BCM5709 1000Base-T adapters

Intel-based servers (quantity: 7)
Notes: Virtual desktop ESXi cluster 2
Configuration:
• Memory: 72 GB of RAM
• CPU: Two Intel Xeon X5650 2.77 GHz hex-core processors
• Internal storage: One 73 GB internal SAS disk
• External storage: VNX5300 (NFS)
• NIC: Quad-port Broadcom BCM5709 1000Base-T adapters

Cisco Catalyst 6509 (quantity: 2)
Notes: 1-gigabit host connections distributed over two line cards
Configuration:
• WS-6509-E switch
• WS-x6748 1-gigabit line cards
• WS-SUP720-3B supervisor

Cisco Nexus 5020 (quantity: 2)
Notes: Redundant LAN A/B configuration
Configuration: Forty 10-gigabit ports


Software resources

Table 3 lists the software used to validate the solution.

Table 3. VMware View—Solution software

VNX5300 (shared storage, file systems)
• VNX OE for File, Release 7.0.40.0
• VNX OE for Block, Release 31 (05.31.000.5.502)
• VSI for VMware vSphere: Unified Storage Management, Version 5.0.0.61
• VSI for VMware vSphere: Storage Viewer, Version 5.0

Cisco Nexus
• Cisco Nexus 5020, Version 4.2(1)N1(1)

ESXi servers
• ESXi, 5.0.0 (474610)
• EMC vSphere Storage APIs for Array Integration (VAAI) plug-in, Version 1.0-10

vCenter Server
• OS: Windows 2008 R2 SP1
• VMware vCenter Server, 5.0.0 (455964)
• VMware View Manager, 5.0.0 (481677)
• VMware View Composer, 2.7

Virtual desktops (Note: This software is used to generate the test load.)
• OS: MS Windows 7 Enterprise SP1 (32-bit)
• VMware tools, 8.6.0 build-425873
• Microsoft Office, Office Enterprise 2007 (Version 12.0.6562.5003)
• Internet Explorer, 8.0.7601.17514
• Adobe Reader, 9.1.0
• McAfee VirusScan, 8.7 Enterprise
• McAfee MOVE Antivirus, 2.0.0
• Adobe Flash Player, 11
• Bullzip PDF Printer, 6.0.0.865
• Login VSI (VDI workload generator), 3.0 Professional Edition


3 VMware View Infrastructure

This chapter describes the general design and layout instructions that apply to the specific components used during the development of this solution. This chapter includes the following sections:

• VMware View 5.0

• vSphere 5.0 infrastructure

• Windows infrastructure

VMware View 5.0

Introduction

VMware View delivers rich and personalized virtual desktops as a managed service from a virtualization platform built to deliver the entire desktop, including the operating system, applications, and user data. With VMware View 5.0, administrators can virtualize the operating system, applications, and user data, and deliver modern desktops to end users. VMware View 5.0 provides centralized, automated management of these components with increased control and cost savings. VMware View 5.0 improves business agility while providing a flexible high-performance desktop experience for end users across a variety of network conditions.

Deploying VMware View components

This solution is deployed by using a single VMware View Manager Server instance that is capable of scaling up to 2,000 virtual desktops. Deployments of up to 10,000 virtual desktops are possible by using multiple View Manager servers.

The core elements of a VMware View 5.0 implementation are:

• VMware View Manager Connection Server 5.0

• VMware View Composer 2.7

• VMware vSphere 5.0

Additionally, the following components are required to provide the infrastructure for a VMware View 5.0 deployment:

• Microsoft Active Directory

• Microsoft SQL Server

• DNS server

• Dynamic Host Configuration Protocol (DHCP) server


View Manager Connection Server

The View Manager Connection Server is the central management location for virtual desktops and has the following key roles:

• Broker connections between the users and the virtual desktops

• Control the creation and retirement of virtual desktop images

• Assign users to desktops

• Control the state of the virtual desktops

• Control access to the virtual desktops

View Composer 2.7

View Composer 2.7 works directly with vCenter Server to deploy, customize, and maintain the state of the virtual desktops when using linked clones. The tiered storage capabilities of View Composer 2.7 enable the read-only replica and the linked clone disk images to be placed on dedicated storage, which allows for superior scaling in large configurations.

View Composer linked clones

VMware View with View Composer 2.7 uses the concept of linked clones to quickly provision virtual desktops. This solution uses the tiered storage feature of View Composer to build linked clones and place their replica images on separate datastores, as shown in Figure 2.

Figure 2. VMware View—Linked clones

The operating system reads all common data from the read-only replica, while the unique data that is created by the operating system or user is stored on the linked clone. A logical representation of this relationship is shown in Figure 3.


Figure 3. VMware View–Logical representation of linked clone and replica disk

vSphere 5.0 infrastructure

vSphere 5.0 overview

VMware vSphere 5.0 is the market-leading virtualization hypervisor used across thousands of IT environments around the world. VMware vSphere 5.0 can virtualize computer hardware resources, including CPUs, RAM, hard disks, and network controllers, to create fully functional virtual machines, each of which runs its own operating system and applications just like a physical computer.

The high-availability features in VMware vSphere 5.0 along with VMware Distributed Resource Scheduler (DRS) and Storage vMotion® enable seamless migration of virtual desktops from one ESXi server to another with minimal or no disruption to the customers.

vSphere cluster

Three vSphere clusters are deployed in this solution.

The View 5.0 clusters consist of two different ESXi 5.0 server configurations. Cluster A consists of eight dual quad-core ESXi servers to support 500 desktops, resulting in around 62 to 63 virtual machines per ESXi server. Cluster B consists of seven dual hex-core ESXi 5.0 servers to support 500 additional desktops, resulting in around 71 to 72 virtual machines per ESXi server. Each cluster has access to the same four datastores for desktop provisioning for a total of 250 virtual machines per datastore.

The infrastructure cluster consists of two ESXi 5.0 servers and stores the following virtual machines:

• Windows 2008 R2 SP1 domain controller—Provides DNS, Active Directory, and DHCP services.

• SQL Server 2008 SP2 on Windows 2008 R2 SP1—Provides databases for vCenter Server, View Composer, and other services in the environment.

• vCenter Server on Windows 2008 R2 SP1—Provides management services for the VMware clusters and View Composer.

• View 5.0 on Windows 2008 R2 SP1—Provides services to manage the virtual desktops.

• Windows 7 Key Management Service (KMS)—Provides a method to activate Windows 7 desktops.


Windows infrastructure

Introduction

Microsoft Windows provides the infrastructure that is used to support the virtual desktops and includes the following components:

• Microsoft Active Directory

• Microsoft SQL Server

• DNS server

• DHCP server

Microsoft Active Directory

The Windows domain controllers run the Active Directory service that provides the framework to manage and support the virtual desktop environment. Active Directory performs the following functions:

• Manages the identities of users and their information

• Applies group policy objects

• Deploys software and updates

Microsoft SQL Server

Microsoft SQL Server is a relational database management system (RDBMS). A dedicated SQL Server 2008 SP2 instance is used to provide the required databases to vCenter Server and View Composer.

DNS server

DNS is the backbone of Active Directory and provides the primary name resolution mechanism for Windows servers and clients.

In this solution, the DNS role is enabled on the domain controllers.

DHCP server

The DHCP server provides the IP address, DNS server name, gateway address, and other information to the virtual desktops.

In this solution, the DHCP role is enabled on one of the domain controllers. The DHCP scope is configured with an IP range that is large enough to support 1,000 virtual desktops; a /22 subnet, for example, provides 1,022 usable host addresses.


4 Storage Design

This chapter describes the storage design that applies to the specific components of this solution.

EMC VNX series storage architecture

Introduction

The EMC VNX series is a dedicated network server optimized for file and block access that delivers high-end features in a scalable and easy-to-use package.

The VNX series delivers a single-box block and file solution that offers a centralized point of management for distributed environments, making it possible to dynamically grow, share, and cost-effectively manage multiprotocol file systems and provide multiprotocol block access. Administrators can take advantage of simultaneous support for the NFS and CIFS protocols, which enables Windows and Linux/UNIX clients to share files by using the sophisticated file-locking mechanisms of VNX for File, and of VNX for Block for high-bandwidth or latency-sensitive applications.

This solution uses file-based storage to leverage the benefits that each of the following provides:

• File-based storage over the NFS protocol is used to store the VMDK files for all virtual desktops. This has the following benefit:

Unified Storage Management plug-in provides seamless integration with VMware vSphere to simplify the provisioning of datastores or virtual machines.

• EMC vSphere Storage APIs for Array Integration (VAAI) plug-in for ESXi supports the vSphere 5 VAAI primitives for NFS on the EMC VNX platform.

• File-based storage over the Common Internet File System (CIFS) protocol is used to store user data and roaming profiles. This has the following benefits:

Redirection of user data and roaming profiles to a central location for easy backup and administration.

Single instancing and compression of unstructured user data to provide the highest storage utilization and efficiency.

This section explains the configuration of the storage provisioned over NFS for the ESXi cluster to store the VMDK images and the storage provisioned over CIFS to redirect user data and roaming profiles.


Storage layout

Figure 4 shows the storage layout of the disks in the reference architecture.

Figure 4. VNX5300–Storage layout

Storage layout overview

The following storage configurations were used in the solution:

• Four SAS disks (0_0 to 0_3) are used for the VNX OE.

• Disks 0_6, 1_5, and 1_6 are hot spares. These disks are denoted as Hot Spare in the storage layout diagram.

• Two Flash drives (0_4 and 0_5) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

• Fifteen SAS disks (2_0 to 2_14) in a RAID 5 storage pool (Storage Pool 2) are used to store linked clones and replicas. FAST Cache is enabled for the entire pool. Six NFS file systems are created and presented to the ESXi servers as datastores.

• Sixteen NL-SAS disks (0_7 to 0_14 and 1_7 to 1_14) are configured in a RAID 6 (6+2) storage pool (Storage Pool 3) and used to store user data and roaming profiles. FAST Cache is enabled for the entire pool. Two VNX file systems are created and presented as Windows file shares.

• Five SAS disks (1_0 to 1_4) in a RAID 5 storage pool (Storage Pool 1) are used to store infrastructure virtual machines. A 1 TB LUN is carved out of the pool to form an NFS file system. The file system is presented to the ESXi servers as a datastore.


File system layout

Figure 5 shows the layout of the file systems.

Figure 5. VNX5300–NFS file system layout

Fifteen LUNs of 200 GB each are carved out of a storage pool configured with 15 SAS drives. The LUNs are presented to VNX for File as dvols that belong to a system-defined pool. Six file systems are then carved out of an Automatic Volume Management (AVM) system pool and are presented to the ESXi servers as datastores. File systems 1 and 2 are used to store replicas. File systems 3 to 6 are used to store the linked clones. A total of 1,000 desktops are created, and each replica is responsible for 500 linked clones.

Starting from VNX for File version 7.0.35.3, AVM is enhanced to intelligently stripe across dvols that belong to the same block-based storage pool. There is no need to manually create striped volumes and add them to user-defined file-based pools.
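For reference, one of these file systems could be provisioned from the Control Station along the following lines. This is a minimal sketch: the file system name, size, AVM pool name, and export options are illustrative placeholders, not the exact values used in the validated build:

$ nas_fs -name fs3 -create size=500G pool=<avm_pool_name>
$ server_mountpoint server_2 -create /fs3
$ server_mount server_2 fs3 /fs3
$ server_export server_2 -Protocol nfs -option root=<esxi_subnet>,access=<esxi_subnet> /fs3

In practice, the same provisioning can be driven from the VSI Unified Storage Management plug-in described later in this chapter.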

Like the NFS file systems, the CIFS file systems are provisioned from an AVM system pool to store user home directories and user roaming profiles. The two file systems are grouped in the same storage pool because their I/O profiles are sequential.


Figure 6 shows the layout of the CIFS file systems.

Figure 6. VNX5300–CIFS file system layout

Twenty LUNs of 360 GB each are carved out of the RAID 6 storage pool configured with 16 NL-SAS drives. Sixteen drives are used because the block-based storage pool internally creates 6+2 RAID 6 groups, so the number of NL-SAS drives used must be a multiple of eight. Likewise, twenty LUNs are used because AVM stripes across five dvols, so the number of dvols must be a multiple of five.

EMC VNX FAST Cache

FAST Cache is enabled on both storage pools that are used to store the NFS and CIFS file systems.

VNX Fully Automated Storage Tiering (FAST) Cache, a part of the VNX FAST Suite, uses Flash drives as an expanded cache layer for the array. VNX5300 is configured with two 100 GB Flash drives in a RAID 1 configuration for a 93 GB read/write-capable cache. This is the minimum amount of FAST Cache. Larger configurations are supported for scaling beyond 1,000 desktops.

FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache and subsequent accesses to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to the Flash drives. The use of Flash drives dramatically improves the response times for very active data and reduces data hot spots that can occur within the LUN.

FAST Cache is an extended read/write cache that enables VMware View to deliver consistent performance at Flash-drive speeds by absorbing read-heavy activities such as boot storms and antivirus scans, and write-heavy workloads such as operating system patches and application updates. This extended read/write cache is an ideal caching mechanism for View Composer because the base desktop image and other active user data are so frequently accessed that the data is serviced directly from the Flash drives without accessing the slower drives at the lower storage tier.

VSI for VMware vSphere

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere client that provides a single management interface for managing EMC storage within the vSphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience that allows new features to be introduced rapidly in response to changing customer requirements.

The following VSI features were used during the validation testing:

• Storage Viewer (SV)—Extends the vSphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware ESXi hosts and virtual machines. SV presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

• Unified Storage Management—Simplifies storage administration of the EMC VNX platforms. It enables VMware administrators to provision new NFS and VMFS datastores, and RDM volumes seamlessly within vSphere client.

The EMC VSI for VMware vSphere product guides available on the EMC Online Support website provide more information.

vCenter Server storage layout

FS1 and FS2—Each of the 50 GB datastores stores a replica that is responsible for 500 linked clone desktops. The input/output to these LUNs is strictly read-only except during operations that require copying a new replica into the datastore.

FS3, FS4, FS5, and FS6—Each of these 500 GB datastores accommodates 250 virtual desktops. This allows each desktop to grow to a maximum average size of approximately 2 GB. The pool of desktops created in View Manager is balanced across these datastores.

VNX shared file systems

Virtual desktops use two VNX shared file systems, one for user profiles and the other to redirect user storage. Each file system is exported to the environment through a CIFS share.

Table 4 lists the file systems used for user profiles and redirected user storage.

Table 4. VNX5300—File systems

File system Use Size

profiles_fs Profile data of the users 2 TB

userdata1_fs User data 4 TB


Roaming profiles and folder redirection

Local user profiles are not recommended in a VDI environment. One reason for this is that a performance penalty is incurred when a new local profile is created when a user logs in to a new desktop image. Solutions such as roaming profiles and folder redirection enable user data to be stored centrally on a network location that resides on a CIFS share hosted by the EMC VNX array. This reduces the performance impact during user logon, while allowing user data to roam with the profiles.

EMC VNX for File Home Directory feature

The EMC VNX for File Home Directory feature uses the userdata1_fs file system to automatically map the H: drive of each virtual desktop to the user's own dedicated subfolder on the share. This ensures that each user has exclusive rights to a dedicated home drive share. This share is created by the File Home Directory feature and does not need to be created manually. The Home Directory feature automatically maps this share for each user.

The Documents folder for each user is also redirected to this share. This allows users to recover the data in the Documents folder by using the VNX Snapshots for File. The file system is set at an initial size of 1 TB, and extends itself automatically when more space is required.

Profile export

The profiles_fs file system is used to store user roaming profiles. It is exported through CIFS. The Universal Naming Convention (UNC) path to the export is configured in Active Directory for roaming profiles, as shown in Figure 7.

Figure 7. Active Directory–UNC path for roaming profiles

Capacity

The file systems leverage EMC Virtual Provisioning™ and compression to provide flexibility and increased storage efficiency. If single instancing and compression are enabled, unstructured data such as user documents typically leads to a 50 percent reduction in consumed storage.

The VNX file systems for user profiles and documents are configured as follows:

• profiles_fs is configured to consume 2 TB of space. With 50 percent space saving, each profile can grow up to 4 GB in size. The file system extends if more space is required.


• userdata1_fs is configured to consume 4 TB of space. With 50 percent space saving, each user is able to store 8 GB of data. The file system extends if more space is required.


5 Network Design

This chapter describes the network design used in this solution and contains the following sections:

• Considerations

• VNX for File network configuration

• Cisco Nexus 5020 configuration

• Cisco Catalyst 6509 configuration

Considerations

Network layout overview

Figure 8 shows the 10 Gb Ethernet connectivity between the two Cisco Nexus 5020 switches and the EMC VNX storage. The uplink Ethernet ports from the Nexus switches can be used to connect to a 10 Gb or 1 Gb external LAN. In this solution, a 1 Gb LAN through Cisco Catalyst 6509 switches is used to extend Ethernet connectivity to the desktop clients, VMware View components, and the Windows Server infrastructure.

Figure 8. VMware View–Network layout overview


Logical design considerations

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

The IP scheme for the virtual desktop network must be designed with enough IP addresses in one or more subnets for the DHCP server to assign them to each virtual desktop.

Link aggregation

VNX platforms provide network high availability or redundancy by using link aggregation. This is one of the methods used to address the problem of link or switch failure.

Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses.

In this solution, Link Aggregation Control Protocol (LACP) is configured on VNX, combining two 10 GbE ports into a single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.

VNX for File network configuration

EMC VNX5300 consists of two Data Movers. The Data Movers can be configured in an active/active or an active/passive configuration. In the active/passive configuration, the passive Data Mover serves as a failover device for the active Data Mover. In this solution, the Data Movers operate in the active/passive mode.

Data Mover ports

The VNX5300 Data Movers are configured with two 10-gigabit interfaces on a single I/O module. Link Aggregation Control Protocol (LACP) is used to configure ports fxg-1-0 and fxg-1-1 to support virtual machine traffic, home folder access, and external access for roaming profiles.

Figure 9 shows the back of two VNX5300 Data Movers that include two 10-gigabit fiber Ethernet (fxg) ports each in I/O expansion slot 1.

Figure 9. VNX5300–Ports of the two Data Movers


LACP configuration on the Data Mover

To configure the link aggregation that uses fxg-1-0 and fxg-1-1 on Data Mover 2, run the following command:

$ server_sysconfig server_2 -virtual -name <Device Name> -create trk -option "device=fxg-1-0,fxg-1-1 protocol=lacp"

To verify if the ports are channeled correctly, run the following command:

$ server_sysconfig server_2 -virtual -info lacp1
server_2:
*** Trunk lacp1: Link is Up ***
*** Trunk lacp1: Timeout is Short ***
*** Trunk lacp1: Statistical Load Balancing is IP ***
Device    Local Grp   Remote Grp   Link   LACP   Duplex   Speed
--------------------------------------------------------------
fxg-1-0   10000       4480         Up     Up     Full     10000 Mbs
fxg-1-1   10000       4480         Up     Up     Full     10000 Mbs

The remote group number must match for both ports, and the LACP status must be “Up.” Verify that the expected speed and duplex are established.

Data Mover interfaces

It is recommended to create two Data Mover interfaces with IP addresses on the same subnet as the VMkernel port on the ESXi servers. Half of the NFS datastores are accessed by using one IP address and the other half by using the second IP address. This allows the VMkernel traffic to be load balanced among the ESXi NIC teaming members. The following output shows an example of two IP addresses assigned to the same virtual interface named lacp1:

$ server_ifconfig server_2 -all
server_2:
lacp1-1 protocol=IP device=lacp1
        inet=192.168.16.2 netmask=255.255.255.0 broadcast=192.168.16.255
        UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:92
lacp1-2 protocol=IP device=lacp1
        inet=192.168.16.3 netmask=255.255.255.0 broadcast=192.168.16.255
        UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:93
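The interfaces shown in this output can be created with server_ifconfig. A minimal sketch, assuming the virtual device lacp1 and the subnet shown above:

$ server_ifconfig server_2 -create -Device lacp1 -name lacp1-1 -protocol IP 192.168.16.2 255.255.255.0 192.168.16.255
$ server_ifconfig server_2 -create -Device lacp1 -name lacp1-2 -protocol IP 192.168.16.3 255.255.255.0 192.168.16.255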

Enable jumbo frames on Data Mover interface

To enable jumbo frames for the link aggregation interface, run the following command to increase the MTU size:

$ server_ifconfig server_2 lacp1-1 mtu=9000

To verify if the MTU size is set correctly, run the following command:

$ server_ifconfig server_2 lacp1-1
server_2:
lacp1-1 protocol=IP device=lacp1
        inet=192.168.16.2 netmask=255.255.255.0 broadcast=192.168.16.255
        UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:92


ESXi network configuration

All network interfaces on the ESXi servers in this solution use 1 Gb Ethernet connections. All virtual desktops are assigned an IP address by using a DHCP server. The Intel-based servers use four onboard Broadcom Gb Ethernet Controllers for all the network connections. Figure 10 shows the vSwitch configuration in vCenter Server.

Figure 10. ESXi–vSwitch configuration

Virtual switches vSwitch0 and vSwitch1 use two physical network interface cards (NICs) each. Table 5 lists the configured port groups in vSwitch0 and vSwitch1.

Table 5. ESXi—Port groups in vSwitch0 and vSwitch1

Virtual switch

Configured port groups

Used for

vSwitch0 Service console VMkernel port used for ESXi host management

vSwitch0 VLAN277 Network connection for virtual desktops, LAN traffic

vSwitch1 NFS NFS datastore traffic

NIC teaming

The NIC teaming load balancing policy for the vSwitches needs to be set to Route based on IP hash, as shown in Figure 11.

Figure 11. ESXi—Load balancing policy
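The same policy can also be set from the ESXi 5.0 command line. A minimal sketch, assuming the vSwitch names used in this solution:

# esxcli network vswitch standard policy failover set -l iphash -v vSwitch0
# esxcli network vswitch standard policy failover set -l iphash -v vSwitch1

Note that Route based on IP hash requires the corresponding physical switch ports to be configured as a static port channel, which is how the Catalyst 6509 server uplinks are configured in this solution (channel-group mode on).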


Increase the number of vSwitch virtual ports

By default, a vSwitch is configured with 24 or 120 virtual ports (depending on the ESXi version), which may not be sufficient in a VDI environment. On the ESXi servers that host the virtual desktops, each virtual desktop consumes one port. Set the number of ports based on the number of virtual desktops that will run on each ESXi server, as shown in Figure 12.

Note: Reboot the ESXi server for the changes to take effect.

Figure 12. ESXi—vSwitch virtual ports

If an ESXi server fails or needs to be placed in maintenance mode, the other ESXi servers within the cluster must accommodate the additional virtual desktops that are migrated from the ESXi server that goes offline. Consider this worst-case scenario when determining the maximum number of virtual ports per vSwitch. If there are not enough virtual ports, the virtual desktops will not be able to obtain an IP address from the DHCP server.
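Depending on the ESXi 5.x build, the port count may also be adjustable through esxcli rather than the vSphere Client; treat the following as a sketch to validate against the specific build, since the -p option may not be present everywhere:

# esxcli network vswitch standard set -p 256 -v vSwitch0

As with the vSphere Client method, a reboot is required before the new port count takes effect.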

Enable jumbo frames for the VMkernel port used for NFS

For a VMkernel port to access the NFS datastores by using jumbo frames, the MTU size for the vSwitch to which the VMkernel port belongs and the VMkernel port itself must be set accordingly.

The MTU size is set from the properties page of both the vSwitch and the VMkernel port. Figure 13 and Figure 14 show how a vSwitch and a VMkernel port are configured to support jumbo frames.


Figure 13. ESXi–vSwitch MTU setting


Figure 14. ESXi–VMkernel port MTU setting

The MTU values of both the vSwitch and the VMkernel port must be set to 9,000 to enable jumbo frame support for NFS traffic between the ESXi hosts and the NFS datastores.
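As an alternative to the vSphere client, the MTU can be set with esxcli on ESXi 5.0. A minimal sketch, assuming the NFS VMkernel port is vmk1 (replace with the interface bound to the NFS port group):

# esxcli network vswitch standard set -m 9000 -v vSwitch1
# esxcli network ip interface set -m 9000 -i vmk1

The resulting MTU values can be checked with esxcli network ip interface list.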

Cisco Nexus 5020 configuration

Overview

The two forty-port Cisco Nexus 5020 switches provide redundant, high-performance, low-latency 10-gigabit Ethernet, delivered by a cut-through switching architecture for 10-gigabit Ethernet server access in next-generation data centers.

Cabling

In this solution, the VNX Data Mover cabling is spread across two Nexus 5020 switches to provide redundancy and load balancing of the network traffic.

Enable jumbo frames on Nexus switch

The following excerpt of the switch configuration shows the commands that are required to enable jumbo frames at the switch level, because per-interface MTU is not supported:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
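Because the Nexus 5020 applies the MTU through the system QoS policy rather than per interface, a reasonable way to confirm the setting is to check the queuing information on a Data Mover-facing port, where the reported MTU should be 9216:

n5k-1# show queuing interface ethernet 1/4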

vPC for Data Mover ports

Because the Data Mover connections for the two 10-gigabit network ports are spread across two Nexus switches and LACP is configured for the two Data Mover ports, virtual Port Channel (vPC) must be configured on both switches.

The following excerpt is an example of the switch configuration pertaining to the vPC setup for one of the Data Mover ports. The configuration on the peer Nexus switch is mirrored for the second Data Mover port:

n5k-1# show running-config
…
feature vpc
…
vpc domain 2
  peer-keepalive destination <peer-nexus-ip>
…
interface port-channel3
  description channel uplink to n5k-2
  switchport mode trunk
  vpc peer-link
  spanning-tree port type network

interface port-channel4
  switchport mode trunk
  vpc 4
  switchport trunk allowed vlan 275-277
…
interface Ethernet1/4
  description 1/4 vnx dm2 fxg-1-0
  switchport mode trunk
  switchport trunk allowed vlan 275-277
  channel-group 4 mode active

interface Ethernet1/5
  description 1/5 uplink to n5k-2 1/5
  switchport mode trunk
  channel-group 3 mode active

interface Ethernet1/6
  description 1/6 uplink to n5k-2 1/6
  switchport mode trunk
  channel-group 3 mode active

To verify that the vPC is configured correctly, run the following command on both switches. The output should look like this:

n5k-1# show vpc
Legend:
       (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                   : 2
Peer status                     : peer adjacency formed ok
vPC keep-alive status           : peer is alive
Configuration consistency status: success
vPC role                        : secondary
Number of vPCs configured       : 1
Peer Gateway                    : Disabled
Dual-active excluded VLANs      : -

vPC Peer-link status
------------------------------------------------------------------
id   Port   Status   Active vlans
--   ----   ------   ---------------------------------------------
1    Po3    up       1,275-277

vPC status
------------------------------------------------------------------
id   Port   Status   Consistency   Reason    Active vlans
--   ----   ------   -----------   -------   ------------
4    Po4    up       success       success   275-277

Cisco Catalyst 6509 configuration

Overview

The 9-slot Cisco Catalyst 6509-E switch provides high port densities that are ideal for many wiring closet, distribution, and core network deployments, as well as data center deployments.

Cabling

In this solution, the ESXi server cabling is evenly spread across two WS-x6748 1 Gb line cards to provide redundancy and load balancing of the network traffic.

Server uplinks

The server uplinks to the switch are configured in a port channel group to increase the utilization of server network resources and to provide redundancy. The vSwitches are configured to load balance the network traffic based on IP hash.

The following is an example of the configuration for one of the server ports:

description 8/10 9048-43 rtpsol189-1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 276,516-527
switchport mode trunk
mtu 9216
no ip address
spanning-tree portfast trunk
channel-group 23 mode on
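To confirm that the channel group has formed and that all member ports are bundled, the standard IOS EtherChannel verification commands can be used, for example:

show etherchannel 23 summary
show interfaces port-channel 23

In the summary output, the member ports should be flagged as bundled in the port channel.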


6 Installation and Configuration

This chapter describes how to install and configure this solution and includes the following sections:

• Installation overview

• VMware View components

• Storage components

Installation overview

This section provides an overview of the configuration of the following components:

• Desktop pools

• Storage pools

• FAST Cache

• VNX Home Directory

The installation and configuration steps for the following components are available on the VMware website:

• VMware View Connection Server 5.0

• VMware View Composer 2.7

• VMware ESXi 5.0

• VMware vSphere 5.0

The installation and configuration steps for the following components are not covered:

• Microsoft System Center Configuration Manager (SCCM) 2007 R3

• Microsoft Active Directory, DNS, and DHCP

• Microsoft SQL Server 2008 SP2


VMware View components

VMware View installation overview

The VMware View Installation document available on the VMware website has detailed procedures on how to install View Connection Server and View Composer 2.7. No special configuration instructions are required for this solution.

The vSphere Installation and Setup Guide available on the VMware website contains detailed procedures that describe how to install and configure vCenter Server and ESXi. As a result, these subjects are not covered in further detail in this paper. No special configuration instructions are required for this solution.

VMware View setup

Before deploying the desktop pools, ensure that the following steps from the VMware View Installation document have been completed:

• Prepare Active Directory

• Install View Composer 2.7 on the vCenter Server

• Install the View Connection Server

• Add the vCenter Server instance to View Manager

VMware View desktop pool configuration

VMware supports a maximum of 1,000 desktops per replica image, which requires creating a unique pool for every 1,000 desktops. In this solution, two persistent automated desktop pools were used.

To create one of the persistent automated desktop pools as configured for this solution, complete the following steps:

1. Log in to the VMware View Administration page, which is located at https://server/admin where “server” is the IP address or DNS name of the View Manager server.

2. Click the Pools link in the left pane.

3. Click Add under the Pools banner. The Add Pool page appears.

4. Under Pool Definition, click Type. The Type page appears on the right pane.

5. Select Automated Pool as shown in Figure 15.

Figure 15. VMware View–Select Automated Pool


6. Click Next. The User Assignment page appears.

7. Select Dedicated and ensure that Enable automatic assignment is selected.

8. Click Next. The vCenter Server page appears.

9. Select View Composer linked clones and select a vCenter Server that supports View Composer as shown in Figure 16.

Figure 16. VMware View–Select View Composer linked clones

10. Click Next. The Pool Identification page appears.

11. Enter the required information.

12. Click Next. The Pool Settings page appears.

13. Make the required changes.

14. Click Next. The View Composer Disks page appears.

15. Select Do not redirect Windows profile.

16. Click Next. The Provisioning Settings page appears.

17. Perform the following as shown in Figure 17:

a. Select Use a naming pattern.

b. In the Naming Pattern field, type the naming pattern.

c. In the Max number of desktops field, type the number of desktops to provision.


Figure 17. VMware View–Select Provision Settings

18. Click Next. The vCenter Settings page appears.

19. Perform the following as shown in Figure 18:

a. Click Browse to select a default image, a folder for the virtual machines, the cluster hosting the virtual desktops, and the resource pool to store the desktops.

Figure 18. VMware View - vCenter Settings

b. In the Datastores field, click Browse. The Select Datastores page appears.

20. Select Use different datastore for View Composer replica disks and in the Use For list box, select Replica disks or Linked clones as shown in Figure 19.


Figure 19. VMware View–Select Datastores

21. Click OK. The vCenter Settings page appears as shown in Figure 20.

Figure 20. VMware View–vCenter Settings

22. Verify the settings, and then click Next. The Guest Customization page appears.

23. Perform the following:

a. In the Domain list box, select the domain.

b. In the AD container field, click Browse, and then select the AD container.

c. Select Use QuickPrep as shown in Figure 21.


Figure 21. VMware View–Guest Customization

24. Click Next. The Ready to Complete page appears.

25. Verify the settings for the pool, and then click Finish to start the deployment of the virtual desktops.

Storage components

Storage pools

Storage pools in the EMC VNX OE support heterogeneous drive pools. In this solution, a RAID 5 storage pool was configured from 15 SAS drives. Fifteen 200 GB thick LUNs were created from this storage pool, as shown in Figure 22. FAST Cache was enabled for the pool.

Figure 22. VNX5300–Fifteen 200 GB Thick LUNs
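Unisphere was used to create these LUNs, but the VNX block CLI offers an equivalent path. The following is a sketch only; the pool name and LUN name are illustrative, and the exact naviseccli options should be confirmed against the CLI reference for the installed VNX OE for Block release:

naviseccli -h <SP_A_IP> lun -create -type NonThin -capacity 200 -sq gb -poolName "Storage Pool 2" -name vdi_lun_0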

NFS active threads per Data Mover

The default number of threads dedicated to serving NFS requests is 384 per Data Mover on the VNX. Some use cases, such as the scanning of desktops, might require a greater number of active NFS threads. It is recommended to increase the number of active NFS threads to the maximum of 2048 on each Data Mover. The nthreads parameter can be set by using the following command:

# server_param server_2 -facility nfs -modify nthreads -value 2048


Reboot the Data Mover for the change to take effect.

Type the following command to confirm the value of the parameter:

# server_param server_2 -facility nfs -info nthreads
server_2 :
name             = nthreads
facility_name    = nfs
default_value    = 384
current_value    = 2048
configured_value = 2048
user_action      = reboot DataMover
change_effective = reboot DataMover
range            = (32,2048)
description      = Number of threads dedicated to serve nfs requests
This param represents number of threads dedicated to serve nfs requests. Any changes made to this param will be applicable after reboot only

NFS performance fix

VNX file software contains a performance fix that significantly reduces NFS write latency. The minimum software patch required for the fix is 7.0.13.0. In addition to the patch upgrade, the performance fix only takes effect when the NFS file system is mounted by using the uncached option, as shown below:

# server_mount server_2 -option uncached fs1 /fs1
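The uncached option takes effect at mount time, so a file system that is already mounted must be unmounted and remounted. A minimal sketch for fs1:

# server_umount server_2 /fs1
# server_mount server_2 -option uncached fs1 /fs1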

The uncached option can be verified by using the following command:

# server_mount server_2
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
home_nl on /home_nl uxfs,perm,rw
profiles_nl on /profiles_nl uxfs,perm,rw
infrastructure on /infrastructure uxfs,perm,rw,uncached
fs1 on /fs1 uxfs,perm,rw,uncached
fs2 on /fs2 uxfs,perm,rw,uncached
fs3 on /fs3 uxfs,perm,rw,uncached
fs4 on /fs4 uxfs,perm,rw,uncached
fs5 on /fs5 uxfs,perm,rw,uncached
fs6 on /fs6 uxfs,perm,rw,uncached

Enable FAST Cache

FAST Cache is enabled as an array-wide feature in the system properties of the array in EMC Unisphere™. Click the FAST Cache tab, then click Create and select the Flash drives to create the FAST Cache. There are no user-configurable parameters for FAST Cache.


Figure 23. VNX5300–FAST Cache tab
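FAST Cache creation can also be scripted through the VNX block CLI. The following is a sketch under the assumption that the two Flash drives are 0_0_4 and 0_0_5 (bus_enclosure_disk notation); confirm the disk identifiers and option names against the CLI reference before use:

naviseccli -h <SP_A_IP> cache -fast -create -disks 0_0_4 0_0_5 -mode rw -rtype r_1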

To enable FAST Cache for any LUN in a pool, navigate to the Storage Pool Properties page in Unisphere, and then click the Advanced tab. Select Enabled to enable FAST Cache as shown in Figure 24.

Figure 24. VNX5300–Enable FAST Cache

VNX Home Directory feature

The VNX Home Directory installer is available on the NAS Tools and Applications CD for each VNX OE for File release, and can be downloaded from the EMC Online Support website.


After the VNX Home Directory feature is installed, use the Microsoft Management Console (MMC) snap-in to configure the feature. A sample configuration is shown in Figure 25 and Figure 26.

Figure 25. VNX5300–Home Directory MMC snap-in

For any user account that ends with a suffix between 1 and 1,000, the sample configuration shown in Figure 26 automatically creates a user home directory on the userdata1_fs file system, in the format \userdata1_fs\<domain>\<user>, and maps the H: drive to this path. Each user has exclusive rights to the folder.

Figure 26. VNX5300–Sample Home Directory user folder properties


7 Virtual Desktop Antivirus

This chapter provides an introduction to the new antivirus solutions for virtual desktop deployments. The Antivirus results section explains that performing a scheduled antivirus scan of the virtual desktop environment places a significant load on the infrastructure. Though it is possible to design a solution that can accommodate this load, antivirus vendors and virtualization vendors are looking for more intelligent ways to handle antivirus protection within virtual desktop environments.

During late 2010, McAfee released the first version of their dedicated virtual desktop antivirus solution, McAfee Management for Optimized Virtual Environments (MOVE).

The testing was performed with McAfee MOVE 2.0. MOVE works in tandem with dedicated servers running McAfee VirusScan. VirusScan 8.8 was used for this testing because it is required for MOVE 2.0.

This chapter explains the following topics:

• The infrastructure requirements of a McAfee MOVE solution including the recommended configurations

• Operation details of McAfee MOVE

• The architecture of the McAfee MOVE solution used in the test environment

• The impact of MOVE on the virtual desktop performance

McAfee MOVE Architecture and Sizing

MOVE Components

The McAfee MOVE Antivirus solution consists of multiple components, and each component plays a different role in the overall solution. The following are the roles:

• McAfee ePolicy Orchestrator Server (ePO) 4.6—Enables centralized management of the McAfee software products that comprise the MOVE solution. ePO can be installed on Windows Server version 2003 R2 SP2 or later. McAfee recommends using a dedicated server to manage more than 250 clients.

• McAfee MOVE Antivirus Offload Server—The MOVE Antivirus Offload Server manages the scanning of files from the virtual desktop environment. McAfee VirusScan 8.8 is installed on the MOVE server to perform actual virus scans. The number of MOVE servers required is dependent on the aggregate number of CPU cores present in the hypervisors that host the virtual desktops. The actual sizing requirements are included in this chapter. McAfee MOVE server requires Windows Server 2008 SP2 or Windows Server 2008 R2 SP1.


• McAfee MOVE Antivirus agent—The McAfee MOVE agent is preinstalled on the virtual desktop master image to enforce the antivirus scanning policies as configured within McAfee ePolicy Orchestrator. The agent communicates with the MOVE Antivirus Server to determine how a file is scanned based on the ePO policies. The McAfee MOVE Antivirus agent supports Windows XP SP3, Windows 7, and Windows Server versions 2003 R2 SP2 or later.

• McAfee VirusScan 8.8—VirusScan 8.8 is an antivirus software package used for traditional host-based virus scanning. It is installed on the McAfee MOVE Antivirus Offload Server and other servers in the VMware View test environment.

• McAfee ePolicy Orchestrator (ePO) agent—The McAfee ePO agent is used to manage a number of different McAfee products. In this solution, ePO is used to manage servers and desktops running either the McAfee MOVE Antivirus agent or McAfee VirusScan 8.8. The ePO agent communicates with the ePO server for management, reporting, and McAfee software deployment tasks. The McAfee ePO agent is preinstalled on the virtual desktop master image.

The benefit of the McAfee MOVE solution is that it offloads the scanning of files to a dedicated server, the MOVE Antivirus Offload Server. The MOVE Antivirus Offload Server maintains a cache of the files that have been scanned, which eliminates the need to scan a file again regardless of which virtual desktop client makes the scanning request. Traditional host-based antivirus solutions maintain a similar cache of scanned files for the individual host, but not across all hosts. Figure 27 provides an overview of how the different components of the McAfee MOVE solution interact with one another.


Figure 27. McAfee MOVE—Architecture
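The shared scan cache is the key architectural difference from host-based scanning. The following Python sketch is a conceptual illustration only, not McAfee's implementation; the scan_engine callable is a stand-in. It shows why a cache keyed on file content can serve every desktop after the first scan of a given file:

    import hashlib

    class SharedScanCache:
        """Toy model of an offload server's scan cache shared by all clients."""
        def __init__(self, scan_engine):
            self.scan_engine = scan_engine  # callable: bytes -> bool (True = clean)
            self.verdicts = {}              # content hash -> cached verdict

        def is_clean(self, file_bytes):
            digest = hashlib.sha256(file_bytes).hexdigest()
            if digest not in self.verdicts:
                # First request for this content, from any desktop: real scan
                self.verdicts[digest] = self.scan_engine(file_bytes)
            return self.verdicts[digest]    # later requests are cache hits

    # Two desktops requesting identical content trigger only one real scan.
    cache = SharedScanCache(scan_engine=lambda data: b"malware" not in data)
    cache.is_clean(b"report.docx contents")  # desktop A: scanned
    cache.is_clean(b"report.docx contents")  # desktop B: served from cache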

How MOVE Works

The virtual desktop client runs the McAfee MOVE client and the ePO agent. The ePO agent enables remote management of the MOVE client by the ePO server. The MOVE agent identifies files that need to be scanned and requests scanning from the MOVE Antivirus Offload Server.

The McAfee MOVE Antivirus Offload Server runs the MOVE Server software, VirusScan 8.8, and the ePO agent. The MOVE Antivirus Offload Server services the file scanning requests from the MOVE clients, determines if the file has been scanned before, and performs the virus scan operations, if required. The ePO agent is used for remote management of the VirusScan 8.8 antivirus platform.

The ePO server runs the ePolicy Orchestrator software, which is the management platform for the components that comprise the McAfee MOVE solution. The policies configured within ePO control the parameters on which MOVE operates, both in terms of product configuration and policies that govern the files that are scanned.

MOVE Sizing

One important task when McAfee MOVE is installed is to determine the number of MOVE Antivirus Offload Servers that are required. The number of servers required depends on the aggregate number of CPU cores, including hyper-threading, present in the hypervisors that host the virtual desktops.

McAfee recommends the following configuration for each MOVE Antivirus Offload Server:

• Windows Server 2008 SP2 or Windows Server 2008 R2 SP1

• 4 vCPUs

• 4 GB of RAM

McAfee recommends leveraging Microsoft network load balancing (NLB) services to distribute the scanning workload across the MOVE Antivirus Offload Servers. NLB enables the creation of a single virtual IP that is used in place of the dedicated IPs associated with the individual MOVE servers. This single IP distributes traffic to multiple McAfee MOVE servers depending on the NLB settings and the availability of each server. The process to configure Microsoft Windows NLB for Windows Server 2008 and later is described in the article Network Load Balancing—Deployment Guide available on the Microsoft TechNet website.

The McAfee MOVE Antivirus 2.0—Deployment Guide available on the McAfee website recommends one MOVE Antivirus Offload Server for every 40 vCPUs in the hypervisor cluster, including those created by enabling CPU hyper-threading. If MOVE Antivirus Offload Servers are installed on the same hypervisors that host the virtual desktops, 10 percent of the vCPUs within the hypervisor cluster must be allocated for their use. This means that the hypervisors that host the MOVE Antivirus Offload Servers can host fewer virtual desktops than might otherwise have been planned for. A minimum of two MOVE Antivirus Offload Servers is recommended at all times for redundancy, regardless of whether the sizing calculations call for them.
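This rule reduces to a simple calculation. The following Python sketch is an informal helper reflecting the guideline as tabulated in Table 6 (one server per 40 hyper-threaded vCPUs, rounded down, never fewer than two); it is not part of the McAfee tooling:

    def move_offload_servers_required(hypervisors, cores_per_hypervisor,
                                      hyperthreading=True):
        """One MOVE Antivirus Offload Server per 40 vCPUs, minimum of two."""
        vcpus = hypervisors * cores_per_hypervisor * (2 if hyperthreading else 1)
        return max(2, vcpus // 40)  # round down, matching the figures in Table 6

    # Reproduces the rows of Table 6 (8 cores per hypervisor): 2, 3, 4, 8, 14
    for hosts in (2, 8, 10, 20, 35):
        print(hosts, move_offload_servers_required(hosts, 8))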


Table 6 shows how the number of MOVE Antivirus Offload Servers required increases as the number of vCPUs in the hypervisor cluster increases.

Table 6. McAfee MOVE—Antivirus Offload Server sizing

Hypervisors per cluster | Cores per cluster | vCPUs per cluster (with hyper-threading) | vCPUs required for offload scan servers (10% of vCPUs) | McAfee MOVE Antivirus Offload Servers required

2 | 16 | 32 | 3.2 | 2
8 | 64 | 128 | 12.8 | 3
10 | 80 | 160 | 16 | 4
20 | 160 | 320 | 32 | 8
35 | 280 | 560 | 56 | 14

These figures should be applied on a per-hypervisor cluster basis. If more clusters are created, additional McAfee MOVE Antivirus Offload Servers should be deployed and dedicated to the new cluster.

McAfee MOVE Test Environment

Configuration Overview

McAfee MOVE was deployed in the test environment based on the sizing recommendations in the McAfee MOVE Antivirus 2.0—Deployment Guide available on the McAfee website: two MOVE Antivirus Offload Servers were deployed on desktop Cluster 1 (128 vCPUs) and four MOVE Antivirus Offload Servers were deployed on desktop Cluster 2 (168 vCPUs).

MOVE agent

The MOVE agent and ePO agent are installed on the master desktop image prior to the deployment of the virtual desktops. Both components can instead be installed after the virtual desktops are deployed; however, doing so should be weighed against the resulting growth of the linked-clone persistent disks.

After MOVE and ePO agents are installed on the virtual desktop master image, additional steps are required to prepare the image for deployment.

Before shutting down the virtual desktop master image in preparation for any deployment or redeployment, perform the following steps:

1. Stop the McAfee Framework service.

2. Delete the value of the AgentGUID registry entry in the location determined by the virtual desktop operating system:


a. 32-bit Windows operating systems: HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\ePolicy Orchestrator\Agent

b. 64-bit Windows operating systems: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Network Associates\ePolicy Orchestrator\Agent

3. Power off the workstation and deploy as necessary.

The next time the agent service starts, the virtual desktop generates a new AgentGUID value that is managed by the McAfee ePolicy Orchestrator.
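These preparation steps can be scripted. The following Python sketch is a minimal example that assumes the McAfee Framework service is named McAfeeFramework (verify the actual service name in your environment) and that the script runs elevated on the master image:

    import subprocess
    import winreg

    KEY_32 = r"SOFTWARE\Network Associates\ePolicy Orchestrator\Agent"
    KEY_64 = r"SOFTWARE\Wow6432Node\Network Associates\ePolicy Orchestrator\Agent"

    def prepare_master_image(service="McAfeeFramework"):  # assumed service name
        # Step 1: stop the McAfee Framework service.
        subprocess.run(["sc", "stop", service], check=False)
        # Step 2: delete the AgentGUID value from whichever key is present.
        for subkey in (KEY_64, KEY_32):
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey, 0,
                                    winreg.KEY_SET_VALUE) as key:
                    winreg.DeleteValue(key, "AgentGUID")
                    break
            except OSError:
                continue  # key or value not found; try the other location
        # Step 3: power off the workstation so it can be deployed.
        subprocess.run(["shutdown", "/s", "/t", "0"], check=False)

    if __name__ == "__main__":
        prepare_master_image()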

VMware DRS Rules

McAfee recommends disabling the VMware Distributed Resource Scheduler (DRS) for the virtual MOVE Antivirus Offload Server guests because scanning activities are interrupted if a DRS-initiated vMotion occurs. To accomplish this while keeping DRS enabled for the virtual desktops, a DRS rule was created for each MOVE Antivirus Offload Server that binds the server to a specific hypervisor. To create the DRS rules, create virtual machine and host DRS groups. Figure 28 shows the DRS groups as they appear in the DRS Groups Manager tab after they were created. To bind a specific virtual server to a specific hypervisor, an individual DRS group was created for each hypervisor and each virtual server. These rules and groups were created on a per-cluster basis, in this case virtual desktop Cluster 1.

Figure 28. McAfee MOVE—VMware DRS Virtual Machine and Host DRS Groups

After the DRS groups were configured, DRS rules were created to bind the MOVE Antivirus Offload Servers to a specific hypervisor. Figure 29 shows a completed DRS rule that binds VDI-MOVE-01, a MOVE Antivirus Offload Server, to hypervisor RTPSOL220. The Should run on hosts in group option was selected instead of the Must run on hosts in group option to ensure that VMware High Availability (HA) can power on the MOVE Antivirus Offload Server when an HA event involves the hypervisor hosting it. A DRS rule was created for each MOVE Antivirus Offload Server within the cluster.

Figure 29. McAfee MOVE—VMware DRS rule
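The same groups and rule can also be created programmatically. The following pyVmomi sketch is a hedged example: the lookup of the cluster, VM, and host managed objects is left to your own tooling, and the group and rule names are illustrative rather than the names used in the test environment.

    from pyVmomi import vim

    def bind_move_server_to_host(cluster, vm, host, suffix="01"):
        """Create a VM group, a host group, and a should-run affinity rule."""
        vm_group = vim.cluster.GroupSpec(
            operation="add",
            info=vim.cluster.VmGroup(name=f"MOVE-VM-{suffix}", vm=[vm]))
        host_group = vim.cluster.GroupSpec(
            operation="add",
            info=vim.cluster.HostGroup(name=f"MOVE-HOST-{suffix}", host=[host]))
        rule = vim.cluster.RuleSpec(
            operation="add",
            info=vim.cluster.VmHostRuleInfo(
                name=f"MOVE-RULE-{suffix}",
                enabled=True,
                mandatory=False,  # "should run": HA may restart the VM elsewhere
                vmGroupName=f"MOVE-VM-{suffix}",
                affineHostGroupName=f"MOVE-HOST-{suffix}"))
        spec = vim.cluster.ConfigSpecEx(groupSpec=[vm_group, host_group],
                                        rulesSpec=[rule])
        return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Setting mandatory to False mirrors the Should run on hosts in group choice described above.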

MOVE Antivirus Offload Servers

The MOVE Antivirus Offload Server software and VirusScan 8.8 were deployed on servers running Windows Server 2008 R2 SP1. The MOVE Antivirus Offload Servers were added to a Microsoft network load balancing (NLB) cluster based on the recommendations from McAfee. As described previously, virtual desktop Cluster 1 received two MOVE Antivirus Offload Servers and virtual desktop Cluster 2 received four MOVE Antivirus Offload Servers. An NLB cluster was created within each hypervisor cluster and the MOVE Antivirus Offload Servers were joined to it. Figure 30 shows the Network Load Balancing Manager interface for the MOVE Antivirus Offload Server NLB cluster for desktop Cluster 1. This cluster contains two member servers, VDI-MOVE-01 and VDI-MOVE-02, and Figure 30 also shows the virtual IP (172.16.0.20) of the NLB cluster that was used by the MOVE clients to access the MOVE Antivirus Offload Servers.


Figure 30. McAfee MOVE—Server NLB cluster and desktop cluster 1

ePO Configuration

McAfee ePolicy Orchestrator was used to provide a central point of management and reporting for the virtual desktops within the test environment. Figure 31 shows the System Tree, which provides a hierarchical view of the clients that are managed by the ePO server.

Figure 31. McAfee MOVE—ePO System Tree view

ePO clients are placed in groups within the system tree based on default placement rules and automated placement rules, or placed in groups manually by the ePO administrator. For testing, ePO was configured to place the virtual desktop computers in the appropriate group based on the organizational unit (OU) in which they reside within Active Directory. Figure 32 shows the Synchronization Settings for the ePO group Pool A.


Figure 32. McAfee MOVE—Synchronization Settings-ePO group pool A

ePO was configured to synchronize the ePO group with the computer accounts in the organizational unit Pool A located in the parent organizational unit desktops. The Pool A desktop computer accounts were placed in that organizational unit by VMware View when desktop Pool A was created. The virtual desktops were placed in different groups because they were located in different hypervisor clusters. Therefore, they used different MOVE Antivirus Offload Servers. Figure 33 shows the Assigned Policies tab for the group Pool A and the policies that are related to the MOVE Client that are assigned to the Pool A ePO group.

Figure 33. McAfee MOVE—ePO Assigned Policies – Pool A

ePO policies were used to control the configuration of McAfee products that support ePO, which includes the MOVE agent.


To configure the MOVE agent on the virtual desktops, the policy details were provided as shown in Figure 34 and Figure 35.

Figure 34. McAfee MOVE agent Policy–General settings

The IP address that appears in the Primary MOVE AV Server field on the policy General tab is the IP address of the MOVE Antivirus Offload Server NLB cluster (previously shown in Figure 30). An IP address must be used to identify the MOVE Antivirus Offload Server because the MOVE agent does not support DNS names.

Figure 35 shows the second part of the policy that was updated in the Scan Items tab.

Figure 35. McAfee MOVE agent policy–Scan Items


The articles KB article 1027713 and Anti-Virus Practices for VMware View—Technical Note available on the VMware website, and the McAfee MOVE Antivirus 2.0.0—Deployment Guide available on the McAfee website, provide information about files and processes that should be excluded from antivirus scanning. These recommendations are important because scanning these items can prevent various aspects of the virtual desktop, including the antivirus software itself, from functioning correctly. The recommendations were incorporated into the path and process exclusion settings in the McAfee MOVE agent policy. The following items are excluded from scanning:

• Processes

o Pcoip_server_win32.exe

o UserProfileManager.exe

o Winlogon.exe

o Wsnm.exe

o Wsnm_jms.exe

o Wssm.exe

• Paths

o McAfee\Common Framework

o Pagefile.sys

o %systemroot%\System32\Spool (replace %systemroot% with actual Windows directory)

o %systemroot%\SoftwareDistribution\Datastore (replace %systemroot% with actual Windows directory)

o %allusersprofile%\NTUser.pol

o %systemroot%\system32\GroupPolicy\registry.pol (replace %systemroot% with actual Windows directory)

After the policies are configured and associated with the appropriate system tree group, the clients start to report to the ePO server as shown in Figure 36.

Figure 36. McAfee MOVE—Pool A systems


The Managed State column shows whether a client is managed by ePO, and the Last Communication column shows the last time the client communicated with the ePO server.


8 Testing and Validation

This chapter provides a summary and characterization of the tests performed to validate the solution. The goal of the testing is to characterize the performance of the solution and its component subsystems during the following scenarios:

• Boot storm of all desktops

• McAfee antivirus full scan on all desktops

• Security patch install with Microsoft SCCM 2007 R3 on all desktops

• User workload testing using Login VSI on all desktops

• View recompose

• View refresh

Validated environment profile

Table 7 provides the validated environment profile.

Table 7. VMware View—Environment profile

Profile characteristic | Value

Number of virtual desktops | 1,000
Virtual desktop OS | Windows 7 Enterprise SP1 (32-bit)
CPU per virtual desktop | 1 vCPU
Number of virtual desktops per CPU core | Cluster A—7.81; Cluster B—5.95
RAM per virtual desktop | 1 GB
Average storage available for each virtual desktop | 2 GB (vmdk and vswap)
Average IOPS per virtual desktop at steady state | 9.8
Average peak IOPS per virtual desktop during boot storm | 40
Number of datastores used to store linked clones | 4
Number of datastores used to store replicas | 2
Number of virtual desktops per datastore | 250
Disk and RAID type for datastores | RAID 5, 300 GB, 15k rpm, 3.5-in SAS disks
Disk and RAID type for CIFS shares to host roaming user profiles and home directories | RAID 6, 2 TB, 7,200 rpm, 3.5-in NL-SAS disks
Number of VMware clusters | 2
Number of ESXi servers in each cluster | Cluster A—8; Cluster B—7
Number of virtual desktops per cluster | 500

Use cases

Six common use cases were executed to validate whether the solution performed as expected under heavy-load situations.

The following use cases were tested:

• Simultaneous boot of all desktops

• Full antivirus scan of all desktops

• Installation of a security update using SCCM 2007 R3 on all desktops

• Login and steady-state user load simulated using the Login VSI medium workload on all desktops

• Recompose of all desktops

• Refresh of all desktops

In each use case, a number of key metrics are presented showing the overall performance of the solution.

Login VSI

To run a user load on the desktops, Login VSI (Virtual Session Index) version 3.0 was used. Login VSI provides guidance for gauging the maximum number of users a desktop environment can support. Login VSI workloads are categorized as light, medium, heavy, multimedia, core, and random (also known as workload mashup). The medium workload selected for this testing had the following characteristics:

• The workload emulated a medium knowledge worker who used Microsoft Office Suite, Internet Explorer, Java, and Adobe Acrobat Reader.

• After a session started, the medium workload repeated every 12 minutes.

• The response time was measured every 2 minutes during each loop.

• The medium workload opened up to five applications simultaneously.

• The typing rate was 160 ms per character.

• Approximately 2 minutes of idle time was included to simulate real-world users.


Each loop of the medium workload used the following applications:

• Microsoft Outlook 2007—Browsed 10 email messages.

• Microsoft Internet Explorer—One instance of Internet Explorer (IE) opened the BBC.co.uk website, another browsed Wired.com and Lonelyplanet.com, another opened a Flash-based 480p video file, and another opened a Java-based application.

• Microsoft Word 2007—One instance of Microsoft Word 2007 was used to measure the response time, while another instance was used to edit a document.

• Bullzip PDF Printer and Adobe Acrobat Reader—The Word document was printed and the PDF was reviewed.

• Microsoft Excel 2007—A very large Excel worksheet was opened and random operations were performed.

• Microsoft PowerPoint 2007—A presentation was reviewed and edited.

• 7-zip—Using the command line version, the output of the session was zipped.

Login VSI launcher

A Login VSI launcher is a Windows system that launches desktop sessions on target virtual desktops. There are two types of launchers—master and slave. There is only one master in a given test bed, but there can be several slave launchers as required.

The number of desktop sessions a launcher can run is typically limited by CPU or memory resources. By default, the graphics device interface (GDI) limit is not tuned. In this case, Login Consultants recommends using a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM. When the GDI limit is tuned, this limit extends to 60 sessions per two-core machine.

In this validated testing, 1,000 desktop sessions were launched from 32 launchers, approximately 32 sessions per launcher. Each launcher was allocated two vCPUs and 4 GB of RAM. No bottlenecks were observed on the launchers during the Login VSI tests.
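As a quick check on this sizing, the launcher count follows directly from the per-launcher session limit. A small Python sketch (the 45- and 60-session limits come from the guidance above; the 32-session spread is what this test used):

    import math

    def launchers_needed(total_sessions, sessions_per_launcher):
        """Number of launchers required for a given per-launcher session limit."""
        return math.ceil(total_sessions / sessions_per_launcher)

    print(launchers_needed(1000, 45))  # 23 at the default (untuned GDI) limit
    print(launchers_needed(1000, 32))  # 32, the more conservative spread used here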

FAST Cache configuration

For all tests, FAST Cache was enabled for the storage pools holding the replica and linked clone datastores as well as the user home and roaming profile directories.

Boot storm results

This test was conducted by selecting all the desktops in vCenter Server, and then selecting Power On. Overlays are added to the graphs to show when the last power-on task completed and when the IOPS to the pool LUNs achieved a steady state.

For the boot storm test, all 1,000 desktops were powered on within 3 minutes and achieved a steady state approximately 4 minutes later. All desktops were available for logon in approximately 6 minutes. This section describes the results observed while powering on the desktop pools.
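The test triggered the power-on through the vCenter UI; an equivalent storm can be reproduced through the vSphere API. The following pyVmomi sketch is a hedged illustration, where get_desktop_vms() is a placeholder for your own inventory lookup:

    def boot_storm(desktop_vms):
        """Issue power-on tasks back to back to simulate a boot storm."""
        tasks = []
        for vm in desktop_vms:
            if vm.runtime.powerState != "poweredOn":
                tasks.append(vm.PowerOnVM_Task())  # vim.VirtualMachine method
        return tasks  # wait on these tasks to time the storm

    # tasks = boot_storm(get_desktop_vms())  # placeholder inventory helper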


Figure 37 shows the disk IOPS for a single SAS drive in the storage pool. Because the statistics from all the drives in the pool were similar, a single drive is reported for clarity and readability of the graph.

Figure 37. Boot storm—Disk IOPS for a single SAS drive

During peak load, the disk serviced a maximum of 308 IOPS. While that number of IOPS is higher than the optimal workload for SAS drives, it did not impact the performance of the boot storm test and the IOPS levels dropped quickly as the desktops finished booting. The Data Mover cache and FAST Cache both helped to reduce the disk load associated with the boot storm.

Figure 38 shows the replica LUN IOPS and the response time of one of the storage pool LUNs. Because the statistics from each LUN were similar, a single LUN is reported for clarity and readability of the graph.


Figure 38. Boot storm—Replica LUN IOPS and response time

During peak load, the LUN response time did not exceed 1 ms and the datastore serviced nearly 4,700 IOPS.

Figure 39 shows the linked clone LUN IOPS and the response time of one of the storage pool LUNs. Because the statistics from each LUN were similar, a single LUN is reported for clarity and readability of the graph.

Figure 39. Boot storm—Linked clone LUN IOPS and response time

During peak load, the LUN response time did not exceed 6 ms and the datastore serviced 1,896 IOPS.


Figure 40 shows the total IOPS serviced by the storage processors during the test.

Figure 40. Boot storm—Storage processor total IOPS

During peak load, the storage processors serviced approximately 40,000 IOPS.

Figure 41 shows the storage processor utilization during the test. The pool-based LUNs were split across both the storage processors to balance the load equally.

Figure 41. Boot storm—Storage processor utilization

The virtual desktops generated high levels of I/O during the peak load of the boot storm test. The storage processor utilization remained below 46 percent.

Figure 42 shows the IOPS serviced from FAST Cache during the boot storm test.


Figure 42. Boot storm—FAST Cache IOPS

At peak load, FAST Cache serviced almost 36,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced roughly 14,598 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it takes roughly 81 SAS drives to achieve the same level of performance. However, EMC does not recommend using an 81:2 ratio for SAS to SSD replacement. EMC's recommended ratio is 20:1 because workloads may vary.
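The drive-equivalence estimates quoted throughout this chapter follow from one division. A one-line Python sketch of the arithmetic, using EMC's 180 IOPS planning estimate per 15k rpm SAS drive (the chapter's prose rounds the results slightly differently in places):

    def sas_drive_equivalent(flash_iops, iops_per_sas_drive=180):
        """SAS drives needed to match a given Flash-drive IOPS level."""
        return round(flash_iops / iops_per_sas_drive)

    print(sas_drive_equivalent(14598))  # 81 drives, the boot storm estimate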

Figure 43 shows the Data Mover CPU utilization during the boot storm test.

Figure 43. Boot storm—Data Mover CPU utilization

The Data Mover briefly achieved a CPU utilization of approximately 60 percent during peak load in this test.


Figure 44 shows the NFS operations per second on the Data Mover during the boot storm test.

Figure 44. Boot storm—Data Mover NFS load

At peak load, there were approximately 80,000 total NFS operations per second.

Figure 45 shows the CPU load from the ESXi servers in the VMware clusters. As each server with the same CPU type had similar results, only the results from one hex-core and one quad-core server are shown in the graph.

Figure 45. Boot storm—ESXi CPU load

The quad-core ESXi server briefly achieved a total CPU utilization of approximately 55 percent during peak load in this test; the hex-core server reached 33 percent. It is important to note that hyper-threading was enabled to double the number of logical CPUs.

Figure 46 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated to the storage array. For each server CPU type, the average of both datastores hosting the replica storage is shown as Replica LUN - GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN - GAVG in the graph.

Figure 46. Boot storm—Average Guest Millisecond/Command counter

The datastores hosting the linked clone data reached a brief maximum GAVG of 170 ms, and the datastores hosting the replica images reached a brief maximum GAVG of 50 ms. The overall impact of this brief spike in GAVG values was minimal because all 1,000 desktops attained steady state in less than 8 minutes after the initial power-on.

Antivirus results

This test was conducted by scheduling a full scan of all desktops, using a custom script to initiate an on-demand scan with McAfee VirusScan 8.7i. The full scans were started on all the desktops; the difference between the start time and the finish time was approximately 2 hours and 25 minutes.

Figure 47 shows the disk I/O for a single SAS drive in the storage pool that stores the virtual desktops. Because the statistics from all drives in the pool were similar, only a single drive is reported for clarity and readability of the graph.


Figure 47. Antivirus—Disk I/O for a single SAS drive

The peak IOPS serviced by the individual drives was nearly 250 IOPS and the disk response time was within 7 ms. FAST Cache and the Data Mover cache helped to reduce the load on the disks.

Figure 48 shows the replica LUN IOPS and the response time of one of the storage pool LUNs. Because the statistics from the LUNs were similar, a single LUN is reported for clarity and readability of the graph.

Figure 48. Antivirus—Replica LUN IOPS and response time


During peak load, the LUN response time remained within 3 ms and the datastore serviced over 3,600 IOPS. The majority of the read I/O was served by the FAST Cache and Data Mover cache.

Figure 49 shows the linked clone LUN IOPS and the response time of one of the storage pool LUNs. Because the statistics from the LUNs were similar, only a single LUN is reported for clarity and readability of the graph.

Figure 49. Antivirus—Linked clone LUN IOPS and response time

During peak load, the LUN response time remained within 6 ms and the datastore serviced nearly 380 IOPS.

Figure 50 shows the total IOPS serviced by the storage processor during the test. During peak load, the storage processors serviced over 21,000 IOPS.

Figure 50. Antivirus—Storage processor IOPS


Figure 51 shows the storage processor utilization during the antivirus scan test.

Figure 51. Antivirus—Storage processor utilization

During peak load, the antivirus scan operations caused moderate CPU utilization. The load was shared between both storage processors during the antivirus scan. EMC VNX5300 had sufficient scalability headroom for this workload.

Figure 52 shows the IOPS serviced from FAST Cache during the test.

Figure 52. Antivirus—FAST Cache IOPS

At peak load, FAST Cache serviced nearly 18,500 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced almost all of the 13,460 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it takes roughly 75 SAS drives to achieve the same level of performance. However, EMC does not recommend using a 75:2 ratio for SAS to SSD replacement. EMC's recommended ratio is 20:1 because workloads may vary.

Figure 53 shows the Data Mover CPU utilization during the antivirus scan test.

Figure 53. Antivirus—Data Mover CPU utilization

The Data Mover briefly achieved a CPU utilization of approximately 73 percent during peak load in this test.

Figure 54 shows the NFS operations per second from the Data Mover during the antivirus scan test.

Figure 54. Antivirus—Data Mover NFS load


At peak load there were approximately 62,000 NFS operations per second.

Figure 55 shows the CPU load from the ESXi servers in the VMware clusters. As each server with the same CPU type had similar results, only the results from one hex-core and one quad-core server are shown in the graph.

Figure 55. Antivirus—ESXi CPU load

The peak CPU load on the ESXi server was 40 percent during this test. It is important to note that hyper-threading was enabled to double the number of logical CPUs.

Figure 56 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated on the storage array. For each server CPU type, the average of both datastores hosting the replica storage is shown as Replica LUN - GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN - GAVG in the graph.


Figure 56. Antivirus—Average Guest Millisecond/Command counter

The peak replica LUN GAVG value never exceeded 46 ms, whereas the peak linked clone LUN GAVG was 30 ms. The FAST Cache serviced an enormous number of read operations during this test.

Patch install results

This test was performed by pushing a security update to all desktops using Microsoft System Center Configuration Manager (SCCM) 2007 R3. The desktops were divided into five collections of 200 desktops each. The collections were configured to install updates on a staggered schedule, 1 minute apart, starting 30 minutes after the patch was downloaded. All patches were installed within 6 minutes.
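The staggered schedule is easy to express in code. A small Python sketch of the timing described above (the download time shown is an arbitrary example value):

    from datetime import datetime, timedelta

    def install_schedule(download_time, collections=5, stagger_minutes=1,
                         delay_minutes=30):
        """Install start times: 30 minutes after download, 1 minute apart."""
        first = download_time + timedelta(minutes=delay_minutes)
        return [first + timedelta(minutes=i * stagger_minutes)
                for i in range(collections)]

    for start in install_schedule(datetime(2011, 11, 1, 9, 0)):
        print(start.strftime("%H:%M"))  # 09:30, 09:31, 09:32, 09:33, 09:34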

Figure 57 shows the disk IOPS for a single SAS drive that is part of the storage pool. Because the statistics from each drive in the pool were similar, the statistics of a single drive are shown for clarity and readability of the graph.


Figure 57. Patch install—Disk IOPS for a single SAS drive

The drives were not saturated during the patch download phase. During the patch installation phase, the disk serviced approximately 200 IOPS at peak load, and a response time spike of 50 ms was recorded within the 6-minute installation interval.

Figure 58 shows the replica LUN IOPS and response time of one of the storage pool LUNs. Because the statistics from each LUN in the pool were similar, the statistics of a single LUN are shown for clarity and readability of the graph.

Figure 58. Patch install—Replica LUN IOPS and response time

During patch installation, the peak LUN response time was approximately 9 ms.


Figure 59 shows the linked clone LUN IOPS and response time of one of the storage pool LUNs. Because the statistics from each LUN in the pool were similar, the statistics of a single LUN are shown for clarity and readability of the graph.

Figure 59. Patch install—Linked clone LUN IOPS and response time

During peak load, the LUN response time was below 3.5 ms and the datastore serviced approximately 1,450 IOPS.

Figure 60 shows the total IOPS serviced by the storage processor during the test.

Figure 60. Patch install—Storage processor IOPS

During peak load, the storage processors serviced approximately 12,000 IOPS. The load was shared between both storage processors during the patch install operation on each collection of virtual desktops.


Figure 61 shows the storage processor utilization during the test.

Figure 61. Patch install—Storage processor utilization

The patch install operations caused moderate CPU utilization during peak load. The EMC VNX5300 had sufficient scalability headroom for this workload.

Figure 62 shows the IOPS serviced from FAST Cache during the test.

Figure 62. Patch install—FAST Cache IOPS

During patch installation, FAST Cache serviced over 6,000 IOPS from datastores. The FAST Cache hits include IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced over 4,400 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it takes roughly 25 SAS drives to achieve the same level of performance.


Figure 63 shows the Data Mover CPU utilization during the patch install test.

Figure 63. Patch install—Data Mover CPU utilization

The Data Mover briefly achieved a CPU utilization of approximately 28 percent during peak load in this test.

Figure 64 shows the NFS operations per second from the Data Mover during the patch install test.

Figure 64. Patch install—Data Mover NFS load

At peak load, the Data Mover serviced over 17,900 NFS operations per second.

Figure 65 shows the CPU load from the ESXi servers in the VMware clusters. As each server with the same CPU type had similar results, only the results from one hex-core and one quad-core server are shown.


Figure 65. Patch install—ESXi CPU load

The ESXi server CPU load was well within the acceptable limits during the test. It is important to note that hyper-threading was enabled to double the number of logical CPUs.

Figure 66 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated on the storage array. For each server CPU type, the average of both datastores hosting the replica storage is shown as Replica LUN - GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN - GAVG in the graph.

Figure 66. Patch install—Average Guest Millisecond/Command counter

The peak replica LUN GAVG value was briefly 145 ms, while the peak linked clone LUN GAVG was approximately 165 ms. FAST Cache serviced an enormous number of I/O operations during this test.


Login VSI results

This test was conducted by scheduling 1,000 users to connect over Remote Desktop Connection in an approximately 60-minute window and starting the Login VSI medium workload. The workload was run for one hour in a steady state to observe the load on the system.

Figure 67 shows the disk IOPS for a single SAS drive that is part of the storage pool. Because the statistics from each drive in the pool were similar, the statistics of a single drive are shown for clarity and readability of the graph.

Figure 67. Login VSI—Disk IOPS for a single SAS drive

During peak load, the SAS disk serviced over 65 IOPS and the disk response time was less than 7 ms.

Figure 68 shows the Replica LUN IOPS and response time from one of the storage pool LUNs. Because the statistics from each LUN were similar, only a single LUN is reported for clarity and readability of the graph.


Figure 68. Login VSI—Replica LUN IOPS and response time

During peak load, the LUN response time reached a brief maximum of approximately 3.25 ms and the LUN serviced 552 IOPS.

Figure 69 shows the linked clone LUN IOPS and response time from one of the storage pool LUNs. Because the statistics from each LUN were similar, only a single LUN is reported for clarity and readability of the graph.

Figure 69. Login VSI—Linked clone LUN IOPS and response time

During peak load, the LUN response time remained under 2.25 ms and the datastore serviced nearly 621 IOPS.

Figure 70 shows the total IOPS serviced by the storage processor during the test.


Figure 70. Login VSI—Storage processor IOPS

During peak load, the storage processors serviced a maximum of approximately 12,600 IOPS.

Figure 71 shows the storage processor utilization during the test.

Figure 71. Login VSI—Storage processor utilization

The storage processor peak utilization was below 27 percent during the logon storm. The load was shared between both storage processors during the VSI load test.

Figure 72 shows the IOPS serviced from FAST Cache during the test.


Figure 72. Login VSI—FAST Cache IOPS

At peak load, FAST Cache serviced over 10,000 IOPS from the datastores. The FAST Cache hits included IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced nearly 8,800 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take about 49 SAS drives to achieve the same level of performance. However, EMC does not recommend using a 49:2 ratio for SAS to SSD replacement. EMC's recommended ratio is 20:1 because workloads may vary.

Figure 73 shows the Data Mover CPU utilization during the Login VSI test. The Data Mover briefly achieved a CPU utilization of approximately 32 percent during peak load in this test.

Figure 73. Login VSI—Data Mover CPU utilization


Figure 74 shows the NFS operations per second from the Data Mover during the Login VSI test. At peak load there were over 11,000 NFS operations per second.

Figure 74. Login VSI—Data Mover NFS load

Figure 75 shows the CPU load from the ESXi servers in the VMware clusters. As each server with the same CPU type had similar results, only the results from one hex-core and one quad-core server are shown in the graph.

Figure 75. Login VSI—ESXi CPU load

The CPU load on the ESXi server was less than 50 percent utilization during peak load. It is important to note that hyper-threading was enabled to double the number of logical CPUs.


Figure 76 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated to the storage array. For each server CPU type, the average of both datastores hosting the replica storage is shown as Replica LUN - GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN - GAVG in the graph.

Figure 76. Login VSI—Average Guest Millisecond/Command counter

The peak replica LUN GAVG value never exceeded 8 ms, whereas the peak GAVG of the linked clone LUNs was less than 6.5 ms. The FAST Cache serviced an enormous number of read operations during this test.

Recompose results

This test was conducted by performing a VMware View desktop recompose operation of both desktop pools. A new virtual machine snapshot was taken of the master virtual desktop image to serve as the target for the recompose operation. Overlays are added to the graphs to show when the last power-on task completed and when the IOPS to the pool LUNs achieved a steady state.

A recompose operation deletes the existing virtual desktops and creates new ones. To enhance the readability of the graphs and to show the array behavior during high I/O periods, only those tasks involved in creating new desktops were performed and shown in the graphs. Both desktop recompose operations were initiated simultaneously and took approximately 180 minutes to complete the entire process.

Figure 77 shows the disk IOPS for a single SAS drive that is part of the storage pool. Because the statistics from each drive in the pool were similar, the statistics of a single drive are shown for clarity and readability of the graph.


Figure 77. Recompose—Disk IOPS for a single SAS drive

During peak load, the SAS disk serviced a brief peak of 224 IOPS and the disk response time was within 7.25 ms.

Figure 78 shows the replica LUN IOPS and response time from one of the storage pool LUNs. Because the statistics from each LUN were similar, only a single LUN is reported for clarity and readability of the graph.

Figure 78. Recompose—Replica LUN IOPS and response time

Copying the new replica images caused heavy sequential-write workloads on the LUN during the initial 20-minute interval. At peak load, the LUN serviced approximately 600 IOPS while the peak response time was less than 2.6 ms.

Figure 79 shows the linked clone LUN IOPS and response time from one of the storage pool LUNs. Because the statistics from each LUN were similar, only a single LUN is reported for clarity and readability of the graph.


Figure 79. Recompose—Linked clone LUN IOPS and response time

During peak load, the LUN serviced over 780 IOPS while the peak response time was 3.75 ms.

Figure 80 shows the total IOPS serviced by the storage processor during the test.

Figure 80. Recompose—Storage processor IOPS

During peak load, the storage processors serviced over 14,900 IOPS.

Figure 81 shows the storage processor utilization during the test.


Figure 81. Recompose—Storage processor utilization

The storage processor utilization peaked at 28 percent during the recompose test. The load was shared between both storage processors during the peak load.

Figure 82 shows the IOPS serviced from FAST Cache during the test.

Figure 82. Recompose—FAST Cache IOPS

At peak load, FAST Cache serviced approximately 8,500 IOPS from the datastores. The FAST Cache hits included IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced nearly 1,160 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it takes about 7 SAS drives to achieve the same level of performance.


Figure 83 shows the Data Mover CPU utilization during the recompose test.

Figure 83. Recompose—Data Mover CPU utilization

The Data Mover briefly achieved a CPU utilization of approximately 42 percent during peak load in this test.

Figure 84 shows the NFS operations per second from the Data Mover during the recompose test.

Figure 84. Recompose—Data Mover NFS load

At peak load there were approximately 31,800 NFS operations per second.

Figure 85 shows the CPU load from the ESXi servers in the VMware clusters. As each server with the same CPU type had similar results, only the results from one hex-core and one quad-core server are shown in the graph.


Figure 85. Recompose—ESXi CPU load

The CPU load of the quad-core ESXi server reached a peak of 35 percent and the hex-core server reached a peak of 28 percent. It is important to note that hyper-threading was enabled to double the number of logical CPUs.

Figure 86 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated to the storage array. For each server CPU type, the average of both datastores hosting the replica storage is shown as Replica LUN - GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN - GAVG in the graph.

Figure 86. Recompose—Average Guest Millisecond/Command counter

The quad-core server replica LUN GAVG value reached a maximum of 3.5 ms and the linked clone LUN GAVG reached a maximum of 10.5 ms. The hex-core server experienced similar results.

Refresh results

This test was conducted by initiating a refresh operation for all desktops in both pools from the View Manager administration console. The refresh operations for both pools were scheduled within the console to start at the same time. No users were logged in during the test. Overlays are added to the graphs to show when the last power-on task completed and when the IOPS to the pool LUNs reached a steady state.

Figure 87 shows the disk IOPS for a single SAS drive that is part of the storage pool. Since the statistics from each drive in the pool were similar, the statistics of a single drive are shown for clarity and readability of the graph.

Figure 87. Refresh—Disk IOPS for a single SAS drive

During peak load, the SAS disk briefly serviced 240 IOPS and the disk response time approached 9 ms.

Figure 88 shows the Replica LUN IOPS and response time from one of the storage pool LUNs. Because the statistics from each LUN were similar, only a single LUN is reported for clarity and readability of the graph.

Figure 88. Refresh—Replica LUN IOPS and response time

During peak load, the LUN response time was approximately 2 ms and the datastore serviced 1,292 IOPS.

Figure 89 shows the linked clone LUN IOPS and response time from one of the storage pool LUNs. Because the statistics from each LUN were similar, only a single LUN is reported for clarity and readability of the graph.

Figure 89. Refresh—Linked clone LUN IOPS and response time

During peak load, the LUN response time remained under 2.25 ms and the datastore serviced over 990 IOPS.

Figure 90 shows the total IOPS serviced by the storage processor during the test.

Figure 90. Refresh—Storage processor IOPS

During peak load, the storage processors serviced over 17,000 IOPS.

Figure 91 shows the storage processor utilization during the test.

Figure 91. Refresh—Storage processor utilization

The storage processor utilization peaked below 34 percent during the refresh test, and the load was shared between both storage processors throughout the test.

Figure 92 shows the IOPS serviced from FAST Cache during the test.

Figure 92. Refresh—FAST Cache IOPS

At peak load, FAST Cache serviced over 10,500 IOPS from the datastores. The FAST Cache hits included IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced nearly 1,300 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it takes roughly 8 SAS drives to achieve the same level of performance.

Figure 93 shows the Data Mover CPU utilization during the refresh test. The Data Mover CPU utilization briefly peaked at approximately 38 percent during the test.

Figure 93. Refresh—Data Mover CPU utilization

Figure 94 shows the NFS operations per second from the Data Mover during the refresh test.

Figure 94. Refresh—Data Mover NFS load

At peak load there were approximately 29,600 NFS operations per second.

Figure 95 shows the CPU load from the ESXi servers in the VMware clusters. Because servers with the same CPU type produced similar results, only the results from one hex-core server and one quad-core server are shown in the graph.

Figure 95. Refresh—ESXi CPU load

The peak ESXi CPU load was 36 percent for the quad-core server and 21 percent for the hex-core server. Note that hyper-threading was enabled, which doubled the number of logical CPUs.

Figure 96 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated to the storage array. For each server CPU type, the average of both datastores hosting the replica storage is shown as Replica LUN - GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN - GAVG in the graph.

Figure 96. Refresh—Average Guest Millisecond/Command counter

The peak GAVG value was below 5 ms for the replica LUNs and below 10 ms for the linked clone LUNs.

FAST Cache benefits

To illustrate the benefits of enabling FAST Cache in a desktop virtualization environment, a test was conducted to compare the performance of the storage array with and without FAST Cache. The non-FAST Cache configuration consisted of 30 SAS drives in a storage pool. The FAST Cache configuration consisted of 15 SAS drives backed by a FAST Cache of two Flash drives; the two Flash drives displaced 15 SAS drives from the non-FAST Cache configuration, a 15:2 drive-savings ratio. Figure 97, Figure 98, and Figure 99 show how the FAST Cache benefits are realized in each use case.
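
A quick sketch of the drive savings implied by these two configurations (derived from the drive counts above, not additional test data; the variable names are ours):

    non_fast_cache_drives = 30      # 30 SAS drives
    fast_cache_drives = 15 + 2      # 15 SAS drives plus 2 Flash drives

    saved = non_fast_cache_drives - fast_cache_drives
    percent_saved = 100 * saved / non_fast_cache_drives
    print(saved, round(percent_saved))  # 13 fewer drives, roughly 43 percent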

Figure 97 shows that with FAST Cache, the peak host response time during the boot storm was reduced by 89 percent compared to the non-FAST Cache configuration. In addition, the virtual desktops in the FAST Cache configuration reached a steady state in 42 percent less time.

Figure 97. FAST Cache boot storm—Average latency comparison

Figure 98 shows that the antivirus scan completed in 143 minutes with FAST Cache enabled as compared to 300 minutes without FAST Cache. With FAST Cache enabled, the overall scan time was reduced by approximately 50 percent, and the peak response time was reduced by 16 percent.

Figure 98. FAST Cache antivirus scan—Scan time comparison
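
The roughly 50 percent figure follows directly from the two scan times (a simple arithmetic check using only the values quoted above; the helper name is ours):

    def percent_reduction(before, after):
        """Percentage reduction relative to the baseline value."""
        return 100 * (before - after) / before

    # 300 minutes without FAST Cache, 143 minutes with FAST Cache enabled
    print(round(percent_reduction(300, 143)))  # about 52 percent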

Figure 99 shows that with FAST Cache, the peak host response time during an SCCM-initiated patch storm was reduced by 77 percent compared to the non-FAST Cache configuration.

Figure 99. FAST Cache patch storm—Average latency comparison

McAfee MOVE results

The McAfee MOVE solution was tested by deploying 1,000 desktops with the MOVE agent installed on the master image. After the desktops were deployed, their status appeared as “managed” in the ePO console. The Login VSI tool was used to simulate a user logon storm and a steady-state workload. The test configuration was identical to the configuration used to generate the Login VSI results (refer to the Login VSI results section). The virtual desktops were logged in sequentially over one hour, and the Login VSI workload was executed for one hour after the last desktop logged in, achieving a steady-state user load. The results of this test were compared with the results of the previous Login VSI tests that were run without McAfee MOVE on the virtual desktops.

Figure 100 compares the total number of IOPS serviced by both storage processors during the two Login VSI tests.

Figure 100. McAfee MOVE—Storage processor IOPS comparison

There was no significant difference between the storage processor IOPS observed during the two Login VSI tests. A small increase in IOPS occurred during the logon storm phase because the MOVE Antivirus Offload Server scanned many files for the first time. As the logon storm completed, the offload server cached the scan results of these files, so they did not need to be rescanned on the desktops. This is evident in the steady-state phase, where the observed IOPS varied by less than 2 percent between the tests.

Figure 101 displays the combined utilization of both storage processors observed during each of the Login VSI tests.

Figure 101. McAfee MOVE—Storage processor utilization comparison

The storage processor utilization was similar in both tests. A higher initial IOPS load was observed, but the differences were minimal after the steady-state phase was reached.

Figure 102 details the Data Mover utilization observed during the Login VSI tests.

Figure 102. McAfee MOVE—Data Mover utilization comparison

The Data Mover utilization was comparable between the two Login VSI tests. Both the logon storm and steady-state phases showed similar utilization statistics.

Figure 103 shows the average ESXi CPU load that was observed during the Login VSI tests.

Figure 103. McAfee MOVE—ESXi CPU load comparison

The CPU load results were similar for both Login VSI tests. A slightly higher CPU load was observed during the first half of the logon storm, most likely due to the increased antivirus scanning that occurred while the antivirus cache was being established. As the MOVE Antivirus Offload Server built up its cache of scanned files, the number of scans required decreased, and the ESXi server CPU load decreased with it. The CPU load observed during the steady-state phase was similar for both Login VSI tests.

Figure 104 shows the average ESXi disk response time, also referred to as the GAVG, observed during the Login VSI tests.

Figure 104. McAfee MOVE—ESXi Disk Response Time (GAVG) comparison

The disk response times observed during both Login VSI tests, in both the logon storm and steady-state phases, were similar for the replica and linked clone LUNs.

The McAfee MOVE agent installed on the virtual desktops required less than 29 MB of disk space, and the related services used approximately 22 MB of memory and no measurable processor time while idle. The McAfee MOVE agent used 75 percent less disk space and 60 percent less memory than the traditional McAfee VirusScan client. This does not include the impact of the VirusScan on-access scanner, which consumed up to 25 percent of CPU time and 220 MB of RAM at random intervals. Because the MOVE agent offloaded this activity to the MOVE Antivirus Offload Server, the impact on the desktops was drastically reduced.
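
The footprint of the traditional client implied by these percentages can be back-calculated as follows (a derived estimate, not a value measured in this test; the variable names are ours):

    move_disk_mb = 29   # MOVE agent disk footprint, from the text
    move_mem_mb = 22    # MOVE agent service memory, from the text

    # "75 percent less disk" means MOVE uses 25 percent of the baseline
    implied_virusscan_disk_mb = move_disk_mb / 0.25   # about 116 MB
    # "60 percent less memory" means MOVE uses 40 percent of the baseline
    implied_virusscan_mem_mb = move_mem_mb / 0.40     # 55 MB

    print(round(implied_virusscan_disk_mb), round(implied_virusscan_mem_mb))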

The McAfee MOVE solution had very little impact on the operation of the virtual desktops during the Login VSI testing. After the McAfee MOVE Antivirus Offload Server had built up its cache of frequently scanned files, almost no performance difference was observed between the Login VSI tests performed with McAfee MOVE enabled and those performed without it.

9 Conclusion

This chapter includes the following sections:

• Summary

• References

Summary

As shown in Chapter 8: Testing and Validation, EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment. It not only reduces response times for both read and write workloads, but also supports more users on fewer drives by delivering greater IOPS density from a smaller drive count.

The testing results in the McAfee MOVE results section show that newer antivirus technologies such as the McAfee MOVE platform can provide more efficient antivirus protection within a virtual desktop environment than traditional host-based antivirus solutions.

References

The following documents, located on the EMC Online Support website, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative:

• EMC Infrastructure for VMware View 5.0, EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, and VMware View Composer 2.7—Reference Architecture

• EMC Infrastructure For Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, VMware View 4.6, and VMware View Composer 2.6—Reference Architecture

• EMC Infrastructure For Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, VMware View 4.6, and VMware View Composer 2.6—Proven Solution Guide

• EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure—Applied Best Practices

• Deploying Microsoft Windows 7 Virtual Desktops with VMware View—Applied Best Practices Guide

The following documents, located on the VMware website, also provide useful information:

• VMware View Architecture Planning

• VMware View Installation

• VMware View Administration

• VMware View Security

• VMware View Upgrades

• VMware View Integration

• VMware View Windows XP Deployment Guide

• VMware View Optimization Guide for Windows 7

• vSphere Installation and Setup Guide

• Anti-Virus Practices for VMware View

• VMware KB Article 1027713

The following document, located on the Microsoft website, also provides useful information:

• Network Load Balancing Deployment Guide

The following documents, located on the McAfee website, also provide useful information:

• McAfee MOVE Antivirus 2.0.0 Product Guide

• McAfee MOVE Antivirus 2.0.0 Software Release Notes

• McAfee MOVE Antivirus 2.0.0 Deployment Guide
