8/4/2019 Vmware View4 Emc Ns 120 Vtr
1/55
EMC NAS Product Validation
Corporate Headquarters
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Deploying Virtual Desktops with VMware View 4.0 in Midsize and
Enterprise Environments
Validation Test Report
EMC Celerra NS-120
Revision 1.0
Copyright 2009 EMC Corporation. All rights reserved.
Published November, 2009
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE
INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
Contents
About this Document ....................................................................................................................... 8
Chapter 1 Physical Architecture ..................................................................................................... 11
Tested system configuration...........................................................................................................11
Hardware resources ...............................................................................................................12
Software resources.................................................................................................................12
Network architecture .............................................................................................................13
Celerra configuration......................................................................................................................15
VMware ESX configuration...........................................................................................................18
Storage architecture........................................................................................................................ 20
View cluster configuration ....................................................................................................22
Test tools and methods................................................................................................................... 25
Master image .........................................................................................................................25
Creating virtual desktops from VMware linked clones.........................................................26
Test description......................................................................................................................37
Chapter 2 Test Results ....................................................................................................................38
Test results for virtual desktops, ESX servers, and a NS-120 Data Mover ................................... 38
Test description overview......................................................................................................38
Desktops from VMware View Composer linked clones .......................................................38
Conclusions ....................................................................................................................................55
Figures

Figure 1   Solution architecture ............................................................................... 11
Figure 2   vCenter representation of virtual switches and virtual NICs ................... 13
Figure 3   NIC teaming ............................................................................................. 14
Figure 4   EMC Celerra NS-120 Data Mover ports .................................................. 15
Figure 5   Celerra Manager Network Interfaces page .............................................. 15
Figure 6   Celerra Manager New Storage Pool page ................................................ 16
Figure 7   Celerra Manager New File System page .................................................. 16
Figure 8   Celerra Manager New iSCSI LUN page ................................................... 17
Figure 9   Celerra Manager - iSCSI Targets page ..................................................... 17
Figure 10  Celerra Manager - iSCSI LUN Mask page .............................................. 18
Figure 11  Celerra Manager - New iSCSI Mask page ............................................... 18
Figure 12  vCenter Firewall Properties dialog box for enabling the software iSCSI client ... 19
Figure 13  vCenter Add Network Wizard page for creating a network interface ..... 20
Figure 14  vCenter iSCSI Initiator Properties page for adapter configuration ......... 20
Figure 15  Storage architecture ................................................................................. 21
Figure 16  vCenter view of the test environment infrastructure ............................... 24
Figure 17  Select New Resource Pool ....................................................................... 26
Figure 18  Create and define a resource pool ........................................................... 26
Figure 19  Create a snapshot ..................................................................................... 27
Figure 20  Name the virtual machine snapshot ......................................................... 27
Figure 21  Add desktop ............................................................................................. 28
Figure 22  Select desktop type .................................................................................. 28
Figure 23  Select Desktop Persistence ...................................................................... 29
Figure 24  Select VirtualCenter server ...................................................................... 29
Figure 25  Type desktop ID and name ...................................................................... 30
Figure 26  Configure desktop settings ...................................................................... 30
Figure 27  Select provisioning settings ..................................................................... 31
Figure 28  Select the parent VM ............................................................................... 31
Figure 29  Select the snapshot for the base image .................................................... 32
Figure 30  Select the folder for VM storage ............................................................. 32
Figure 31  Select the destination cluster ................................................................... 33
Figure 32  Select the resource pool ........................................................................... 33
Figure 33  Specify user data storage ......................................................................... 34
Figure 34  Select the data stores ............................................................................... 34
Figure 35  Select QuickPrep settings ........................................................................ 35
Figure 36  Review all settings ................................................................................... 35
Figure 37  View Manager display ............................................................................. 36
Figure 38  vCenter display of the new desktops ....................................................... 36
Figure 39  Comparison of iteration 1 versus iteration 2 response times ................... 39
Figure 40  Excel opens by 1,035 VMs on 5 FC building blocks .............................. 40
Figure 41  Adobe opens by 1,035 VMs on 5 building blocks ................................... 41
Figure 42  Outlook opens by 1,035 VMs on 5 FC building blocks .......................... 41
Figure 43  ESX server CPU utilization with 75 VMs on 5 FC building blocks ....... 42
Figure 44  ESX server memory PShare for 75 VMs on a single ESX server ........... 43
Figure 45  ESX server disk I/O throughput for the 1,035 VMs on 5 FC building blocks ... 44
Figure 46  NS-120 Data Mover CPU utilization ....................................................... 45
Figure 47  NS-120 Data Mover throughput using five RAID 5 building blocks ...... 46
Figure 48  NS-120 storage array CPU utilization ..................................................... 47
Figure 49  NS-120 storage array bandwidth ............................................................. 48
Figure 50  NS-120 storage array IOPS ..................................................................... 49
Figure 51  NS-120 storage array response time ........................................................ 50
Figure 52  Most active LUN utilization .................................................................... 51
Figure 53  Most active LUN bandwidth ................................................................... 52
Figure 54  Most active LUN IOPS ........................................................................... 53
Figure 55  Most active LUN response time .............................................................. 54
Tables
Table 1   Hardware resources .................................................................................... 12
Table 2   Software resources ..................................................................................... 13
Table 3   Storage for building block configuration ................................................... 21
Table 4   Golden Image VM file system configuration ............................................. 22
Table 5   Disk volumes .............................................................................................. 22
Table 6   Data store capacity for linked clones ......................................................... 22
Table 7   Test Environment cluster groups and configuration .................................. 23
Table 8   Virtual desktop test program functions ...................................................... 37
Table 9   Response times for 1,035 virtual desktops on 5 FC building blocks ......... 39
Table 10  Comparison of response times with View 3.0 and View 4.0 environments ... 40
Table 11  Data Mover statistics for 1,035 VMs using five 4+1 RAID 5 disk groups ... 44
About this Document
This document provides a detailed summary and characterization of the tests performed to validate an EMC infrastructure for deploying VMware View 4.0 and vSphere 4.0 with an EMC Celerra NS-120. The EMC NAS Product Validation group developed this solution.
Purpose
This solution was designed to characterize the performance of a VMware View 4.0 and vSphere 4.0 environment using an EMC Celerra NS-120 for storage. A reasonable amount of tuning was performed to ensure that the performance measurements achieved were consistent with real-world demand. The test procedure was based on the reference architecture, validation test report, and best practices outlined in the Deploying VMware View in the Enterprise EMC Celerra NS-120 Reference Architecture, Deploying VMware View in the Enterprise EMC Celerra NS-120 Validation Test Report, Deploying 1,000 Desktops VMware View in Midsize and Enterprise Environments Validation Test Report, and VMware Virtual Desktop Infrastructure Planning for EMC Celerra Best Practices Planning documents.
Audience
This document is intended for readers with a working knowledge of VMware and VMware View, as well as CLARiiON and Celerra technologies.
How to use the validation test report
This validation test report is a reference document supporting the EMC infrastructure for deploying VMware View in the enterprise with the EMC Celerra NS-120. It is not intended to be read cover to cover. For specific testing information and for performance and sizing results, use the Table of Contents to locate the desired material.
Objectives and scope
This document contains the results of testing the VMware View 4.0 and vSphere 4.0 offering by using a Celerra NS-120 storage
system. The objectives of this testing are to:
Establish a reference architecture of validated hardware and software that permits easy and repeatable deployment of virtual desktops using the Celerra NS-120 system.
Establish the storage best practices for configuring ESX servers and NS-120 systems in a manner that provides optimal performance, recoverability, and protection.
Related documents
The following documents available on Powerlink provide additional, relevant information. Access to these documents is based on
your login credentials. If you do not have access to the following content, contact your EMC representative:
Deploying VMware View in the Enterprise EMC Celerra NS-120 Reference Architecture
EMC Infrastructure for Deploying VDI in the Enterprise EMC Celerra NS20 Reference Architecture
VMware Virtual Desktop Infrastructure Planning for EMC Celerra Best Practices Planning
Deploying VMware View in the Enterprise EMC Celerra NS-120 Validation Test Report
EMC Infrastructure for Deploying VMware VDI EMC Celerra NS20 Validation Test Report
Deploying 1,000 Desktops VMware View in Midsize and Enterprise Environments Validation Test Report
EMC Infrastructure for Deploying VMware View in the Enterprise EMC Celerra Unified Storage Platforms Solution Guide
Deploying VMware View 3 on EMC Celerra Solution Guide
Configuring iSCSI Targets on EMC Celerra Technical Module
The following documents available on the VMware website provide relevant information about VMware View:
Introduction to VMware View Manager
VMware View Manager Administration Guide
VMware View Reference Architecture
Storage Deployment Guide for VMware View
VMware View Windows XP Deployment Guide
VMware View Guide to Profile Virtualization
VMware View Composer Design Considerations
Chapter 1 Physical Architecture
This section provides details about the system configuration used to obtain the performance measurements to support this
solution.
Tested system configuration
Figure 1 shows the solution architecture and provides a high-level view of the test environment.
Figure 1 Solution architecture
Note:
- The testing did not include View Manager connections in the environment because the emphasis was on providing a storage profile recommendation. Stress testing of View Manager was not included in this test because VMware performs View Manager connection testing.
- It is recommended to configure virtual machine (VM) user accounts with folder redirection and roaming profiles to redirect all user data to network-attached storage (NAS) using the Common Internet File System (CIFS) protocol.
Hardware resources
Table 1 lists the hardware resources used in this tested storage configuration.
Table 1 Hardware resources
Hardware: EMC Celerra NS-120 with a CLARiiON CX4-120 array
Quantity: 1
Configuration: 4 disk-array enclosures (DAEs) containing 15 FC 300 GB 15k 2/4 Gb disks
Notes: Celerra shared storage for file systems and iSCSI LUNs

Hardware: Dell PowerEdge 1850, 1950, 6850, 6950, and R905 systems
Quantity: 20 servers (6 four-core and 14 eight-core systems)
Configuration:
  Memory: minimally 4 GB per core (16 GB, 32 GB, 64 GB)
  CPU: dual Intel Xeon and AMD Opteron processors
  Storage: local internal disks ranging in size from 60 GB to 129 GB
  NIC: 5-6 network controllers (Intel, Broadcom NetXtreme adapters)
Notes: VMware ESX servers hosting virtual desktops

Hardware: Dell PowerEdge 1850
Quantity: 1
Configuration:
  Memory: 4 GB RAM
  CPU: dual 2.8 GHz dual-core processors
  Storage: one 146 GB and one 36 GB disk
  NIC: dual-port Intel PRO/1000 MT Gb adapters
Notes: vCenter Server

Hardware: Desktops or virtual machines (VMs)
Quantity: 1,035
Configuration:
  vCPU: one 2.8 GHz virtual processor
  vMemory: 1 GB RAM for Windows XP VMs
  Connectivity: vmxnet
Notes: Deployed from VMware linked clones
Software resources
Table 2 on page 13 lists the software resources used in this tested storage configuration.
Table 2 Software resources
Software (minimum revision):

NS-120 (Celerra shared storage, file systems, and iSCSI LUNs):
  NAS/DART 5.6 Maintenance Update 6 (5.6.46.4)
  CLARiiON FLARE Release 28 (4.28.000.5.504)

ESX servers: ESX 4.0.0 Build 164009

View Manager: 4.0.0 Build 18952

vCenter server:
  OS: Microsoft Windows Server 2003 Enterprise Edition SP2 (32-bit)
  VMware vCenter 4.0.0 Build 162856
  vSphere Client 4.0.0 Build 162856

Desktops/VMs:
  OS: Microsoft Windows XP Professional SP3 (32-bit)
  VMware Tools 4.0.0 Build 164009
  AutoIt Version 3.2.10.0 (http://www.autoitscript.com/autoit3/)
  Microsoft Office 2002 Revision 11
  Internet Explorer 7.0.5730.13
  Adobe Reader 8.1.2
Network architecture
The networks used were dedicated 1 gigabit Ethernet (GbE) networks. All virtual desktops were assigned an IP address by using a
Dynamic Host Configuration Protocol (DHCP) server. The ESX servers consisted of five to six Intel GbE controllers. Four
Ethernet controllers were used to create two network devices consisting of two NIC teaming ports. Each Ethernet controller was
placed on a separate subnet for multipathing and load balancing.
Switches
EMC recommends that the switches support GbE connections and the ports on the switches support copper-based media. In this
configuration, VMware virtual switches were set to directly connect physical network cards to their logical equivalent in the
virtual machine. Figure 2 on page 13 shows the representation of an ESX host's kernel network.
Figure 2 vCenter representation of virtual switches and virtual NICs
The vmnics comprising the virtual switch are configured for NIC teaming (Figure 3).
Figure 3 NIC teaming
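For administrators who prefer the service console to the vSphere client, a teamed vSwitch like the one in Figure 3 could be built with the esxcfg tools. This is a sketch only; the vSwitch and vmnic names are example values, not the names used in the tested environment.

```shell
# Sketch only: service-console commands that would produce an equivalent
# NIC-teamed vSwitch on ESX 4.0. vSwitch1, vmnic2, and vmnic3 are examples.
esxcfg-vswitch -a vSwitch1          # create the virtual switch
esxcfg-vswitch -L vmnic2 vSwitch1   # link the first physical uplink
esxcfg-vswitch -L vmnic3 vSwitch1   # link the second uplink, forming the team
esxcfg-vswitch -l                   # list vSwitches and confirm both uplinks
```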
Celerra NS-120 Data Mover
The NS-120 contains two Data Movers. The Data Movers can operate independently or they can operate in the active/passive
mode, with the passive Data Mover serving as a failover device for the active Data Mover. In this solution, the Data Movers
operate in the active/passive mode.
The NS-120 Data Mover consists of four GbE controller ports. These four ports were configured as two 2-port link aggregation
devices. Each link aggregation device was placed on a different subnet to create multiple paths for the iSCSI objects. Multiple
iSCSI targets were created and iSCSI sessions were distributed across both logical network interfaces.
Figure 4 shows the ports on the rear of an NS-120 Data Mover.
Figure 4 EMC Celerra NS-120 Data Mover ports
Ports cge0 and cge1 were set up for link aggregation and support the iSCSI storage traffic. Ports cge2 and cge3 were used for a
second link aggregation device.
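The two link aggregation devices could be created from the Celerra Control Station with server_sysconfig. The sketch below is an assumption based on the port pairings described above; lnk01 and lnk02 are the device names referenced elsewhere in this report, and the LACP protocol choice is illustrative.

```shell
# Sketch only: create two 2-port link aggregation devices on the Data Mover.
# Port pairings follow the text (cge0/cge1 and cge2/cge3); protocol is assumed.
/nas/bin/server_sysconfig server_2 -virtual -name lnk01 \
    -create trk -option "device=cge0,cge1 protocol=lacp"
/nas/bin/server_sysconfig server_2 -virtual -name lnk02 \
    -create trk -option "device=cge2,cge3 protocol=lacp"
```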
To view the network interfaces and link aggregation devices used, type the following command at the Celerra Control Station:

# /nas/bin/server_ifconfig server_2 -a
server_2:
iscsi-net2 protocol=IP device=lnk02
        inet=10.6.119.246 netmask=255.255.255.0 broadcast=10.6.119.255
        UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:1f:ac:12
iscsi-net1 protocol=IP device=lnk01
        inet=10.6.116.246 netmask=255.255.255.0 broadcast=10.6.116.255
        UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:1f:ac:14
Alternatively, the network interfaces and link aggregation devices can be viewed on the Network Interfaces page on Celerra
Manager.
Figure 5 Celerra Manager Network Interfaces page
Note: As a best practice, dedicate the Data Mover network ports connected to the storage network (as shown in Figure 1 on page 11) to storage traffic. However, if the ports are not heavily used, they can be shared with non-storage network traffic. EMC recommends monitoring the network to avoid bottlenecks.
Celerra configuration
The required Celerra configuration is as follows:
1. To deploy VMware linked clones, a user-defined Celerra storage pool must be created. This can be done with the following command:

/nas/bin/nas_pool -create -name <pool_name> -description 'Storage Pool' -volumes <disk_volumes> -default_slice_flag y
Alternatively, the storage pool can be created by using the New Storage Pool page on Celerra Manager (Figure 6).
Figure 6 Celerra Manager New Storage Pool page
2. To store linked clones, Celerra storage must be provided through a file system, and an iSCSI LUN must be created before using View Composer. This is achieved by the following commands:
To create a file system from the user-defined storage pool and mount it on a Data Mover, use the following commands:
/nas/bin/nas_fs -name <fs_name> -type uxfs -create size=<size> pool=<pool_name> -option mover=<mover>, slice=y

/nas/bin/server_mount <mover> <fs_name> <mount_point>
Alternatively, the file system can be created by using the New File System page on Celerra Manager.
Figure 7 Celerra Manager New File System page
To create an iSCSI LUN that serves as a data store to the ESX server, use the following command:

/nas/sbin/server_iscsi -lun <number> -create -size <size> -fs <fs_name> -vp yes
The -vp yes option creates a virtually provisioned iSCSI LUN instead of a regular (thick) LUN. When using Virtual Provisioning™, closely monitor the file system space that contains virtually provisioned iSCSI LUNs. Use Celerra Manager or the following commands to determine the used or available space in a file system:

/nas/bin/server_df
/nas/bin/nas_fs
Alternatively, the iSCSI LUN can be created by using the New iSCSI LUN page on Celerra Manager.
Figure 8 Celerra Manager New iSCSI LUN page
3. Set the iSCSI mask appropriately to provide access to the LUN for the iSCSI initiator. In this case, the initiator is simply the ESX server's iSCSI software adapter name. To set the appropriate LUN mask, use the following Control Station command:

# /nas/bin/server_iscsi -mask -set <target> -initiator <initiator_name> -grant <lun_list>
Alternatively, to set the LUN mask on Celerra Manager:
a. Select iSCSI on the left navigation bar. The iSCSI page appears. Click Targets, and then select an iSCSI target.
Figure 9 Celerra Manager - iSCSI Targets page
The iSCSI Target Properties page appears.
b. Click LUN Mask, and then click New to add a new LUN mask for the selected target.
Figure 10 Celerra Manager - iSCSI LUN Mask page
The New iSCSI Mask page appears.
c. Select the initiator (ESX iSCSI software initiator) and type the LUN being granted access. Every host in the cluster should be granted access to the iSCSI LUN. When the mask has been properly set, ensure that the iSCSI software adapters on the ESX hosts are rescanned.
Figure 11 Celerra Manager - New iSCSI Mask page
d. Finally, add the LUN as a data store to an ESX host in the cluster.
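Once the mask is set, the rescan in step c and the datastore check in step d can also be driven from the ESX service console. A sketch, assuming the software iSCSI adapter enumerates as vmhba33 (adapter names vary by host):

```shell
# Sketch only: rescan the software iSCSI adapter and list the discovered LUN.
# vmhba33 is an assumed adapter name; check esxcfg-scsidevs -a for the real one.
esxcfg-scsidevs -a       # list storage adapters and find the iSCSI vmhba
esxcfg-rescan vmhba33    # rescan so the newly masked LUN appears
esxcfg-scsidevs -c       # compact device list, which should include the new LUN
```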
VMware ESX configuration
To use VMware View Composer, the ESX server must be configured to enable the software iSCSI client. To provide iSCSI
access:
1. Connect to each ESX server using the vSphere client and click the Configuration tab.
2. From the Software menu, click Security Profile. The Security Profile page appears.
3. Click Properties. The Firewall Properties dialog box appears.
4. Select Software iSCSI Client (Figure 12).
Figure 12 vCenter Firewall Properties dialog box for enabling the software iSCSI client
After the firewall configuration is complete, the network configuration must be completed for any implementation. To create the vSwitch used for the iSCSI network traffic by using the Virtual Infrastructure (VI) Client:
Configuration tab > Networking > Add networking > VMkernel > Create a New Switch (Figure 13)
Type a Network Label. For example, iscsi-net1
Type a VLAN tag (Optional)
Provide the IP Address and Subnet Mask
Configuration tab > Storage Adapters > Select the iSCSI Adapter > Properties (Figure 14)
Click Configure, and then click Enable.
Click the Dynamic Discovery tab, and then click Add. Type the IP address and port for each iSCSI target.
Note: If Challenge Handshake Authentication Protocol (CHAP) is enabled on the iSCSI target, it should also be configured by using the CHAP Authentication tab.
This is achieved using the Add Network Wizard VMkernel Connection Settings dialog box in vCenter.
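The firewall, VMkernel networking, and discovery settings above can also be applied from the ESX 4.0 service console. A sketch under the assumptions that the esxcfg tools are available, that the software iSCSI adapter is vmhba33, and that 10.6.116.246 (one of the Data Mover interfaces shown earlier) is the iSCSI target address; the VMkernel IP is an example value:

```shell
# Sketch only: CLI equivalent of the vSphere client steps above.
esxcfg-firewall -e swISCSIClient               # open the software iSCSI client ports
esxcfg-vswitch -a vSwitch2                     # vSwitch for iSCSI traffic
esxcfg-vswitch -A iscsi-net1 vSwitch2          # add the port group (network label)
esxcfg-vmknic -a -i 10.6.116.50 -n 255.255.255.0 iscsi-net1   # VMkernel interface
esxcfg-swiscsi -e                              # enable the software iSCSI initiator
vmkiscsi-tool -D -a 10.6.116.246:3260 vmhba33  # add the dynamic discovery target
```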
Figure 13 vCenter Add Network Wizard page for creating a network interface
Figure 14 vCenter iSCSI Initiator Properties page for adapter configuration
Storage architecture
Figure 15 on page 21 shows the NS-120 storage array configuration. Though this is a complete diagram of the storage array, not all disk devices were used in each test.
Figure 15 Storage architecture
A CX4-120 is the storage array for a Celerra NS-120. The CX4-120 used for testing had four DAEs containing 15 Fibre Channel 300 GB 15k 2/4 Gb disks.
Testing was done by using 4+1 RAID 5 disk groups only. According to the NAS standard, two LUNs were created for each 4+1 RAID group, and each LUN was owned by a different storage processor for load balancing. File systems were created with a user-defined storage pool to have direct control over file system location, space allocation, and load balancing.
Table 3 and Table 4 show the file system configuration requirements for CLARiiON array objects.
Table 3 Storage for building block configuration
CLARiiON array objects Configuration required
Total storage capacity 1 TB
iSCSI LUN (VM) capacity 20 GB
Number of disks used 5
Number of disk volumes used 2
Table 4 Golden Image VM file system configuration
CLARiiON array objects Configuration required
Storage capacity 24 GB
iSCSI LUN capacity 20 GB
Number of disks used 5
Number of disk volumes used 1
Table 5 shows the disk volumes for each file system in this storage configuration.
Table 5 Disk volumes
File system            Disk volumes
Golden Image           d8
Test log files         d9
VMs (Clones and TWSs)  d13, d25 (concatenated)
                       d14, d26 (concatenated)
                       d15, d27 (concatenated)
                       d16, d28 (concatenated)
                       d19, d31 (concatenated)
Table 6 shows the capacity of data stores containing various numbers of linked clones created with View Composer.
Table 6 Data store capacity for linked clones
Number of linked clones  iSCSI LUN capacity  iSCSI LUN storage used
225                      785.1 GB            367.3 GB
150                      525.0 GB            268.1 GB
75                       262.0 GB            140.36 GB
A linked clone storage overcommit value of Moderate was used in this configuration, which resulted in 1.13 GB per virtual machine being used.
View cluster configuration
Configuring the VMware View 4.0 environment takes careful consideration and planning. The following factors influenced the
grouping and sizing of clusters in the test environment:
The building block size of 225 virtual desktops per 4+1 RAID 5 disk group.
Limit of 255 iSCSI LUNs per device
Grouping of similar CPU types (AMD, Intel) for VMotion and DRS capabilities
With these considerations, the ESX hosts were grouped and configured as described in Table 7.
Table 7 Test Environment cluster groups and configuration
Cluster name       Number of VMs  RAID group number  Number of ESX hosts  ESX description
A-Finance          225            11                 3                    Dell PowerEdge R905, AMD 8 CPU 3.0 GHz
B-Sales            225            12                 3                    Dell PowerEdge R905, AMD 8 CPU 3.0 GHz
C-Testing          150            13                 2                    Dell PowerEdge R905, AMD 8 CPU 3.0 GHz
D-Marketing        75             13                 1                    Dell PowerEdge 6950, AMD 8 CPU 3.0 GHz
E-Support          150            14                 6                    (2) Dell PowerEdge 6850, Intel 4 CPU 3.0 GHz
                                                                         (4) Dell PowerEdge 1850, Intel 4 CPU 2.8 GHz
F-Human Resources  75             14                 1                    Dell PowerEdge 6950, AMD 8 CPU 3.0 GHz
G-Development      152            15                 4                    Dell PowerEdge 1950, Intel 8 CPU 3.0 GHz
The following is a snapshot of the test configuration as seen from the vSphere client. Note that the cluster resource pools contain
the 1,035 desktops, which are not shown to conserve page space.
Note: Ensure that the proper DRS settings are enabled for the cluster before powering on the virtual desktops.
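The sizing rules above can be sketched as a quick calculation. The assumptions here are the 225-desktop building block and 8 desktops per core on 8-core hosts; the real environment mixed 4- and 8-core servers, so it used 20 hosts rather than the ideal minimum this sketch produces.

```python
import math

# Sizing sketch: storage building blocks and a lower bound on ESX hosts.
# Assumptions: 225 desktops per 4+1 RAID 5 group, 8 desktops per core,
# 8 cores per host (the tested environment mixed 4- and 8-core hosts).
def size_environment(desktops, per_building_block=225,
                     desktops_per_core=8, cores_per_host=8):
    building_blocks = math.ceil(desktops / per_building_block)
    hosts = math.ceil(desktops / (desktops_per_core * cores_per_host))
    return building_blocks, hosts

blocks, hosts = size_environment(1035)
print(f"{blocks} building blocks, at least {hosts} eight-core hosts")
```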
Figure 16 vCenter view of the test environment infrastructure
Test tools and methods
Master image
Before deploying the virtual desktops, tune and optimize the master (golden) image from which they will be created. Because the master image is used as a template, time spent optimizing it pays off across every desktop cloned from it. These tunings should include the following:
- Apply all service packs and patches to the operating system and all other system software
- Install VMware Tools
- Install the View connection agent
- Ensure that full hardware acceleration is configured
- Disable any unwanted services, for example:
  - Automatic Updates
  - Remote Registry
  - Secondary Logon
  - SSDP Discovery Service
  - Wireless Zero Configuration
- Remove/minimize system restore points
- Configure Microsoft folder redirection/remote home directory if needed
- Disable automatic software updates so that an updater does not periodically execute (for example, Adobe updater can be disabled)
- Disable automatic desktop cleanup
- Run Disk Cleanup
- Run Disk Defragmenter
The following two VMware documents explain additional, important master image optimizations:
- Windows XP Deployment Guide, located on the VMware website at http://www.vmware.com/files/pdf/XP_guide_vdi.pdf
- Optimized XP Virtual Machine VDI Template Checklist, located on the VM/ETC website at http://vmetc.com/wp-content/files/Optimized%20XP%20Virtual%20Machine%20for%20VDI%20Template%20Checklist.pdf
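The service-disabling step above can be scripted on the master image. The sketch below only prints the `sc config` commands rather than running them; the Windows service short names are assumptions, so verify them with `sc query` on the image first.

```python
# Windows service short names for the services listed above (assumed;
# confirm with `sc query` on the master image before disabling anything).
SERVICES = {
    "wuauserv": "Automatic Updates",
    "RemoteRegistry": "Remote Registry",
    "seclogon": "Secondary Logon",
    "SSDPSRV": "SSDP Discovery Service",
    "WZCSVC": "Wireless Zero Configuration",
}

def disable_cmd(service):
    # Note the space after "start=": that is the sc.exe syntax.
    return ["sc", "config", service, "start=", "disabled"]

for svc, label in SERVICES.items():
    # Print the command; on the image itself, run it via subprocess.run().
    print(f"{label}: {' '.join(disable_cmd(svc))}")
```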
Creating virtual desktops from VMware linked clones
The 1,035 virtual desktops used in this testing were created by using VMware View Composer linked clones. EMC Celerra NAS
storage provided the storage for the linked clones. The following sections examine the linked clone deployment technology.
Linked Clones with View Composer
The following procedure shows how to use VMware View Composer to create linked clones serving as virtual desktops. For more
details, refer to the Deploying VMware View 3 on EMC Celerra Solution Guide, located on Powerlink at Home > Solutions >
Application Solutions > VMware > View > Virtual Infrastructure > Technical Validation. To create linked clones:
1. If a resource pool does not exist, create one within the ESX cluster that will run the virtual machines. From the vSphere client, right-click the desired cluster name, and then select New Resource Pool (Figure 17).
Figure 17 Select New Resource Pool
The Create Resource Pool dialog box appears.
2. Name the resource pool appropriately and modify any virtual machine resources.
Figure 18 Create and define a resource pool
3. Take a snapshot of the master virtual machine within the cluster. View Composer uses the snapshot to compose the replica that is used to create the linked clones that constitute the virtual desktops.
To create a snapshot with vSphere, right-click the master virtual machine image from the left navigation bar of the vSphere
client, and then select Snapshot > Take Snapshot.
Figure 19 Create a snapshot
The Take Virtual Machine Snapshot dialog box appears.
4. Name the snapshot.
Figure 20 Name the virtual machine snapshot
5. Use View Manager to create a persistent automated desktop pool. In View Manager, click Desktops and Pools, and then click Add.
Figure 21 Add desktop
The Add Desktop wizard appears.
6. Select the pool type. Figure 22 shows an Automated Desktop Pool selected. Click Next.
Figure 22 Select desktop type
7. Select the persistence settings for the desktops in the pool. Click Next.
Figure 23 Select Desktop Persistence
8. Select the VirtualCenter server that will manage the ESX cluster that hosts the virtual desktops. Select Use linked clone technology to create desktops in this pool. Click Next.
Figure 24 Select VirtualCenter server
9. Type a unique ID and display name for the desktop pool. Click Next.
Figure 25 Type desktop ID and name
10. Configure the desktop settings. There are many options, such as power-saving and performance features. Click Next.
Figure 26 Configure desktop settings
11. Select the provisioning features. Type the number of desktops to create and the desktop prefix name. The Advanced Settings for desktop availability can also be configured; the View Manager Administration Guide provides more details. Click Next.
Figure 27 Select provisioning settings
12. Select the master virtual image of which a snapshot was taken previously. Click Next.
Figure 28 Select the parent VM
13. Select the snapshot that represents the desired state of the virtual desktops. This is the base image for the desktop pool. Click Next.
Figure 29 Select the snapshot for the base image
14. Select the folder in which to create the desktops, and then click Next.
Figure 30 Select the folder for VM storage
15. Select the destination cluster or host to run the virtual desktops, and then click Next.
Figure 31 Select the destination cluster
16. Select the resource pool to run the virtual desktops, and then click Next. This resource pool was created in step 1.
Figure 32 Select the resource pool
17. Select where to store user data, either on a separate disk or on the same disk as the OS. In this example, user data is stored on drive D: and Use different datastores for user data disks and OS disks is selected.
Figure 33 Specify user data storage
18. Select the data stores for the virtual machines. Ensure that every host in the cluster can access the data store (iSCSI LUN), which is achieved by setting the proper iSCSI mask on the LUN when it is created. If a host cannot access the data store, the data store is not available for selection. Ensure that the iSCSI adapter is rescanned after the LUN mask is properly set. Click Next.
Note that the storage overcommit is selected from a drop-down list in the last column. Storage overcommitment determines
how aggressively View Composer allocates virtual machines to the available free space. VMware recommends the
Conservative option.
Figure 34 Select the data stores
19. Select and enter any QuickPrep settings that are required for the desktops. Click Next.
Figure 35 Select QuickPrep settings
20. Review the selected options, and then click Finish.
Figure 36 Review all settings
21. The new desktop pool is visible in View Manager. Click the Desktop Sources tab to view the created virtual machines. To monitor the progress of desktop creation, click Events.
Figure 37 View Manager display
After the virtual machines are created, access the virtual desktops through the vSphere client.
Figure 38 vCenter display of the new desktops
Test description
The test program was designed to simulate an average user workload on a typical desktop environment. The program is executed
on each virtual desktop hosted on an ESX server, and it can run on Windows XP and Windows Vista operating systems. The
program has a set of functions that perform interactive operations on desktop applications and are called randomly. The workload
is started on all virtual desktops within a short time window, and the number of iterations that each virtual desktop will execute can be defined. The test harness measures application open times, along with variations of close and save operations; it does not measure read or write performance.
In this test case, an iteration count of two was used. This resulted in a 3-hour 15-minute test completion time when using 1,035
virtual desktops. A maximum delay between application operations, or think time, can be specified. This results in the
applications running randomly. In this test case, the maximum delay is 5 minutes, which means that an application start can vary
from one-tenth of a second to the maximum 5 minutes.
The test configuration uses a building block of 225 virtual desktops per 4+1 RAID 5 disk group. This is a change from the earlier
documented guideline of 250 virtual desktops per building block. This change is because of the additional workload as a result of
adding Microsoft Outlook. The test harness configures Outlook users and mails messages to users on an Internet Message Access
Protocol (IMAP) server running on a virtual desktop with a Linux OS.
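The think-time behavior described above amounts to a uniform random delay between operations; a minimal sketch, using the 0.1-second floor and 5-minute ceiling stated in the text:

```python
import random

MIN_DELAY_S = 0.1      # an application can start within a tenth of a second
MAX_DELAY_S = 5 * 60   # ...or up to the 5-minute maximum think time

def think_time(rng=random):
    """Random delay inserted between application operations."""
    return rng.uniform(MIN_DELAY_S, MAX_DELAY_S)

# Each desktop sleeps for think_time() seconds between operations, which
# spreads application starts randomly across the test window.
delay = think_time()
print(f"next operation in {delay:.1f} s")
```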
Table 8 shows the functions called by the test script.
Table 8 Virtual desktop test program functions
Function Description
MS_WORD_Open() Creates a Microsoft Word document
MS_IE_Open() Opens Microsoft Internet Explorer and a .gif file
MS_PPT_Open() Opens a Microsoft PowerPoint presentation
PKZIP_CompressFiles() Compresses large files and deletes them
MS_EXCEL_OpenModifySave() Opens a blank Microsoft Excel spreadsheet, writes to it, and then saves it
ADOBE_OpenBrowse() Opens Adobe Reader 8.0 and a PDF file, and then browses a few pages
MS_WORD_Open2() Creates a Microsoft Word document
MS_WORD_ModifySave() Writes to the Microsoft Word document and saves it
ADOBE_Browse() Browses a few pages in a PDF file with Adobe Reader 8.0
MS_WORD_ModifySave2() Writes to the Microsoft Word document and saves it
MS_WORD_SaveClose() Saves the Microsoft Word document, and then closes the window
MS_IE_Close() Closes Microsoft Internet Explorer
MS_PPT_ShowClose() Shows a Microsoft PowerPoint presentation and closes it
MS_EXCEL_ModifySaveClose() Modifies the Excel spreadsheet, saves it, and then closes the window
ADOBE_Close() Closes the PDF file, and then closes the application
MS_Outlook_Open() Opens Microsoft Outlook
MS_Outlook_Close() Closes Microsoft Outlook
MS_WORD_ModifySaveClose() Writes to the Microsoft Word document, saves it again, and then closes the document
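One possible interpretation of the harness's main loop, given the description above: the Table 8 style operations are invoked in random order for the configured number of iterations. The function bodies here are placeholders, not the real harness code.

```python
import random

# Placeholders standing in for the real Table 8 operations.
def ms_word_open(): pass
def ms_excel_open_modify_save(): pass
def adobe_open_browse(): pass

OPERATIONS = [ms_word_open, ms_excel_open_modify_save, adobe_open_browse]
ITERATIONS = 2  # the iteration count used in this test

def run_workload(operations=OPERATIONS, iterations=ITERATIONS):
    """Call each operation once per iteration, in random order."""
    called = []
    for _ in range(iterations):
        for op in random.sample(operations, len(operations)):
            op()                      # the real harness sleeps a random
            called.append(op.__name__)  # think time between operations
    return called

print(len(run_workload()), "operations executed")
```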
Chapter 2 Test Results
Test results for virtual desktops, ESX servers, and a NS-120 Data Mover
This section documents the detailed results of this testing.
Test description overview
The data presented in this section shows the average open and close times of various desktop applications using the basic building block of 225 virtual desktops per 4+1 RAID 5 disk group. For the 1,035-desktop test, four complete building blocks were used, plus a partial fifth. Test results were obtained when virtual desktops were deployed with
View Composer linked clones.
Twenty ESX servers were used to host 1,035 virtual desktops. No more than 9.4 virtual desktops per core were run on the ESX servers, taking advantage of the VMware memory ballooning feature and a light, average desktop workload. In the ensuing test
data graphs, note the following:
- The Data Mover CPU utilization on all iterations/passes
- The open times in iteration one versus subsequent iterations
- The average and peak open times
Also, note the detailed analysis of the data collected when the Excel, Adobe, and Outlook applications open. The analysis of
these specific applications was used because they can have large average and peak open times.
Desktops from VMware View Composer linked clones

Application response time on virtual desktops
Table 9 on page 39 shows the average response time of various applications run on each virtual desktop. Note that the average
response time is much larger for iteration/pass one when compared with the subsequent passes. This is expected even outside of
the test environment. The first time Microsoft Office opens an application, it takes much longer than it does on subsequent opens.
The difference is because of the caching and prefetching activity occurring with the Windows operating system. Generally, the
first time an application is executed, prefetch (.pf) files are created to improve subsequent access times; these files are deleted when the system is rebooted.
In a complete VMware View implementation, results will vary based on whether View Manager provides a desktop that needs to
be powered on, or whether View Manager provides the same desktop from a persistent pool and the users resume where they
previously ended. In both cases, it depends on the virtual machine power policy that has been implemented. The power policy
settings (to remain on, suspend, or power-off idle virtual machines) will have varying results at both the host and virtual machine
level.
In Figure 39 and Table 9, the data shows that the average open time decreases dramatically in the second iteration and settles into consistent, steady-state performance.
Figure 39 Comparison of iteration 1 versus iteration 2 response times
Table 9 Response times for 1,035 virtual desktops on 5 FC building blocks
Table 10 shows the response times of application operations when executed in a View 3.0 environment and in a View 4.0 environment. In almost all cases, View 4.0 delivered a performance improvement, ranging from 1% to 55%.
Table 10 Comparison of response times with View 3.0 and View 4.0 environments
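The improvement percentages in Table 10 are presumably relative reductions in response time; a small helper makes the computation explicit. The sample times below are hypothetical, not values from the table.

```python
def improvement_pct(view3_time, view4_time):
    """Percentage reduction in response time from View 3.0 to View 4.0."""
    return (view3_time - view4_time) / view3_time * 100.0

# Hypothetical response times in seconds:
print(f"{improvement_pct(2.0, 1.98):.0f}%")  # roughly a 1% improvement
print(f"{improvement_pct(2.0, 0.90):.0f}%")  # roughly a 55% improvement
```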
The open behavior is revealed clearly by examining the highlighted entries in Table 9.
Figure 40 Excel opens by 1,035 VMs on 5 FC building blocks
Figure 41 Adobe opens by 1,035 VMs on 5 FC building blocks

Figure 42 Outlook opens by 1,035 VMs on 5 FC building blocks
ESX server statistics
The esxtop command was used to obtain server resource utilization statistics on all ESX servers during testing. The following
command line was used:
$ esxtop -s -b -d 30
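The batch capture produced by esxtop -b is CSV, so it can be post-processed offline. A hedged sketch follows; the column name below is a stand-in, since real esxtop headers are long quoted counter names that vary by build.

```python
import csv
import io

# Stand-in for a few rows of an esxtop -b capture at a 30 s interval;
# the real header names are long quoted counter paths per world/group.
capture = io.StringIO(
    "time,cpu_util_pct\n"
    "00:00:30,30.0\n"
    "00:01:00,38.0\n"
    "00:01:30,34.0\n"
)

rows = list(csv.DictReader(capture))
avg = sum(float(r["cpu_util_pct"]) for r in rows) / len(rows)
print(f"average CPU utilization: {avg:.1f}%")
```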
The following server measurements were collected and charted:
- Total processor time
- I/O rate with Celerra storage
- Memory (PShare, swap, memctl)
Swapping is minimal when 4 GB of memory per core is used in this environment.
The following graphs show ESX server statistics when running the test with 1,035 virtual machines located on five FC 4+1 RAID
5 building blocks.
Figure 43 shows the average CPU utilization on a single eight-core ESX server. The ESX server hosted 75 virtual machines. As
the graph indicates, the test resulted in a 34 percent average CPU utilization with several instances exceeding 40 percent.
Figure 43 ESX server CPU utilization with 75 VMs on 5 FC building blocks
Figure 44 shows a memory management feature of the ESX server. ESX supports memory overcommit, which enables virtual machines to be configured with more virtual memory than the host has physical memory. Page sharing makes this overcommit possible: as common pages are identified, ESX lets the virtual machines share them rather than hold their own copies.
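For the host above, the overcommit arithmetic is straightforward, assuming the 4 GB per core and 1 GB per XP desktop figures used elsewhere in this report:

```python
# Memory overcommit on one eight-core host running 75 desktops.
cores = 8
physical_gb = cores * 4   # 4 GB per core -> 32 GB installed
configured_gb = 75 * 1    # 1 GB granted to each XP desktop
overcommit = configured_gb / physical_gb
print(f"guests are granted {overcommit:.2f}x the physical memory")
```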
Figure 44 ESX server memory PShare for 75 VMs on a single ESX server
Figure 45 shows the total disk I/O throughput for the ESX server. The read and write MB/s throughput for iSCSI adapter
vmhba35 was measured with esxtop. A total of 1.1 MB/s indicated a relatively low throughput on this ESX host.
Figure 45 ESX server disk I/O throughput for the 1,035 VMs on 5 FC building blocks
Celerra NS-120 Data Mover measurements
While the test harness was running on the virtual desktops, the Celerra command server_stats was run against the Data Mover to
measure the performance.
$ /nas/bin/server_stats -summary basic,caches -table net,dvol,fsvol -interval 30 -format csv -titles once -terminationsummary yes
This command collected performance data about the Data Mover every 30 seconds. The load on the Data Mover was low and the
throughput rate on the dvols was also very low. Table 11 provides some of the significant data result statistics.
Table 11 Data Mover statistics for 1,035 VMs using five 4+1 RAID 5 disk groups
Measurement parameter name Average value
Network Input 8044 KiB/s (7.9 MB/s)
Network Output 12064 KiB/s (11.8 MB/s)
Dvol Read 11117 KiB/s (10.9 MB/s)
Dvol Write 7507 KiB/s (7.3 MB/s)
Buffer Cache Hit Rate 100%
CPU Utilization 7%
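The MB/s figures in Table 11 are the KiB/s values divided by 1,024 (strictly speaking, MiB/s), which a quick conversion confirms:

```python
KIB_PER_MIB = 1024

def kib_to_mib(kib_per_s):
    return kib_per_s / KIB_PER_MIB

# KiB/s averages from Table 11:
for name, kib in [("Network Input", 8044), ("Network Output", 12064),
                  ("Dvol Read", 11117), ("Dvol Write", 7507)]:
    print(f"{name}: {kib_to_mib(kib):.1f} MiB/s")
```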
Figure 46 shows an average Data Mover CPU utilization of 6.7 percent, with an initial peak of 25 percent. This workload was
easily handled by the NS-120 Data Mover.
Figure 46 NS-120 Data Mover CPU utilization
Figure 47 shows the Data Mover I/O throughput rate of 5 building blocks for 3 hours 15 minutes of testing, where all 1,035
virtual machines were running tests simultaneously. The Control Station command server_stats was used to obtain the
throughput rate of the Data Mover. The throughput rate was obtained by adding the read and write totals for the disk volumes that
were used to host the 1,035 virtual machines. The average throughput rate for the test was 18.2 MB/s with a one-time peak of 108
MB/s. These rates were low and could be easily managed by the Data Mover.
Figure 47 NS-120 Data Mover throughput using five RAID 5 building blocks
NS-120 storage array measurements using linked clones
While the test harness was running on the virtual desktops, the CLARiiON Analyzer was started to collect performance data
about the storage array. The following command was used to set the performance logging options:
$ naviseccli -User -Password -Scope 0 -Address spa analyzer -set -nonstop -periodicarchiving 1 -narinterval 60 -rtinterval 60

where:

-narinterval 60       Changes the polling interval to 1 minute for performance logging
-rtinterval 60        Changes the polling interval to 1 minute for real-time chart windows
-periodicarchiving 1  Automatically archives files at periods of 156 samples
-nonstop              Sets performance logging to run nonstop
After completion, the Analyzer NAR files were merged and analyzed. The following graphs were obtained from the Analyzer
analysis.
Figure 48 shows an average SP A CPU utilization of 19.5 percent with a few peaks in the 40 percent to 50 percent range. The
average SP B CPU utilization was 12.9 percent with a few peaks in the 20 percent to 30 percent range. This workload was easily
handled by the NS-120 storage array.
Figure 48 NS-120 storage array CPU utilization
Figure 49 shows the NS-120 storage bandwidth rate for 3 hours 15 minutes of testing, where all 1,035 virtual desktops were
running tests simultaneously. The average bandwidth rate for SP A was 10.1 MB/s with several peaks in the 20 MB/s to 55 MB/s
range. For SP B, the average bandwidth was 6.6 MB/s with several peaks in the 10 MB/s to 36 MB/s range. These bandwidths
were low and were easily managed by the NS-120 storage array.
Figure 49 NS-120 storage array bandwidth
Figure 50 shows the NS-120 storage throughput rate for 3 hours 15 minutes of testing, where all 1,035 virtual desktops were
running tests simultaneously. The average throughput rate for SP A was 959.5 IOPS with several peaks in the 1,500 IOPS to 3,227 IOPS range. For SP B, the average throughput rate was 587.0 IOPS with several peaks in the 600 IOPS to 1,647 IOPS range.
These throughput rates were low and were easily managed by the NS-120 storage array.
Figure 50 NS-120 storage array IOPS
Figure 51 shows the NS-120 storage response time for 3 hours 15 minutes of testing, where all 1,035 virtual desktops were running tests simultaneously. The average response time for SP A was 0.7 ms with a few initial peaks in the 1 ms to 4.7 ms range.
For SP B, the average response time was 0.6 ms with a few initial peaks in the 1 ms to 3.5 ms range. These response times were
acceptable and easily handled by the NS-120 storage array.
Figure 51 NS-120 storage array response time
The following four graphs examine the performance statistics for the busiest LUN that the NS-120 storage array is managing. It
is a single 4+1 RAID 5 LUN consisting of the Celerra disk volume 27.
Figure 52 shows the LUN utilization during the 3-hour 15-minute test. The average LUN utilization was 23.2 percent with an
initial 4-minute peak above 90 percent. This average LUN utilization was acceptable and easily handled by the NS-120 storage
array.
Figure 52 Most active LUN utilization
Figure 53 shows the LUN bandwidth during the 3-hour 15-minute test. The average LUN bandwidth was 2.2 MB/s with a peak of
9.7 MB/s. This average LUN bandwidth was acceptable and easily handled by the NS-120 storage array.
Figure 53 Most active LUN bandwidth
Figure 54 shows the total LUN throughput rate during the 3-hour 15-minute test. The average LUN throughput was 201.7 IOPS
with a peak of 690 IOPS. This average LUN throughput rate was acceptable and easily handled by the NS-120 storage array.
Figure 54 Most active LUN IOPS
Figure 55 shows the LUN response time during the 3-hour 15-minute test. The average LUN response time was 1.9 ms with an
initial peak of 16 ms. This average response time was acceptable and easily handled by the NS-120 storage array.
Figure 55 Most active LUN response time
Conclusions
The following conclusions are based on the VMware View testing:
- View 4.0 and vSphere 4 were confirmed to function and perform well with Celerra NS-120 storage. In fact, a performance improvement was realized when upgrading the test environment to View 4.0 and vSphere 4.0, which included upgrading all ESX servers to version 4.0. Open times improved from 1 to 20 percent.
- Eight virtual desktops per core with an average workload can be recommended, which agrees with the VMware recommendation of eight virtual desktops per core. This recommendation was made by looking at both the CPU utilization on the ESX server and the virtual desktop application response time, and it was confirmed by successfully running desktop applications on 32 virtual desktops with a four-core ESX server and 64 virtual desktops with an eight-core ESX server. If the workload is light, take advantage of the VMware memory ballooning feature and slightly increase the number of virtual desktops per core based on the processor type; this was confirmed by successfully running tests with 75 virtual desktops on an eight-core system (9.4 virtual desktops per core).
- 4 GB of memory per core is sufficient. In an environment that uses 1 GB of memory per Microsoft Windows XP virtual desktop, this leads to very little swapping by the virtual desktops. Consider the swap space needed when sizing a virtual desktop.
- With Fibre Channel storage, the recommended number of virtual desktops per 4+1 RAID 5 disk group has been slightly reduced from 250 to 225 because of the increased workload from adding Microsoft Outlook. Thus, on a 120-FC-disk NS-120 system configured with the NAS CX_All_4Plus1_Raid_5 storage template, 22 unused 4+1 RAID groups will be created, which should support 4,950 virtual desktops. This is dependent on the workload of the virtual desktops, which should be monitored closely.
- Previously, 225 or 250 virtual desktops per 4+1 RAID 5 building block was recommended on FC drives. This was based solely on the application response time of the test harness used. If the load is higher, or if concurrent reboots or virus scans are anticipated, decrease the number of virtual desktops per building block.
- This 4+1 RAID 5 building block recommendation can be used as a guideline for all Celerra systems, and these test results can be extrapolated to other Celerra systems. Specifically, the Celerra NS-480 should provide similar or better results because it has the same Data Mover as the Celerra NS-120, but with a larger CLARiiON storage array attached.
- Performance always varies by load. Monitor the system, and consider all factors in addition to the application response time. Common tasks such as patching, powering on, scanning, and defragmenting should be scheduled differently in a VMware View environment; staggering these operations across the desktops yields much better performance.
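The 4,950-desktop figure follows directly from the building-block arithmetic:

```python
# Capacity implied by the sizing rule: 22 spare 4+1 RAID 5 groups on a
# fully populated 120-FC-disk NS-120, at 225 desktops per group.
spare_raid_groups = 22
desktops_per_group = 225
capacity = spare_raid_groups * desktops_per_group
print(f"supported desktops: {capacity}")
```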